
Publications from 2012
Theses
  1. Ryan Wegner. Multi-Agent Malicious Behaviour Detection. PhD thesis, Department of Computer Science, University of Manitoba, Winnipeg, Canada, September 2012.
    Abstract:
    This research presents a novel technique termed Multi-Agent Malicious Behaviour Detection. The goal of Multi-Agent Malicious Behaviour Detection is to provide infrastructure to allow for the detection and observation of malicious multi-agent systems in computer network environments. This research explores combinations of machine learning techniques and fuses them with a multi-agent approach to malicious behaviour detection that effectively blends human expertise from network defenders with modern artificial intelligence. Detection in this approach focuses on identifying multiple distributed malicious software agents cooperating to achieve a malicious goal in a complex dynamic environment. A significant portion of this approach involves developing Multi-Agent Malicious Behaviour Detection Agents capable of supporting interaction with malicious multi-agent systems, while providing network defenders a mechanism for improving detection capability through interaction with the Multi-Agent Malicious Behaviour Detection system. Success of the approach depends on the Multi-Agent Malicious Behaviour Detection system's capability to adapt to evolving malicious multi-agent system communications, even as the malicious software agents in network environments vary in their degree of autonomy and intelligence. The Multi-Agent Malicious Behaviour Detection system aims to take advantage of detectable behaviours that individual malicious software agents as well as malicious multi-agent systems are likely to exhibit, including: beaconing, denying, propagating, exfiltrating, updating and mimicking. This thesis research involves the design of this framework, its implementation into a working tool, and its evaluation using network data generated by an enterprise-class network appliance to simulate both a standard educational network and an educational network containing malware traffic.

    @phdthesis{WegnerThesis,
    author = {Ryan Wegner},
    title = {Multi-Agent Malicious Behaviour Detection},
    school = {Department of Computer Science, University of Manitoba},
    year = {2012},
    address = {Winnipeg, Canada},
    month = {September},
    abstract = {This research presents a novel technique termed Multi-Agent Malicious Behaviour Detection. The goal of Multi-Agent Malicious Behaviour Detection is to provide infrastructure to allow for the detection and observation of malicious multi-agent systems in computer network environments. This research explores combinations of machine learning techniques and fuses them with a multi-agent approach to malicious behaviour detection that effectively blends human expertise from network defenders with modern artificial intelligence. Detection in this approach focuses on identifying multiple distributed malicious software agents cooperating to achieve a malicious goal in a complex dynamic environment. A significant portion of this approach involves developing Multi-Agent Malicious Behaviour Detection Agents capable of supporting interaction with malicious multi-agent systems, while providing network defenders a mechanism for improving detection capability through interaction with the Multi-Agent Malicious Behaviour Detection system. Success of the approach depends on the Multi-Agent Malicious Behaviour Detection system's capability to adapt to evolving malicious multi-agent system communications, even as the malicious software agents in network environments vary in their degree of autonomy and intelligence. The Multi-Agent Malicious Behaviour Detection system aims to take advantage of detectable behaviours that individual malicious software agents as well as malicious multi-agent systems are likely to exhibit, including: beaconing, denying, propagating, exfiltrating, updating and mimicking. This thesis research involves the design of this framework, its implementation into a working tool, and its evaluation using network data generated by an enterprise-class network appliance to simulate both a standard educational network and an educational network containing malware traffic.},
    pdf = {http://aalab.cs.umanitoba.ca/%7eandersj/Publications/pdf/WegnerPh.D.Thesis.pdf} 
    }
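
    The thesis abstract names beaconing as one of the behaviours the system tries to detect. As a rough, hypothetical illustration only (not the detector described in the thesis), the sketch below flags near-periodic connection patterns between a source and a destination; the flow format, function name, and threshold values are all assumptions made for this example.

    # Illustrative beaconing heuristic (not the thesis's detector): flag a
    # (source, destination) pair as a beaconing candidate when its connection
    # inter-arrival times are suspiciously regular. Thresholds are invented.
    from collections import defaultdict
    from statistics import mean, pstdev

    def beaconing_candidates(flows, min_events=6, max_cv=0.1):
        """flows: iterable of (timestamp_seconds, src, dst) tuples."""
        by_pair = defaultdict(list)
        for ts, src, dst in flows:
            by_pair[(src, dst)].append(ts)

        candidates = []
        for pair, times in by_pair.items():
            if len(times) < min_events:
                continue
            times.sort()
            gaps = [b - a for a, b in zip(times, times[1:])]
            avg = mean(gaps)
            if avg == 0:
                continue
            cv = pstdev(gaps) / avg   # low variation => near-periodic traffic
            if cv <= max_cv:
                candidates.append((pair, avg, cv))
        return candidates

    # Example: a host calling home every ~300 s stands out against ad-hoc traffic.
    flows = [(i * 300 + (i % 2), "10.0.0.5", "203.0.113.7") for i in range(12)]
    print(beaconing_candidates(flows))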
    


Journal Articles/Book Chapters
  1. Jeff Allen, John Anderson, and Jacky Baltes. Vision-Based Imitation Learning in Heterogeneous Multi-Robot Systems: Varying Physiology and Skill. International Journal of Automation and Smart Technology, 2(12):147-161, 2012.
    Abstract:
    Imitation learning enables a learner to improve its abilities by observing others. Most robotic imitation learning systems only learn from demonstrators that are similar physically and in terms of skill level. In order to employ imitation learning in a heterogeneous multi-agent environment, we must consider both differences in skill and physical differences (physiology, size). This paper describes an approach to imitation learning from heterogeneous demonstrators, using global vision. It supports learning from physiologically different demonstrators (wheeled and legged, of various sizes), and self-adapts to demonstrators with varying levels of skill. The latter allows different parts of a task to be learned from different individuals (that is, worthwhile parts of a task can still be learned from a poorly-performing demonstrator). We assume the imitator has no initial knowledge of the observable effects of its own actions, and train a set of Hidden Markov Models to create an understanding of the imitator's own abilities. We then use a combination of tracking sequences of primitives and predicting future primitives from existing combinations using forward models to learn abstract behaviours from demonstrations. This approach is evaluated using a group of heterogeneous robots that have been previously used in RoboCup soccer competitions.

    @article{ImitationIJAST,
    author = {Jeff Allen and John Anderson and Jacky Baltes},
    title = {Vision-Based Imitation Learning in Heterogeneous Multi-Robot Systems: Varying Physiology and Skill},
    journal = {International Journal of Automation and Smart Technology},
    year = {2012},
    volume = {2},
    number = {12},
    pages = {147--161},
    pdf = {http://aalab.cs.umanitoba.ca/%7eandersj/Publications/pdf/AllenAndersonBaltes12.pdf},
    abstract = {Imitation learning enables a learner to improve its abilities by observing others. Most robotic imitation learning systems only learn from demonstrators that are similar physically and in terms of skill level. In order to employ imitation learning in a heterogeneous multi-agent environment, we must consider both differences in skill and physical differences (physiology, size). This paper describes an approach to imitation learning from heterogeneous demonstrators, using global vision. It supports learning from physiologically different demonstrators (wheeled and legged, of various sizes), and self-adapts to demonstrators with varying levels of skill. The latter allows different parts of a task to be learned from different individuals (that is, worthwhile parts of a task can still be learned from a poorly-performing demonstrator). We assume the imitator has no initial knowledge of the observable effects of its own actions, and train a set of Hidden Markov Models to create an understanding of the imitator's own abilities. We then use a combination of tracking sequences of primitives and predicting future primitives from existing combinations using forward models to learn abstract behaviours from demonstrations. This approach is evaluated using a group of heterogeneous robots that have been previously used in RoboCup soccer competitions.}
    }
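
    The abstract above mentions training Hidden Markov Models over the imitator's own action primitives. The fragment below is only a minimal, generic discrete-HMM forward pass for scoring an observed primitive sequence against such a model; it is not the paper's implementation, and the primitive alphabet, matrices, and sequence are invented for this example.

    # Minimal discrete-HMM forward pass (log domain) for scoring a sequence of
    # observed action primitives against a model of the imitator's abilities.
    # Generic textbook recursion, not the paper's code; all numbers are made up.
    import numpy as np

    def log_forward(log_pi, log_A, log_B, obs):
        """log_pi: (S,) initial state log-probs; log_A: (S,S) transition log-probs;
        log_B: (S,V) emission log-probs; obs: list of symbol indices in [0, V)."""
        alpha = log_pi + log_B[:, obs[0]]
        for o in obs[1:]:
            # logsumexp over the previous state, then emit the current symbol
            alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
        return np.logaddexp.reduce(alpha)   # log P(obs | model)

    # Two hidden "behaviour" states over a 3-symbol primitive alphabet
    # (e.g. 0 = turn, 1 = walk, 2 = kick); higher score = better fit to the model.
    pi = np.log([0.6, 0.4])
    A  = np.log([[0.7, 0.3],
                 [0.2, 0.8]])
    B  = np.log([[0.5, 0.4, 0.1],
                 [0.1, 0.3, 0.6]])

    print(log_forward(pi, A, B, [1, 1, 0, 2, 2]))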
    


Conference Articles
  1. Meng Cheng Lau, Jacky Baltes, John Anderson, and Stephane Durocher. A Portrait Drawing Robot Using A Geometric Graph Approach: Furthest Neighbour Theta-Graphs. In Proceedings of the 11th IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2012), Kaohsiung, Taiwan, July 2012.
    Abstract:
    We examine the problem of estimating ideal edges joining points in a pixel reduction image for an existing point-to-point portrait drawing humanoid robot, Betty. To solve this line drawing problem we present a modified Theta-graph, called Furthest Neighbour Theta-graph, which we show is computable in O(n(log n)/theta) time, where theta is a fixed angle in the graph's definition. Our results show that the number of edges in the resulting drawing is significantly reduced without degrading the detail of the final output image.

    @inproceedings{AIM2012Lau,
    author = {Meng Cheng Lau and Jacky Baltes and John Anderson and Stephane Durocher},
    title = {A Portrait Drawing Robot Using A Geometric Graph Approach: Furthest Neighbour Theta-Graphs},
    booktitle = {Proceedings of the 11th IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2012)},
    address = {Kaohsiung, Taiwan},
    month = {July},
    pdf = {http://aalab.cs.umanitoba.ca/%7eandersj/Publications/pdf/AIM2012_MCLau.pdf},
    year = {2012},
    abstract = {We examine the problem of estimating ideal edges joining points in a pixel reduction image for an existing point-to-point portrait drawing humanoid robot, Betty. To solve this line drawing problem we present a modified Theta-graph, called Furthest Neighbour Theta-graph, which we show is computable in O(n(log n)/theta) time, where theta is a fixed angle in the graph's definition. Our results show that the number of edges in the resulting drawing is significantly reduced without degrading the detail of the final output image.} 
    }
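
    For readers unfamiliar with Theta-graphs: around each point the plane is partitioned into cones of angle theta, and in the standard construction each point is joined to a nearby point in every non-empty cone; going by its name, the paper's variant instead connects to the furthest point per cone and achieves an O(n log n / theta) construction. The sketch below is only a naive quadratic-time interpretation of that idea, written to show the structure, not the paper's algorithm.

    # Naive "furthest neighbour" Theta-graph sketch: split the plane around each
    # point into cones of angle theta and join the point to the FURTHEST other
    # point in each cone (a standard Theta-graph uses a nearest point instead).
    # This is an O(n^2 * cones) illustration, not the paper's construction.
    import math

    def furthest_neighbour_theta_graph(points, cones=8):
        theta = 2 * math.pi / cones
        edges = set()
        for i, (px, py) in enumerate(points):
            best = [None] * cones              # furthest (distance, index) per cone
            for j, (qx, qy) in enumerate(points):
                if i == j:
                    continue
                angle = math.atan2(qy - py, qx - px) % (2 * math.pi)
                cone = min(int(angle // theta), cones - 1)   # guard float rounding
                dist = math.hypot(qx - px, qy - py)
                if best[cone] is None or dist > best[cone][0]:
                    best[cone] = (dist, j)
            for entry in best:
                if entry is not None:
                    edges.add((min(i, entry[1]), max(i, entry[1])))
        return sorted(edges)

    pts = [(0, 0), (1, 0), (2, 1), (0, 3), (3, 3)]
    print(furthest_neighbour_theta_graph(pts, cones=6))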
    







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons accessing this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.



