A Virtual Environment with Multi-Robot Navigation, Analytics, and Decision Support for Critical Incident Investigation

06/12/2018 ∙ by David L. Smyth et al. ∙ NUI Galway

Accidents and attacks that involve chemical, biological, radiological/nuclear or explosive (CBRNE) substances are rare, but can be of high consequence. Since the investigation of such events is not anybody's routine work, a range of AI techniques can reduce investigators' cognitive load and support decision-making, including: planning the assessment of the scene; ongoing evaluation and updating of risks; control of autonomous vehicles for collecting images and sensor data; reviewing images/videos for items of interest; identification of anomalies; and retrieval of relevant documentation. Because of the rare and high-risk nature of these events, realistic simulations can support the development and evaluation of AI-based tools. We have developed realistic models of CBRNE scenarios and implemented an initial set of tools.




1 Background and Related Research

This demonstration presents a new software system that combines a range of artificial intelligence techniques for planning, analysis and decision support in the investigation of hazardous crime scenes. It is linked to a new virtual environment model of a critical incident. While our work uses a virtual environment as a testbed for AI tools, virtual environments have been more widely used to train responder personnel in near-realistic yet safe conditions. As noted by Chroust and Aumayr [Chroust and Aumayr2017], virtual reality can support training by allowing simulations of potential incidents, as well as the consequences of various courses of action, in a realistic way. Mossel et al. [Mossel et al.2017] provide an analysis of the state of the art in virtual reality training systems, focusing on CBRN disaster preparedness. The use of virtual worlds such as Second Life and Open Simulator [Cohen et al.2013b, Cohen et al.2013a] has led to improvements in preparation and training for major incident responses. Gautam et al. [Gautam et al.2017] studied the potential of simulation-based training for emergency response teams in the management of CBRNE victims, focusing on the Human Patient Simulator.
CBRNE incident assessment is a critical task which can pose dangers to human life. For this reason, many research projects focus on the use of robots such as Micro Unmanned Aerial Vehicles (MUAV) to carry out remote sensing in such hazardous environments [Marques et al.2017, Baums2017, Daniel et al.2009]. Others include CBRNE mapping for first responders [Jasiobedzki et al.2009] and multi-robot reconnaissance for detection of threats [Schneider et al.2012].

2 Decision Support for Critical Incidents

This work was undertaken as part of a project called ROCSAFE (Remotely Operated CBRNE Scene Assessment and Forensic Examination) [Drury et al.2017]. As described in the following sections, we have implemented a baseline set of decision support systems and developed 3D world models of CBRNE incidents using a physics-based game engine to model Robotic Aerial Vehicles (RAVs).

The operator can issue a command to have the RAVs survey the scene. The RAVs operate as a multi-agent robot swarm to divide up work between them, and relay information from their sensors and cameras to a central hub. There, our Image Analysis module uses a Deep Neural Network (DNN) to detect and identify relevant objects in images taken by RAV cameras. It also uses a DNN to perform pixel-level semantic annotation of the terrain, to support subsequent route-planning for Robotic Ground-based Vehicles (RGVs). Our Probabilistic Reasoning module assesses the likelihood of different threats, as information arrives from the scene commander, survey images and sensor readings. Our Information Retrieval module ranks documentation, using TF-IDF, by relevance to the incident. All interactions are managed by our purpose-built JSON-based communications protocol, which is also supported by real-world RAVs, cameras and sensor systems. This keeps the system loosely coupled, and will support future testing in real-world environments.
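As a rough illustration of the TF-IDF ranking used by the Information Retrieval module, the sketch below scores a toy corpus against a query. The documents, tokenizer and weighting details are simplified stand-ins for illustration only; the actual module (Section 2.6) uses Elastic Search over a CBRNE knowledge base.

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Rank documents by TF-IDF weighted overlap with the query terms."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    # Document frequency: how many documents contain each term.
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for i, toks in enumerate(tokenized):
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            if term in tf:
                # Smoothed inverse document frequency.
                idf = math.log((1 + n) / (1 + df[term])) + 1
                score += (tf[term] / len(toks)) * idf
        scores.append((score, i))
    # Return document indices, highest score first.
    return [i for s, i in sorted(scores, reverse=True)]

docs = [
    "standard operating procedure for radiation source recovery",
    "guidance on chemical spill containment",
    "radiation dosimetry guidance for first responders",
]
ranking = tfidf_rank("radiation source", docs)
```

Documents mentioning both query terms outrank those mentioning only one, and rarer terms ("source") carry more weight than common ones.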

2.1 Contributions

The key contributions of this work are: (1) an integrated system for critical incident decision support, incorporating a range of different AI techniques; (2) an extensible virtual environment model of an incident; and (3) a communications protocol for information exchange between subsystems. These tools are open-source and are being released publicly. This will enable other researchers to integrate their own AI modules (e.g. for image recognition, routing, or probabilistic reasoning) within this overall system, or to model other scenarios and evaluate the existing AI modules on them.

2.2 3D Model of Critical Incident

To facilitate the development and testing of the software system, we have designed and publicly released a virtual environment [Smyth et al.2018a]. It is built with Unreal Engine (UE), a suite of integrated tools for building simulations with photo-realistic visualizations and accurate real-world physics. UE is freely available with full source-code access, is scalable, and supports plugins that allow the integration of RAVs and RGVs into the environment. The operational scenario we chose to model consists of a train carrying radioactive material in a rural setting, shown in Figure 1. We used Microsoft's AirSim [Shah et al.2017] plugin to model the RAVs. AirSim exposes various APIs to allow fine-grained control of RAVs, RGVs and their associated components. We have replicated a number of APIs from real-world RAV and RGV systems to facilitate the application of our AI tools to real-world critical incident use-cases in the future, after testing in the virtual environment.

Figure 1: RAV flight paths for surveying and data collection.

2.3 Communications

We developed a purpose-built JSON-format protocol for all communications between subsystems. There are a relatively small number of messages which are sent at pre-defined intervals, so we use a RESTful API [Richardson et al.2013]. Our communications protocol is designed to support various vehicles with significant autonomy and is flexible enough to integrate with various components, using different standards, protocols and data types. In this demonstration, we concentrate on RAVs. Since decision making may happen within each RAV’s single-board computer, we have also facilitated direct communication between the RAVs.
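The fragment below sketches what one message in a JSON-format protocol of this kind might look like. The field names and schema are illustrative assumptions, not the actual ROCSAFE protocol; the serialized payload would form the body of an HTTP request to a RESTful endpoint.

```python
import json

def make_survey_command(rav_id, corners, altitude_m):
    """Build a hypothetical 'survey region' command for one RAV.

    The schema here is invented for illustration: a message type tag,
    the target vehicle, four bounding GPS points, and a flight altitude.
    """
    return {
        "type": "survey_command",
        "rav_id": rav_id,
        "region": {"corners": corners},  # four (lat, lon) corner points
        "altitude_m": altitude_m,
    }

msg = make_survey_command(
    rav_id="RAV-1",
    corners=[(53.27, -9.05), (53.27, -9.04), (53.26, -9.04), (53.26, -9.05)],
    altitude_m=40,
)
payload = json.dumps(msg)    # body of an HTTP POST to the central hub
decoded = json.loads(payload)
```

Because every subsystem only depends on the JSON schema, a simulated RAV and a real one can be swapped behind the same endpoint, which is what keeps the system loosely coupled.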

2.4 Autonomous Surveying and Image Collection

Our multi-agent system supports autonomous mapping of the virtual environment. This involves discretizing a rectangular region of interest into a set of grid points. At each of these points, the RAV records a number of images along with metadata relating to those images. Four bounding GPS coordinates, which form the corner points of a rectangle, can be passed in through a web-based interface.
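A minimal sketch of the discretization step, assuming an axis-aligned rectangle and a fixed angular spacing (the real system accepts four arbitrary GPS corners from the web interface, and a production grid would correct for the metric distortion of longitude with latitude):

```python
def grid_points(corner_sw, corner_ne, spacing_lat, spacing_lon):
    """Discretize a bounding rectangle into a lattice of GPS points.

    `corner_sw` and `corner_ne` are the (lat, lon) south-west and
    north-east corners of an axis-aligned rectangle.
    """
    lat0, lon0 = corner_sw
    lat1, lon1 = corner_ne
    points = []
    lat = lat0
    while lat <= lat1 + 1e-9:  # epsilon guards float accumulation error
        lon = lon0
        while lon <= lon1 + 1e-9:
            points.append((round(lat, 6), round(lon, 6)))
            lon += spacing_lon
        lat += spacing_lat
    return points

# A 3 x 3 lattice over a small region of interest.
pts = grid_points((53.260, -9.050), (53.262, -9.048), 0.001, 0.001)
```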
Our initial route planning algorithm develops agent routes at a centralized source and distributes the planned routes to each agent in the multi-agent system [Smyth et al.2018b]. Our current implementation uses a greedy algorithm, which generates subsequent points in each agent's path by minimizing the distance the agent needs to travel to an unvisited grid point. Current state-of-the-art multi-agent routing algorithms use hyper-heuristics, which out-perform algorithms that use any individual heuristic [Wang et al.2017]. We intend to integrate this approach with learning algorithms such as Markov Decision Processes [Ulmer et al.2017] in order to optimize the agent routes in a stochastic environment, for example where RAVs can fail and battery usage may not be fully known.
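The greedy construction described above can be sketched as follows. The round-robin turn order and Euclidean distance metric are simplifying assumptions for illustration, not the exact published algorithm [Smyth et al.2018b]:

```python
import math

def greedy_routes(starts, grid):
    """Greedily extend each agent's route to its nearest unvisited point.

    `starts` holds the agents' initial positions; `grid` is the list of
    points to visit. Agents take turns, each claiming the unvisited
    point closest to the end of its current route, until all grid
    points are assigned.
    """
    unvisited = set(grid)
    routes = [[s] for s in starts]
    while unvisited:
        for route in routes:
            if not unvisited:
                break
            here = route[-1]
            nxt = min(unvisited, key=lambda p: math.dist(here, p))
            route.append(nxt)
            unvisited.remove(nxt)
    return routes

# Two agents partition a 3 x 3 grid between them.
grid = [(x, y) for x in range(3) for y in range(3)]
routes = greedy_routes(starts=[(-1, 0), (3, 2)], grid=grid)
```

Each grid point ends up on exactly one agent's route, which is the work-division behaviour the swarm needs; the hyper-heuristic and MDP extensions would replace the nearest-point rule while keeping this structure.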

2.5 Image Processing and Scene Analysis

Our Central Decision Management (CDM) system uses a DNN to identify relevant objects in images taken by the RAV cameras. Specifically, we have fine-tuned the object detection model Mask R-CNN [He et al.2017], using annotated synthetic images that we collected from our virtual scene. This kind of training on a synthetic dataset has been shown to transfer well to real world data: in self-driving cars [Pan et al.2017], pedestrian detection [Hattori et al.2015], 2D-3D alignment [Aubry et al.2014] and object detection [Tian et al.2018].
Mask R-CNN is an extension of Faster R-CNN [Ren et al.2015] and is currently a state-of-the-art object detection DNN: it not only detects and localizes objects with bounding boxes, but also overlays instance segmentation masks that show the contours of the objects within the boxes. The goal is to draw the crime scene investigator's attention to objects of interest within the scene, even when objects overlap; the object labels also serve as an input to the probabilistic reasoning module. We plan to further enhance the performance of this technique by retraining/fine-tuning the network on other relevant datasets, for example DOTA, a large-scale dataset for object detection in aerial images [Xia et al.2017].

2.6 Reasoning and Information Retrieval

To synthesize data and reason about threats over time, we have developed a probabilistic model in BLOG [Milch et al.2007]. Its goal is to estimate the probabilities of different broad categories of threat (chemical, biological, or radiation/nuclear) and specific threat substances, as this knowledge will affect the way that the scene is assessed. For example, initially, a first responder with a hand-held instrument may detect evidence of radiation in some regions of the scene. Subsequent RAV images may show damaged vegetation in those and other regions, which could be caused by radiation or chemical substances. RAVs may be dispatched with radiation sensors to fly low over those regions, subsequently detecting a source in one region. Using keywords that come from sources such as the object detection module, the probabilistic reasoning module, and the crime scene investigators, the CDM retrieves documentation such as standard operating procedures and guidance documents from a knowledge base. It ranks them in order of relevance to the current situation, using Elastic Search and a previously-defined set of CBRNE synonyms. These documents are re-ranked in real-time as new information becomes available.
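The flavour of this reasoning can be illustrated with a single discrete Bayes update. The categories, priors and likelihoods below are invented for illustration; the actual module is a richer BLOG model that handles sequences of heterogeneous observations.

```python
# Prior belief over broad threat categories (invented values).
PRIORS = {"chemical": 0.3, "biological": 0.3, "radiological": 0.3, "none": 0.1}

# P(positive radiation reading | category) -- assumed likelihoods.
P_RAD_READING = {"chemical": 0.05, "biological": 0.05,
                 "radiological": 0.9, "none": 0.01}

def update(belief, likelihood):
    """One Bayes step: multiply belief by the likelihood and renormalize."""
    unnorm = {c: belief[c] * likelihood[c] for c in belief}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}

# A hand-held instrument reports a positive radiation reading:
belief = update(PRIORS, P_RAD_READING)
```

After one positive reading the radiological category dominates the posterior, which is the kind of shift that would trigger dispatching RAVs with radiation sensors over the affected regions.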


This research is funded by the European Union’s Horizon 2020 Programme under grant agreement No. 700264.


  • [Aubry et al.2014] Mathieu Aubry, Daniel Maturana, Alexei A Efros, Bryan C Russell, and Josef Sivic. Seeing 3d chairs: exemplar part-based 2d-3d alignment using a large dataset of cad models. In Conference on Computer Vision and Pattern Recognition, pages 3762–3769, 2014.
  • [Baums2017] A Baums. Response to CBRNE and human-caused accidents by using land and air robots. Automatic Control and Computer Sciences, 51(6):410–416, 2017.
  • [Chroust and Aumayr2017] Gerhard Chroust and Georg Aumayr. Resilience 2.0: Computer-aided disaster management. Journal of Systems Science and Systems Engineering, 26(3):321–335, 2017.
  • [Cohen et al.2013a] Daniel Cohen, Nick Sevdalis, Vishal Patel, Michael Taylor, Henry Lee, Mick Vokes, Mick Heys, David Taylor, Nicola Batrick, and Ara Darzi. Tactical and operational response to major incidents: feasibility and reliability of skills assessment using novel virtual environments. Resuscitation, 84(7):992–998, 2013.
  • [Cohen et al.2013b] Daniel Cohen, Nick Sevdalis, David Taylor, Karen Kerr, Mick Heys, Keith Willett, Nicola Batrick, and Ara Darzi. Emergency preparedness in the 21st century: training and preparation modules in virtual environments. Resuscitation, 84(1):78–84, 2013.
  • [Daniel et al.2009] Kai Daniel, Bjoern Dusza, Andreas Lewandowski, and Christian Wietfeld. AirShield: A system-of-systems MUAV remote sensing architecture for disaster response. In 3rd Annual IEEE Systems Conference, pages 196–200, 2009.
  • [Drury et al.2017] Brett M Drury, Nazli B Karimi, and Michael G Madden. ROCSAFE: Remote forensics for high risk incidents. In First International Workshop on Artificial Intelligence in Security. IJCAI, 2017.
  • [Gautam et al.2017] Sima Gautam, Navneet Sharma, Rakesh Kumar Sharma, and Mitra Basu. Human patient simulator based CBRN casualty management training. Defence Life Science Journal, 2(1):80–84, 2017.
  • [Hattori et al.2015] Hironori Hattori, Vishnu Naresh Boddeti, Kris Kitani, and Takeo Kanade. Learning scene-specific pedestrian detectors without real data. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3819–3827, 2015.
  • [He et al.2017] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In International Conference on Computer Vision (ICCV), pages 2980–2988, 2017.
  • [Jasiobedzki et al.2009] Piotr Jasiobedzki, Ho-Kong Ng, Michel Bondy, and CH McDiarmid. C2SM: a mobile system for detecting and 3d mapping of chemical, radiological, and nuclear contamination. In Sensors, and Command, Control, Communications, and Intelligence (C3I), 2009.
  • [Marques et al.2017] Mario Monteiro Marques, Rodolfo Santos Carapau, Alexandre Valério Rodrigues, V Lobo, Júlio Gouveia-Carvalho, Wilson Antunes, Tiago Gonçalves, Filipe Duarte, and Bernardino Verissimo. GammaEx project: A solution for CBRN remote sensing using unmanned aerial vehicles in maritime environments. In OCEANS–Anchorage, pages 1–6. IEEE, 2017.
  • [Milch et al.2007] Brian Milch, Bhaskara Marthi, Stuart Russell, David Sontag, Daniel L Ong, and Andrey Kolobov. BLOG: Probabilistic models with unknown objects. Statistical relational learning, page 373, 2007.
  • [Mossel et al.2017] Annette Mossel, Andreas Peer, Johannes Göllner, and Hannes Kaufmann. Requirements analysis on a virtual reality training system for CBRN crisis preparedness. In 59th Annual Meeting of the ISSS, volume 1, pages 928–947, 2017.
  • [Pan et al.2017] Xinlei Pan, Yurong You, Ziyan Wang, and Cewu Lu. Virtual to real reinforcement learning for autonomous driving. arXiv:1704.03952, 2017.
  • [Ren et al.2015] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Neural Information Processing Systems, pages 91–99, 2015.
  • [Richardson et al.2013] Leonard Richardson, Mike Amundsen, and Sam Ruby. RESTful Web APIs. O’Reilly Media, Inc., 2013.
  • [Schneider et al.2012] Frank E Schneider, Jochen Welle, Dennis Wildermuth, and Markus Ducke. Unmanned multi-robot CBRNE reconnaissance with mobile manipulation: System description and technical validation. In 13th International Carpathian Control Conference (ICCC), pages 637–642. IEEE, 2012.
  • [Shah et al.2017] Shital Shah, Debadeepta Dey, Chris Lovett, and Ashish Kapoor. AirSim: High-fidelity visual and physical simulation for autonomous vehicles. In Field and Service Robotics, pages 621–635, 2017.
  • [Smyth et al.2018a] David L Smyth, Frank G Glavin, and Michael G Madden. UE4 Virtual Environment: Rural Rail Radiation Scenario. https://github.com/ROCSAFE/CBRNeVirtualEnvMultiRobot/releases, 2018.
  • [Smyth et al.2018b] David L Smyth, Frank G Glavin, and Michael G Madden. Using a game engine to simulate critical incidents and data collection by autonomous drones. In IEEE Games, Entertainment and Media, 2018.
  • [Tian et al.2018] Yonglin Tian, Xuan Li, Kunfeng Wang, and Fei-Yue Wang. Training and testing object detectors with virtual images. IEEE/CAA Journal of Automatica Sinica, 5(2):539–546, 2018.
  • [Ulmer et al.2017] Marlin W Ulmer, Justin C Goodson, Dirk C Mattfeld, and Barrett W Thomas. Route-based markov decision processes for dynamic vehicle routing problems. Technical report, Braunschweig, 2017.
  • [Wang et al.2017] Yue Wang, Min-Xia Zhang, and Yu-Jun Zheng. A hyper-heuristic method for UAV search planning. In Advances in Swarm Intelligence, pages 454–464, 2017.
  • [Xia et al.2017] Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. DOTA: A large-scale dataset for object detection in aerial images. arXiv:1711.10398, 2017.