Towards Explainability for a Civilian UAV Fleet Management using an Agent-based Approach

09/22/2019 ∙ by Yazan Mualla, et al.

This paper presents an initial design concept and specification of a civilian Unmanned Aerial Vehicle (UAV) management simulation system that focuses on explainability for the human-in-the-loop control of semi-autonomous UAVs. The goal of the system is to facilitate operator intervention in critical scenarios (e.g. to avoid safety issues or financial risks). Explainability is supported via user-friendly abstractions on Belief-Desire-Intention agents. To evaluate the effectiveness of the system, a human-computer interaction study is proposed.


1 Introduction

With the rapid increase of the world’s urban population, the infrastructure of the constantly expanding metropolitan areas is subject to immense pressure. To meet the growing demand for sustainable urban environments and improve the quality of life for citizens, municipalities will increasingly rely on novel transport solutions. In particular, Unmanned Aerial Vehicles (UAVs), commonly known as drones, are expected to have a crucial role in future smart cities thanks to relevant features such as autonomy, flexibility, mobility, adaptive altitude, and small dimensions [mualla2019between]. Therefore, over the past few years, an increasing number of public and private research laboratories have been working on civilian, small, and human‐friendly drones.

Still, several concerns exist regarding the possible consequences of introducing UAVs in crowded urban areas, especially regarding people’s safety, e.g. if a mechanical failure causes a crash. To guarantee that it is safe for UAVs to fly close to human crowds, and to reduce costs, different scenarios must be modeled and tested. Yet, regulations restrict the use of UAVs in cities. Additionally, performing tests with real UAVs requires access to expensive hardware, and field tests usually consume a considerable amount of time and require trained, skilled people to pilot and maintain the UAVs. Furthermore, in the field it may be hard to reproduce the same scenario several times [lorig2015measuring]. In this context, simulation frameworks that transfer real-world scenarios into executable models, i.e. that simulate UAV activities in a digital environment, are highly relevant [mualla2019agent, mualla2018comparison]. However, simulation frameworks have their own drawbacks; in particular, it is impossible to fully reproduce the real environment.

Intelligent software agents have been established as a suitable technique for implementing autonomous control and decision making in computer systems [wooldridge1995intelligent]. They are also used in various simulation applications in general [najjar2017aquaman, mualla2018agentoil, najjar2017aquamanWI] and in semi-autonomous systems in different domains [muallaOilCPS19] in particular. The use of Agent-Based Simulation (ABS) frameworks for UAVs is gaining interest in complex civilian application scenarios where coordination and cooperation are necessary [mualla2018comparison]. ABS models a set of interacting intelligent entities that reflect, within an artificial environment, the relationships of the real world [wooldridge1995intelligent]. Because of operational costs, safety concerns, and legal regulations, ABS is commonly used to implement models and conduct tests, which has resulted in a range of research works addressing ABS for UAVs [mualla2019agent]. These results make ABS a natural step towards better understanding and managing the complexity of today’s business and social systems.

However, as with any other robot, communication between UAVs and humans is a challenge, since the human user is not by default capable of understanding the robot’s State of Mind (SoM) [hellstrom2018understandable]. This problem is even more accentuated in the case of UAVs since, as confirmed by recent studies in the literature [bainbridge2008effect, hastie2017trust], remote robots tend to instill less trust than co-located ones. For this reason, working with remote robots is a more challenging task, especially in high-stakes scenarios such as flying UAVs in an urban environment. To overcome this challenge, this paper relies on recent advances in the domain of eXplainable Artificial Intelligence (XAI) [guidotti2019survey, XAISLR, calvaresi2019XAI] to trace the decisions of the agents and enable the validation of their behaviors when they are applied to a fleet of civilian UAVs interacting with other objects in the air or in the smart city. In this paper, we present a conceptual design of a MultiAgent System (MAS) that simulates the civilian UAV fleet and provides tools for building the explainability of the system.

The rest of this paper is structured as follows. First, an overview of related work is given. Then, the explainable MAS for aerial transportation is described and a use case in a smart city is presented. Finally, conclusions and future research directions are drawn.

2 Related Work

2.1 The rise of XAI

In the last few years, work on XAI has been gaining momentum both in research and industry. Primarily, this surge is explained by the success of black-box machine learning mechanisms whose inner workings are incomprehensible to human users [gunning2017explainable]. XAI therefore aims to “open” the black box and explain the sometimes intriguing results of its mechanisms, e.g. a Deep Neural Network (DNN) mistakenly classifying a tomato as a dog [szegedy2013intriguing]. In contrast to this data-driven explainability [XAISLR], more recently this tendency has been extended to explaining the complex behavior of goal-driven systems such as robots and agents [XAISLR, hellstrom2018understandable], since: (i) as shown in the literature, humans tend to assume that these agents and robots have their own SoM [hellstrom2018understandable], and in the absence of a proper explanation the user will come up with an explanation that might be flawed or erroneous; and (ii) these agents and robots are expected to be omnipresent in the daily lives of their users (e.g. social assistive robots and virtual assistants).

2.2 Explainability of UAVs

Recently, both data-driven and goal-driven explainability have been introduced for UAVs. In the former case, XAI aims to interpret the opaque machine learning mechanisms used by UAVs to analyze the input originating from their rich data streams. One ongoing work investigates how to interpret the decisions of convolutional neural networks that take aviation-related images as inputs [dolph2018towards]; explainability is explored by reviewing three feature visualization methods in a layer-by-layer approach. In the latter case, explainability aims to make the autonomous behavior of the UAV understandable. One work is based on fuzzy logic: the explainable model is presented on a visual platform in the form of if-then rules derived from the fuzzy inference model [keneni2019evolving]. Our approach belongs to the goal-driven case and differs from other related work in that it relies on a decentralized solution using MAS. This choice is supported by the fact that the management of a UAV fleet must consider the physical distance between UAVs and other actors in the system. Additionally, autonomous agents represent, in our opinion, an adequate implementation of the autonomy of UAVs. The Belief-Desire-Intention (BDI) model [bratman1987intention] is chosen to support explainability in the UAV fleet, as detailed in the next section.

3 Explainable MAS for Aerial Transportation

Agent architectures, like the BDI model, are frequently applied to equip UAVs with greater autonomy. By designing proactive agents that control UAVs, the latter become capable of autonomously managing their actions and behavior to reach their goals [arokiasami2016interoperable].

As shown by Padgham et al. [Padgham2011IntegratingBR], BDI models can facilitate explainable agency, as they provide a well-structured snapshot of agent internals at any point in time. The BDI paradigm is frequently applied in ABS. Adam and Gaudou [adam2016bdi] present an extensive analysis and evaluation of approaches for integrating BDI models into ABS and highlight the previously mentioned benefits of BDI models as a way to implement descriptive agents that use richer and, from a human perspective, more interpretable abstractions than purely reactive agents.
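To illustrate the kind of snapshot we rely on, the following is a minimal sketch (in Python; all names are our own illustrative choices, not an existing API or the system's actual implementation) of how a BDI agent could freeze its internals so they can later be rendered as an explanation:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class BDISnapshot:
    """A structured view of an agent's internals at one point in time."""
    beliefs: Dict[str, Any]   # what the agent currently holds true
    desires: List[str]        # goals the agent would like to achieve
    intentions: List[str]     # goals the agent has committed to
    plans: List[str]          # actions selected to pursue the intentions

@dataclass
class BDIAgent:
    beliefs: Dict[str, Any] = field(default_factory=dict)
    desires: List[str] = field(default_factory=list)
    intentions: List[str] = field(default_factory=list)
    plans: List[str] = field(default_factory=list)

    def snapshot(self) -> BDISnapshot:
        # Copy the current mental state so it can be sent to an
        # operator assistant and turned into an explanation.
        return BDISnapshot(dict(self.beliefs), list(self.desires),
                           list(self.intentions), list(self.plans))
```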

Figure 1 outlines the contextual model of our work. In the bottom left are the technical requirements and infrastructure of a smart city, into which the aerial transportation management system (middle left) is embedded. At the top, reactive agents guarantee autonomy when performing the normal tasks of an intelligent transport system, while the need for explainability is addressed by BDI agents, which allow for human-in-the-loop control (top right). The UAVs in the fleet explain their autonomous behavior and decisions to the human, along with any deviation from the planned mission.

Figure 1: Contextual Model

Beliefs      package_1: { to: Townhouse_27a }
             package_2: { to: School_1 }
             Townhouse_27a: { time: 20min, charger: no }
             School_1: { time: 20min, charger: yes }
             batteryTime: 22min
Desires      deliver: package_1
             deliver: package_2
Intentions   deliver: package_2
Plans        moveTo(School_1)

Table 1: A high-level perspective of an agent’s BDI snapshot.

Table 1 shows a potential example of such a snapshot. A plausible reading: with a battery time (22 min) that covers only one 20-minute delivery leg, the agent desires both deliveries but commits only to package_2, whose destination (School_1) offers a charger.
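Continuing the sketch above, the snippet below instantiates the agent of Table 1 and derives an operator-facing justification from its snapshot. The selection rule it verbalizes (prefer the delivery whose destination offers a charger when the battery covers only one leg) is a hypothetical illustration of the reasoning such an explanation could expose, not the system's actual decision procedure:

```python
def explain_intention(snapshot: BDISnapshot) -> str:
    """Derive a human-readable justification for the committed intention.

    Hypothetical rule: with battery for only one 20 min leg, commit to
    the delivery whose destination has a charger, so the drone can recharge.
    """
    battery = snapshot.beliefs["batteryTime"]
    chosen = snapshot.intentions[0]             # e.g. "deliver: package_2"
    package = chosen.split(": ")[1]
    dest = snapshot.beliefs[package]["to"]      # e.g. "School_1"
    dest_info = snapshot.beliefs[dest]
    dropped = [d for d in snapshot.desires if d != chosen]
    return (f"Committed to '{chosen}' because {dest} is reachable in "
            f"{dest_info['time']} with {battery} of battery and has a "
            f"charger ({dest_info['charger']}); postponed {dropped} since "
            f"those destinations offer no recharge option.")

agent = BDIAgent(
    beliefs={
        "package_1": {"to": "Townhouse_27a"},
        "package_2": {"to": "School_1"},
        "Townhouse_27a": {"time": "20min", "charger": "no"},
        "School_1": {"time": "20min", "charger": "yes"},
        "batteryTime": "22min",
    },
    desires=["deliver: package_1", "deliver: package_2"],
    intentions=["deliver: package_2"],
    plans=["moveTo(School_1)"],
)
print(explain_intention(agent.snapshot()))
```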

4 DroneAgent-Delivery: a use case of human or package transportation in a smart city

The use case investigates the role of XAI in the communication between drones and humans in the context of human or package transportation in a smart city.

In this scenario, one operator is in charge of several drones that provide transportation services to clients. These drones autonomously conduct tasks and make decisions when needed. Additionally, they need to communicate with each other and may cooperate to complete a specific task. The drones explain to the Operator Assistant Agent (OAA) the progress of the mission, including unexpected events, along with the decisions they have made.

Figure 2: Interaction of Actors in the System

Figure 2 shows the interaction between the actors in the proposed use case. In the following, the steps of the use case are detailed:

  1. When a client submits a request for transporting a package/passenger, a notification is sent to the OAA.

  2. The OAA forwards the request to all drones; all drones are connected with each other and with the OAA through an assumed reliable network.

  3. Drones within a specific radius of the package/passenger coordinate to complete the transportation mission. This decentralized coordination (performed without the approval of the operator) can occur for several reasons (a sketch of one possible scheme is given after this list):

    • Deciding which drone will deliver the package/passenger according to constraints: actual distance to the package/passenger, battery capacity, packages/passengers already on board, having a mission with a nearby destination, etc.

    • The load is heavy, and it needs several drones working as a swarm to lift it.

    • Several drones need to cooperate to deliver the package/passenger in a relay, where each drone carries the load part of the way and then hands it over to another drone.

  4. Every drone explains its own arguments to the OAA. In the end, the result of the coordination discussion is sent to the OAA, which shows it, with or without filtering, to the operator.

  5. If the package/passenger is picked up by a competitor’s drone (an external event), there are two situations:

    • The client notifies the OAA that the package/passenger has been picked up, and the assistant agent tells the assigned drone to abort the mission;

    • The client does not send a notification that the package/passenger has been picked up (out of selfishness or laziness). In this situation, the drone flies to the location, observes the absence of the package/passenger, and needs to explain this to the OAA.

  6. The explanation needed from a drone generally concerns the mission progress, its decisions, and its status, e.g. the drone needs charging, which is why it ignores a nearby package/passenger. Other explanations concern unexpected events, e.g. the drone arrives at the package location and sees that it is damaged or does not match the description (perhaps heavier than declared).

  7. The OAA may filter the explanations received from the drones into a summary of the most important ones, to avoid overwhelming the operator with too many details (the sketch after this list illustrates one possible filtering rule).

  8. At any time, the operator can look either at the full explanations from all the drones or only at the results filtered by the OAA.
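To make steps 3, 4, and 7 concrete, here is a minimal sketch of one possible coordination and filtering scheme. The auction-style bidding rule, the weighting of constraints, and all names are hypothetical illustrations under our own assumptions, not the system's actual protocol:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DroneState:
    name: str
    distance_km: float   # distance to the pickup point
    battery_min: float   # remaining flight time in minutes
    carrying: int        # packages already on board

def coordinate(fleet: List[DroneState],
               required_min: float) -> Tuple[Optional[str], List[str]]:
    """Each drone argues for or against taking the mission; the cheapest
    feasible bid wins. Returns the winner and every drone's argument."""
    arguments, bids = [], []
    for d in fleet:
        if d.battery_min < required_min:
            arguments.append(f"{d.name}: declined (battery {d.battery_min} min "
                             f"< {required_min} min required)")
            continue
        cost = d.distance_km + 2.0 * d.carrying  # hypothetical weighting
        arguments.append(f"{d.name}: bid {cost:.1f} (distance {d.distance_km} km, "
                         f"{d.carrying} package(s) on board)")
        bids.append((cost, d.name))
    winner = min(bids)[1] if bids else None
    return winner, arguments

def filter_for_operator(winner: Optional[str],
                        arguments: List[str]) -> List[str]:
    # The OAA forwards only refusals and the final assignment, so the
    # operator is not overwhelmed by every drone's full argument (step 7).
    summary = [a for a in arguments if "declined" in a]
    summary.append(f"assigned: {winner}" if winner else "no drone available")
    return summary

fleet = [DroneState("drone_1", 1.2, 18.0, 0),
         DroneState("drone_2", 0.4, 9.0, 1),
         DroneState("drone_3", 2.0, 30.0, 2)]
winner, log = coordinate(fleet, required_min=12.0)
print(filter_for_operator(winner, log))  # drone_1 wins; drone_2 declined
```

The operator's toggle in step 8 then amounts to choosing between the full `log` and the output of `filter_for_operator`.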

4.1 Evaluation of the autonomy and explainability

For the evaluation, an ABS is needed to simulate an application of drone autonomy and explainability. The evaluation is performed as a human-computer interaction study: participants try the simulation and fill out a questionnaire to discover whether a human user (acting as the operator) can understand the explanations provided by the UAVs. The questionnaire will be built specifically according to the XAI metrics provided in the literature [hoffman2018metrics].

The evaluation will have three levels:

  • First level: testing the explainability aspect. The experiment divides the participants into three groups; the first, second, and third groups try the simulation without any explanation, with full explanations, and with filtered explanations, respectively.

  • Second level: testing the shared-autonomy aspect. We ask the participants to perform a tedious task, such as filling in a spreadsheet, while the simulation runs in one of two modes (first mode: the operator looks at the full explanations from all drones; second mode: the operator looks only at the explanations filtered by the OAA). We then measure in which mode the participants completed a higher percentage of the tedious task assigned to them.

  • Third level: collecting the results and statistics from the simulation, e.g. how many packages were delivered, whether any drone delivered no package, the total distance traveled by all drones, etc. The operator uses these results to change the initial parameters of the simulation for a second run and check whether the results improve.

5 Conclusion and future work

This paper presented a concept and a specification for an agent-based civilian UAV fleet management approach with a focus on explainability. The presented work is an initial step towards the goal of providing agent-based tools that allow for a human-in-the-loop approach supporting semi-autonomous UAV fleet management. In particular, the following work is of importance:

Engineering research should be conducted to design an explainable multiagent management system whose architecture is sufficiently generic to be applied to use cases beyond UAV fleet management.

Empirical assessment should evaluate the effectiveness of the approach in human-computer interaction studies.

6 Acknowledgements

This work is supported by the Regional Council of Bourgogne Franche-Comté (RBFC, France) within the project UrbanFly 20174-06234/06242. The first author thanks Cedric Paquet for his remarks regarding the evaluation.

References