
Apprenticeship Bootstrapping Via Deep Learning with a Safety Net for UAV-UGV Interaction

by Hung Nguyen, et al.

In apprenticeship learning (AL), agents learn by watching or acquiring human demonstrations of some tasks of interest. However, the lack of human demonstrations for novel tasks, where a human expert may not yet exist or where it is too expensive and/or time consuming to acquire demonstrations, motivated a new algorithm: apprenticeship bootstrapping (ABS). The basic idea is to learn from demonstrations of sub-tasks and then autonomously bootstrap a model on the main, more complex, task. The original ABS used inverse reinforcement learning (ABS-IRL). However, that approach is not suitable for continuous action spaces. In this paper, we propose ABS via deep learning (ABS-DL). It is first validated in a simulation environment on an aerial and ground coordination scenario, where an Unmanned Aerial Vehicle (UAV) is required to maintain three Unmanned Ground Vehicles (UGVs) within the field of view of the UAV's camera (FoV). Moving a machine learning algorithm from a simulation environment to an actual physical platform is challenging because `mistakes' made by the algorithm while learning could damage the platform. We therefore take this extra step and test the algorithm in a physical environment. We propose a safety net as a protection layer to ensure that the autonomy of the algorithm in learning does not compromise the safety of the platform. The tests of ABS-DL in the real environment guarantee a damage-free, collision-avoidance behaviour of the autonomous bodies. The results show that the performance of the proposed approach is comparable to that of a human, and competitive with the traditional approach using expert demonstrations performed on the composite task. The proposed safety-net approach demonstrates its advantages by enabling the UAV to operate more safely under the control of the ABS-DL algorithm.






Introduction

Designing a reward function for a reinforcement learning agent can be a cumbersome task. Using human experts to demonstrate a task to an artificial agent can both speed up the learning process and reduce the burden of designing reward functions by hand. However, even this solution is not as simple as it may sound.

In recent surveys [Argall et al.2009], [Billing and Hellström2010], [Hussein et al.2017], the main challenges emanate from the problem of how to transfer human skills to agents or robots through demonstrations. When designing a new task for an autonomous system, particularly in complex situations or tasks, there is no guarantee that a human expert exists or, if so, that he/she is available to create a dataset for the robot.

The previous challenge called for designing a new learning scheme, called Apprenticeship Bootstrapping (ABS) for learning a composite task using human demonstrations of sub-tasks [Nguyen et al.2018, Nguyen, Garratt, and Abbass2018]. An ABS via inverse reinforcement learning algorithm (ABS-IRL) has shown success in overcoming that challenge. However, it is not suitable for continuous action spaces. This motivated us to propose a new ABS approach via deep learning, called ABS-DL, which is described in the next section.

The validation task is designed to mimic the simulated task in previous work on ABS, which was a ground-air interaction scenario [Nguyen et al.2017, Nguyen et al.2018, Nguyen, Garratt, and Abbass2018]. The aerial and ground coordination task is challenging for a human operator controlling the UAV. The task is therefore suitable for evaluating ABS-DL: it is decomposed into sub-tasks, the proposed ABS-DL algorithm learns from those sub-tasks, and the result is then applied to the composite task.

However, when applying our ABS-DL algorithm in physical environments, it is challenging to overcome the safety concerns, especially when no human is involved in the operation. Therefore, in this paper, we propose a primary safety-net approach to limit the UAV behaviour produced by our ABS-DL algorithm.

We first present previous work on safety nets. This is then followed by a description of the proposed ABS-DL algorithm. The scenario used for evaluating the algorithm is then presented, followed by experiments in both the simulated and physical environments and results. Conclusions are then drawn.

Safety Nets for Learning Agents

When we apply learning algorithms in real-world operations, we cannot afford to overlook safety issues such as damage to humans and systems in the environment caused by errors of the algorithm. Most systems in the academic literature on autonomous systems, unmanned vehicles, and human-robot interaction rely entirely on the output generated by optimal control or machine learning algorithms without safety nets, which limits the applicability of those methods in practice [Chaulwar, Botsch, and Utschick2017, Geng et al.2018, Zhan et al.2017]. Recently, many researchers have started to include constraints that limit the action or behaviour produced by the autonomy to within a safe zone, so that tests of the models in real environments can guarantee damage-free, collision-avoidance behaviour. These are frequently called safety nets, safety margins, or safety constraints. While there are subtle differences in meaning, we prefer the term safety nets, as it represents the overall architecture and system that contains the safety constraints and safety margins.

There are three types of safety nets that have been used in recent studies to limit the action or to guide the learning, including internal hard constraints, internal soft constraints, and external intervention protocols.

  • Internal hard constraints are constraints built into the methodology, outside the algorithms themselves, that provide one or more safety margins using rules to limit the output actions. These constraints appear in a very simple rule-based form in many path-planning and UAV applications. For instance, the hard constraints can be defined as the minimum distance to obstacles and the maximum velocity of the vehicles [Chae, Lee, and Yi2017, Chen et al.2017, Miraglia and Hook2017]. In [Raineri, Perri, and Bianco2017], small and large safety margins have been introduced to add extra layers of safety before the planning algorithms produce collision-free trajectories. In some circumstances, the internal hard constraints take the form of predefined parameters which are later used to compute dynamic safety margins [Mayer, Sonntag, and Sawodny2017, Suh, Chae, and Yi2018].

  • Internal soft constraints take forms of additional learning algorithms to compute the dynamic safety margins. Instead of defining a fixed constrained distance or kinematic-related parameters to deduce safety margins for collision avoidance, extra predictive models might be used to fuse the environmental and user-related information to compute dynamic safety constraints that regularize the output of the algorithms [Arbabzadeh and Jafari2018, Hubschneider et al.2017].

  • External intervention protocols are the ultimate safety nets, allowing humans to intervene manually to avoid dangerous situations. The autonomous systems can be equipped with an active safety mode that grants the human operators the right to override and make appropriate decisions when the autonomy lacks the capability to handle the situation [Khan2017, Punzo et al.2018].

Our experiments are performed in both simulation and physical environments, each consisting of a UAV and multiple UGVs. To avoid any damage to the UAV and UGVs in the physical environment, which is not a significant issue when training in simulation, we adopt internal hard constraints and external intervention protocols as safety nets for our tests. Details of those safety nets are discussed below.

Safety nets for the UAV and the UGV

In this paper, to achieve safe experimental conditions, we introduce a double-layer safety net comprising motion hard constraints and external intervention protocols. Both the constraints and the protocols are used in our physical experiment. The safety net demonstrates its benefits by allowing our ABS-DL algorithm to be tested in the physical environment without damage to the UAV, the UGVs, or the surrounding environment and obstacles.

Let $o$ be an obstacle in a set of fixed obstacles $O$. Internal hard constraints are enforced on the future positions of the UAV and UGVs according to the following equation:

$$
C(x, y) = \begin{cases}
1, & \text{if } x^{l}_{o} - \delta \le x \le x^{r}_{o} + \delta \ \text{and}\ y^{low}_{o} - \delta \le y \le y^{up}_{o} + \delta, \ o \in O \\
0, & \text{otherwise}
\end{cases}
\tag{1}
$$

where $x^{l}_{o}$ and $x^{r}_{o}$ denote the x-coordinates of obstacle $o$'s left and right edges, respectively, and $y^{up}_{o}$ and $y^{low}_{o}$ denote the y-coordinates of obstacle $o$'s upper and lower edges. $C(x, y) = 1$ indicates an unsafe position of the UAV or UGV relative to an obstacle, which activates the hover action of the UAV or the immobilization of the UGV. $\delta$ determines the thickness of the safety margins.

There is also a margin of a given thickness at the four boundaries of the testing area. The distance between the UAV or any UGV and each boundary of the environment, or any edge of an obstacle, is estimated from sensor data. If the safety margin is predicted to be crossed by the next action of any vehicle, the violating vehicle is forced to hover at a point, in the case of the UAV, or to stop moving, in the case of a UGV, to guarantee a collision-free trajectory (with respect to the environment boundary). The vehicles then wait for the next non-violating action or a command from the human operator.
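The margin check and fallback behaviour described above can be sketched as follows. The axis-aligned obstacle representation, the margin value, and the hover/stop commands here are illustrative assumptions for the sketch, not the authors' implementation.

```python
# Sketch of the internal hard constraint: an axis-aligned obstacle inflated
# by a safety margin delta; a vehicle whose predicted next position falls
# inside the inflated box is given a fallback action (hover for the UAV,
# stop for a UGV) instead of the proposed action.

from dataclasses import dataclass

@dataclass
class Obstacle:
    x_left: float   # x-coordinate of the obstacle's left edge
    x_right: float  # x-coordinate of the obstacle's right edge
    y_low: float    # y-coordinate of the obstacle's lower edge
    y_up: float     # y-coordinate of the obstacle's upper edge

def violates_margin(x, y, obstacles, delta):
    """Return True if (x, y) lies within delta of any obstacle."""
    return any(
        o.x_left - delta <= x <= o.x_right + delta
        and o.y_low - delta <= y <= o.y_up + delta
        for o in obstacles
    )

def safe_action(next_pos, obstacles, delta, proposed_action, fallback_action):
    """Replace the proposed action with a fallback (hover/stop) whenever the
    predicted next position crosses a safety margin."""
    x, y = next_pos
    if violates_margin(x, y, obstacles, delta):
        return fallback_action  # hover (UAV) or immobilize (UGV)
    return proposed_action
```

An analogous check against the rectangle of the test area implements the four boundary margins.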

Another layer of the safety net is the manual mode that can be selected by the human operator to override and control the UAV and UGV in hazardous situations when the human notices any issue in taking off, landing, or any risk of collisions.

Apprenticeship Bootstrapping via Deep Learning for Robotics

A motion tracking system generally produces the desired movement via the robot's motion capability on the horizontal and vertical axes over time. With this technique, the corresponding robot is driven to follow its own predefined trajectory until it reaches the target point. Then, the formation pattern between robots is generated and maintained during the transient process [Wenzel, Masselli, and Zell2011, Yu et al.2015]. However, developing a robot motion planner is not an easy task, since the problem involves many complicated, intertwined steps. Firstly, system model identification for the active UAV and UGV systems must be considered, based on observing input-output signals from experimental data. This dynamic analysis of the given systems provides controller designers with a better understanding of the system behaviour, notably the cause-effect relationships [Koszewnik2014, Bouabdallah and Siegwart2007, Phan and Liu2008]. After a proper model is determined, linear and angular velocity tracking control diagrams on the longitudinal and vertical axes are designed and implemented in each robot, with the aim of stabilizing the robot's velocity with PID controllers throughout its movement.

In this paper, to reduce the complexity of planning motions between the UAV and UGVs, we develop an artificial intelligence (AI) controller for the UAV. Additionally, we integrate our proposed safety nets into the control of the UAV agent in this task. This integration allows the UAV to avoid unexpected behaviours produced by the agent.

One approach to design the AI controller is to use human experts. However, in practice, a human expert may not be available because the tasks are new or it is expensive to access someone with the required skills to perform the task. By decomposing the skill into sub-skills that require less skilled humans, we can bootstrap the higher skills from these building blocks. This has been the primary motivation for ABS-DL.

The sub-skills represent a decomposition of the action space. Not all actions are needed for a sub-skill. It may also involve a decomposition of the state space since sub-skills are associated with simpler contexts that represent partial representations of the original context. Below, we will explain the above formally.

Define $S$ and $A$ to be the original state and action spaces of the complex task, respectively. Here, $S_i$ and $A_i$ represent the sub-state and sub-action spaces. Suppose that the composite task is divided into $N$ sub-tasks.

Let $D = \{D_1, \dots, D_N\}$ be a set of demonstrations of all sub-tasks, where $D_i$ is a set of sub-task demonstrations. Each set of sub-task demonstrations, $D_i$, is comprised of state-action pairs $(s, a)$. To form the set $D_i$, the expert is required to perform actions in the sub-action space $A_i$ within the sub-state space $S_i$; $S_i$ is one of the sub-state spaces of the whole complex state space $S$, and $A_i$ is one of the sub-action spaces of the whole complex action space $A$. It is important to emphasize that the sub-tasks are orthogonal.

In this learning approach, it is assumed that the number of dimensions of the states in all sub-state spaces is identical, and the composite action space is decomposed into different sub-action spaces. While performing the sub-task, the expert is required to focus on only the dimensions relevant to this sub-task, and use the corresponding sub-action space.

To create a composite set from the sub-task demonstrations, all sub-state spaces, which have an identical number of dimensions, are fused in a straightforward manner, and the primitive actions of the sub-action spaces are combined to produce a sufficient action space similar to the composite action space of the complex task.

A deep network is then trained on the composite set. States of the sub-state spaces are the inputs to the network, and the sufficient action space covering all possible primitive actions is the network's output. The high-level algorithmic description of ABS-DL is shown in Algorithm 1.

Input: Sub-task demonstrations $D = \{D_1, \dots, D_N\}$; $f_s$ – the function fusing a sub-state into a composite state; $f_a$ – the function fusing a sub-action into a composite action.
Output: A trained DNN model outputting composite actions.
1: Initialize an empty composite set.
2: Initialize a DNN model.
3: for each sub-task demonstration $(s, a)$ in $D$ do
4:    $s_c \leftarrow f_s(s)$, $a_c \leftarrow f_a(a)$
5:    Add the composite demonstration $(s_c, a_c)$ to the composite set.
6: end for
7: Train the DNN model using the composite set.
Algorithm 1: Apprenticeship Bootstrapping (ABS) via Deep Learning.
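As a concrete illustration of the composite-set construction in Algorithm 1, the sketch below keeps the (identically dimensioned) sub-states and embeds each sub-action into the composite action space by placing it at its sub-task's action indices and zero-filling the rest. The zero-padding choice of the fusion functions $f_s$ and $f_a$ is our assumption for illustration, not necessarily the authors' exact functions.

```python
# Illustrative sketch of Algorithm 1 (ABS): fuse sub-task demonstrations
# into a composite training set. The sub-states already share the full
# state dimensionality, so fuse_state is the identity; each sub-action is
# embedded into the composite action space by zero-padding (an assumed
# choice of f_s and f_a).

import numpy as np

def fuse_state(sub_state):
    """f_s: sub-states already have the composite dimensionality."""
    return np.asarray(sub_state, dtype=float)

def fuse_action(sub_action, action_indices, composite_dim):
    """f_a: embed a sub-action at its indices in the composite action space."""
    composite = np.zeros(composite_dim)
    composite[action_indices] = sub_action
    return composite

def build_composite_set(sub_demos, composite_action_dim):
    """sub_demos: list of (demonstrations, action_indices) per sub-task,
    where demonstrations is a list of (state, sub_action) pairs."""
    composite_set = []
    for demos, idx in sub_demos:
        for state, sub_action in demos:
            composite_set.append(
                (fuse_state(state),
                 fuse_action(sub_action, idx, composite_action_dim))
            )
    return composite_set
```

The resulting composite set is then used to train the DNN, as in the final step of Algorithm 1.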

Experimental Apparatus

UAV/UGVs Coordination Task

In this paper, the aerial-ground coordination task is identical to that in our previous research [Nguyen et al.2017, Nguyen, Garratt, and Abbass2018, Nguyen et al.2018]. There are four manoeuvres: the Fixed-Altitude, Climb, Descend, and Combined manoeuvres. The first three are primitive manoeuvres, and the fourth is a composite and more complex one requiring switching among the three primitive ones. The six basic actions for controlling the UAV are: roll (move left and move right), pitch (move forward and move backward), and altitude (climb and descend). The formations of the UGVs in each of the three primitive manoeuvres are shown in Figure 1. In addition to the UGV formation control approach, an obstacle avoidance algorithm is also applied to the UGVs to avoid unexpected obstacles and to observe the UAV's behavioural response while the UGVs' predefined formation is shrunk or extended. This paper focuses on designing the autonomous control of the UAV; the obstacle avoidance algorithm for the UGVs will therefore be discussed in depth in another paper.

Figure 1: Formation of UGVs in three primitive scenarios.

The UAV is required to maintain three UGVs within its field of view (FoV) without missing any or creating a much larger FoV than needed to accommodate the manifold created by the UGVs. Then, the UAV’s task is decomposed into two objectives. Firstly, it needs to minimise the distance between its own centre of mass and that of the UGVs within its FoV. Secondly, it has to minimise the difference between the radius of its camera’s FoV and the ideal one required which is defined as the radius of the smallest circle to encapsulate the manifold formed by the UGVs. The Pinhole camera model [Sturm2014] is used to determine the central points and radius values.

Let $c^{uav}_t$ and $c^{ugv}_t$ be the centres of mass of the UAV and of the UGVs within the UAV's FoV, respectively, and let $r_t$ and $r^{*}_t$ be the radii of the UAV camera's FoV and the ideal one at time step $t$. The first objective is to minimise the distance error given by Equation 2, where $\|\cdot\|$ denotes the norm, and the second objective is to minimise the radius error expressed by Equation 3.

$$ e^{d}_{t} = \left\| c^{uav}_{t} - c^{ugv}_{t} \right\| \tag{2} $$

$$ e^{r}_{t} = \left| r_{t} - r^{*}_{t} \right| \tag{3} $$
A pictorial representation of the overall architecture, comprising a human operator, a UAV, and three UGVs, is shown in Figure 2.

Figure 2: Model of coordination task for UAV and UGVs
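The two objectives (Equations 2 and 3) are simple to compute; a minimal sketch follows. The `ideal_fov_radius` helper is an illustrative stand-in for the paper's pinhole-camera computation, assuming planar positions.

```python
# Minimal sketch of the two tracking objectives: the distance error between
# the UAV centre and the UGVs' centre of mass (Equation 2), and the radius
# error between the actual and ideal FoV radii (Equation 3).

import numpy as np

def distance_error(uav_centre, ugv_centre):
    """Equation 2: Euclidean norm of the centre-of-mass offset."""
    return float(np.linalg.norm(np.asarray(uav_centre) - np.asarray(ugv_centre)))

def radius_error(actual_radius, ideal_radius):
    """Equation 3: absolute difference between actual and ideal FoV radii."""
    return abs(actual_radius - ideal_radius)

def ideal_fov_radius(ugv_positions, centre):
    """Radius of the smallest circle centred at `centre` that encapsulates
    all UGVs -- a simplified stand-in for the pinhole-camera computation."""
    offsets = np.asarray(ugv_positions) - np.asarray(centre)
    return float(np.max(np.linalg.norm(offsets, axis=1)))
```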

Simulation Environment

The Gazebo simulator [Koenig and Howard2006] is used to design the task scenario, and the drone simulator package in the Tum-Simulator [Huang and Sturm2014] is used to simulate the Parrot AR.Drone 2.0.

The simulation system allows a human to operate the UAV using a joystick. The operator aims to keep all UGVs in the field of view of the UAV’s downward looking camera and centred in the image by watching the video telemetry continuously and making corrections with the joystick accordingly. Human demonstrations are collected and used as a data set for training our ABS-DL algorithm.

The control environment is shown in Figure 3, in which the red dot is the centre of the downward-looking camera image from the UAV, and the blue dot and blue circle are the centre of mass and the spread of the UGVs in the UAV's image, respectively. The size of this environment is 10 × 10 m.

Figure 3: Control Environment for UAV and UGVs

Physical Environment

Our experimental environment is an indoor UAV testing facility equipped with a Vicon motion capture system. The width of the Vicon area is approximately 6.6 m and its length is around 5 m. While the UAV and the UGVs travel within the Vicon system space, their absolute positions are collected. The origin of coordinates is located at the centre of the Vicon area.

One AR.Drone 2.0 and three heterogeneous UGVs (Pioneer P3-AT and P3-DX) are used in our experiments. The agents' postures are directly measured by the Vicon motion capture system, which broadcasts this information continuously at a high frequency of 100 Hz via the UDP protocol. The interaction between the robots, and between the ground station (GS) and each robot, is achieved using the Robot Operating System (ROS). The wheels of each UGV are equipped with optical encoder sensors to estimate the linear velocity, travelled distance, and yaw angular rate. An Inertial Measurement Unit (IMU) on the UAV measures angular rates and orientation. Moreover, to guarantee the safety of the physical systems, the test space is restricted to a safe area of -3.3 to 3.3 m in width and -2.5 to 2.5 m in length.

The control network architecture is designed to perform cooperative scenarios of the physical UGV-UAV system. The functionality and task of each block are described in Figure 4. Each time step of the system is 10 ms, including the time for information exchange, data processing, and outputting a command to the UAV.

Figure 4: Overall Architecture Diagram.

Because of the altitude limitation of the physical space, we evaluate the control of our UAV agent only on the x and y axes. The AI controller calculates the desired velocities on the x and y axes and sends these directly to the UAV. Additionally, the camera of the AR.Drone was not used in the physical experiment; instead, the central points and radius values were calculated directly from Vicon data.


In this paper, ABS-DL is evaluated on the UAV-UGVs coordination task. Firstly, the ABS-DL algorithm is evaluated in the simulation environment. Then, the trained DNN model of the ABS-DL algorithm is transferred to the physical environment. While testing the DNN model in the physical environment, the previously described primary safety-net approach is used to avoid dangerous operation of the UAV.

We compare three scenarios: human-combined, where the human performs the complex task; DNN-combined, where a DNN is used to learn directly from the human-combined data; and primitive, where ABS-DL is used to perform the aggregate task by bootstrapping from the sub-tasks. The first scenario is a baseline for human performance on the complex task. The second is a baseline for the AI agent if data on the complex task could be collected from a human. The third scenario is the proposed ABS-DL, where the complex task is bootstrapped from sub-tasks.

For the simulation environment, two setups for autonomous control (IDs 2 and 3) are required, as described in Table 1. In the first, a DNN is trained on human demonstrations of the composite task (the DNN-combined setup). In the second, a DNN is trained on human demonstrations of the sub-tasks, or primitive tasks, and then tested on the composite task, i.e. the Combined manoeuvre (the primitive setup).

ID Name Meaning
1 Human-combined Direct human control of the UAV
2 DNN-combined UAV state space and human demonstrations of the composite task, for the Combined manoeuvre
3 Primitive UAV state space and human demonstrations of the primitive tasks, for the Combined manoeuvre
Table 1: Experimental setups.

For the physical environment, our trained primitive DNN model is tested under the control of our proposed safety-net approach.

The UAV's action space consists of four continuous real-valued actions representing the pitch, roll, altitude, and yaw, respectively. The state vector of the environment is an 11-D tuple of the continuous variables presented in Table 2.


State ID State name State description
1-2 Centre of UAV from bottom camera
3-4 Centre of UGV Mass within the UAV image
5 UAV’s altitude in Gazebo model
6 Ideal radius of UGVs within image
7 Actual radius of UGVs within image
8-11 UAV’s velocity vector
Table 2: State space

In Table 2, the centre of the UAV (states 1-2) is received from the UAV's bottom camera, and the UAV's altitude (state 5) is obtained from the Gazebo environment. The centre of UGV mass (states 3-4) and the radius values (states 6-7) are calculated using the pinhole camera model; the actual radius is the distance from the centre of UGV mass to the furthest UGV position within the bottom image.

The DNN architecture used is illustrated in Figure 5. This network consists of an input layer, two fully-connected hidden layers of 300 nodes each, and a fully-connected output layer. The state space defined in Table 2 is used as the input, while the outputs, as discussed above, are the next continuous actions (pitch, roll, altitude, and yaw). The hidden layers use a rectified linear unit (ReLU) as the activation function, whilst the output layer uses tanh. The Adam method [Kingma and Ba2014] is used for optimization, and mean squared error (MSE) is used as the loss function. The TensorFlow and Keras libraries [Chollet2015] are used to design and train the DNN on a PC with an NVIDIA GeForce GTX 1080 GPU. The layer weights are initialized to 0.0001.

Figure 5: Structure of the deep neural network (DNN).
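The described architecture can be sketched as a NumPy forward pass (11 state inputs, two 300-unit ReLU hidden layers, a 4-D tanh output). This only mirrors the structure; in the paper the model is built and trained with Keras/TensorFlow, and the random initialization here is an assumption beyond the stated 0.0001 weight scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the paper: 11-D state input, two fully-connected hidden
# layers of 300 ReLU units, and a 4-D tanh output (pitch, roll, altitude, yaw).
sizes = [11, 300, 300, 4]

# Small-magnitude initial weights (the paper initializes weights at 0.0001).
weights = [0.0001 * rng.standard_normal((m, n))
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(state):
    """Forward pass: ReLU on the hidden layers, tanh on the output layer."""
    h = np.asarray(state, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ W + b, 0.0)            # ReLU hidden layers
    return np.tanh(h @ weights[-1] + biases[-1])  # bounded continuous actions
```

Training would then minimise the MSE between these outputs and the demonstrated actions with the Adam optimiser, as described above.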

Demonstrations obtained from the human subject in the simulation environment are collected for 10 episodes per manoeuvre, except for the Fixed-Altitude manoeuvre, in which 5 episodes were performed because the operating path of the UGVs is significantly longer and it is necessary to balance the labels of the data used for training the DNN. In total, the four data sets contain 5296, 4691, 4904, and 5464 instances for the Fixed-Altitude, Climb, Descend, and Combined manoeuvres, respectively. Each episode runs for around 10 minutes for the Fixed-Altitude manoeuvre and 5 minutes for the remaining manoeuvres, and each instance corresponds to a step of approximately 0.05 s. The first three data sets are integrated into a primitive data set using Algorithm 1. Both the primitive and combined data sets are split into training and validation sets comprising 67 percent and 33 percent of the total data, respectively. The DNN is trained for 10,000 epochs with a batch size equal to the number of data instances in each setup.
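The 67/33 split of each data set can be sketched as follows; shuffling before splitting and the fixed seed are our assumptions, not details stated in the paper.

```python
import numpy as np

def train_val_split(data, train_fraction=0.67, seed=0):
    """Shuffle a data set and split it into training and validation subsets
    (67% / 33% in the setup described above)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    cut = int(round(train_fraction * len(data)))
    return [data[i] for i in idx[:cut]], [data[i] for i in idx[cut:]]
```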

After training in the simulation environment, the DNN for each experiment is tested on test paths and, in each experiment, the agent is tested ten times on randomly generated cases.

Results and Discussion

In the Simulation Environment

Figure 6 shows the value of the MSE in each of the two scenarios for each manoeuvre. In both scenarios, the training is very fast (approximately 40 minutes in real time), with the errors ceasing to decline and becoming stable at approximately 2000 epochs.

Figure 6: Training Loss Measures of Mean Squared Error over Epochs in S2.

To evaluate performance, the two objectives described in the previous section are used. The first (Equation 2) is the distance between the UAV's centre and the UGVs' centre of mass, and the second (Equation 3) is the difference between the actual radius and the ideal radius. Table 3 presents the averages and standard deviations of these two metrics for the three setups (Human-combined, DNN-combined, and Primitive).

Experiment ID Distance Errors Radius Errors
Human-combined 14.2 ± 9.0 12.9 ± 21.9
DNN-combined 12.6 ± 8.1 26.2 ± 19.1
Primitive (ABS-DL) 12.9 ± 8.2 10.9 ± 9.9
Table 3: Averages and standard deviations of errors from testing in simulation. The differences are statistically significant.

The results are interesting. It is evident that the DNN trained on demonstrations of the sub-tasks (Primitive) performs much better, with respect to the radius error, than the DNN trained on demonstrations of the composite task (DNN-combined). These results show that our ABS-DL approach can produce policies equivalent to, or even more effective than, the traditional approach in the aerial-ground coordination task.

It is worth mentioning that, despite the variations discussed above, the DNN always retains the UGVs within the range of the camera in all manoeuvres and all test cases. To better understand the phenotypical differences between the human and DNN performance, Figures 7 and 8 visualize the behaviour of the UAV under human control and compare it with its behaviour under DNN-combined and ABS-DL control.

Some general observations can be made from these figures. Firstly, when tracking the UGVs, the DNN does so in a smoother manner, while the human appears to attempt to track optimally at the cost of constantly steering the vehicle; such behaviour consumes more energy. Moreover, these figures show that the primitive DNN tracks more smoothly than the combined DNN.

(a) Human-combined
(b) DNN-combined
(c) Primitive
Figure 7: The Ideal and Actual UGVs Circle Trajectories on Horizontal Image in Lateral-Movements-With-Climb-Descend manoeuvre
(a) Human-combined
(b) DNN-combined
(c) Primitive
Figure 8: The Ideal and Actual UGVs Circle Trajectories on Vertical Image in Lateral-Movements-With-Climb-Descend manoeuvre

In the Physical Environment

In this paper, the trained primitive DNN model is tested under the control of our proposed safety-net approach. Figure 9 shows that the UAV is able to track the UGVs' movement when all of the UGVs are within the view of the UAV's bottom camera.

(a) Real Positions
(b) Horizontal Image
(c) Vertical Image
Figure 9: UAV and UGVs Trajectories in Physical Environment

It is worth mentioning that human intervention was minimal compared to the flying time. The safety net ensured that the UAV operated within the environmental boundary of the testing facility. However, human intervention was necessary at points in time where the UAV overshot a position.

Conclusion and Future Work

In the simulation environment, the results show that the ABS-DL algorithm is able to effectively address the primary challenge of apprenticeship learning, producing policies equivalent to or even better than those provided by the human operator.

Moreover, when testing in the physical environment, the trained primitive DNN model transferred well, and the proposed safety-net approach allowed the UAV to operate smoothly and to track the UGVs' movement successfully. These results show that the combination of ABS-DL and the safety-net model in the physical environment is practical and promising.

In future work, we aim to test different safety-net models for our ABS-DL algorithm on various UAV-UGVs coordination tasks and to completely remove the external intervention.


This material is based upon work supported by the Air Force Office of Scientific Research under award number FA2386-17-1-4054.


References

  • [Arbabzadeh and Jafari2018] Arbabzadeh, N., and Jafari, M. 2018. A data-driven approach for driving safety risk prediction using driver behavior and roadway information data. IEEE Transactions on Intelligent Transportation Systems 19(2):446–460.
  • [Argall et al.2009] Argall, B. D.; Chernova, S.; Veloso, M.; and Browning, B. 2009. A survey of robot learning from demonstration. Robotics and autonomous systems 57(5):469–483.
  • [Billing and Hellström2010] Billing, E. A., and Hellström, T. 2010. A formalism for learning from demonstration. Paladyn 1(1):1–13.
  • [Bouabdallah and Siegwart2007] Bouabdallah, S., and Siegwart, R. Y. 2007. Full control of a quadrotor. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007: IROS 2007; Oct. 29, 2007-Nov. 2, 2007, San Diego, CA, 153–158. IEEE.
  • [Chae, Lee, and Yi2017] Chae, H.; Lee, M.; and Yi, K. 2017. Probabilistic prediction based automated driving motion planning algorithm for lane change. In Control, Automation and Systems (ICCAS), 2017 17th International Conference on, 1640–1645. IEEE.
  • [Chaulwar, Botsch, and Utschick2017] Chaulwar, A.; Botsch, M.; and Utschick, W. 2017. A machine learning based biased-sampling approach for planning safe trajectories in complex, dynamic traffic-scenarios. In Intelligent Vehicles Symposium (IV), 2017 IEEE, 297–303. IEEE.
  • [Chen et al.2017] Chen, J.; Bai, T.; Huang, X.; Guo, X.; Yang, J.; and Yao, Y. 2017. Double-task deep Q-learning with multiple views. In Proceedings of the IEEE International Conference on Computer Vision, 1050–1058.
  • [Chollet2015] Chollet, F. 2015. Keras: Theano-based deep learning library. Code: https://github.com/fchollet. Documentation: http://keras.io.
  • [Geng et al.2018] Geng, G.; Wu, Z.; Jiang, H.; Sun, L.; and Duan, C. 2018. Study on path planning method for imitating the lane-changing operation of excellent drivers. Applied Sciences 8(5):814.
  • [Huang and Sturm2014] Huang, H., and Sturm, J. 2014. Tum simulator. ROS package at http://wiki.ros.org/tum_simulator.
  • [Hubschneider et al.2017] Hubschneider, C.; Bauer, A.; Doll, J.; Weber, M.; Klemm, S.; Kuhnt, F.; and Zöllner, J. M. 2017. Integrating end-to-end learned steering into probabilistic autonomous driving. In Intelligent Transportation Systems (ITSC), 2017 IEEE 20th International Conference on, 1–7. IEEE.
  • [Hussein et al.2017] Hussein, A.; Gaber, M. M.; Elyan, E.; and Jayne, C. 2017. Imitation learning: A survey of learning methods. ACM Computing Surveys (CSUR) 50(2):21.
  • [Khan2017] Khan, A. 2017. Autonomous vehicles: Reliability of their perception of the world around them and the role of human driver. In International Conference on Applied Human Factors and Ergonomics, 560–570. Springer.
  • [Kingma and Ba2014] Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [Koenig and Howard2006] Koenig, N., and Howard, A. 2006. Gazebo-3d multiple robot simulator with dynamics.
  • [Koszewnik2014] Koszewnik, A. 2014. The Parrot UAV controlled by PID controllers. acta mechanica et automatica 8(2):65–69.
  • [Mayer, Sonntag, and Sawodny2017] Mayer, A.; Sonntag, M.; and Sawodny, O. 2017. Planning near time-optimal trajectories in 3D. In Control Technology and Applications (CCTA), 2017 IEEE Conference on, 1613–1618. IEEE.
  • [Miraglia and Hook2017] Miraglia, G., and Hook, L. 2017. Dynamic geo-fence assurance and recovery for nonholonomic autonomous aerial vehicles. In Digital Avionics Systems Conference (DASC), 2017 IEEE/AIAA 36th, 1–7. IEEE.
  • [Nguyen et al.2017] Nguyen, H.; Garratt, M.; Bui, L.; and Abbass, H. 2017. Supervised deep actor network for imitation learning in a Ground-Air UAV-UGVs coordination task. In IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2017).
  • [Nguyen et al.2018] Nguyen, H.; Garratt, M.; Bui, L.; and Abbass, H. 2018. Apprenticeship bootstrapping: Inverse reinforcement learning in multi-skill UAV-UGV tracking task. In Proceedings of The 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2018).
  • [Nguyen, Garratt, and Abbass2018] Nguyen, H.; Garratt, M.; and Abbass, H. 2018. Apprenticeship bootstrapping. In Proceedings of The International Joint Conference on Neural Networks (IJCNN 2018).
  • [Phan and Liu2008] Phan, C., and Liu, H. H. 2008. A cooperative UAV/UGV platform for wildfire detection and fighting. In System Simulation and Scientific Computing, 2008. ICSC 2008. Asia Simulation Conference-7th International Conference on, 494–498. IEEE.
  • [Punzo et al.2018] Punzo, G.; MacLeod, C.; Baumanis, K.; Summan, R.; Dobie, G.; Pierce, G.; and Macdonald, M. 2018. Bipartite guidance, navigation and control architecture for autonomous aerial inspections under safety constraints. Journal of Intelligent & Robotic Systems.
  • [Raineri, Perri, and Bianco2017] Raineri, M.; Perri, S.; and Bianco, C. G. L. 2017. Online velocity planner for laser guided vehicles subject to safety constraints. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, 6178–6184. IEEE.
  • [Sturm2014] Sturm, P. 2014. Pinhole camera model. In Computer Vision. Springer. 610–613.
  • [Suh, Chae, and Yi2018] Suh, J.; Chae, H.; and Yi, K. 2018. Stochastic model predictive control for lane change decision of automated driving vehicles. IEEE Transactions on Vehicular Technology 67(6):4771.
  • [Wenzel, Masselli, and Zell2011] Wenzel, K. E.; Masselli, A.; and Zell, A. 2011. Automatic take off, tracking and landing of a miniature UAV on a moving carrier vehicle. Journal of intelligent & robotic systems 61(1-4):221–238.
  • [Yu et al.2015] Yu, H.; Meier, K.; Argyle, M.; and Beard, R. W. 2015. Cooperative path planning for target tracking in urban environments using unmanned air and ground vehicles. IEEE/ASME Transactions on Mechatronics 20(2):541–552.
  • [Zhan et al.2017] Zhan, W.; Chen, J.; Chan, C.-Y.; Liu, C.; and Tomizuka, M. 2017. Spatially-partitioned environmental representation and planning architecture for on-road autonomous driving. In Intelligent Vehicles Symposium (IV), 2017 IEEE, 632–639. IEEE.