Multi-Task Regression-based Learning for Autonomous Unmanned Aerial Vehicle Flight Control within Unstructured Outdoor Environments

07/18/2019 ∙ by Bruna G. Maciel-Pearson, et al.

Increased growth in the global Unmanned Aerial Vehicles (UAV) (drone) industry has expanded possibilities for fully autonomous UAV applications. A particular application which has in part motivated this research is the use of UAV in wide area search and surveillance operations in unstructured outdoor environments. The critical issue with such environments is the lack of structured features that could aid in autonomous flight, such as road lines or paths. In this paper, we propose an End-to-End Multi-Task Regression-based Learning approach capable of defining flight commands for navigation and exploration under the forest canopy, regardless of the presence of trails or additional sensors (i.e. GPS). Training and testing are performed using a software in the loop pipeline which allows for a detailed evaluation against state-of-the-art pose estimation techniques. Our extensive experiments demonstrate that our approach excels in performing dense exploration within the required search perimeter, is capable of covering wider search regions, generalises to previously unseen and unexplored environments and outperforms contemporary state-of-the-art techniques.




I Introduction

Autonomous exploration and navigation in unstructured environments remains an unsolved challenge, mainly because flights in such areas carry a higher risk of collision, a risk further aggravated by limited battery endurance, which typically allows less than 20 minutes of flight. In this context, unstructured areas correspond to environments such as disaster areas [1], icebergs [2], snowy mountains [3] and forests [4]. These environments tend to be exceedingly variable in nature (Fig. 1) and are often affected by constant changes in local wind conditions. As a result, the mission planning process needs to take into consideration arbitrary, unknown environments and weather conditions [5]. Since it is not feasible to preconceive the large number of unforeseen events that may occur during a mission, an intelligent UAV control system with the capability to generalise to other domains is highly desirable. In this context, a new domain can be defined as an area previously unexplored by the UAV, which differs from the original environment on which the deep neural network (DNN) is trained.

Currently, the existing literature on autonomous flight tends to focus either on mapping the environment, where obstacle avoidance is readily achievable, or on deploying models approximated by a DNN [6]. The former is commonly achieved by techniques such as SLAM or Visual Odometry [7], which require prior knowledge of the camera's intrinsic parameters, while the latter requires a substantial volume of data, which is often intractable to obtain [6, 8].

Fig. 1: Exemplar imagery for autonomous flight and exploration through the AirSim simulator [9]. Images are from (A) the dense redwood forest, (B) snowy mountain and (C) the wind farm environments.

The focus of our research is navigation within unstructured environments, which is primarily achieved by autonomous exploration under the forest canopy. In this paper, we present a Multi-Task Regression-based Learning (MTRL) approach that allows a UAV to perceive obstacle-free areas while simultaneously predicting the orientation quaternions and positional waypoints in NED (North, East, Down) coordinates required to explore the environment safely. Due to the nature of our tests, all the experiments are carried out in a virtual environment using the AirSim simulator [10]. As such, our approach uses the Software In The Loop (SITL) [8] stack, commonly used to simulate the flight control of single or multi-agent UAV [11, 8]. The navigational approach presented in this paper is also independent of the Global Positioning System (GPS), mainly due to the low reliability of GPS signals under dense forest canopy [7]. The proposed learning-based approach utilises a very simple and lightweight network architecture (the dataset and source code are publicly available) and robustly generalises to new unstructured and unseen environments.

Extensive evaluations point to the superiority of the proposed approach compared to contemporary state-of-the-art techniques [12, 13, 14]. In addition, flight behaviour is also assessed in a SITL environment with ground truth telemetry data gathered during a simulated flight. To the best of our knowledge, this is the first approach to autonomous flight and exploration under the forest canopy without path following or aid of additional sensors.

II Related Work

Current research on autonomous navigation for UAV can be divided into two groups based on whether path planning or waypoint navigation is the main objective [15, 4]. Path planning requires understanding the environment ahead and is usually achieved by pre-mapping the environment or specifying a navigational area as the UAV flight takes place [16, 17, 18, 19], which means the UAV can operate at a constant speed for a set duration in a specific direction [20, 21]. Within the existing literature, various end-to-end learning-based approaches have been employed to derive a set of navigational parameters from a given image, allowing for obstacle avoidance [18, 16, 20, 22]. Additionally, recent advances in multi-task systems that partially focus on depth estimation [23, 24, 25, 26] can also be potentially beneficial towards a successful obstacle avoidance and path planning approach. However, most existing approaches offer only three degrees of freedom in navigation, which makes them unsuitable for autonomous flight in unstructured environments, where the UAV may need to change altitude to avoid certain obstacles. As such, varying height estimation and generalisation are fundamental, and can best be achieved by defining waypoints.

Waypoints are usually dependent on the Global Positioning System (GPS) and can be defined before, or dynamically generated during, the flight. Once the UAV reaches the first waypoint, usually positioned a short distance away and in an obstacle-free area, the algorithm moves on to defining the next waypoint, and the flight controller makes the necessary adjustments in speed and direction to safely reach its goal [27, 28]. Alternatively, the flight speed can be estimated by a neural network, allowing more generalised and dynamic navigation with six degrees of freedom [29, 27]. It is important to highlight that navigation using such GPS waypoints is usually limited in environments where the GPS signal is unreliable or unavailable [7].

Recently, significant results have been achieved by Deep Neural Networks (DNN) in the task of pose estimation based on monocular imagery. In this context, the use of a Convolutional Neural Network (CNN) [6] to learn and match features that aid in camera pose estimation was popularised by the work of Kendall et al. [12] and, more recently, the work of Mueller et al. [30]. However, both approaches rely on prior environmental knowledge before yielding an estimate of the camera position. Further enhancements in predicting camera position have been made possible by integrating a Long Short-Term Memory (LSTM) network into the process. This recurrent neural network uses gates to handle the vanishing gradient problem, which is very common during back-propagation.
In contrast, our approach uses Multi-Task Regression-based Learning to individually learn the position of waypoints in NED (North, East, Down) coordinates within the scene, in addition to learning the rotational quaternions. As a result, the flight controller can dynamically change position and speed based on the output of our resulting multi-task regression network. Such an approach does not rely on GPS readings to navigate, which makes it suitable for operation in areas with weak or denied GPS signals. Additionally, it does not require any knowledge of the camera intrinsics and operates with low-resolution images, which makes it ideal for UAVs with varying payloads.

III Control System Integration

In this work, the development and testing of the proposed approach and of the state-of-the-art comparators [14, 12, 13] is performed using the open-source AirSim simulator [10]. AirSim is built on the Unreal Engine [32] and offers physically and visually realistic scenarios for data-driven autonomous and intelligent systems. Due to the closeness of the simulated environment to the real world, the control system, which integrates the simulated flight controller into an autonomous navigation approach, follows the same structure as the control system of a real UAV with on-board processing capabilities. In our work, each tested approach receives two sets of parameters with distinct measurement units: the first is the NED position expressed in metres, whereas the second is the rotation and orientation of the UAV expressed as an orientation quaternion. Although quaternions are a standard representation of attitude in graphical engines, particularly for three-dimensional computations [33], our main motivation for their use is that computing with quaternions requires significantly less memory than computing with rotation matrices, making them more suitable for on-board processing in drones deployed in the real world.

A quaternion is a hypercomplex number of rank 4, commonly used to avoid the inherent geometrical singularity of the Euler angle representation [34], which leads to the loss of one degree of freedom in three-dimensional space. This reduction happens when two of the rotational axes align and lock together [35]. Formally, a quaternion q is defined as the sum of a scalar q_0 and a vector (q_1, q_2, q_3):

q = q_0 + q_1 i + q_2 j + q_3 k,   (1)

where q_0, q_1, q_2 and q_3 denote real numbers, and i, j and k refer to the fundamental quaternion unit vectors.
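To make the algebra concrete, the sketch below (our own illustration, not code from the paper) implements the Hamilton product and the standard rotate-by-quaternion identity q ⊗ (0, v) ⊗ q* directly from the definition above; all function names are ours.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as (q0, q1, q2, q3)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 - a1*b3 + a2*b0 + a3*b1,
        a0*b3 + a1*b2 - a2*b1 + a3*b0,
    ])

def quat_rotate(q, v):
    """Rotate a 3-vector v by the unit quaternion q via q * (0, v) * conj(q)."""
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, np.concatenate(([0.0], v))), q_conj)[1:]

# a 90-degree yaw about the Down axis maps North onto East in NED coordinates
q_yaw90 = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
east = quat_rotate(q_yaw90, np.array([1.0, 0.0, 0.0]))
```

Because unit quaternions compose by multiplication alone, successive attitude updates need only the four stored components, which is the memory advantage over 3x3 rotation matrices noted above.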

During simulation, the position p(t) of the body at time t (Eqn. 3) is updated by integrating the linear velocity v(t) (Eqn. 2) from the initial position p_0:

v(t) = v_0 + ∫ a(τ) dτ,   (2)

p(t) = p_0 + ∫ v(τ) dτ,   (3)

where a(t) represents the linear acceleration, obtained by applying Newton's second law with the gravity vector g added, as illustrated in Eqn. 4, and v(t) is a function of the acceleration over time:

a(t) = F(t)/m + g.   (4)

The orientation is updated by rotating about the instantaneous axis ω/|ω| through the angle |ω|Δt, where ω refers to the angular velocity of the body frame with respect to a fixed (world) frame and can be determined by Newton's equations for dynamics.
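A minimal sketch of the translational update in Eqns. 2-4, assuming a hypothetical unit mass and a 50 Hz Euler-integration step (both illustrative, not taken from the paper):

```python
import numpy as np

DT = 0.02                          # 50 Hz integration step (illustrative)
G = np.array([0.0, 0.0, 9.81])     # gravity in NED: Down is positive

def step(p, v, force, mass):
    """One Euler step of Eqns. 2-4: a = F/m + g, then integrate v and p."""
    a = force / mass + G           # Eqn. 4: Newton's second law plus gravity
    v = v + a * DT                 # Eqn. 2: velocity from acceleration
    p = p + v * DT                 # Eqn. 3: position from velocity
    return p, v

# free fall from rest for 1 s: depth grows by roughly g/2, plus Euler error
p, v = np.zeros(3), np.zeros(3)
for _ in range(50):
    p, v = step(p, v, force=np.zeros(3), mass=1.0)
```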

Fig. 2: The proposed Multi-Task Regression-based Learning approach. The network predicts 3 positional values (N, E, D) and 4 rotational values (q_0, q_1, q_2, q_3).

Flight stability is achieved by combining the rate and attitude control loops at each iteration (Algorithm 1). The Rate Control Loop (RCL) has three independent PD (Proportional Derivative) controllers (PD_roll_rc, PD_pitch_rc, PD_yaw_rc) for controlling the body rates in roll, pitch and yaw. The desired body rates are derived from the target rates and the current rates. The quaternion values output by the network (q_0, q_1, q_2, q_3) are fed directly into the AirSim RCL, which generates the target rates. The Attitude Control Loop (ACL) uses IMU (Inertial Measurement Unit) readings to estimate the current rates. Thereafter, the motion commands are generated by the ACL by integrating (PID_roll_ac, PID_pitch_ac, PID_yaw_ac) the desired rates and the angular speeds acquired from the readings of a 3-axis gyro.

while True do
       /* read network output and sensor data */
       q_0, q_1, q_2, q_3 ← network output;
       g_x, g_y, g_z ← read_gyro();
       target attitude ← from quaternion (q_0, q_1, q_2, q_3);
       current attitude ← IMU estimate;
       current rates ← (g_x, g_y, g_z);
       /* calculate desired body rate */
       roll_rate_des ← PID_roll_ac(target attitude, current attitude);
       pitch_rate_des ← PID_pitch_ac(target attitude, current attitude);
       yaw_rate_des ← PID_yaw_ac(target attitude, current attitude);
       /* motion commands */
       u_roll ← PD_roll_rc(roll_rate_des, current rates);
       u_pitch ← PD_pitch_rc(pitch_rate_des, current rates);
       u_yaw ← PD_yaw_rc(yaw_rate_des, current rates);
       /* translate motion commands to PWM signals */
       pwm_1, pwm_2, pwm_3, pwm_4 ← mix(u_roll, u_pitch, u_yaw, throttle);
       /* Driving */
       drive_motors(pwm_1, pwm_2, pwm_3, pwm_4);
end while
Algorithm 1 Implementation of Attitude and Rate Control
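As an illustrative sketch of this cascaded structure for a single axis, with purely hypothetical gains and function names (the actual AirSim controller gains are not reproduced here):

```python
class PD:
    """Proportional-Derivative controller with finite-difference derivative."""
    def __init__(self, kp, kd, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_error = 0.0

    def __call__(self, error):
        d = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.kd * d

DT = 0.002                                 # hypothetical 500 Hz control loop
att_loop = PD(kp=6.0, kd=0.1, dt=DT)       # attitude error -> desired body rate
rate_loop = PD(kp=0.05, kd=0.001, dt=DT)   # rate error -> motion command

def control_step(target_angle, current_angle, current_rate):
    """Outer loop turns attitude error into a desired rate; the inner loop
    turns rate error into a motion command destined for the mixer/PWM stage."""
    desired_rate = att_loop(target_angle - current_angle)
    return rate_loop(desired_rate - current_rate)

cmd = control_step(0.1, 0.0, 0.0)   # positive attitude error -> positive command
```

Splitting the controller this way lets the fast inner rate loop reject disturbances while the slower outer attitude loop tracks the quaternion targets supplied by the network.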

IV Network Architecture

In our proposed approach, rather than explicitly following a trail, the objective is to identify clear flight areas and predict the flight behaviour while exploring an unknown environment. As such, to compare the effectiveness of the different approaches described in this work, each technique is adapted to produce the same navigational output, which is subsequently mapped into the flight controller.

Originally, the approach by Wang et al. [13] receives as its input a pair of images and two sets of navigational coordinates: the first is the (x, y, z) positional coordinates, and the second is the Euler angles. The distance between the ground-truth pose and the predicted pose is minimised during training, resulting in a final output of three positional coordinates (x, y, z) and three Euler angles (roll, pitch and yaw). By contrast, during this experiment, we feed the network the NED positions and orientation quaternion. As such, the output vector is resized to accommodate seven quantities instead of the six initially specified by [13]. The architecture of the network used in [13] consists of nine convolutional layers which feed the first LSTM recurrent neural network; this LSTM then supplies its output to a second LSTM. Each LSTM has 1000 hidden units. For a detailed description of the architecture, we refer the reader to [13].

Similarly, the approach in [14], which receives one input image and produces three navigational directions, now outputs a vector containing seven estimations. Since the approach in [12] originally receives one input image and outputs the camera's NED position and orientation as a unit quaternion, no changes are required. In all cases, the number of input images originally required for each network is maintained, and all images are resized to a common input resolution. In addition, all networks receive the coordinates corresponding to time t+1 instead of time t; as such, the aim is to predict the next action instead of the current position. Image normalisation is performed according to the specification of each network.
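The time-shifted supervision described above (the image at time t paired with the pose at t+1) can be sketched as follows; the helper name and array shapes are our own illustration:

```python
import numpy as np

def make_next_step_targets(frames, poses):
    """Pair the frame at time t with the 7-value pose (NED + quaternion) at t+1."""
    frames, poses = np.asarray(frames), np.asarray(poses)
    return frames[:-1], poses[1:]      # the last frame has no successor pose

frames = np.zeros((5, 64, 64, 3))      # hypothetical low-resolution inputs
poses = np.arange(5 * 7, dtype=float).reshape(5, 7)
x, y = make_next_step_targets(frames, poses)
```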

IV-A Implementation Details

Our network is based on a Multi-Task Regression-based Learning (MTRL) approach (Figure 2), where the features learned from the input are shared across two subnetworks to learn the NED position and the rotational quaternions. To achieve this, the input image is first resized and pre-processed by applying channel mean subtraction. Next, the normalised image is shared between two identical subnetworks, each consisting of three convolutional layers, each followed by a max pooling layer, after which the output is flattened and fed into a dense layer where dropout is applied; at this stage, dropout is applied to the positional branch while none is applied to the rotational branch. The output is then fed into a second dense layer, to which we apply a dropout of 0.25 only in the rotational branch. A third dense layer performs the linear regression and outputs the estimation for each task. Our cost function minimises the Euclidean distance between the predicted output and the ground truth, with a scalar factor β inserted to balance the difference between the positional and rotational errors:

L = ||p̂ − p||_2 + β ||q̂ − q||_2,

where p̂ and q̂ denote the predicted NED position and quaternion, and p and q the corresponding ground truth.
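A minimal NumPy sketch of this two-term cost, with a hypothetical weighting factor since the paper's exact β value is not reproduced here:

```python
import numpy as np

BETA = 1.0   # hypothetical weighting; the paper's exact value is not reproduced here

def mtrl_loss(p_pred, p_true, q_pred, q_true, beta=BETA):
    """Euclidean distance on the NED position plus a beta-weighted quaternion term."""
    return (np.linalg.norm(p_pred - p_true)
            + beta * np.linalg.norm(q_pred - q_true))

# position off by 1 m along North, rotation exact: loss reduces to the positional term
loss = mtrl_loss(np.array([1.0, 0.0, 0.0]), np.zeros(3),
                 np.array([1.0, 0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0, 0.0]))
```

The β factor matters because positional errors are measured in metres while quaternion components are dimensionless and bounded, so an unweighted sum would let one task dominate the gradient.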


V Experimental Setup

Training is performed using adaptive moment estimation (Adam) [36] with a fixed learning rate and the default parameters provided in the original paper (β_1 = 0.9, β_2 = 0.999) [36], on a GTX 1080Ti, using 64,000 frames for training and 17,674 frames for validation. During testing, AirSim is set up to use the GPU, while all approaches are run on an Intel Xeon processor.

V-A Data Preparation

Data is obtained by manually flying the UAV through the Redwood Forest environment using a FrSky Taranis (Plus) Digital Telemetry Radio System. In total, 81,674 frames were captured together with the flight telemetry, comprising flights under and above the forest canopy, navigation inside caves, and flights over river beds, lakes and mountains.

V-B Evaluation Criteria

Our evaluation is presented across five metrics: repetition, generalisation, flight behaviour, distance travelled and reliability.

In search and exploration scenarios, exploring as many routes as possible is often the primary objective, which is why we investigate the capability of different approaches to alternate between different paths rather than continuously repeating the same route. Additionally, we aim to evaluate the capability of the various methods to generalise to unseen environments. During flight behaviour analysis, we observe whether the UAV is flying or hovering and note the flight duration and the distance traversed during the mission. Here, the goal is to identify the method capable of traversing greater distances in a shorter period. Finally, the reliability of the flight mission is measured by considering the distance the UAV can fly without any intervention.

VI Results

In this section, we evaluate the performance of the proposed approach in forecasting the next position and rotation of the UAV in a given environment.

VI-A Repetition

Our first set of tests assesses each approach's behaviour when processing the test data (Figure 3). Here, the flight commands are predicted but not sent to the flight controller. The objective is to observe the navigational patterns of each method and to compare the distribution of the predicted positional values for the x and y directions. In order to create the flight pattern shown on the left of Figure 3, we add the predicted positional offsets to the current position, just as would occur in the flight controller, using a frame rate of 20 fps.
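The accumulation of predicted offsets into absolute waypoints amounts to a running sum; a minimal sketch with hypothetical per-frame predictions:

```python
import numpy as np

def accumulate(start, deltas):
    """Integrate predicted per-frame (x, y) offsets into absolute waypoints."""
    return start + np.cumsum(deltas, axis=0)

# hypothetical (x, y) predictions, one per frame at 20 fps
deltas = np.array([[0.5, 0.0], [0.5, 0.1], [0.4, 0.2]])
waypoints = accumulate(np.zeros(2), deltas)   # each row is an absolute position
```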

The test set used to compare each network has a high density of repetitive positional values (dark blue areas) distributed over a small area (Figure 3). Because this test set mostly contains scenes of low mobility and hovering behaviour, the aim is to observe whether any of the approaches are capable of distinguishing hovering behaviour from slow motion. As illustrated in Figure 3, the approaches of Bojarski et al. [14] and Kendall et al. [12] perform better at slow motion, given that most of their predicted positional values in x fall within a narrow range. Similarly, the approach of Wang et al. [13] showed a lower variance of predicted values for the y direction. In contrast, the proposed approach covers more exploratory ground due to significantly higher predicted positional values, as illustrated in Figure 3.

Here, we also evaluate the consistency of each model by superposing the resulting routes after four iterations (Figure 3). Although none repeated the same route, the proposed approach shows greater consistency of decision when the same set of images is presented, as evidenced by the overlapping (dark blue) areas in Figure 3. The approach of Bojarski et al. [14] predicts similar positional values during three out of four iterations, while the approaches of Kendall et al. [12] and Wang et al. [13] choose different directions at every iteration.

Fig. 3: 2D representation of flights using the test set. The left image shows the combined routes of four flights, which are then superposed. The middle and right images show the waypoint distributions for the (x, y) directions while flying in the default mode.
Method               NI   Reliability   Duration    Distance     Behaviour

Dense Forest
Bojarski et al.[14]   7    98.59%       14 min      496.61 m     flying
Kendall et al.[12]   49    91.79%        8 min      596.70 m     flying
Wang et al.[13]       0   100.00%        5 min      443.11 m     flying
MTRL (ours)          31    95.09%       10 min      631.55 m     flying

Snowy Mountain
Bojarski et al.[14]   0    -             6 min       65.59 m     hovering
Kendall et al.[12]    0    -             5 min      125.62 m     hovering
Wang et al.[13]       0    -            13 min        6.02 m     hovering
MTRL (ours)          27    97.25%       20 min      982.64 m     flying

Plain Field
Bojarski et al.[14]   0    -            20 min        0.00 m     hovering
Kendall et al.[12]    0    -            20 min        0.00 m     hovering
Wang et al.[13]       0    -            15 min        0.00 m     hovering
MTRL (ours)           7    99.19%       11 min      867.45 m     flying

TABLE I: Performance of each approach during autonomous flight in the forest, snowy mountain and plain field environments. NI denotes the number of interventions; Reliability is the percentage of the distance travelled without intervention.
Fig. 4: Illustration of the angular rotation of the UAV when the predicted positional values are close to zero (a) and far from zero (b).
Fig. 5: Comparison of each approach when autonomously flying under the canopy of a dense forest, over a snowy mountain and over a plain field.

From Figure 4, it can be observed that the higher the variance in the predicted positional values, the wider the resulting FoV (Field of View). At a later stage, the imagery gathered during navigation can be used to identify any object/person/animal of interest; as such, a wider FoV is undoubtedly more favourable. Based on the qualitative results in Figure 3, we can observe that the Bojarski et al. [14] and Kendall et al. [12] approaches predict values with lower variance, closer to zero; contrarily, the proposed approach tends to forecast positional values far from zero, which results in a wider angular rotation of the head.

In summary, it can be observed that the proposed approach performs better due to its ability to carry out a high-density exploration of the search perimeter by predicting waypoints that lead to the navigation of different routes that are close to each other. Besides, due to its angular rotation, the FoV of the proposed approach is significantly wider than that of the comparators.

VI-B Behaviour

During autonomous flight tests using the SITL, we observe that although the approach by Wang et al. [13] has the highest reliability rate, which makes it quantitatively better than the other approaches (Table I), it nevertheless has the worst performance when the flight mission in the dense forest environment is analysed qualitatively (Figure 5). The network fails to learn exploratory behaviour, which causes the model to predict very similar values, resulting in the UAV constantly moving forward along a single path. Additionally, this approach [13] produces significantly smaller changes in direction, pointing to a small FoV. Similar behaviour is also observed for the approaches adopted by Bojarski et al. [14] and Kendall et al. [12].

A secondary behaviour observed in [14] is the constant attempt to gain altitude above the canopy. In order to correct this behaviour, a function is created that regulates the altitude during the flight, forcing the UAV to remain under the canopy; within real-world scenarios, UAV flights are commonly monitored by a Geo-Fencing mechanism [37]. In contrast, the proposed approach traverses the greatest distance (631.55 m) with significant changes in direction, which indicates a greater FoV. Additionally, Figure 5 illustrates that the proposed approach has a precise exploratory behaviour under the canopy, characterised by low-altitude flight and constant changes in direction.

Conventionally, sensor filtering, mechanical dampers and dynamic compensation are used to reduce the effect of motor/propeller vibration before translating attitude commands into motor commands. Although smooth navigation is desirable, for the purpose of this work we remove the sensor filtering from the RCL, which results in a jittering motion. This allows us to observe anomalies such as overshooting or drifting. Since drifting is caused by a minor increase in the rotation rate of a pair of motors, applying a low-pass filter changes the rotation rate so that the resulting total force equals the gravitational force; this implies that the UAV changes its behaviour from drifting to hovering.

A clear distinction between behaviours is imperative since there is a strong relationship between generalisation and an increase in rotational rates, as depicted in Figure 5. Greater generalisation capabilities lead to a heightened rotational rate and mobility, while lower generalisation capabilities result in rotational rate values closer to zero and no generalisation results in hovering.

VI-C Generalisation

In assessing the ability of the approaches to generalise to unseen domains, we find that the approaches in [14, 12, 13] fail to generalise in the snowy mountain as well as the plain field environments (Figure 5). Since the values predicted by these approaches are mostly close to zero, the behaviour observed during the UAV flight is hovering, or slow drifting due to the simulated airflow velocity, rather than flying. The hovering behaviour is evidenced in the results from the plain field, where the values predicted by the approaches of Bojarski et al. [14], Kendall et al. [12] and Wang et al. [13] cause the UAV to remain at its horizontal position, with the only change occurring in altitude. In contrast, the proposed approach is capable of generalising in all tested environments and achieves the greatest flight distance (Table I).

Fig. 6: Heat and activation maps from the three last convolutional layers of the proposed approach, prior to flattening. Row (A) shows the results of testing in the forest environment, while row (B) shows the plain field and (C) the snowy environment.
Fig. 7: Three last convolutional layers of the Bojarski et al.[14] approach.
Fig. 8: Three last inception layers of the Kendall et al.[12] approach.
Fig. 9: Three last convolutional layers of the Wang et al.[13] approach.

The generalisation capability of the proposed approach can primarily be attributed to the shallow architecture of its network, in which learning is confined to local features commonly found across various obstacles, such as edges and depth information, rather than the global structure of the scene (Figure 6). Consequently, the proposed approach does not learn to differentiate between a tree and a rock or a branch; instead, it learns to differentiate salient objects that are to be avoided from all other elements within the scene. When tested in a new environment, our model will try to avoid areas rich in salient obstacle/object features, regardless of what this saliency may entail.

As observed in the heat maps and activation maps of Figures 6-9, the deeper the network, the more specific the knowledge acquired about the training environment. This phenomenon is mainly attributed to the fact that later layers tend not only to retain spatial information but also to learn high-level semantic information about the scene [38], [39], as is the case for the approaches of Kendall et al. [12] (Figure 8) and Bojarski et al. [14] (Figure 7). In both cases, the network highlights arbitrary regions within the heat map, which illustrates the origins of the features observed in the activation map. Here, blue areas signify lower certainty about the classification of the region of interest (ROI), while red areas denote higher certainty. Furthermore, for all three comparators [12, 14, 13], the last activation maps for both the plain field and the snowy mountain are smoother than the activation map for the forest environment, implying that the networks are less certain about their predictions in these environments than in the forest environment [38]. Contrarily, in our proposed approach, we can observe an equal representation of salient features across all three environments, which is indicative of its superior generalisation capabilities.

Put succinctly, learning very specific details about the content of a given environment can prove to be very useful when navigating within the same environment or those with close similarity in their appearance. However, this can preclude generalisation to unseen environments, which are far more likely to be encountered in real-world applications. Although all comparators [12, 14, 13] suffer from this predicament, our technique demonstrates significantly superior generalisation capabilities.

VI-D Distance and Reliability

Further experiments are also carried out to verify the validity of our approach when flying from different starting points within the forest and for a longer duration of time (Table II).

Method       NI   Reliability   Duration      Distance
MTRL (F1)     2    97.46%        4 min          78.88 m
MTRL (F2)    21    98.07%        9 min        1086.62 m
MTRL (F3)    13    98.01%        6 min         654.76 m
MTRL (F4)    31    95.09%       10 min         631.55 m
MTRL (F5)   197    90.55%       2 h 40 min    2086.61 m
MTRL (F6)   399    88.55%       5 h 37 min    2287.37 m

TABLE II: The multi-task approach in 6 distinct flight missions.

The results of the experiments presented in Table II provide a better numerical evaluation of the robustness of the proposed approach. Tests F1-F4 use a navigational loop of 150 iterations, while tests F5 and F6 use navigational loops of 1000 and 2000 iterations respectively. Experiments F4-F6 start from the initial position with coordinates (70, -450, -12), while F1-F3 use the default starting position of (0, 0, 0) within the map. The difference in the time taken to complete flight missions F1-F3 is caused by the choice of route and the amount of airflow to overcome. Regardless of the distance or the initial starting position, the level of reliability for each test is above 88%.
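A minimal sketch of the reliability metric as the percentage of total distance flown without intervention, assuming a hypothetical per-segment flight log (the paper does not specify its exact accounting):

```python
def reliability(segments):
    """segments: (distance_m, intervened) pairs, one per leg of the flight.
    Reliability is the percentage of the total distance flown autonomously."""
    total = sum(d for d, _ in segments)
    autonomous = sum(d for d, hit in segments if not hit)
    return 100.0 * autonomous / total

# hypothetical log: 490 m autonomous, 7 m flown under manual intervention
r = reliability([(490.0, False), (7.0, True)])
```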

VII Conclusion

In this work, we present a deep learning approach based on Multi-Task Regression-based Learning, which takes advantage of shallow networks by learning each task individually. Experiments indicate that our method is capable of generalising to unseen domains and has a larger coverage area than the comparators [14, 12, 13]. Our method demonstrates a more aggressive exploratory behaviour due to a wider FoV in comparison with the other approaches. Additionally, our technique is particularly suitable for search and rescue operations in any above-ground environment, as it does not require distinct pathways or GPS for navigation. An interesting direction for future research would be to investigate the efficacy of the proposed approach when deployed indoors and in areas with limited navigational space, as well as to boost performance by incorporating the task of depth estimation into the overall multi-task model.


  • [1] S. M. Adams and C. J. Friedland, “A survey of unmanned aerial vehicle (UAV) usage for imagery collection in disaster research and management,” in Int. Workshop on Remote Sensing for Disaster Response, 2011, p. 8.
  • [2] D. F. Carlson and S. Rysgaard, “Adapting open-source drone autopilots for real-time iceberg observations,” MethodsX, vol. 5, pp. 1059–1072, 2018.
  • [3] Y. Karaca, M. Cicek, O. Tatli, A. Sahin, S. Pasli, M. F. Beser, and S. Turedi, “The potential use of unmanned aircraft systems (drones) in mountain search and rescue operations,” American J. Emergency Medicine, vol. 36, no. 4, pp. 583–588, 2018.
  • [4] C. Torresan, A. Berton, F. Carotenuto, S. Filippo Di Gennaro, B. Gioli, A. Matese, F. Miglietta, C. Vagnoli, A. Zaldei, and L. Wallace, “Forestry applications of UAVs in Europe: A review,” International Journal of Remote Sensing, no. 38, pp. 2427–2447, 2017.
  • [5] Y. B. Sebbane, Intelligent Autonomy of UAVs: Advanced Missions and Future Use.   Chapman and Hall/CRC, 2018.
  • [6] C. Kanellakis and G. Nikolakopoulos, “Survey on computer vision for UAVs: Current developments and trends,” J. Intelligent & Robotic Systems, vol. 87, no. 1, pp. 141–168, 2017.
  • [7] F. J. Perez-Grau, R. Ragel, F. Caballero, A. Viguria, and A. Ollero, “An architecture for robust UAV navigation in GPS-denied areas,” J. Field Robotics, vol. 35, no. 1, pp. 121–145, 2018.
  • [8] A. I. Hentati, L. Krichen, M. Fourati, and L. C. Fourati, “Simulation tools, environments and frameworks for UAV systems performance analysis,” in Int. Conf. Wireless Communications & Mobile Computing.   IEEE, 2018, pp. 1495–1500.
  • [9] S. Shah, D. Dey, C. Lovett, and A. Kapoor, “AirSim: High-fidelity visual and physical simulation for autonomous vehicles,” in Field and Service Robotics, 2017, pp. 621–635.
  • [10] S. Shah, D. Dey, C. Lovett, and A. Kapoor, “AirSim: High-fidelity visual and physical simulation for autonomous vehicles,” in Field and Service Robotics.   Springer, 2018, pp. 621–635.
  • [11] A. P. Lamping, J. N. Ouwerkerk, N. O. Stockton, K. Cohen, M. Kumar, and D. W. Casbeer, “FlyMASTER: Multi-UAV control and supervision with ROS,” in Aviation Technology, Integration, and Operations Conference, 2018.
  • [12] A. Kendall, M. Grimes, and R. Cipolla, “PoseNet: A convolutional network for real-time 6-DOF camera relocalization,” in Int. Conf. Computer Vision, 2015, pp. 2938–2946.
  • [13] S. Wang, R. Clark, H. Wen, and N. Trigoni, “DeepVO: Towards end-to-end visual odometry with deep recurrent convolutional neural networks,” in Int. Conf. Robotics and Automation.   IEEE, 2017, pp. 2043–2050.
  • [14] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang et al., “End to end learning for self-driving cars,” arXiv preprint arXiv:1604.07316, 2016.
  • [15] B. M. Haralick, C.-N. Lee, K. Ottenberg, and M. Nölle, “Review and analysis of solutions of the three point perspective pose estimation problem,” Int. J. Computer Vision, vol. 13, no. 3, pp. 331–356, 1994.
  • [16] D. Dey, K. S. Shankar, S. Zeng, R. Mehta, M. T. Agcayazi, C. Eriksen, S. Daftry, M. Hebert, and J. A. Bagnell, “Vision and learning for deliberative monocular cluttered flight,” arXiv preprint arXiv:1411.6326, 2014.
  • [17] E. Salamí, C. Barrado, and E. Pastor, “UAV flight experiments applied to the remote sensing of vegetated areas,” Remote Sensing, vol. 6, no. 11, pp. 11051–11081, 2014.
  • [18] N. Smolyanskiy, A. Kamenev, J. Smith, and S. Birchfield, “Toward low-flying autonomous MAV trail navigation using deep neural networks for environmental awareness,” Int. Conf. Intelligent Robots and Systems, 2017.
  • [19] G. Kahn, T. Zhang, S. Levine, and P. Abbeel, “PLATO: Policy learning using adaptive trajectory optimization,” in Int. Conf. Robotics and Automation.   IEEE, 2017, pp. 3342–3349.
  • [20] B. Maciel-Pearson, P. Carbonneau, and T. Breckon, Extending Deep Neural Network Trail Navigation for Unmanned Aerial Vehicle Operation within the Forest Canopy, 2018.
  • [21] P. Drews, G. Williams, B. Goldfain, E. A. Theodorou, and J. M. Rehg, “Aggressive deep driving: Model predictive control with a CNN cost model,” arXiv preprint arXiv:1707.05303, 2017.
  • [22] F. Sadeghi and S. Levine, “CAD2RL: Real single-image flight without a single real image,” arXiv preprint arXiv:1611.04201, 2016.
  • [23] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe, “Unsupervised learning of depth and ego-motion from video,” in IEEE Conf. Computer Vision and Pattern Recognition, 2017, pp. 1851–1858.
  • [24] B. Ummenhofer, H. Zhou, J. Uhrig, N. Mayer, E. Ilg, A. Dosovitskiy, and T. Brox, “DeMoN: Depth and motion network for learning monocular stereo,” in IEEE Conf. Computer Vision and Pattern Recognition, 2017, pp. 5038–5047.
  • [25] Z. Yin and J. Shi, “GeoNet: Unsupervised learning of dense depth, optical flow and camera pose,” in IEEE Conf. Computer Vision and Pattern Recognition, 2018, pp. 1983–1992.
  • [26] A. Atapour-Abarghouei and T. P. Breckon, “Veritatem dies aperit: Temporally consistent depth prediction enabled by a multi-task geometric and semantic scene understanding approach,” in IEEE Conf. Computer Vision and Pattern Recognition, 2019.
  • [27] E. Kaufmann, A. Loquercio, R. Ranftl, A. Dosovitskiy, V. Koltun, and D. Scaramuzza, “Deep drone racing: Learning agile flight in dynamic environments,” arXiv preprint arXiv:1806.08548, 2018.
  • [28] K. Mohta, M. Watterson, Y. Mulgaonkar, S. Liu, C. Qu, A. Makineni, K. Saulnier, K. Sun, A. Zhu, J. Delmerico et al., “Fast, autonomous flight in GPS-denied and cluttered environments,” J. Field Robotics, vol. 35, no. 1, pp. 101–120, 2018.
  • [29] S. Jung, S. Hwang, H. Shin, and D. H. Shim, “Perception, guidance, and navigation for indoor autonomous drone racing using deep learning,” IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2539–2544, 2018.
  • [30] M. S. Mueller and B. Jutzi, “UAS navigation with SqueezePoseNet—accuracy boosting for pose regression by data augmentation,” Drones, vol. 2, no. 1, p. 7, 2018.
  • [31] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  • [32] B. Karis and E. Games, “Real shading in Unreal engine 4,” Physically Based Shading Theory Practice, vol. 4, 2013.
  • [33] J. Diebel, “Representing attitude: Euler angles, unit quaternions, and rotation vectors,” Matrix, vol. 58, no. 15-16, pp. 1–35, 2006.
  • [34] L. Euler, “Novi commentarii academiae scientiarum petropolitanae,” 1776.
  • [35] E. Fresk and G. Nikolakopoulos, “Full quaternion based attitude control for a quadrotor,” in Euro. Control Conference.   IEEE, 2013, pp. 3864–3869.
  • [36] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [37] P. Pratyusha and V. Naidu, “Geo-fencing for unmanned aerial vehicle,” Int. J. Computer Applications, 2013.
  • [38] C. Richter and N. Roy, “Safe visual navigation via deep learning and novelty detection,” 2017.
  • [39] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” in Int. Conf. Computer Vision, 2017, pp. 618–626.