Mini-UAV-based Remote Sensing: Techniques, Applications and Prospectives

12/19/2018
by   Tian-Zhu Xiang, et al.

The past few decades have witnessed great progress of unmanned aerial vehicles (UAVs) in civilian fields, especially in photogrammetry and remote sensing. In contrast with manned aircraft and satellite platforms, the UAV platform holds many promising characteristics: flexibility, efficiency, high spatial/temporal resolution, low cost, easy operation, etc., which make it an effective complement to other remote-sensing platforms and a cost-effective means for remote sensing. Considering the popularity and expansion of UAV-based remote sensing in recent years, this paper provides a systematic survey of the recent advances and future prospects of UAVs in the remote-sensing community. Specifically, the main challenges and key technologies of UAV-based remote-sensing data processing are first discussed and summarized. Then, we provide an overview of the widespread applications of UAVs in remote sensing. Finally, some prospects for future work are discussed. We hope this paper will give remote-sensing researchers an overall picture of recent UAV-based remote-sensing developments and help guide further research on this topic.


1 Introduction

In recent years, with the rapid development of economy and society, great changes have constantly been taking place on the earth’s surface. Thus, for the remote-sensing community, there is great demand to acquire remote-sensing data of regions of interest and to update their geospatial information flexibly and quickly [1, 2, 3].

The main means of earth observation and geospatial-information acquisition are satellite remote sensing (shown in Tab. 1), manned aviation and low-altitude remote sensing [4], shown in Fig. 1. Remote sensing based on satellites and manned aircraft often has the advantage of large-area or regional emergency monitoring with multiple sensors [5]. However, owing to satellite orbits, take-off and landing airspace, meteorological conditions, etc., these two means have some limitations, discussed as follows.

Figure 1: Remote sensing platforms of satellite, manned aviation and low-altitude UAV.
Timeliness of data.

In many time-critical remote-sensing applications, it is of great importance to acquire remote-sensing data with high temporal resolution in a timely manner. For instance, in emergency remote sensing, e.g. for earthquakes, floods and landslides, fast response is a prerequisite [6]. It is necessary to collect remote-sensing data of the disaster area promptly and frequently for dynamic monitoring and analysis of the situation. In addition, precision agriculture requires short revisit times to examine within-field variations of crop condition, so as to respond to fertilizer, pesticide and water needs [7].

However, although the revisit cycles of satellite sensors have decreased significantly, to as little as one day (shown in Tab. 1), thanks to the launch of satellite constellations and the increasing number of operating systems, it may still not be easy to respond quickly to abrupt changes or to provide multiple acquisitions per day. Manned aviation platforms are capable of collecting high-resolution data without the limitation of revisit periods, but they suffer from low maneuverability, high launch/flight costs, airspace limitations and complex logistics. Besides, the data from these two platforms are often severely limited by weather conditions (e.g. cloud cover, haze and rain), which affects the availability of remote-sensing data [8].

Spatial resolution.

Remote-sensing data with ultra-high spatial resolution (e.g. centimeter-level) plays a significant role in some fine-scale remote-sensing applications, such as railway monitoring, dam/bridge crack detection, and the reconstruction and restoration of cultural heritage [9]. Besides, numerous studies have reported that images with centimeter-level spatial resolution (5 cm or finer) hold the potential for studying the spatio-temporal dynamics of individual organisms [10], mapping fine-scale vegetation species and their spatial patterns [11], estimating landscape metrics for ecosystems [12], monitoring small coastal changes caused by erosion [13], etc. Some examples are shown in Fig. 2.

Currently, satellite remote sensing can provide high-spatial-resolution images of up to 0.3 m, but this still cannot meet the requirements of the aforementioned applications. Manned aviation remote sensing is capable of collecting ultra-high-spatial-resolution data, but it is restricted by operational complexity, cost, flexibility, safety and cloud cover.

Name GSD of PAN/MS (m) Temporal resolution (days) Nation
Planet Labs 0.725/- 1 USA
GF-2 0.8/3.2 5 China
SuperView-1 0.5/2 4 China
WorldView-4 0.31/1.24 1-3 USA
GeoEye-1 0.41/1.65 2-3 USA
Pleiades 0.5/2 1 France
SPOT-7 1.5/6 1 France
KOMPSAT-3A 0.4/1.6 1 South Korea
  • GSD: Ground sample distance; PAN: Panchromatic image; MS: Multi-spectral image.

Table 1: Some examples of optical satellite remote sensing.
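The GSD values in Tab. 1 follow directly from camera and orbit geometry. As a minimal sketch (the altitude, focal length and pixel pitch below are hypothetical UAV-camera values, not taken from the table), the standard similar-triangles relation GSD = H · p / f can be computed as:

```python
def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """Ground sample distance (m/pixel) from similar triangles:
    GSD = altitude * pixel_pitch / focal_length, all converted to meters."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# Hypothetical UAV camera: 100 m altitude, 8.8 mm lens, 2.4 um pixel pitch
gsd = ground_sample_distance(100.0, 8.8, 2.4)  # ~0.027 m, i.e. ~2.7 cm/pixel
```

The same relation explains why low-altitude UAVs reach centimeter-level resolution with consumer cameras while satellites need long focal lengths to approach 0.3 m.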
(a) Dam crack detection
(b) Buddha reconstruction
(c) Pine nematode detection
(d) Counting of crop plants
Figure 2: Examples of ultra-high spatial resolution remote sensing.
Data quality and information content.

Remote-sensing data from satellite and manned aircraft platforms are susceptible to cloud and atmospheric conditions, which attenuate electromagnetic waves and cause information loss and data degradation. Low-altitude platforms, by contrast, have the advantage of flying closer to ground objects, which significantly mitigates the effects of cloud and atmosphere. Therefore, low-altitude remote sensing can collect high-quality data with rich information and high definition, which benefits image interpretation. Meanwhile, there is no need for the atmospheric corrections required by traditional platforms [14].

Besides, satellite and manned aircraft platforms mainly deliver high-resolution orthophotos; they are unable to provide high-resolution multi-view images of facades and occluded areas, which play a central role in fine three-dimensional (3D) modeling [15]. Moreover, it has been demonstrated that multi-view information on a ground object is beneficial for analyzing the anisotropic characteristics of its reflectance and further improving remote-sensing image classification [16].

Small area remote sensing.

Satellite and manned aircraft platforms often run on fixed orbits or operate along preset regular paths. However, many small-area remote-sensing applications, e.g. small-town planning, mapping of tiny islands, updating of small-area urban geographic information, archeology, agricultural breeding and infrastructure-damage detection, demand collecting data along irregularly planned routes, modifying the route on the fly, or hovering for observation according to the task. This lack of flexibility makes traditional platforms difficult to use. Pilot safety and cost also limit the adoption of manned aircraft platforms. In addition, traditional platforms may find it difficult to acquire data in dangerous, difficult-to-access or harsh environments, such as polar remote sensing [17] and the monitoring of nuclear radiation, volcanoes and toxic liquids [6].

Consequently, to compensate for these deficiencies, remote-sensing scientists have proposed low-altitude remote-sensing platforms, such as light aircraft [18], remote-controlled aircraft or kites [19], and unmanned aerial vehicles (UAVs) [20]. Owing to their unique advantages, e.g. flexibility, maneuverability, economy, safety, high spatial resolution and data acquisition on demand, UAVs have been recognized as an effective complement to traditional platforms. In recent years, the boom in UAV technology and the advance of the small-sized, low-weight, high-precision sensors carried on these platforms have made UAV-based remote sensing (UAV-RS) a very popular and increasingly used remote-sensing technology [21].

It is also worth noting that the continuous advance of satellite constellations will improve the spatial/temporal resolution of satellite remote sensing and reduce its data-acquisition cost. Therefore, it can be predicted that, in the future, UAVs will replace manned aircraft platforms and, together with satellite platforms, become the main means of remote sensing [22].

Considering the rapid evolution of UAV-RS, it is essential to conduct a comprehensive survey of the current status of UAV-based remote sensing, in order to gain a clearer panorama of UAV-RS and promote further progress. Thus, this paper presents a review of recent advances in technologies and applications from the past few years. Some prospects for future research are also addressed.

In this paper, we focus on the mini-UAV, which has a maximum take-off weight of less than thirty kilograms [12, 20], since this type of UAV, more affordable and easier to carry and use than large-sized UAVs, is one of the most widely used in the remote-sensing community. Some examples of mini-UAVs are shown in Fig. 3. A simple example of a rotary-wing UAV-RS system is shown in Fig. 4. In this system, an infrared camera is mounted on an eight-rotor UAV to acquire thermal-radiation data around a heat-supply pipeline for the detection of heat leakage. Owing to space limitations, more detailed descriptions of the unmanned aircraft and remote-sensing sensors specially designed for UAV platforms can be found in [20, 23].

Figure 3: Some examples of mini-UAVs for remote sensing. Top: fixed-wing UAVs. Middle: rotary-wing UAVs and unmanned helicopters. Bottom: Hybrid UAVs, umbrella-UAVs, and bionic-UAVs.
Figure 4: An example of the rotary-wing UAV-based remote sensing data acquisition platform.

1.1 Related to previous surveys

No. Survey Title Ref. Year Published Content
1 Overview and Current Status of Remote Sensing Applications Based on Unmanned Aerial Vehicles (UAVs)  [23] 2015 PERS A broad review of current status of remote sensing applications based on UAVs
2 Unmanned Aerial Systems for Photogrammetry and Remote Sensing: A Review  [20] 2014 ISPRS JPRS A survey of recent advances in UAS and its applications in Photogrammetry and Remote Sensing
3 Hyperspectral Imaging: A Review on UAV-Based Sensors, Data Processing and Applications for Agriculture and Forestry  [14] 2017 RS A survey of UAV-based hyperspectral remote sensing for agriculture and forestry
4 UAS, Sensors, and Data Processing in Agroforestry: A Review Towards Practical Applications  [24] 2017 IJRS A survey of data processing and applications of UAS and sensors in agroforestry, and some recommendations towards UAS platform selection
5 Forestry Applications of UAVs in Europe: A Review  [25] 2017 IJRS An overview of applications of UAVs in forest research in Europe, and an introduction of the regulatory framework for the operation of UAVs in the European Union
6 UAVs as Remote Sensing Platform in Glaciology: Present Applications and Future Prospects  [26] 2016 RSE A survey of applications of UAV-RS in glaciological studies, mainly in polar and alpine applications
7 Recent Applications of Unmanned Aerial Imagery in Natural Resource Management  [27] 2014 GISRS A comprehensive review of applications of unmanned aerial imagery for the management of natural resources.
8 Small-scale Unmanned Aerial Vehicles in Environmental Remote Sensing: Challenges and Opportunities  [1] 2011 GISRS An introduction of challenges involved in using small UAVs for environmental remote sensing
9 Recent Developments in Large-scale Tie-point Matching  [28] 2016 ISPRS JPRS A survey of large-scale tie-point matching in unordered image collections
10 State of the Art in High Density Image Matching  [29] 2014 PHOR A review and comparative analysis of four dense image-matching algorithms, including SURE (semi-global matching), MicMac, PMVS and Photoscan
11 Development and Status of Image Matching in Photogrammetry  [30] 2012 PHOR A comprehensive survey of image matching techniques in photogrammetry over the past 50 years
12 Review of the Current State of UAV Regulations  [31] 2017 RS A comprehensive survey of civil UAV regulations on the global scale from the perspectives of past, present, and future development
13 UAVs: Regulations and Law Enforcement  [32] 2017 IJRS An introduction to the development of legislations of different countries regarding UAVs and their use
14 Unmanned Aerial Vehicles and Spatial Thinking: Boarding Education With Geotechnology and Drones  [33] 2017 GRSM A review of current status of geosciences and RS education involving UAVs
15 Unmanned Aircraft Systems in Remote Sensing and Scientific Research: Classification and Considerations of Use  [34] 2012 RS An introduction to UAS platform types, characteristics, some application examples and current regulations
16 Mini-UAV-based Remote Sensing: Techniques, Applications and Prospectives - 2018 Ours A comprehensive survey of mini-UAV-based remote sensing, focusing on techniques, applications and future development
  • This table only shows surveys published in top remote-sensing journals.

  • PERS: Photogrammetric Engineering and Remote Sensing; ISPRS JPRS: ISPRS Journal of Photogrammetry and Remote Sensing; RS: Remote Sensing; IJRS: International Journal of Remote Sensing; RSE: Remote Sensing of Environment; GISRS: GIScience & Remote Sensing; PHOR: The Photogrammetric Record; GRSM: IEEE Geoscience and Remote Sensing Magazine.

Table 2: Summarization of a number of related surveys on UAV-RS in recent years.

A number of representative surveys concerning UAV-based remote sensing have been published in the literature, as summarized in Tab. 2.

Among these are some excellent surveys on the hardware development of unmanned aerial systems, e.g. unmanned aircraft and sensors [34, 20, 23, 14, 24], but less attention has been paid to advances in UAV data-processing techniques. Some surveys focus on specific aerial remote-sensing data processing, such as image matching [28, 30] and dense image matching [29], but are not specific to UAV data. Although the research reviewed in [20, 24] presents some UAV data-processing technologies, e.g. 3D reconstruction and geometric correction, a complete survey of UAV data processing and its recent advances is still lacking. In addition, the recent striking success and potential of deep learning and related methods for the geometric processing of UAV data has not been well investigated.

Some surveys review specific applications of UAVs in the remote-sensing community, such as agriculture [14], forestry [24, 25], natural-resource management [27], environment [1] and glaciology [26]. Besides, [20] and [23] provide comprehensive reviews of UAV-RS applications, which also cover advances in remote-sensing sensors and regulations. However, recent developments in UAV-RS technology have opened up some new application possibilities, e.g. pedestrian-behavior understanding [35] and intelligent driving and path planning [36], which have not been reviewed.

Considering the problems discussed above, it is imperative to provide a comprehensive survey of UAV-RS centering on UAV data-processing technologies, recent applications and future directions, which is the focus of this survey. A thorough review and summary of existing work is essential for further progress in UAV-RS, particularly for researchers wishing to enter the field. Extensive work on other issues, such as regulations [20, 31, 32] and operational considerations [34, 12, 24], which have been well reviewed in the literature, is not included.

Therefore, this paper is devoted to presenting:

  • A systematic survey of data processing technologies, categorized into eight different themes. In each section, we provide a critical overview of the state-of-the-art, illustrations, current challenges and possible future works;

  • A detailed overview of recent potential applications of UAVs in remote sensing;

  • A discussion of the future directions and challenges of UAV-RS from the point of view of platform and technology.

The remainder of this paper is organized as follows. The main challenges and technologies of UAV-RS data processing are reviewed and discussed in Section 2. The potential applications of UAVs in the remote-sensing community are presented in Section 3. In Section 4, the current problems and future development trends of UAV-RS are explored. Finally, we conclude the paper in Section 5.

2 Techniques for data processing

In this section, the main challenges of UAV data processing are briefly introduced. Then, we discuss the general processing framework and the key technologies, as well as their recent improvements and breakthroughs.

2.1 Main challenges

Compared with satellite and manned aerial remote sensing, UAV-based remote sensing offers an unrivaled low-cost solution for collecting data at fine spatial, spectral and temporal scales. However, it also faces some special challenges, owing to substantial differences from satellite and manned aerial remote sensing in platform, flight height, sensors and photographic attitude, as well as external effects (e.g. airflow).

  1. Non-metric camera problem. Due to payload weight limitations, UAV-RS often adopts low-weight, small-size, non-metric (consumer-grade) cameras, which inevitably leads to several problems.

    • Camera geometry issue. Camera factory parameters are generally inaccurate and often affected by extraneous factors (e.g. camera shake). In addition, consumer-grade cameras suffer from serious lens distortions, such as radial and tangential distortion. These problems reduce the accuracy of data processing, especially in spatial resection and object reconstruction [37]. Thus, it is necessary to calibrate cameras rigorously before data processing.

    • Rolling-shutter issue. Most UAVs are equipped with low-cost rolling-shutter cameras. Unlike a global shutter, in the rolling-shutter acquisition mode each row is exposed in turn and thus has a different pose while the unmanned aircraft flies [38]. In addition, moving rolling-shutter cameras often produce image distortions [39] (e.g. twisting and slanting). These effects are beyond the conventional geometric models of 3D vision, so new methods for rolling-shutter cameras are strongly desired.

    • Other issues, including noise, vignetting, blurring and color imbalance, which degrade image quality.

  2. Platform instability and vibration effects. The weak wind resistance of light-weight, small-size UAVs means that remote-sensing data are collected with unstable sensor positions, which affects data quality [2, 12].

    • Data are often acquired along irregular, even curved, flight lines. This results in inconsistent image overlap, which may cause image-connection failures in aerial triangulation, especially between flight strips. Meanwhile, it also leads to complex and unordered image correspondence, making it difficult to determine which pairs of images can be matched.

    • Variable sensor attitudes may result in large rotation and tilt variations among images, and thus bring about obvious affine deformation. In addition, they can also cause large non-uniformity of scale and illumination. These issues are aggravated by complex terrain relief and present challenges for image matching [40].

  3. Large number of images and high overlap. The small fields of view (FOVs) of the cameras equipped on UAVs, together with the low acquisition height, require UAVs to capture many more photographs than conventional platforms to ensure overlap and coverage. Therefore, on the one hand, it is common for some images to cover only homogeneous, low-texture areas, which makes feature detection difficult. On the other hand, the large number of images may produce large-scale tie-point sets, which increases the difficulty and time of image matching and aerial triangulation. Besides, to guarantee overlap, images are often acquired with high overlap ratios, which may lead to short baselines and a small base-height ratio, and may thus cause unstable aerial triangulation and low elevation accuracy.

  4. Relief displacement. Due to the low acquisition altitude relative to the variation in topographic relief, UAV image processing is prone to the effects of relief displacement [41]. Such displacement can generally be removed by orthorectification if the digital elevation/surface model represents the terrain correctly. Scenes with trees or buildings remain challenging because of the large local displacements and occlusion areas with no data. Besides, the effects become obvious when mosaicking images with different amounts and directions of relief displacement, causing sudden breaks, blurring and ghosting.
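The severity of relief displacement at UAV altitudes can be seen from the classical photogrammetric relation d = r · h / H, where r is the radial distance of the image point from the nadir point, h the object height above the datum, and H the flying height. A minimal sketch with hypothetical numbers:

```python
def relief_displacement(radial_dist_px, obj_height_m, flying_height_m):
    """Radial image displacement d = r * h / H (in the units of r):
    a point at radial distance r, belonging to an object rising h above
    the datum, photographed from flying height H above that datum."""
    return radial_dist_px * obj_height_m / flying_height_m

# Hypothetical case: a 15 m building imaged 1000 px from the nadir point
# at 100 m AGL is displaced by 150 px; at a manned-aviation altitude of
# 2000 m the same geometry would give only 7.5 px.
d_uav = relief_displacement(1000, 15.0, 100.0)
d_manned = relief_displacement(1000, 15.0, 2000.0)
```

This ratio is why orthorectification errors that are negligible in high-altitude imagery become dominant in UAV mosaics.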

Owing to the challenges discussed above, traditional photogrammetric approaches, designed for well-calibrated metric cameras and regular photography, face great difficulties. Hence, rigorous and innovative methodologies are required for UAV data processing, which has become a center of attention for researchers worldwide.

Figure 5: General workflow of UAV-based remote sensing.
Figure 6: General workflow of UAV-RS data processing.

2.2 General framework

A general workflow of UAV-based remote sensing is shown in Fig. 5. To conduct data acquisition, suitable UAV platforms and remote-sensing sensors are first selected according to the remote-sensing task. More importantly, all the hardware needs to be calibrated, including the cameras and the multi-sensor combination, so as to determine the spatial relationships among the sensors and remove geometric distortions caused by the cameras. Mission planning is then designed based on the topography, weather and lighting conditions in the study area. Flight parameters, such as the flight path, flying altitude, image waypoints, flight speed, camera focal length and exposure time, need to be carefully designed to ensure data overlap, full coverage and data quality. Afterwards, remote-sensing data are collected, either autonomously according to the flight plan or under the flexible control of the ground pilot. The data are checked, and supplementary photography is performed if necessary. After data acquisition, a series of methods is applied for data processing and analysis.
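The dependence of flight parameters on overlap can be sketched with simple footprint geometry. The sketch below uses hypothetical camera values (not from any system described here): the image footprint follows from altitude, focal length and sensor size, and the spacing between exposures (or between strips) from the desired fractional overlap:

```python
def footprint(altitude_m, focal_length_mm, sensor_size_mm):
    """Ground footprint (m) covered by one image side:
    footprint = altitude * sensor_size / focal_length."""
    return altitude_m * sensor_size_mm / focal_length_mm

def waypoint_spacing(footprint_m, overlap):
    """Distance (m) between exposures or strips for a fractional overlap,
    e.g. overlap=0.8 for the 80% forward overlap typical of UAV blocks."""
    return footprint_m * (1.0 - overlap)

# Hypothetical camera: 8.8 mm lens, 8.8 mm sensor side, flown at 100 m
fp = footprint(100.0, 8.8, 8.8)       # 100 m along-track footprint
dx = waypoint_spacing(fp, 0.8)        # 20 m between exposures
```

Dividing the spacing by the flight speed then gives the required exposure interval, which is how overlap constraints translate into the flight-speed and trigger settings mentioned above.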

To illustrate UAV-based remote-sensing data processing, we take optical cameras, one of the most widely applied sensors, as an example. The general workflow of data processing is shown in Fig. 6. Specifically,

  • Data pre-processing. Images collected from UAV platforms often require pre-processing to ensure their usefulness for further processing, including camera distortion correction, image color adjustment, noise elimination, and vignetting and blur removal [42].

  • Aerial triangulation, also called structure from motion (SfM) in computer vision. Aerial triangulation aims to recover the camera pose (position and orientation) of each image and the 3D structure (i.e. sparse point clouds) from image sequences; it can also provide a large number of orientation control points for image measurement. Data from GPS and an inertial measurement unit are often used to initialize the position and orientation of each image. In computer vision, camera poses can be estimated based on image matching. Besides, image matching can also be adopted to generate a large number of tie-points and build connection relationships among images. Bundle adjustment is used to optimize the camera positions and orientations and derive the 3D structure of the scene. To meet the requirements of high-accuracy measurement, ground control points (GCPs) may be necessary for improving georeferencing, although collecting them is time-consuming and labor-intensive.

  • Digital surface model (DSM) generation and 3D reconstruction. The oriented images are used to derive dense point clouds (or DSM) by dense image matching. DSM provides a detailed representation of the terrain surface. Combining with surface reconstruction and texture mapping, a 3D model of scene can be well reconstructed.

  • Digital elevation model (DEM) and orthophoto generation. A digital elevation model describes the surface topography without the effects of raised objects, such as trees and buildings. It can be generated from either sparse or dense point clouds; the former offers lower precision but higher efficiency than the latter. After that, each image can be orthorectified to eliminate geometric distortion and then mosaicked into a seamless orthophoto mosaic at the desired resolution.

  • Image interpretation and application. Based on orthophotos and 3D models, image interpretation is performed to achieve scene understanding, including image/scene classification, object extraction and change detection. Furthermore, the interpretation results are applied in various applications, such as thematic mapping, precision agriculture and disaster monitoring.
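At the core of the aerial-triangulation step above is the quantity that bundle adjustment minimizes: the reprojection error between an observed tie-point and the projection of its reconstructed 3D position. A minimal sketch with a simple pinhole model (rotation, translation, focal length and all coordinates below are illustrative, not from any real dataset):

```python
import math

def project(X, R, t, f):
    """Project world point X through a pinhole camera: transform into the
    camera frame (x_cam = R @ X + t), then apply perspective division and
    scale by focal length f (in pixels)."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    return (f * Xc[0] / Xc[2], f * Xc[1] / Xc[2])

def reprojection_error(observed, projected):
    """Pixel residual whose sum of squares bundle adjustment minimizes
    over all camera poses and 3D points."""
    return math.hypot(observed[0] - projected[0], observed[1] - projected[1])

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]          # identity rotation
u, v = project([1.0, 2.0, 0.0], I3, [0.0, 0.0, 10.0], 1000.0)
err = reprojection_error((101.0, 199.0), (u, v))  # residual of a noisy tie-point
```

In a real SfM pipeline this residual is summed over every tie-point observation and minimized jointly over all poses and points with a sparse nonlinear least-squares solver.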

In fact, regardless of the platform from which remote-sensing data are acquired (satellite, airborne, UAV, etc.), the interpretation methods are similar [14]. Therefore, photogrammetric processing is the prominent concern in UAV-RS, and it is a challenging issue for traditional processing approaches. Methods specially designed for UAV-RS data processing have been proposed to overcome these issues. Next, the related key technologies are reviewed and summarized.

2.3 Key technologies

2.3.1 Camera calibration

Different from traditional remote-sensing data processing, camera calibration is essential for UAV-based remote sensing, owing to the adoption of light-weight, non-metric cameras that have not been designed for photogrammetric accuracy [43]. Camera calibration aims to estimate camera parameters to eliminate the impact of lens distortion on images and to extract metric information from 2D images [44]. In aerial triangulation, camera parameters, including intrinsic parameters (principal-point position and focal length) and lens-distortion coefficients (radial and tangential distortion coefficients), are often handled by pre-calibration or on-the-job calibration. The former calibrates cameras before bundle adjustment, while the latter includes the camera calibration parameters as unknowns in bundle adjustment for joint optimization and estimation. A combination of the two options is also adopted for high-accuracy data processing [45]. On-the-job calibration is often sensitive to the camera-network geometry (e.g. nadir versus oblique acquisition) and to the distribution and accuracy of ground control [37]. Thus, pre-calibration is generally an essential component of UAV-RS data processing.
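The radial and tangential coefficients mentioned above enter the widely used Brown-Conrady distortion model. As a minimal sketch (coefficient values below are illustrative, not calibrated for any real lens), applying the model to normalized image coordinates looks like:

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply Brown-Conrady distortion to normalized image coordinates
    (x, y): radial terms scale with powers of r^2 via k1, k2; tangential
    terms p1, p2 model a lens slightly decentered from the sensor."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# Illustrative barrel distortion (k1 < 0): a point at x = 0.5 is pulled
# toward the principal point; the principal point itself is unaffected.
xd, yd = distort(0.5, 0.0, -0.1, 0.0, 0.0, 0.0)
```

Calibration estimates k1, k2, p1, p2 (plus the intrinsics) so that this mapping can be inverted to undistort images before or within bundle adjustment.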

(a) 3D physical calibration field
(b) Checkerboard calibration [46]
(c) Dual LCD-based method [47]
(d) AprilTag-based method [48]
Figure 7: Examples of camera calibration.

In camera calibration, pinhole cameras are often calibrated based on the perspective projection model, while fisheye lenses are modeled with a spherical model, orthogonal projection, a polynomial transform model, etc. [49]

Methods for camera calibration and distortion correction can be broadly classified into two categories: reference-object-based calibration and self-calibration. Reference-object-based calibration can be performed easily using the projected images of a calibration array, as shown in Fig. 7. The most rigorous method is based on a laboratory 3D physical calibration field, where coded markers are distributed in three dimensions at accurately known positions [50]. This method provides highly precise calibration parameters, but it is costly, inconvenient and unsuitable for the frequent recalibration required in UAV-RS. An alternative low-cost solution is based on 2D calibration patterns, e.g. the checkerboard [46], the completely flat LCD-based method [47] and the AprilTag-based method [48]; such patterns have been demonstrated to achieve accuracy close to that of a 3D physical calibration field. Different patterns are designed to improve the accuracy and ease of feature detection and recognition under various conditions.

It is worth noting that reference-object-based calibration usually requires pre-prepared calibration patterns and extra manual operations, which makes it laborious and time-consuming. By contrast, self-calibration, which depends on structural information detected in images without requiring special calibration objects, is more flexible and efficient. It has therefore become an active research topic in recent years, especially for the automatic rectification and calibration of fisheye images.

Among these methods, geometric structures (e.g. conics, lines and plumb lines) are first detected [44, 51, 52]. Given at least three conics in a distorted image, the camera intrinsic parameters can be obtained from the decomposition of the absolute conic. Fisheye images are generally rectified based on the assumption that straight lines should retain their straightness even after projection through a fisheye lens. Several approaches have been proposed to extract geometric structures, such as the extended Hough transform [53] and multi-label energy optimization [54]. However, the effectiveness of rectification is often limited by the accuracy of geometric-structure detection. More recently, methods based on deep convolutional neural networks (CNNs) have been proposed, which try to learn more representational visual features to rectify the distorted image [55]. In [56], an end-to-end deep CNN was proposed that learns semantic information and low-level appearance features simultaneously to estimate the distortion parameters and correct the fisheye image. However, this method does not consider geometric characteristics, which are strong constraints for rectifying distorted images. To this end, Xue et al. [57] designed a deep network that exploits distorted lines as explicit geometric constraints to recover the distortion parameters of the fisheye camera and rectify the distorted image.

Figure 8: Rectification examples of fisheye image. From left to right are the results by Bukhari [53], AlemnFlores [58], Rong [55] and Xue [57].

Some rectification examples of fisheye images based on self-calibration are shown in Fig. 8. The quantitative evaluation on a fisheye dataset is reported in Tab. 3. It can be seen that the deep-CNN-based method [57] achieves excellent rectification performance for fisheye images. Although promising performance has been achieved on fisheye image rectification, some challenges remain to be solved. Encoding other geometric cues, such as arcs and plumb lines, into CNNs is still a challenging issue. Besides, designing robust geometric-feature detection methods, especially in the case of noise or low texture, is also in demand. Another important issue is to improve the accuracy of self-calibration until it is comparable with that of reference-object-based methods.

Methods: Bukhari [53] | AlemnFlores [58] | Rong [55] | Xue [57]
PSNR: 11.47 | 13.95 | 12.52 | 27.61
SSIM: 0.2429 | 0.3922 | 0.2972 | 0.8746
RPE: 164.7 | 125.4 | 121.6 | 0.4761
Table 3: Quantitative evaluation of rectification on the fisheye image dataset from Xue [57], using PSNR, SSIM and reprojection error (RPE).
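As a concrete reference for the PSNR scores in Tab. 3, the measure compares a rectified image against its ground truth pixel by pixel; the following is a minimal numpy sketch (the helper name and the `max_val` default are illustrative assumptions, not code from the cited works):

```python
import numpy as np

def psnr(reference, rectified, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a ground-truth image and a
    rectified image. Higher means the rectification is closer to the truth."""
    reference = np.asarray(reference, dtype=np.float64)
    rectified = np.asarray(rectified, dtype=np.float64)
    mse = np.mean((reference - rectified) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A perfect rectification gives infinite PSNR, while the worst case (every pixel off by the full dynamic range) gives 0 dB, which puts the gap between roughly 12 dB and 27 dB in Tab. 3 in perspective.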

2.3.2 Combined field of view

Because of the low flight altitude and the narrow FOV of cameras carried by UAVs, UAV-RS often acquires images with a small ground coverage area, which increases the number of images, flight lines, flight costs and data-collection time [59].
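The coverage trade-off can be made concrete with the pinhole scale relation: ground footprint grows linearly with altitude, and the image count for a survey block follows from the footprint and the chosen overlaps. A rough sketch (all parameter values below are illustrative assumptions):

```python
import math

def ground_footprint(sensor_w_mm, sensor_h_mm, focal_mm, altitude_m):
    """Ground coverage (width, height) in metres of one nadir image,
    from the pinhole image scale altitude / focal_length."""
    scale = altitude_m / (focal_mm / 1000.0)  # metres on ground per metre on sensor
    return (sensor_w_mm / 1000.0 * scale, sensor_h_mm / 1000.0 * scale)

def images_needed(area_w_m, area_h_m, footprint, forward_overlap=0.75, side_overlap=0.5):
    """Rough image count for a rectangular survey block with given overlaps."""
    fw, fh = footprint
    step_x = fw * (1.0 - side_overlap)     # spacing between flight lines
    step_y = fh * (1.0 - forward_overlap)  # spacing along a flight line
    lines = math.ceil(area_w_m / step_x)
    per_line = math.ceil(area_h_m / step_y)
    return lines * per_line
```

For a full-frame camera (36 x 24 mm, 35 mm lens) at 100 m altitude, a single image covers only about 103 x 69 m, which illustrates why a modest survey area already requires hundreds of images.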

One alternative solution is a combined wide-angle camera that uses multiple synchronized cameras. The images acquired from the multi-camera combination system (i.e. an equivalent large-array camera) are rectified, registered and mosaicked to generate a larger virtual image, which enlarges the coverage area [50]. In contrast to a single narrow-FOV camera, the combined wide-angle approach increases acquisition efficiency and enlarges the base-height ratio. Besides, it also benefits image connection, especially in windy conditions. Another advantage is the ability to obtain multi-view images by oblique acquisition, which can overcome photographic dead areas and sheltered targets. In [60], a combined wide-angle camera is used for photogrammetric surveying and 3D building reconstruction. Fig. 9 shows an example of a four-camera system.

Figure 9: Left: the four-camera combination system in [60]. Right: the overlapping layout of the images projected from the four cameras.

The combined wide-angle camera has been well studied in the UAV-RS community. However, it remains challenging to improve acquisition efficiency for larger-area mapping. An emerging opportunity is multi-UAV collaboration, which uses fleets of simultaneously deployed “swarming” UAVs to achieve a remote-sensing goal. Besides improving spatial coverage and efficiency, multi-UAV collaboration overcomes the spatial-range limitations of a single platform, improves reliability through redundancy, and allows simultaneous intervention in different places [61, 23]. Each vehicle can transmit either the collected data or the processed results to ground workstations for further processing or decision-making. Data can also be shared between vehicles to guide optimal collaboration. For instance, in [62], a fleet of UAVs equipped with various sensors (infrared and visual cameras, and fire detectors) cooperated for automatic forest-fire detection and localization using a distributed architecture. Heterogeneous sensors increase the complexity of data processing, but they make it possible to exploit the complementarity of vehicles in different locations and flight attitudes and of sensors with different perception abilities. Beyond multiple UAVs, collaboration can also be performed between UAVs and other remote-sensing platforms, e.g. unmanned ground vehicles and unmanned marine surface vehicles [63].

Multi-UAV collaboration has become an effective means of collecting accurate and massive information and has received increased attention recently. It has been widely used in commercial performances, although it is worth noting that accidents of multi-UAV systems have been reported. There is still a long way to go before multi-UAV systems find broad application in the remote-sensing community. Several problems are worth the effort, such as system resilience, complexity and communication between the UAVs, navigation and cooperative control in harsh conditions, environmental sensing and collision avoidance, detection of anomalies within the fleet, and disruption handling, including environmental obstacles, signal interference and attacks [50, 64]. Besides, how to configure the number of UAVs and plan flight routes to achieve optimal efficiency and performance is also a challenging issue [65, 66].

2.3.3 Low-altitude UAV image matching

Image matching is one of the fundamental technologies in photogrammetry and computer vision, and is widely used in image registration, image stitching, 3D reconstruction, etc. [67, 68, 69]. It is a long-standing and challenging task, especially for UAV images, due to strong geometric deformations (e.g. affine distortion), viewpoint changes, radiation/illumination variations, repetitive or low texture, and occlusion. Although numerous matching algorithms have been proposed over the last decades [30], they may fail to provide good performance for low-altitude UAV images.

(a) Matching nadir and oblique images [70]
(b) Matching ground to aerial images [71]
(c) Matching UAV image to geo-reference images [68]
Figure 10: Low-altitude UAV image matching.
  (a) Multi-view image matching. Multi-view photography can acquire data from nadir and side-looking directions, especially in UAV-based oblique photogrammetry. However, this special data-collection manner makes image matching, e.g. between vertical and oblique images, remarkably difficult because of the obvious differences in appearance caused by the wide baseline and large viewpoint changes, especially affine deformations [72].

    Some attempts have been made to create local descriptors invariant to affine distortion, such as maximally stable extremal regions (MSER), Harris/Hessian-affine, affine-SIFT (ASIFT) and MODS [73]. Although they can handle images with viewpoint variations, they either provide a small number of correspondences or suffer from excessive time consumption and memory occupancy. Besides, these methods are not designed specifically for UAV cases and may have difficulty producing evenly distributed correspondences in images with unevenly distributed texture.

    Two strategies are usually proposed to handle affine deformations in UAV image matching. One is to perform multi-view image matching based on MSER. The local regions are often normalized to circular areas, on which interest points are selected and matched. Considering the small quantity and uneven distribution of matching pairs, some geometric constraints, e.g. the local homography constraint, can be used to guide propagative matching [74]. The other is to apply geometric rectification before image matching [40]. If images collected by UAVs come with rough or precise exterior-orientation elements and camera installation parameters, these can be used for geometric rectification of oblique UAV images to relieve perspective deformations. With conventional descriptor-matching methods, sufficient and well-distributed tie-points are then extracted and matched. The oblique images can also be rectified by coarse initial affine-invariant matching [73]. To achieve reliable feature correspondence, spatial relationships and geometric information can be adopted to guide the matching process and remove outliers, e.g. the local position constraint, cyclic angular-ordering constraint and neighborhood-conserving constraint in [72].

    To obtain matching pairs that are as evenly distributed as possible, divide-and-conquer and tiling strategies are often adopted [40]. Images are split into blocks, and features are extracted and matched from the corresponding blocks. The number of points in each block can be adaptively determined by information entropy [75, 76].

    Although significant progress has been achieved in UAV multi-view image matching, there is still plenty of room for improvement. Owing to the powerful feature-representation ability of deep CNNs and their huge success in image classification and target detection, deep learning has recently seen explosive growth in image matching [77]. Deep neural networks have been designed to learn local feature detectors, such as temporally invariant learned detectors trained on pre-aligned images from different times and seasons [78], and the covariant local feature detector, which regards feature detection as a transformation-regression problem [79]. In fact, limited progress has been made in deep feature detection, due to the lack of large-scale annotated data and the difficulty of defining keypoints clearly. By contrast, great efforts have been made to develop learned descriptors based on CNNs, which have obtained surprising results on some public datasets. Feature descriptors are often learned by Siamese or triplet networks with well-designed loss functions, such as the hinge loss, SoftPN, joint loss and global orthogonal regularization [80]. Besides, some geometric information can be integrated to facilitate local-descriptor learning, e.g. patch similarity and image similarity in [81]. In [82], image matching is considered a classification problem: an attention mechanism is exploited to generate a set of probable matches, from which true matches are separated by a Siamese hybrid CNN model.

    However, it is well known that deep learning-based image matching requires large annotated datasets, while existing datasets are often small or lack diversity. The limited data sources reduce the generalization ability of deep models, which may cause poorer performance than hand-crafted descriptors [81]. Although a diverse and large-scale dataset, HPatches, has been released recently, it is not constructed from UAV-RS images.

  (b) Matching with non-UAV images. UAV images are often co-registered with existing georeferenced aerial/satellite images to locate ground control points for spatial-information generation and UAV geo-localization [83]. To increase the number of keypoints, the boundaries of super-pixels can be adopted as feature points, followed by a one-to-many matching scheme that yields more matching hypotheses [68]. Geometric constraints based on pixel distances to correct matches are employed for mismatch removal in repetitive image regions. Considering the variation of illumination between UAV and satellite images, illumination-invariant image matching based on phase correlation has been proposed to match on-board UAV image sequences to pre-installed reference satellite images for UAV localization and navigation [84].

    Matching UAV images with ground/street-view images is a huge challenge due to the drastic changes in viewpoint and scale, which cause traditional descriptor-based matching to fail. Some approaches attempt to warp the ground image to the aerial view to improve feature matching [85]. Besides, in [86], the matching problem is considered a joint regularity-optimization problem, where the lattice tile/motif is used as a regularity-based descriptor for facades. Three energy terms, i.e. edge shape context, Lab color features and Gabor filter responses, are designed to construct the matching cost function. Another promising method is to employ CNNs to learn representations for matching between ground and aerial images. In [87], a cross-view matching network was developed to learn local features and then form global descriptors that are invariant to large viewpoint changes for ground-to-aerial geo-localization. In addition, to handle image matching across large scale differences, which requires small-scale features to establish correspondences, Zhou et al. [71] divided the image scale space into multiple scale levels and encoded it into a compact multi-scale representation by bag-of-features. Matching then restricts the correspondence search of query features to a limited related scale space, and thus improves the accuracy and robustness of feature matching under large scale variations.

  (c) Challenges. Though tremendous efforts have been devoted to low-altitude image matching, many problems remain to be considered, as follows.

    • Besides interest points, geometric structure features that carry more information, e.g. lines, junctions, circles and ellipses, can also play a significant role in multi-view image matching, especially in urban scenarios [88, 89]. Geometric features are often invariant to radiometric changes and scene variation over time. Only a small amount of work has concentrated on line-based image matching [90]; more effort on image matching based on geometric features is worthwhile.

    • Deep learning-based image matching is a promising direction for UAV image matching. However, the lack of large-scale annotated datasets from UAV data hinders the development of novel and more powerful deep models. Geometric information (e.g. local coplanarity) is often overlooked in the learning process, although it could be encoded into deep neural networks to improve matching performance. Besides feature detection and description, geometric verification can also be encoded into neural networks for outlier rejection [91]. Moreover, how to learn detectors and descriptors for structural features with CNNs is also a challenge.

    • Cross-view image matching has drawn a lot of attention in recent years. It plays an important role in image-based geo-localization and street-to-aerial urban reconstruction. However, large viewpoint/scale differences must be well handled; more powerful deep models or more effective scale-space image-encoding approaches are in demand.
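The entropy-driven tiling strategy mentioned earlier (splitting an image into blocks and budgeting keypoints by local texture richness, in the spirit of [75, 76]) can be sketched as follows; the grid size, bin count and point budget are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

def block_entropy(block, bins=32):
    """Shannon entropy (bits) of a block's intensity histogram."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def allocate_keypoints(image, grid=(4, 4), total_points=2000):
    """Split a grayscale image into grid blocks and allocate the keypoint
    budget in proportion to each block's intensity entropy, so textured
    regions receive more points and uniform regions fewer."""
    h, w = image.shape
    gy, gx = grid
    entropies = np.zeros(grid)
    for i in range(gy):
        for j in range(gx):
            block = image[i * h // gy:(i + 1) * h // gy,
                          j * w // gx:(j + 1) * w // gx]
            entropies[i, j] = block_entropy(block)
    weights = entropies / max(entropies.sum(), 1e-12)
    return np.round(weights * total_points).astype(int)
```

In practice a small floor per block is usually added so that weakly textured blocks still contribute some tie-points for block connectivity.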

Item: Incremental | Global | Hierarchical
Match graph initialization: Initialized by selected seed image pairs | All images are treated equally | Atomic models
Camera registration: Perspective-n-Point (PnP), 2D-3D correspondences | Rotation and translation averaging | 3D-3D fusion
Bundle adjustment: Iterative, many times | One time | BA when merging
Advantages: Robust, high accuracy, good completeness of reconstructed scene | Evenly-distributed errors, high efficiency | Fewer BA steps
Disadvantages: Prone to drifting errors, low efficiency | Prone to noisy pairwise matches, relatively low accuracy, low completeness of reconstructed scene | Model merging, graph partition
Tools: Bundler, OpenMVG, VSFM, MVE, ColMap | OpenMVG, 1DSfM, DISCO, Theia | Research papers
Table 4: Comparison of three SfM paradigms. (Refer to: Tianwei Shen, Jinglu Wang, Tian Fang, Long Quan, Large-scale 3D Reconstruction from Images, ACCV tutorial, 2016.)

2.3.4 Low-altitude automatic aerial triangulation

Aerial triangulation, namely recovering camera poses and the 3D structure of a scene from 2D images, is a fundamental task in photogrammetry and computer vision. For manned aerial photogrammetry, which collects images vertically, automatic aerial triangulation (AAT) has been well studied [92]. For UAV-based photogrammetry, however, it has been demonstrated that the long-established and proven photogrammetric AAT cannot handle UAV blocks [93]. This is because low-altitude UAV-RS breaks the acquisition mode of traditional photogrammetry (discussed in 2.1) and does not meet the assumptions of conventional AAT [94].

In the last few years, structure from motion (SfM) has brought new light to low-altitude UAV AAT [95]. SfM estimates the 3D geometry of a scene (structure), the poses of cameras (motion) and possibly the camera intrinsic calibration parameters simultaneously, without requiring either camera poses or GCPs to be known prior to scene reconstruction [96]. Tests applying SfM software to UAV-based aerial triangulation have demonstrated that SfM can overcome the obstacles of irregular UAV blocks for robust low-altitude UAV AAT [20].
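The quantities SfM recovers can be illustrated with the forward model it inverts: a pinhole camera projecting known 3D points, and the reprojection error that bundle adjustment minimizes. A toy sketch (the matrices in the usage below are made-up values, not from any cited system):

```python
import numpy as np

def project(K, R, t, points_3d):
    """Project world points to pixels with the pinhole model x ~ K (R X + t)."""
    cam = R @ points_3d.T + t.reshape(3, 1)  # world -> camera coordinates
    uv = K @ cam
    return (uv[:2] / uv[2]).T                # perspective division

def reprojection_rmse(K, R, t, points_3d, observed_uv):
    """Root-mean-square reprojection error in pixels, the quantity
    bundle adjustment minimizes over all poses and points."""
    residuals = project(K, R, t, points_3d) - observed_uv
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))
```

With a focal length of 1000 px, principal point (500, 500), identity rotation and the camera 5 m from the points, the world origin projects to the image center and the error against its own projection is zero; SfM searches for the `R`, `t` and 3D points that drive this residual down across all images.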

  (a) Structure from motion. SfM is generally divided into three types, incremental, global and hierarchical SfM, according to how camera poses are initialized. A simple comparison of these three SfM paradigms is given in Tab. 4. Besides, to make full use of both incremental and global SfM, hybrid SfM has been proposed, which estimates camera rotations in a global way based on adaptive community-based rotation averaging, and estimates camera centers in an incremental manner [97]. To achieve city-scale sparse reconstruction, Zhu et al. [98] grouped cameras, performed local incremental SfM in each cluster, and then conducted global averaging between clusters. The hybrid SfM method inherits both the robustness of the incremental manner and the efficiency of the global manner. However, repeated BA is still needed when estimating camera centers, which calls for more effort.

    Recently, semantic information has been integrated into sparse reconstruction [99]. These methods treat semantic SfM as a maximum-likelihood problem that jointly estimates semantic information (e.g. object classes) and recovers the geometry of the scene (camera poses, objects and points). However, due to their large memory and computational cost, these methods are often limited to small scenes and low resolutions. Besides, semantic information can also be used to constrain feature matching and bundle adjustment through semantic consistency [100].

  (b) Image orientation. In SfM, camera poses are often estimated from feature correspondences by solving the perspective-n-point problem and then optimized by BA. Alternatively, external orientation sensors can be adopted for camera-pose estimation. If a UAV is equipped with a high-quality GPS/IMU, the positions and orientations of cameras can be estimated from GPS/IMU data directly without the need for GCPs, namely direct sensor orientation or direct georeferencing [101]. Besides, orientation parameters from the GPS/IMU can be used to initialize camera poses, which are then integrated into aerial triangulation for bundle adjustment, i.e. integrated sensor orientation. However, UAVs are often mounted with low-accuracy navigation sensors, due to payload limitations and the high cost of lightweight, high-precision navigation systems. Therefore, ground control points are adopted for high-precision aerial triangulation, called indirect sensor orientation, which is time-consuming and laborious.

    Existing SfM approaches generally rely heavily on accurate feature matching. Failures may be caused by low or absent texture, stereo ambiguities and occlusions, which are common in natural scenes. To break through these limitations, deep models have recently been applied to camera-pose estimation and localization [102]. In [103], PoseNet is designed to regress the camera pose from a single image in an end-to-end manner. Besides, traditional SfM has been modeled by learning monocular depth and ego-motion in a coupled way, which can handle dynamic objects by learning an explainability mask [104, 105]. However, the accuracy of these methods is far from that of traditional SfM. Besides, they are dataset-dependent and struggle to generalize well. Building more diverse datasets and encoding more geometric constraints into deep models are worth the effort.

    Figure 11: Image-based multi-view 3D reconstruction. Based on UAV images, SfM is performed to estimate camera poses and sparse 3D structure. Dense reconstruction is then adopted to generate dense 3D scene structure. Surface reconstruction is conducted to generate a surface model. After texture mapping, the real 3D model is reconstructed.
  (c) SfM for rolling-shutter cameras (RSC). Most off-the-shelf cameras are equipped with a rolling shutter due to its low manufacturing cost. However, the row-wise exposure delay brings about some problems. In the presence of camera motion, each row is captured in turn and thus with a different pose, which causes severe geometric artifacts (e.g. skew and curvature distortions) in the recorded image. This defeats classical global-shutter geometric models and results in severe errors in 3D reconstruction. Thus, new methods adapted to RSCs are strongly desired.

    Some works contribute to correcting rolling-shutter distortions [106]. One way is to use inter-frame correspondences to estimate the camera trajectory and register frames; the continuity and smoothness of camera motion between video frames can also be exploited to improve performance. Another way is to perform correction as an optimization problem based on straightness, angle and length constraints on detected curves to estimate the camera motion and thus rectify the rolling-shutter effect; this method is sensitive to feature choice and extraction. Recently, CNNs have been adopted to automatically learn the interplay between scene features and the row-wise camera motion and correct the distortions [107]. Large-scale datasets are obviously required; such CNNs are often trained on synthetic datasets that may differ from real cases, but this is a promising approach.

    Rolling-shutter effects have also been modeled within conventional SfM [108, 109]. The complex RSC model is shattered into a constellation of simple, global-shutter, linear perspective feature cameras. The pose (i.e. rotation and translation) of each feature is linearly interpolated between successive key poses according to its vertical position in the image. Usually, linear interpolation is used for translation and spherical linear interpolation for rotation. In general, one may insert as many key poses as there are tracked features.

  (d) Challenges. Although aerial triangulation/SfM is a long-standing problem, it still faces many challenges, such as very large-scale and high-efficiency SfM, AAT with arbitrary images, and AAT with multi-source data (ground/street images and UAV images). Besides, there is still a long way to go for semantic SfM and deep CNN-based camera-pose estimation.
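The per-row pose interpolation used for rolling-shutter modeling (linear for translation, spherical linear for rotation) can be sketched with unit quaternions; this is a minimal illustration of the idea, not code from [108, 109]:

```python
import numpy as np

def slerp(q0, q1, alpha):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0:            # take the shorter arc on the quaternion sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:       # nearly parallel: fall back to normalized lerp
        q = q0 + alpha * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - alpha) * theta) * q0
            + np.sin(alpha * theta) * q1) / np.sin(theta)

def row_pose(row, n_rows, t0, t1, q0, q1):
    """Pose of image row `row` between two key poses: translation is
    linearly interpolated, rotation is slerp-interpolated."""
    alpha = row / (n_rows - 1)
    t = (1 - alpha) * np.asarray(t0, float) + alpha * np.asarray(t1, float)
    return t, slerp(q0, q1, alpha)
```

Each image row thus gets its own global-shutter pose, which is exactly what allows the rolling-shutter camera to be treated as a constellation of simple perspective cameras.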

2.3.5 Dense reconstruction

A complete workflow of 3D reconstruction includes structure from motion, dense reconstruction, surface reconstruction and texture mapping [15], as shown in Fig. 11. Once a set of UAV images is oriented, i.e. the camera poses are known, the scene can be densely reconstructed by dense image matching (i.e. multi-view stereo matching), the focus of this section.

  (a) Multi-view stereo (MVS) reconstruction. Numerous multi-view stereo algorithms have been proposed, e.g. semi-global matching, patch-based methods and visibility-consistent dense matching [29]. To search for correspondences, similarity or photo-consistency measures are often adopted to estimate the likelihood that two pixels (or groups of pixels) correspond. The most common photo-consistency measures include normalized cross-correlation, the sum of absolute or squared differences, mutual information, census, rank, dense feature descriptors, gradient-based measures and bidirectional reflectance distribution functions [110]. MVS is often formulated as a function of illumination, geometry, viewpoints and materials, and can thus be regarded as a constrained optimization problem solved by convex optimization, Markov random fields, dynamic programming, or graph-cut/max-flow methods [29].

    Most conventional multi-view stereo matching methods have been adopted directly for UAV image-based surface reconstruction [111]. Considering the perspective distortions in oblique images, epipolar rectification based on the cost of angle deformation can be performed before MVS matching [112]. To minimize the influence of boundaries, a hierarchical and adaptive phase correlation has been adopted to estimate the disparity of UAV stereo images [113]. Besides, some refinements have been proposed to improve the performance of conventional methods, including graph networks, image grouping and self-adaptive patches [70].

  (b) Learning-based MVS. Conventional methods use hand-crafted similarity metrics and engineered regularizations to compute dense matching, and are easily affected by sudden changes in brightness and parallax, repetitive or absent texture, occlusion, large deformations, etc.

    Recent success in deep-learning research has attracted interest in improving dense reconstruction. Numerous works apply CNNs to learn pairwise matching costs [114] and cost regularization [115], and also perform end-to-end disparity learning [116]. However, most methods focus on two-view stereo matching and are non-trivial to extend to multi-view scenarios. Furthermore, such extended operations do not fully utilize the multi-view information and lead to less accurate results. Besides, input images may come from arbitrary camera geometries.

    There are relatively few works on learned MVS. SurfaceNet [117] and Learned Stereo Machines [118] encode camera information in the network to form the cost volume and use 3D CNNs to infer the surface voxels. However, these methods are limited by the huge memory consumption of 3D volumes and thus only handle small-scale reconstructions. DeepMVS [119] instead takes a set of plane-sweep volumes for each neighbor image as input and produces high-quality disparity maps, and can handle an arbitrary number of posed images. MVSNet [120] builds the 3D cost volume upon the camera frustum instead of the regular Euclidean space and produces one depth map at a time, making large-scale reconstruction possible. However, because the annotated data lack complete ground-truth mesh surfaces, this method may deteriorate at occluded pixels. The work in [121] provides comparative experiments and demonstrates that deep learning-based methods and conventional methods perform at almost the same level, while deep learning-based methods have better potential to achieve good accuracy and reconstruction completeness.

  (c) Challenges. Although great success has been achieved, some challenges remain that need more effort, as follows.

    • Specular-object reconstruction. Most MVS algorithms impose a strong Lambertian assumption on objects or scenes; however, there are many specular or isotropically reflecting objects in man-made environments. Multi-view reconstruction of these glossy surfaces is a challenging problem. One promising approach may be to adopt generative adversarial networks to transfer multiple views of objects with specular reflections into diffuse ones [122].

    • Dynamic scene modeling. Most existing 3D-reconstruction methods assume a static rigid scene. How to reconstruct dynamic scenes is a challenging issue. One possible way is to pre-segment the scene into regions that are locally rigid and apply rigid SfM and MVS to each region [123].

    • Multi-source 3D data fusion. Few attempts have been made to fuse aerial and ground-based 3D point clouds or models [124]. The large differences in camera viewpoint and scale make the alignment of aerial and ground 3D data tricky. Moreover, it is also difficult to reconstruct a single consistent 3D model that is as large as an entire city yet has details as small as individual objects.
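Among the photo-consistency measures listed above, normalized cross-correlation (NCC) is the most common; a minimal sketch for two same-size patches follows (the small `eps` guard is our own addition against constant patches):

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalized cross-correlation between two same-size patches, in [-1, 1].
    Mean subtraction and norm division make it invariant to affine intensity
    changes a*I + b, which is why it is popular as an MVS photo-consistency
    measure under varying exposure."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```

In an MVS pipeline this score is evaluated over candidate depths along a ray, and the depth maximizing the (possibly regularized) score across views is kept.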

2.3.6 Image stitching

Due to the small footprint of UAV images, it is essential to develop automatic image stitching/mosaicking techniques that combine multiple images with overlapping regions into a single large, seamless composite image with a wide FOV or panorama. Image stitching generally includes geometric correction and image composition. Images acquired from different positions and attitudes are registered to an identical mosaic or reference plane during geometric correction, and the inconsistencies in geometry and radiometry (e.g. color or brightness) among the geometrically corrected images are then mitigated or eliminated by image composition. Some examples of image stitching are shown in Fig. 12. According to the method used for geometric correction, image stitching can be divided into ortho-rectification-based stitching and transformation-based stitching, detailed below. Image composition, including seam-line generation, color correction and image blending, is generally similar to that of other remote-sensing platforms; owing to space limitations, we refer interested readers to several papers [125, 126, 127] for detailed descriptions.

(a) Ortho-rectification based stitching. Left: inaccurate mosaic map generated by the direct georeferencing using the original inaccurate IMU/GPS data. Right: mosaic map generated based on registration with the reference map in  [128].
(b) Transformation based stitching. Automatically constructed urban panorama with 14 wide-baseline images based on mesh-optimization stitching method proposed in [129].
Figure 12: Examples of image stitching.
  (a) Ortho-rectification-based image stitching. Ortho-rectification-based image stitching is the essential step in generating digital orthophoto maps, which are used for photogrammetric recording and documentation and serve as base maps for remote-sensing interpretation. Images are often ortho-corrected based on camera poses and 3D terrain information (e.g. DEMs/DSMs and GCPs) to reduce geometric deformation and achieve spatial alignment in the same geographic coordinate system. In [101], DEMs/DSMs are generated from SfM point clouds and then transformed into real-world coordinates based on direct/indirect/integrated georeferencing. In [130], images are corrected by global transformations derived from the relationships between GCPs and the corresponding image points. Considering the inaccuracy of exterior orientation from GPS/IMU and the difficulty of acquiring GCPs, another approach to ortho-rectification is registration with an aerial/satellite orthorectified map [128]. This approach is more efficient and convenient because it avoids complex aerial triangulation and DEM generation as well as the laborious acquisition of GCPs; however, a reference map is a mandatory prerequisite.

  (b) Transformation-based image stitching. Ortho-rectification-based stitching can rectify geometric distortions and provide geographic coordinates; however, it is generally computationally complex and time-consuming, which makes it unsuitable for time-critical remote-sensing applications [131] such as disaster emergencies and security monitoring. Transformation-based stitching instead provides an efficient mosaicking method based on transformations calculated from matched correspondences between adjacent images [132].

    A simple approach is to exploit one global transformation to align all images [133]. However, this only works well under the assumption of a roughly planar scene or parallax-free camera motion [67], which is violated in most UAV-based data-acquisition cases. Though advanced image composition can mitigate the stitching artifacts generated by such methods, the artifacts remain where there is misalignment or parallax.

    To this end, spatially-varying warping methods have been proposed for image alignment. One line of work adopts multiple local transformations to align images locally, including as-projective-as-possible warping [134] and the elastic local alignment model [135]. Another treats registration as an energy-optimization problem with geometric or radiometric constraints based on a mesh-optimization model [136, 129]. Local transformations can also be integrated with mesh models to provide good stitching [137]. Spatially-varying warping models can handle moderate parallax and provide satisfactory stitching performance, but they often introduce projective distortions, e.g. perspective and structural distortions, due to the nonlinearity of these transformations. Some methods have been proposed to handle such distortions, such as the global similarity prior model [138] and the structural constraint model [137], but more effort should be put into stitching images accurately with reduced distortion.

    Another approach is seam-guided image stitching [139], which holds potential for handling large parallax. Multiple transformation hypotheses are estimated from different groups of feature correspondences; seam-line quality is then used to evaluate the alignment performance of each hypothesis and select the optimal transformation. Since this approach adopts a single local transformation for global alignment, it can get trapped when handling images of complex multi-plane scenes.

  (c) Challenges. Although numerous stitching methods have been developed, stitching remains an open problem, especially with respect to efficiency, registration accuracy and reduced distortion. More work should be devoted to high-efficiency/real-time stitching, large-parallax stitching and distortion handling in the future. Besides, there have been some attempts to apply deep learning to homography estimation and image dodging recently [140, 141]; there is still a lot of room for improvement, and it is a promising and worthwhile direction in image stitching.
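The global-transformation alignment discussed above reduces to mapping pixels through a 3x3 homography; the sketch below maps points and sizes the mosaic canvas from the warped image corners (a simplified illustration with made-up helper names):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography, with homogeneous division."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def mosaic_canvas(H, w, h):
    """Bounding box of an image's four corners after warping; used to size
    the mosaic canvas before compositing the warped images."""
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], float)
    warped = apply_homography(H, corners)
    return warped.min(axis=0), warped.max(axis=0)
```

Spatially-varying warps generalize this by replacing the single `H` with a per-cell or per-region transformation, which is what lets them absorb moderate parallax.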

2.3.7 Multi-sensor data registration

With the increasing availability of sensors, UAV-based remote-sensing systems are often equipped with multiple remote-sensing sensors (e.g. visible cameras, infrared sensors or laser scanners), which can either collect a variety of data at a time to accomplish multiple tasks or integrate complementary and redundant data for a better understanding of the entire scene. However, data from multiple sensors often have dramatically different characteristics, e.g. resolution, intensity, geometry and even data dimension, due to different imaging principles. This poses a huge challenge for integrating multi-sensor data in remote-sensing applications [142].

Multi-sensor data registration is a mandatory prerequisite; the registered data are then fused for interpretation. Due to space limitations, this section focuses on multi-sensor data registration. Remote-sensing data fusion is not discussed here; readers are referred to the surveys [143, 144].


  2. The registration of multi-band images, e.g. visible and infrared images, or visible and SAR images, has attracted great attention in recent years. Area-based methods commonly adopt statistical intensity information, such as mutual information and entropy-based measures, to handle large appearance differences [145]. Because they rely mainly on image intensities, they have difficulty handling large radiometric distortions; structure features, which are more robust to radiometric changes, are therefore integrated as similarity metrics to improve registration performance, e.g. gradients, edge information, local self-similarity and phase congruency [146]. However, these methods are computationally expensive.

    Feature-based registration often extracts geometric features and then matches them via descriptor matching [147, 148]. However, traditional gradient/intensity-based feature descriptors are not suitable for multi-modal image matching due to large gradient differences. Thus, structure features, e.g. line segments and edges, are described by geometric relationships, edge histograms or Log-Gabor filters [149]. Fig. 13 shows some promising results and demonstrates the effectiveness of structure-based description, but the performance is far from satisfactory, leaving great room for further development. Besides, it is challenging to extract highly repeatable homonymous features from multi-band images because of non-linear radiometric differences.

    Figure 13: Visible and infrared image matching in [149]. Left: the average recognition rate of different multi-modal image matching methods. Right: the recognition rate under different rotations. These experiments are conducted on the VIS-IR and CVC-Multimodal datasets. Recognition rate is defined as the proportion of correct matches among all correspondences. SIFT: scale-invariant feature transform; EHD: edge histogram descriptor; PCEHD: phase congruency and edge histogram descriptor; LGHD: Log-Gabor histogram descriptor; RIDLG: rotation-invariant feature descriptor based on multi-orientation and multi-scale Log-Gabor filters. The left plot demonstrates the effectiveness of methods based on structure information. However, most methods perform poorly under rotation (right). Thus, there is still plenty of room for improvement.
  3. Registration of LiDAR data and optical images is a common case in UAV-based remote sensing. The simplest way is direct georeferencing; however, it is difficult to achieve high-accuracy registration in this way due to platform vibration, unknown exposure delays, limitations of hardware synchronization and calibration, and the low accuracy of onboard GPS/IMU sensors. Three other common strategies are as follows.

    • The problem can be treated as multi-modal image registration by transforming the LiDAR data into images, including grayscale-encoded height images and return-pulse intensity images (also called reflectance images). Area-based and feature-based multi-modal image registration can then be adopted.

    • The problem can be converted into the registration of two point sets: the LiDAR point set and an image-derived point set. Iterative closest point (ICP) algorithms can be used. Salient features are often extracted from the two point sets and used to initialize ICP [150].

    • Registration can also be performed directly between the LiDAR point cloud and the optical images, often based on line and plane features.

    In the first method, area-based methods are often affected by return-pulse intensity calibration, which determines the quality and correctness of the intensity image; in contrast, feature-based methods provide more robust registration [151]. The transformation error is another issue that affects registration. In the second method, there is a big difference between the two point sets: LiDAR provides irregularly distributed points with abundant information in homogeneous areas but poor information along object-space discontinuities, whereas the image-derived point set is the opposite. Besides, the accuracy of the image-derived point set and the initialization of ICP are also non-trivial issues. In the third method, finding conjugate features automatically in both datasets can be challenging.

  4. Challenges. Multi-sensor data registration has gained increasing attention, but several challenges remain. Considering the invariance of the semantic information of targets across multi-modal images, semantic features or targets could be extracted for registration. Besides, few works consider complex cases involving scale, rotation and affine differences in multi-modal image registration. Moreover, multi-sensor image registration based on CNNs is a promising direction.
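As a concrete illustration of the area-based strategy discussed above, the mutual information between two co-registered multi-modal images can be sketched as follows (a minimal NumPy version; in practice this score is maximized over candidate transformations during registration):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (in nats) between two same-size intensity images,
    computed from their joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()        # joint intensity distribution
    px = pxy.sum(axis=1)             # marginal of img_a
    py = pxy.sum(axis=0)             # marginal of img_b
    nz = pxy > 0                     # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))
```

Mutual information peaks when the images are aligned, regardless of the (possibly non-linear) intensity mapping between modalities, which is why it tolerates large appearance differences better than direct intensity comparison.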

2.3.8 High-performance data processing

With the large amount of data, the complexity of processing algorithms and the demand for fast response, the time required to process and deliver remote-sensing products to users has become a main concern for UAV-RS. Consequently, automatic and efficient processing has become a key challenge in UAV data processing.

One available way is to perform data processing with low-complexity algorithms and little manual intervention, such as image location estimation with few/no GCPs or direct georeferencing [101]. For deep CNNs, several techniques for lightweight models have been proposed, including removing region proposals in object detection [152], and model compression and acceleration by parameter sharing, pruning, low-rank matrix decomposition and knowledge distillation [153].
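Among the compression techniques mentioned above, magnitude-based weight pruning is the simplest to illustrate. The following is a hypothetical NumPy sketch; real frameworks prune iteratively and interleave pruning with fine-tuning:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the given fraction of smallest-magnitude weights.
    weights: array of any shape; sparsity: fraction in [0, 1) to remove."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask
```

The resulting sparse weight tensors reduce storage and, with suitable sparse kernels, inference cost, which matters for onboard UAV processing.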

Another effective solution is high-performance computing (HPC) [154, 155], such as parallel computing. Unlike serial computation, parallel computing allows the simultaneous use of multiple computing resources to accelerate data processing. Some available strategies are as follows.

  • Hardware accelerators, including field-programmable gate arrays (FPGAs) and graphics processing units (GPUs). GPUs hold great potential for compute-intensive, massively data-parallel computation and have attracted much attention for UAV data processing [156, 157]. They can also be used for onboard real-time processing.

  • Cluster computing. The processing task is broken down into subtasks that are allocated to different computers. This is particularly appropriate for efficient information extraction from very large local data archives.

  • Cloud computing. This is a sophisticated architecture for service-oriented and high-performance computing. For instance, cloud computing has been used to process image data and generate 3D models in distributed architectures [158].
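The parallel strategies above share a common pattern for raster data: split the scene into tiles, process the tiles concurrently, and merge the results. A minimal sketch using Python's standard library (a thread pool is used here for portability; a process pool or a GPU kernel would replace it for heavy CPU-bound workloads):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def split_tiles(image, tile):
    """Cut a 2D array into non-overlapping tiles (edge tiles may be smaller)."""
    h, w = image.shape[:2]
    return [image[r:r + tile, c:c + tile]
            for r in range(0, h, tile)
            for c in range(0, w, tile)]

def process_tile(tile):
    # Placeholder per-tile computation, e.g. a radiometric index.
    return float(tile.mean())

def parallel_process(image, tile=256, workers=4):
    """Apply process_tile to every tile concurrently, preserving tile order."""
    tiles = split_tiles(image, tile)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_tile, tiles))
```

Because the tiles are independent, the same decomposition maps directly onto cluster nodes or cloud workers.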

Challenges. For large-scale UAV-RS data acquisition, it is challenging to achieve optimal path planning that collects the minimum data required by the remote-sensing task, so as to reduce invalid or redundant data and mitigate the difficulty of extracting information from massive data. Another important challenge related to fast computing is the volume, weight, cost and high energy consumption of high-performance computing architectures, which make onboard processing difficult. Besides, the recent literature provides few examples of applying HPC to generic UAV-RS data processing, so more practice and experimentation are required.

2.3.9 A list of open-source data and algorithms

To provide an easy starting point for researchers attempting to work on UAV-RS photogrammetric processing, we list some available resources, including tools and algorithms. In addition, we provide a selected list of open-source UAV-RS data sets for evaluating algorithms and training deep learning models. Note that the resources listed below are non-exhaustive.


  2. Tools and algorithms for UAV-RS data processing. Some open-source tools and algorithms that can be used for UAV-RS photogrammetric processing are listed in Tab. 5 and Tab. 6. The code for the algorithms can be downloaded from the respective papers. Note that all these examples are offered under open licenses, and the corresponding papers must be acknowledged when using the code. The rules on the respective websites apply; please read the specific terms and conditions carefully. These tools provide great convenience for developing algorithms for UAV-RS data processing and make it easy to get started.

    Item Tools
    Computer vision OpenCV and VLFeat
    UAV data processing OpenDroneMap (ODM)
    SfM library Bundler, VisualSFM, OpenMVG, MVE, Theia and ColMap
    Dense matching MicMac, SURE and PMVS
    Image stitching Image Composite Editor (ICE), Autostitch and Photoshop
    DL frameworks TensorFlow, Torch, Caffe, Theano and MXNet
    Table 5: Some available tools for UAV-RS data processing.
    Item Algorithms
    Camera calibration Extended Hough transform [53], One-parameter division model [58], MLEO [54], CNN based [55]
    Image matching TILDE [78], TCD [79], ASJ detector [89], Spread-out Descriptor [80], CVM-Net [87]
    Aerial triangulation PoseNet [103], SfMLearner [104], 1DSfM [159]
    Dense reconstruction PMVS [160], MVSNet [120], DeepMVS [119]
    Image stitching APAP [134], ELA [135], NISwGSP [138], Planar mosaicking [133]
    Multisensor registration LGHD [149], HOPC [146]
    Table 6: Some available algorithms for UAV-RS data processing.
  3. Open-source remote-sensing data. Large data sets are in demand for training deep learning models with good generalization, both for fine-tuning and for training networks from scratch. They are also useful for evaluating the performance of various algorithms. However, few open-source UAV-RS data sets have been made public in recent years, so more efforts are required. Some data sets are as follows.

    • Fisheye rectification data set [56]: This is a synthesized dataset that covers various scenes and distortion parameter settings for rectification of fisheye images. It contains 2,550 source images, each of which is used to generate 10 samples with various distortion parameter settings.

    • ISPRS/EuroSDR benchmark for multi-platform photogrammetry [161]: The ISPRS/EuroSDR provides three data sets (i.e. oblique airborne, UAV-based and terrestrial images) over the two cities of Dortmund (Germany) and Zurich (Switzerland). These data sets are used to assess different algorithms for image orientation and dense matching. Terrestrial laser scanning, aerial laser scanning as well as topographic networks and GNSS points were acquired as ground truth to compare 3D coordinates on check points and evaluate cross sections and residuals on generated point cloud surfaces.

    • Urban Drone Dataset (UDD) [100]: This data set is a collection of UAV images extracted from 10 video sequences used for structure from motion. About 1%-2% of the data (about 205 frames) are annotated with 3 semantic classes (vegetation, building and free space) as semantic constraints for 3D reconstruction. The data were acquired by a DJI Phantom 4 at altitudes between 60 and 100 m over the four cities of Beijing, Huludao, Zhengzhou and Cangzhou (China).

    • UAV image mosaicking data set [136]: This data set consists of hundreds of images captured by UAVs. The corresponding DOMs, generated by DPGrid, can be used as a gold standard to evaluate mosaicking algorithms.

3 Applications

UAV-based remote sensing has attracted increasing attention in recent years. It is widely used to quickly acquire high-resolution data over small areas or to fly over high-risk or hard-to-reach regions to carry out remote-sensing tasks. Based on remote-sensing products, e.g. DOM, DEM and 3D models, UAV-RS is applied to urban planning, engineering monitoring, ecological research, and so on. The applications of UAV-based remote sensing seem to be unlimited and are continually growing.

Owing to space limitations, we focus on some novel and high-potential applications in this section. Other mature or long-standing applications, such as precision agriculture [2], coastal and polar monitoring [162, 26, 17], disaster and emergency monitoring [6], and atmospheric monitoring [163], are not discussed here; readers are referred to [20, 23, 164, 165]. The applications not discussed here are still booming and deserve attention.

3.1 Urban planning and management

In recent years, the application of UAV-based remote sensing to urban planning and management has grown exponentially, including inspection of infrastructure conditions, monitoring of urban environments and transportation, 3D landscape mapping and urban planning [3, 166].

3.1.1 3D city modeling

Camera-based UAV systems provide a powerful tool to obtain 3D models of urban scenarios in a non-invasive and low-cost way. City components are reconstructed for urban planning, including visualization, measurement, inspection and illegal-building monitoring [167].

A pilot project using UAV-RS to build high-resolution urban models at large scale in complex urban areas was conducted in [93]. Specifically, a Falcon octocopter UAV equipped with a Sony camera was employed to acquire images below 150 m and generate 3D models of a campus with 68 cm accuracy. GIS layers and a near-infrared channel were also combined to aid the reconstruction of urban terrain and the extraction of streets, buildings and vegetation.
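At the core of such photogrammetric reconstruction is triangulation: recovering a 3D point from its projections in two oriented images. A minimal linear (DLT) triangulation sketch, assuming the two 3x4 projection matrices are already known from aerial triangulation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image observations."""
    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # right null vector = homogeneous 3D point
    return X[:3] / X[3]
```

Dense-matching pipelines apply this (or a bundle-adjusted variant) to millions of correspondences to produce the point clouds behind city models.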

Figure 14: Monitoring of thermal information of buildings by UAVs [168]. Top: data acquisition for building inspection by UAVs, and infrared images of buildings reflecting thermal information. Bottom: 3D thermal model of a building.

3.1.2 Built environment monitoring and assessment

UAV-RS benefits the monitoring and assessment of the built environment, helping to maintain and improve our living conditions.

Regular inspection of the built environment is necessary to assess the health of infrastructure and identify faults at an early stage so that the required maintenance can be performed. For instance, building damage was assessed based on gaps in UAV image-derived 3D point clouds, which were identified by SVM and random forests from the surrounding damage patterns [169]. UAV visible and infrared images are acquired to monitor the condition and structural health of bridges, including bridge deterioration, deck delamination, aging of road surfaces, and crack and deformation detection [170]. Such inspection helps engineers prioritize critical repair and maintenance needs.

UAV-based infrared remote sensing presents an opportunity to inspect and analyze the urban thermal environment, building performance and heat transfer at a micro scale, so as to maintain the energy efficiency of infrastructure and building stock [168]. An example of monitoring the thermal environment of buildings using UAVs is shown in Fig. 14. 3D thermal models of buildings are generated for monitoring and analyzing building heat distribution and leakage, helping to retrofit aging and energy-inefficient building stock and infrastructure.

Urban informal settlements are classified and identified from very high-resolution, up-to-date UAV data to support informal settlement upgrading projects [171]. Urban vegetation mapping is performed to identify land-cover types and vegetation coverage in urban areas, helping planners take measures for urban ecosystem optimization and climate improvement [172].

Ref. Platforms Aim of study Methods
[173] Rotary-wing UAV, RGB camera Detect and track road moving objects Optical flow
[174] UAV, gimballed vision sensor Road bounded vehicles search and tracking Particle filter, point-mass filter
[175] Rotary-wing UAV, RGB camera Car detection and counting SIFT+SVM
[176] Rotary-wing UAV, RGB camera Car detection, including the number, position and orientation of cars Similarity measure
[177] UAV, RGB camera Vehicle detection Multiclass classifier
[178] Rotary-wing UAV, Gopro camera Vehicle detection Viola-Jones and HOG+SVM
[179] Rotary-wing UAV, RGB cameras Track container, moving car and people Optical flow
[180] Rotary-wing UAV, infrared camera Pedestrian detection and tracking Classification, optical flow
[181] Rotary-wing UAV, RGB camera Detect, count and localize cars Deep CNN
[182] Rotary-wing UAV, RGB camera Visual object tracking (e.g. people and cars) Deep CNN
[183] UAV, visible camera Vehicle detection Deep CNN
[184] Rotary-wing UAV, RGB camera A large dataset for object detection and tracking Deep CNN
Table 7: Research on UAV-based traffic target detection and tracking.

3.1.3 Urban traffic monitoring

UAVs, like eyes in the sky, provide an “above-the-head” point of view for surveillance, especially in traffic monitoring [185, 64], including detection and tracking of traffic targets, crowd monitoring, and estimation of density, capacity and traffic flow. Traffic monitoring helps ensure security, optimize urban mobility, avoid traffic jams and congestion, and analyze and solve environmental problems affecting urban areas.

Traffic target detection and tracking are two essential technologies in urban traffic monitoring. However, UAV-based detection and tracking is a challenging task, owing to object-appearance changes caused by occlusion, shape deformation, large pose variation, onboard mechanical vibration and varying ambient illumination [179]. Numerous methods have been proposed for UAV-based traffic target detection and tracking, as shown in Tab. 7.

Various traffic targets, including cars, pedestrians, roads and bridges, are detected, localized and tracked using UAV visible or infrared cameras. An example of vehicle detection and traffic monitoring can be seen in Fig. 15. Beyond traffic monitoring, UAV-RS can also be used for traffic emergency monitoring and documentation, pedestrian-vehicle crash analysis and pedestrian/vehicle behavior studies. In [186], camera-equipped UAVs are used to record road traffic data and measure every vehicle’s position and movements from an aerial perspective for analyzing naturalistic vehicle trajectories and driving behavior.
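Many of the classical detectors in Tab. 7 (e.g. HOG+SVM) follow a sliding-window scheme: a fixed-size window is slid over the frame and each patch is scored by a trained classifier. A generic sketch with a pluggable scoring function (the score_fn below is a stand-in for a real classifier, not any cited method):

```python
import numpy as np

def sliding_window_detect(image, window, stride, score_fn, thresh):
    """Scan image with a (height, width) window and keep patches whose
    score from score_fn reaches thresh. Returns (row, col, score) tuples."""
    h, w = image.shape[:2]
    wh, ww = window
    detections = []
    for r in range(0, h - wh + 1, stride):
        for c in range(0, w - ww + 1, stride):
            score = score_fn(image[r:r + wh, c:c + ww])
            if score >= thresh:
                detections.append((r, c, score))
    return detections
```

In practice the raw detections are post-processed with non-maximum suppression, and the deep CNN detectors in Tab. 7 replace the hand-crafted window scoring with learned features.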

Figure 15: Vehicle detection and traffic monitoring by UAVs based on deep learning [187]. Left: vehicle detection at a crossing. Right: vehicle detection on a road and in a parking area. Orange boxes denote large cars and green boxes denote small cars.

3.2 Engineering monitoring

UAVs provide a bird’s-eye-view solution for engineers to plan, build and maintain projects [3]. With UAVs, construction managers can monitor the entire site with better visibility, so that they are better informed about project progress. In addition, engineering observation and inspection by UAVs can ensure field-staff safety, reduce production risks and increase on-site productivity compared with manual means. Recently, UAV-based remote sensing has been widely applied to oil and gas pipeline monitoring, power infrastructure monitoring, mine-area monitoring, civil engineering, engineering deformation monitoring and railway monitoring [188].

3.2.1 Oil and gas pipeline monitoring

UAVs provide a cost-effective solution for monitoring oil/gas pipelines and their surroundings [189], in contrast to conventional foot patrols and aerial surveillance by small planes or helicopters, which are time-consuming and costly. UAVs are used to map pipelines and their surroundings, detect leakage and theft, monitor soil movement and prevent third-party interference, etc. [190]. Generally, frequent observation by UAVs helps identify corrosion and damage along pipelines in time, so that proactive responses and maintenance can be carried out. For identifying hydrocarbon leaks, thermal infrared sensors are widely used to detect temperature differences between the soil and fluids (i.e. hydrocarbons). For detecting gas emissions and leaks, gas detection sensors are applied. Although gas may diffuse or disperse into the atmosphere, especially in windy weather, the leakage location can be estimated from the gas concentration.

3.2.2 Power infrastructure monitoring

UAV-RS has also been widely applied to monitoring power infrastructure, including power lines, poles, pylons and power stations, during the planning, construction and maintenance of electric grids [191]. An example of power facility monitoring is shown in Fig. 16.

In fact, it is an important but challenging task to detect power facilities against cluttered backgrounds and identify their defects [66]. As one of the most important types of power infrastructure, power lines are often detected by line-based detection, supervised classification or 3D point-cloud-based methods [192]. Other power equipment is also detected, including conductors, insulators (glass/porcelain cap-and-pin and composite insulators), tower bodies, spacers, dampers, clamps, arcing horns and vegetation in corridors. The defects of power facilities (e.g. mechanical damage and corrosion) and the distance between vegetation/buildings and power lines are often identified based on visual inspection, thermography and ultraviolet cameras [193].
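The line-based detection mentioned above is typically built on the Hough transform: each edge pixel votes for all lines passing through it in (rho, theta) space, and power-line candidates emerge as strong peaks. A compact sketch returning the strongest line, assuming a precomputed binary edge mask:

```python
import numpy as np

def strongest_hough_line(edge_mask, n_theta=180):
    """Vote in (rho, theta) space for every edge pixel and return the
    (rho, theta) of the most-voted line, with rho in pixels."""
    h, w = edge_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))          # bound on |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_mask)
    for x, y in zip(xs, ys):
        # rho = x cos(theta) + y sin(theta), shifted to a non-negative index
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, thetas[t]
```

Real pipelines keep all peaks above a vote threshold and filter them by the near-parallel geometry of conductor bundles.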

Besides, the radioactivity of a nuclear plant was assessed using radiation-sensor-equipped UAVs, including mapping the evolving distribution of radiation and analyzing the contributing radionuclide species and the radiological or chemo-toxicity risks [194]. The influence of power plants on the surrounding environment is also monitored; for example, [195] used UAVs with infrared cameras to map temperature profiles of thermal effluent at a coal-burning power plant.

Figure 16: An example of power facilities monitoring. (a) UAV-based power inspection. (b) Visible image of insulator. (c) Infrared image of heating insulator. (d) Laser scanner data of power line corridor acquired by UAVs. (a)-(c) are provided by Xinqiao Wu, and (d) is from Leena et al. [191].

3.2.3 Mine areas monitoring

Mine areas are usually large and located in remote mountainous regions, which makes monitoring by traditional methods challenging. UAV-RS offers a promising way to map, monitor and assess mine areas and their surroundings.

UAV-RS is often used to monitor mining activities and geomorphic changes of mining areas, providing guidance for mine production and safety. For instance, the surface moisture of a peat production area is monitored using UAVs with hyperspectral frame cameras to ensure the environmental safety of peat production [196]. Side slopes are mapped for mine-area inventory and change monitoring based on terrestrial laser scanning and UAV photogrammetry [197]. Orthophotos and 3D models of mine areas are generated to assess the detailed structural-geological setting and identify potentially unstable zones, so as to evaluate safety conditions and plan proper remediation.

Besides, dust emission from mine tailings has a big influence on the surrounding environment of mine areas, which can be mitigated by monitoring and controlling the moisture of the tailings. In [198], thermal sensors mounted on UAVs acquired data over iron mine tailings to map the spatial and temporal variations in the moisture content of surface tailings. The relationship between the moisture and strength of mine tailings was analyzed to aid tailings management.

3.3 Ecological and environmental monitoring

In ecological and environmental research, many areas are too remote or too dangerous to be thoroughly surveyed. Besides, many ecological experiments that involve repetitive tasks are difficult to conduct due to a lack of manpower and time or the high cost of manned aerial surveys. The emergence of UAVs opens new opportunities and revolutionizes the acquisition of ecological and environmental data [199]. Moreover, UAVs make it possible to monitor ecological phenomena at appropriate spatial and temporal resolutions, even individual organisms and their spatio-temporal dynamics at close range [12]. Recent years have seen a rapid expansion of UAV-RS in ecological and environmental research, monitoring, management and conservation.

3.3.1 Population ecology

Population ecology aims to study, monitor and manage wildlife and their habitats. It is challenging for ecologists to approach sensitive or aggressive species and to access remote habitats. UAV-RS makes regular wildlife monitoring, management and protection possible and provides more precise results than traditional ground-based surveying [200]. It is often applied to estimate populations/abundance and distribution, monitor wildlife behavior, map habitats and ranges, and support wildlife conservation, including anti-poaching and illegal-trade surveillance [201], as shown in Tab. 8.

Item Contents Methods
Population estimation Wildlife identification, enumeration, and estimation of their population status, e.g. amount, abundance and distribution Manual visual inspection [202], deformable part-based model [203], threshold and template matching [204], classification [205]
Wildlife tracking Explore animal behaviors (e.g. migratory patterns) and habitats so as to sustain species and prevent extinction Long-term target tracking, acoustic biotelemetry, radio collar tracking [206]
Habitat and range mapping Monitor habitat status, including vegetation distribution and coverage, seasonal or environmental changes of habitats Orthophoto generation, classification [207]
Conservation of wildlife Anti-poaching surveillance and wildlife protection, e.g. detecting animals, people/boats acting as poachers, and illegal activities Target detection [208]
Table 8: Research on population ecology using UAV-RS.
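The template-matching approach listed in Tab. 8 scores each image location by normalized cross-correlation (NCC) against an animal template; counting then reduces to finding score peaks. A small brute-force sketch (FFT-based correlation would be used in practice for speed):

```python
import numpy as np

def match_template(image, template):
    """Normalized cross-correlation map of template over image
    ('valid' positions only); scores lie in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            out[r, c] = (p * t).sum() / denom if denom > 0 else 0.0
    return out
```

Thresholding the score map and suppressing nearby peaks yields the per-image animal count.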

Most species monitored by UAVs are large terrestrial mammals (e.g. elephants), aquatic mammals (e.g. whales) and birds (e.g. snow geese). However, it should be noted that UAVs may disturb wildlife and cause behavioral and physiological responses when flying at low altitude and high speed for close observation. With the increasing use of UAVs, particularly in research on vulnerable or sensitive species, there is a need to balance the potential disturbance to the animals against the benefits obtained from UAV-based observation [209].

3.3.2 Natural resources monitoring

Natural resources, e.g. forests, grasslands, soil and water, are in great need of monitoring, management and conservation, and have increasingly benefited from UAV-RS [27]. Here we take forests and grasslands as examples to illustrate the applications of UAV-RS.

a) Forest monitoring. Forest resources are among the most common scenarios in UAV applications [25], including forest structure estimation, forest inventory, biomass estimation, biodiversity, disease and pest detection, and forest fire monitoring, as shown in Tab. 9. UAV-RS has a strong advantage in small-area forest monitoring. The continued growth of forest-monitoring applications relies mainly on flight endurance and the observation capability of the payload.

Item Contents Methods
Forest structure Forest 3D structural characterization, including DTM, canopy height model and canopy surface model SfM photogrammetry, LiDAR and profiling radar [210]
Forest inventory Measure properties of the geometric structure and spatial distribution of trees; estimate terrain/under-story height and plot-/tree-level metrics Plot-level metrics: canopy points or image classification [211]; tree-level metrics: canopy height model [212]
Forest biomass Above-ground biomass estimation UAV-based L-band radar [213]; vertical information + L-band radar [214]
Forest biodiversity Monitor forest biodiversity at spatial and temporal scales Quantification of canopy spatial structures and gap patterns [215]; fallen-tree detection and spatio-temporal variation analysis [216]
Forest health monitoring Monitor forest health, e.g. identification of disease and insect pest damage Multi- and hyper-spectral remote sensing, dense point clouds [217, 218]
Forest fire monitoring Forest fire monitoring, detection and fighting Before fires: fire prevention, e.g. fire risk maps and (3D) vegetation maps; during fires: detect and locate active fires, predict fire propagation; after fires: detect active embers, map burned areas and assess fire effects [219]
Table 9: Research on forest monitoring using UAV-RS.

b) Grassland and shrubland monitoring. Grasslands and shrublands are often located in remote areas with low population density, which poses challenges for their assessment, monitoring and management. Due to its flexibility, high resolution and low cost, UAV-RS holds great potential for grassland and shrubland monitoring. Some examples are shown in Tab. 10.

UAV-RS is an emerging technology that has gained growing popularity in grassland monitoring. However, the use of high-standard multi- or hyper-spectral sensors, which are beneficial for species classification, remains a challenge due to their weight. Besides, exploring the optimal spatial resolution for studying different vegetation characteristics is also encouraged.

Ref. Platforms Payloads Aim of study
[220] Fixed-wing UAV Canon SD 550 Differentiate bare ground, shrubs, and herbaceous vegetation in an arid rangeland
[130] Fixed-wing UAV Color video camera, Canon SD 900, Mini MCA-6 Rangeland species-level vegetation classification
[221] Octocopter UAV Panasonic GX1 digital camera, hyperspectral camera Estimate plant traits of grasslands and monitor grassland health status
[222] Rotary-wing UAV RGB camera, near-infrared camera, MCA6 and hyperspectral camera Evaluate the applicability of four optical cameras for grassland monitoring
[223] Quadcopter UAV GoPro Hero digital camera Estimation of fractional vegetation cover of alpine grassland
[224] Simulation platform AISA + Eagle imaging spectrometer Hyperspectral classification of grassland species at the level of individuals
[225] UAV RGB camera, hyperspectral camera Mapping the conservation status of Calluna-dominated Natura 2000 dwarf shrub habitats
Table 10: Research on UAV-based grassland and shrubland monitoring.

3.3.3 Species distribution modeling

Over the past decades, a considerable amount of work has been performed to map species distributions and use the collected information to identify suitable habitats. Species distribution modeling is one such approach, which models species' geographic distributions based on correlations between known occurrence records and the environmental conditions at the occurrence localities [226]. It has been widely applied in selecting nature reserves, predicting the effects of environmental change on species ranges and assessing the risk of species invasions [227].

Due to the spatial biases and insufficient sampling of conventional field surveys, UAV-RS has recently become a very effective technology for supplying species occurrence data, thanks to its ability to quickly and repeatedly acquire very high-spatial-resolution imagery at low cost [228]. For instance, UAV-RS is used to detect plant/animal species in terrestrial and aquatic ecosystems, estimate their populations and distribution patterns, and identify important habitats (e.g. stopovers on migratory routes, breeding grounds) [202, 207, 205]. Moreover, UAV-RS provides timely, on-demand data acquisition, which allows habitat suitability and species range expansion or contraction to be understood in a more dynamic manner.

However, UAV-RS may also introduce uncertainty and errors into species distribution modeling. These errors mainly come from data acquisition and from processing algorithms such as species classification. Thus, rigorous data acquisition and high-precision data processing and analysis are necessary.

3.3.4 Environmental monitoring and conservation

UAVs are used to monitor environmental processes and changes at spatial and temporal scales that are challenging for conventional remote-sensing platforms [1], e.g. mudflat evolution and morphological dynamics [229]. Besides, environmental pollution monitoring benefits greatly from UAV-based remote sensing. In [230], UAVs equipped with multi-spectral sensors are employed to map the trophic state of a reservoir and investigate water pollution for water-quality monitoring. Soil erosion, degradation and pollution are also monitored using UAV-derived DTMs and orthophotos. For instance, soil copper contamination was detected based on hydrological models using a multi-rotor UAV, and copper accumulation points were estimated at plot scale based on micro-rill network modeling and wetland prediction indexes [231].
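Much of the multi-spectral monitoring described above rests on simple band-ratio indices computed per pixel. The sketch below computes a generic normalized-difference index (the NDVI formula is standard; the reflectance values are hypothetical) and thresholds it into a crude thematic mask, as a stand-in for the kind of per-pixel analysis used in vegetation and water-quality mapping.

```python
import numpy as np

def normalized_difference(band_a, band_b, eps=1e-9):
    """Generic normalized-difference index, e.g. NDVI = (NIR - Red)/(NIR + Red)."""
    a = band_a.astype(float)
    b = band_b.astype(float)
    return (a - b) / (a + b + eps)

# Synthetic 3x3 reflectance patches standing in for co-registered UAV
# multispectral bands (hypothetical values; a real pipeline would read
# calibrated band rasters).
nir = np.array([[0.6, 0.6, 0.1],
                [0.5, 0.4, 0.1],
                [0.5, 0.4, 0.1]])
red = np.array([[0.1, 0.1, 0.3],
                [0.1, 0.1, 0.3],
                [0.1, 0.1, 0.3]])

ndvi = normalized_difference(nir, red)

# Simple thematic thresholding: vegetated pixels vs. water/bare surfaces.
vegetation_mask = ndvi > 0.4
print(vegetation_mask.sum())   # number of vegetated pixels
```

Operational studies would go further, e.g. regressing such indices against in-situ water-quality samples, but the index computation itself is this simple.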

Figure 17: 3D digitalization of cultural heritage for recording and conservation [232]. (a) Dense point cloud of Gutian conference monument. (b) Photo-realistic 3D model of the monument.

3.4 Archeology and cultural heritage

Archeology and cultural heritage form a promising application area for UAV-based remote sensing [233].

UAVs are generally adopted to conduct photogrammetric survey and mapping, documentation and preservation of archaeological sites [234]. They are also used for archaeological detection and discovery. In archeology, buried features may produce small changes or anomalies in surface conditions, which can be detected and measured using UAVs equipped with spectroradiometers, digital cameras or thermal cameras [235].

For cultural heritage, UAVs are often employed to produce high-quality 3D recordings and presentations for documentation, inspection, conservation, restoration and museum exhibitions [236]. Multiple platforms, e.g. terrestrial laser scanners, ultralight aerial platforms, unmanned aerial vehicles and terrestrial photogrammetry, are often integrated to acquire multi-view data for 3D reconstruction and visualization of cultural relics. In Fig. 17, a camera-equipped UAV is integrated with a terrestrial laser scanner to enable complete data acquisition of a historical site, in which building facades are captured by the terrestrial laser scanner and building roofs are acquired by UAV photogrammetry [232].

Restoration of heritage is usually based on precise 3D data. In [237], a virtual restoration approach was proposed for an ancient plank road. A UAV and a terrestrial laser scanner were used to collect detailed 3D data of the existing plank roads, which were processed to determine the forms of the plank roads and to restore each component with detailed dimensions based on mechanical analysis. The virtual restoration model was then generated by adding the components and a background scene into the 3D model of the plank roads.

3.5 Human and social understanding

The aerial view of UAV-RS makes it a potential solution to help describe, model, predict and understand human behaviors and interactions with society.

In [35], UAVs are used to collect videos of various types of targets, e.g. pedestrians, bikers, cars and buses, to understand pedestrian trajectories and how pedestrians interact with the physical space as well as with the targets that populate such spaces. This can make a great contribution to pedestrian tracking, target trajectory prediction and activity understanding [238]. In [186], researchers adopt a camera-equipped UAV to record naturalistic vehicle trajectories and naturalistic behavior of road users, intended for scenario-based safety validation of highly automated vehicles. The data can also be used to develop driver models and road-user prediction models. Besides, UAV-RS is beneficial for crowd risk analysis and crowd safety, especially in mass gatherings related to sports, religious and cultural activities [239]. UAV-RS flexibly provides high-resolution, real-time on-the-spot data for people detection, crowd density estimation and crowd behavior analysis, so that potential risk situations can be responded to effectively. Fig. 18 shows some examples.

To date there have been only a few studies on human and social understanding using UAV-RS. However, as UAVs become available to everyone, we expect this to become a rapidly rising research topic in UAV-RS.

Figure 18: Left: Pedestrian trajectory prediction [35]. Right: Crowd monitoring [239].

4 Future prospectives

Thanks to the progress of UAV platforms and small remote-sensing sensors, as well as the improvement of UAV regulations and the opening of the market, UAV-RS is gaining great popularity in the remote-sensing community. However, many challenges remain that require further effort.

  • UAV platforms. Owing to their light weight and small size, UAVs often suffer from inherent shortcomings, including platform instability, limited payload capacity and short flight endurance, which pose challenges for the acquisition of reliable remote-sensing data and for high-precision data processing.

  • Remote-sensing sensors. Weight and energy consumption are the main limitations for remote-sensing sensors. Thus, it is difficult to use high-precision navigation systems, high-standard multi-/hyperspectral cameras, LiDAR, radar, or even massively parallel platforms for onboard processing on small UAVs.

  • UAV policy and regulations. This is one of the major factors impeding the use of UAVs in the remote-sensing community [34, 32, 31]. Restrictions on the use of airspace prevent researchers from testing all possibilities. Indeed, civil applications of UAVs have been developing faster than the corresponding legislation. Adaptation of the relevant legislation will be necessary in the future. Undoubtedly, effective UAV regulations will facilitate the wider use of UAVs in the remote-sensing community.

  • Data processing. Some challenges have been discussed in the sections on key technologies. Other issues, such as the robustness, efficiency, automation and intelligence of data processing, deserve more effort. Besides, how to handle massive multi-source and heterogeneous remote-sensing data is also worth considering.

The current research trends and future insights are discussed below.

4.1 Platforms

The continued trend of increasingly miniaturized components of UAV-RS promises an era of tailored systems for on-demand remote sensing at extraordinary levels of sensor precision and navigational accuracy [34].

  • Long flight endurance is expected for efficient remote-sensing data acquisition. Research is ongoing to improve endurance, including power-tethered UAVs [240], solar-powered UAVs [241] and beamed-laser-powered UAVs [242]. Laser power beaming would enable unlimited flight endurance and in-flight recharging of UAVs; such UAVs could fly day and night for weeks or possibly months without landing.

  • Light-weight, small-sized and high-precision remote-sensing sensors are an ongoing trend, as current sensors have not yet been sufficiently miniaturized [243]. Continuing advances in the miniaturization of remote-sensing sensors and positioning hardware are placing increasingly powerful monitoring and mapping equipment on ever smaller UAV platforms. More miniaturized sensors, such as detectors and atmospheric sensors, will be developed for UAV-RS. Moreover, miniaturization makes multi-sensor integration easier to implement, strengthening the earth-observation performance of UAV-RS.

  • Safe, reliable and stable UAV remote-sensing systems. Owing to their light weight and small size, UAV-RS platforms often suffer from instability in airflow, so stable unmanned aircraft deserve more effort [244]. Video stabilization could be integrated into data-acquisition systems [245]. In addition, safe operation has become a global concern. Obstacle avoidance is often achieved with ultrasound sensors or depth cameras, which are limited to short distances; deep-learning-based vision may provide good support, and dynamic vision sensors, e.g. event cameras, are another promising solution. Besides, safe landing has been largely unaddressed; deep networks can be used to learn to estimate depth and safe landing areas for UAVs [246].

  • Autonomous navigation and intelligent UAVs.

    Although UAVs can fly autonomously, challenges remain in difficult environments, such as indoor fire scenes where GPS may fail. Besides, the presence of a pilot is still required nowadays, mainly because of the lack of device intelligence. This issue could be addressed by artificial intelligence, which can provide autonomous decision support and reaction to events, including law awareness [24]. For instance, deep learning can be used to learn to control UAVs and teach them to fly in complex environments [247, 248]. We envision that UAV-RS will be capable of providing an entirely automated process, from taking off to processing the data and triggering pro-active actions. To this end, more issues need to be considered, including intelligent perception of environments, precise control, and seamless indoor/outdoor navigation and positioning [249, 250].
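As a toy illustration of the safe-landing idea raised above, the sketch below screens a hypothetical downward-looking depth map for flat patches by thresholding local depth variation. A learned depth estimator such as [246] would supply the depth map in practice; the flatness test here is our own simplification.

```python
import numpy as np

def flat_landing_candidates(depth, patch=4, max_std=0.05):
    """Flag patches of a downward-looking depth map whose depth variation
    is small enough to be considered flat - a crude stand-in for a learned
    safe-landing-area estimator."""
    h, w = depth.shape
    mask = np.zeros((h // patch, w // patch), dtype=bool)
    for i in range(h // patch):
        for j in range(w // patch):
            tile = depth[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            mask[i, j] = tile.std() <= max_std
    return mask

# Hypothetical 8x8 depth map: left half flat ground, right half a sloped roof.
flat = np.full((8, 4), 10.0)
slope = 10.0 + np.tile(np.linspace(0.0, 1.5, 4), (8, 1))
depth = np.hstack([flat, slope])

candidates = flat_landing_candidates(depth, patch=4, max_std=0.05)
print(candidates)
```

A real system would add semantic checks (no water, no people) on top of geometric flatness before committing to a landing spot.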

4.2 Data processing

Existing data-processing techniques can satisfy the majority of UAV applications in the remote-sensing community; however, effort is still needed to make data processing more automatic, efficient and intelligent, which would further improve the earth-observation performance of UAV-based remote sensing.

Figure 19: Aerial path planning in urban building scenes [251].

  • Aerial view and path planning. How to plan views and paths that ensure complete and accurate coverage of the surveyed area with minimum flight time is a crucial but challenging issue. UAV-RS often acquires data either under manual control or along pre-designed flight paths, with the camera fixed in one direction, e.g. vertical or oblique. It is difficult to achieve complete and dense coverage this way, especially in urban environments. One promising solution is to take an initial scene reconstruction from the nadir acquisition as a reference and continuously optimize the views and positions [251]. An example of aerial view and path planning is shown in Fig. 19.

  • Robust data processing. UAV-RS is expected to process remote-sensing data of different sources, qualities, resolutions, scales, distortions, etc., which is an imperative but challenging issue. Examples include handling water-covered images, cloud-occluded images, arbitrary-attitude images, photographic gaps, and multi-source images (close-range, low-altitude and oblique images, or infrared and visible images) in aerial triangulation. Progress is expected in the future.

  • Real-time/on-board data processing. Real-time or on-board data processing plays a significant role in UAV-RS, especially in time-critical remote sensing [252]. In the wave of sensor miniaturization, FPGAs and GPUs are expected to be designed with light weight and low energy consumption, making them suitable for on-board processing on miniaturized UAVs. Besides, the collected data can be processed with high-performance computing, such as cloud computing.

  • Deep learning for UAV-RS. Great success has been achieved in image classification and target detection [253, 254]; however, there is much room for deep learning in UAV-RS 3D geometric vision, especially in image matching and pose estimation. Several critical issues should be taken into consideration, including the lack of large-scale annotated data sets, weakly supervised learning with limited annotated data, and transfer learning with off-the-shelf deep models.

  • 3D semantic computing. There is a trend towards learning to estimate 3D geometry and semantics jointly. More geometric priors should be introduced to capture the complex semantic and geometric dependencies of the 3D world. Another issue to be considered is high memory consumption, caused by the need to store indicator variables for every semantic label and transition [255].

  • Information mining from UAV-RS big data. Data collected from UAV flights can reach hundreds of megabytes per hectare of surveyed area. Besides, UAVs can form a remote-sensing network to provide fast, cloudless, centimeter-level and hour-level data collection and accurate service on the Internet. This will inevitably generate massive amounts of remote sensing data. Knowledge mining from massive and heterogeneous remote-sensing data is a great challenge. Deep learning and cloud computing shed light on this issue. Besides, how to optimize data acquisition to ensure complete and accurate coverage with minimum data volume and redundancy is also crucial to reduce the difficulty of information mining.
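The geometric bookkeeping behind aerial view and path planning, discussed in the first point above, can be sketched with standard photogrammetric scale relations: the ground footprint of a nadir image follows from sensor size, focal length and flight height, and the chosen forward/side overlap then fixes photo spacing and flight-line spacing for a lawnmower pattern. The camera parameters below are hypothetical.

```python
import math

def footprint(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm):
    """Ground footprint (width, height, meters) of a nadir image,
    from the pinhole scale relation: ground size = sensor size * H / f."""
    scale = altitude_m / (focal_mm / 1000.0)
    return sensor_w_mm / 1000.0 * scale, sensor_h_mm / 1000.0 * scale

def survey_plan(area_w, area_h, altitude_m, focal_mm, sw_mm, sh_mm,
                forward_overlap=0.8, side_overlap=0.7):
    """Flight lines and photos per line for a lawnmower pattern covering
    an area_w x area_h block (all quantities in meters)."""
    fw, fh = footprint(altitude_m, focal_mm, sw_mm, sh_mm)
    line_spacing = fw * (1.0 - side_overlap)      # between adjacent lines
    photo_spacing = fh * (1.0 - forward_overlap)  # along each line
    n_lines = math.ceil(area_w / line_spacing) + 1
    n_photos = math.ceil(area_h / photo_spacing) + 1
    return n_lines, n_photos

# Hypothetical small camera: 8.8 x 6.6 mm sensor, 8.8 mm lens, flown at 100 m AGL.
fw, fh = footprint(100, 8.8, 8.8, 6.6)   # roughly 100 m x 75 m on the ground
lines, photos = survey_plan(500, 500, 100, 8.8, 8.8, 6.6)
```

Under this simple model, covering a 500 m x 500 m block at 70% side and 80% forward overlap takes 18 lines of 35 photos each; view-optimizing planners such as [251] start from this kind of nadir baseline and then refine individual views.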

4.3 Applications

With the advance of UAV platforms and remote-sensing sensors, there is potential for wider applications. Attention may shift from monitoring the earth environment to human and social understanding, such as individual/group behavior analysis and infectious disease mapping [256]. UAV-RS also holds potential in the autonomous-driving community, where a small quadrotor can extend the perception capabilities of a vehicle by autonomously locating and observing regions occluded to the vehicle and detecting potentially unsafe obstacles such as pedestrians or other cars [36]. More applications are on the way.

5 Conclusions

Compared to conventional platforms (e.g. manned aircraft and satellites), UAV-RS presents several advantages: flexibility, maneuverability, efficiency, high spatial/temporal resolution, low altitude, low cost, etc. In this article, we have systematically reviewed the current status of UAVs in the remote-sensing community, including UAV-based data processing, applications, current trends and future prospectives. Some conclusions can be drawn from this survey.

  • The inspiring advance of UAV platforms and miniaturized sensors has enabled UAV-RS to meet critical spatial, spectral and temporal resolution requirements, offering a powerful supplement to other remote-sensing platforms. UAV-RS holds great advantages in accommodating the ever-increasing demand for small-area, timely and fine surveying and mapping.

  • Due to the characteristics of UAV platforms, many specialized data-processing technologies have been designed for UAV-RS. Technologically speaking, UAV-RS is mature enough to support the development of generic geo-information products and services. With the progress of artificial intelligence (e.g. deep learning) and robotics, UAV-RS will experience a tremendous technological leap towards automatic, efficient and intelligent services.

  • Much UAV-RS data-processing software is now commercially available, which helps UAV-RS flourish in remote-sensing applications. With the development of UAV-RS, the applications of UAV-based remote sensing will continue to grow.

It should be noted that challenges still exist and hinder the progress of UAV-RS. Much research is required, and it benefits from low entry barriers. The rapid advancement of UAV-RS seems unstoppable, and more new technologies and applications will certainly emerge in the coming years.

References

  • [1] J. H. Perry and R. J. Ryan, “Small-scale unmanned aerial vehicles in environmental remote sensing: Challenges and opportunities,” GISci. Remote Sens., vol. 48, no. 1, pp. 99–111, 2011.
  • [2] C. Zhang and M. K. John, “The application of small unmanned aerial systems for precision agriculture: a review,” Precis. Agric., vol. 13, no. 6, pp. 693–712, 2012.
  • [3] P. Liu, A. Y. Chen, Y.-N. Huang, J.-Y. Han, J.-S. Lai, S.-C. Kang, T.-H. Wu, M.-C. Wen, and M.-H. Tsai, “A review of rotorcraft unmanned aerial vehicle (UAV) developments and applications in civil engineering,” Smart Struct. Syst., vol. 13, no. 6, pp. 1065–1094, 2014.
  • [4] C. Toth and G. Jóźków, “Remote sensing platforms and sensors: A survey,” ISPRS J. Photogrammetry Remote Sens., vol. 115, pp. 22 – 36, 2016.
  • [5] J. A. Benediktsson, J. Chanussot, and W. M. Moon, “Very high-resolution remote sensing: Challenges and opportunities,” Proceedings of the IEEE, vol. 100, no. 6, pp. 1907–1910, 2012.
  • [6] C. Gomez and H. Purdie, “UAV-based photogrammetry and geocomputing for hazards and disaster risk monitoring-a review,” Geoenvironmental Disasters, vol. 3, no. 1, pp. 1–11, 2016.
  • [7] E. R. Hunt and S. I. Rondon, “Detection of potato beetle damage using remote sensing from small unmanned aircraft systems,” J. Appl. Remote Sens., vol. 11, no. 2, p. 026013, 2017.
  • [8] D. J. Mulla, “Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps,” Biosyst. Eng., vol. 114, no. 4, pp. 358–371, 2013.
  • [9] T. Louiset, A. Pamart, E. Gattet, T. Raharijaona, L. De Luca, and F. Ruffier, “A shape-adjusted tridimensional reconstruction of cultural heritage artifacts using a miniature quadrotor,” Remote Sens., vol. 8, no. 10, p. 858, 2016.
  • [10] A. Matese, P. Toscano, S. Di Gennaro, L. Genesio, F. Vaccari, J. Primicerio, C. Belli, A. Zaldei, R. Bianconi, and B. Gioli, “Intercomparison of UAV, aircraft and satellite remote sensing platforms for precision viticulture,” Remote Sens., vol. 7, no. 3, pp. 2971–2990, 2015.
  • [11] A. S. Laliberte and A. Rango, “Texture and scale in object-based analysis of subdecimeter resolution unmanned aerial vehicle (UAV) imagery,” IEEE Trans. Geosci. Remote Sens., vol. 47, no. 3, pp. 761–770, 2009.
  • [12] K. Anderson and K. J. Gaston, “Lightweight unmanned aerial vehicles will revolutionize spatial ecology,” Front. Ecol. Environ., vol. 11, no. 3, pp. 138–146, 2013.
  • [13] V. Prahalad, C. Sharples, J. Kirkpatrick, and R. Mount, “Is wind-wave fetch exposure related to soft shoreline change in swell-sheltered situations with low terrestrial sediment input?” J. Coast. Conserv., vol. 19, pp. 23–33, 2015.
  • [14] T. Adão, J. Hruška, L. Pádua, J. Bessa, E. Peres, R. Morais, and J. J. Sousa, “Hyperspectral imaging: A review on uav-based sensors, data processing and applications for agriculture and forestry,” Remote Sens., vol. 9, no. 11, p. 1110, 2017.
  • [15] F. Nex and F. Remondino, “UAV for 3D mapping applications: a review,” Applied Geomatics, vol. 6, no. 1, pp. 1–15, 2014.
  • [16] T. Liu and A. Abd-Elrahman, “An object-based image analysis method for enhancing classification of land covers using fully convolutional networks and multi-view images of small unmanned aerial system,” Remote Sens., vol. 10, no. 3, p. 457, 2018.
  • [17] D. Leary, “Drones on ice: an assessment of the legal implications of the use of unmanned aerial vehicles in scientific research and by the tourist industry in antarctica,” Polar Rec., vol. 53, no. 04, pp. 343–357, 2017.
  • [18] R. W. Graham, “Small format aerial surveys from light and micro light aircraft,” Photogramm. Rec., vol. 12, no. 71, pp. 561–573, 1988.
  • [19] G. J. J. Verhoeven, J. Loenders, F. Vermeulen, and R. Docter, “Helikite aerial photography-a versatile means of unmanned, radio controlled, low-altitude aerial archaeology,” Archaeol. Prospect., vol. 16, pp. 125–138, 2009.
  • [20] I. Colomina and P. Molina, “Unmanned aerial systems for photogrammetry and remote sensing: A review,” ISPRS J. Photogrammetry Remote Sens., vol. 92, pp. 79–97, 2014.
  • [21] G.-S. Xia, M. Datcu, W. Yang, and X. Bai, “Information processing for unmanned aerial vehicles (uavs) in surveying, mapping, and navigation,” Geo-spatial Information Science, vol. 21, no. 1, pp. 1–1, 2018.
  • [22] X. Liao, C. Zhou, F. Su, H. Lu, H. Yue, and J. Gou, “The mass innovation era of UAV remote sensing,” Journal of Geo-information Science, vol. 18, no. 11, pp. 1439–1447, 2016.
  • [23] G. Pajares, “Overview and current status of remote sensing applications based on unmanned aerial vehicles (UAVs),” Photogramm. Eng. Remote Sensing, vol. 81, no. 4, pp. 281–330, 2015.
  • [24] L. Pádua, J. Vanko, J. Hruška, T. Adão, J. J. Sousa, E. Peres, and R. Morais, “Uas, sensors, and data processing in agroforestry: a review towards practical applications,” Int. J. Remote Sens., vol. 38, no. 8-10, pp. 2349–2391, 2017.
  • [25] C. Torresan, A. Berton, F. Carotenuto, S. F. Di Gennaro, B. Gioli, A. Matese, F. Miglietta, C. Vagnoli, A. Zaldei, and L. Wallace, “Forestry applications of uavs in europe: a review,” Int. J. Remote Sens., vol. 38, no. 8-10, pp. 2427–2447, 2017.
  • [26] A. Bhardwaj, L. Sam, Akanksha, F. Javier Martin-Torres, and R. Kumar, “UAVs as remote sensing platform in glaciology: Present applications and future prospects,” Remote Sens. Environ., vol. 175, pp. 196–204, 2016.
  • [27] M. Shahbazi, J. Théau, and P. Ménard, “Recent applications of unmanned aerial imagery in natural resource management,” GISci. Remote Sens., vol. 51, no. 4, pp. 339–365, 2014.
  • [28] W. Hartmann, M. Havlena, and K. Schindler, “Recent developments in large-scale tie-point matching,” ISPRS J. Photogrammetry Remote Sens., vol. 115, pp. 47–62, 2016.
  • [29] F. Remondino, M. G. Spera, E. Nocerino, F. Menna, and F. Nex, “State of the art in high density image matching,” Photogramm. Rec., vol. 29, no. 146, pp. 144–166, 2014.
  • [30] A. Gruen, “Development and status of image matching in photogrammetry,” Photogramm. Rec., vol. 27, no. 137, pp. 36–57, 2012.
  • [31] C. Stöcker, R. Bennett, F. Nex, M. Gerke, and J. Zevenbergen, “Review of the current state of uav regulations,” Remote Sens., vol. 9, no. 5, p. 459, 2017.
  • [32] A. P. Cracknell, “UAVs: regulations and law enforcement,” Int. J. Remote Sens., vol. 38, no. 8-10, pp. 3054–3067, 2017.
  • [33] A. Fombuena, “Unmanned aerial vehicles and spatial thinking: Boarding education with geotechnology and drones,” IEEE Geosci. Remote Sens. Mag., vol. 5, no. 3, pp. 8 – 18, 2017.
  • [34] A. C. Watts, V. G. Ambrosia, and E. A. Hinkley, “Unmanned aircraft systems in remote sensing and scientific research: Classification and considerations of use,” Remote Sens., vol. 4, no. 6, pp. 1671–1692, 2012.
  • [35] A. Robicquet, A. Sadeghian, A. Alahi, and S. Savarese, “Learning social etiquette: Human trajectory understanding in crowded scenes,” in ECCV, 2016, pp. 549–565.
  • [36] A. Wallar, B. Araki, R. Chang, J. Alonso-Mora, and D. Rus, “Remote sensing for autonomous vehicles using a small unmanned aerial vehicle,” in Field and Service Robotics, 2018, pp. 591–604.
  • [37] S. Harwin, A. Lucieer, and J. Osborn, “The impact of the calibration method on the accuracy of point clouds derived using unmanned aerial vehicle multi-view stereopsis,” Remote Sens., vol. 7, no. 9, pp. 11 933–11 953, 2015.
  • [38] B. Klingner, D. Martin, and J. Roseborough, “Street view motion-from-structure-from-motion,” in ICCV, Sydney, NSW, Australia, 2013, pp. 953–960.
  • [39] L. Deng, Z. Mao, X. Li, Z. Hu, F. Duan, and Y. Yan, “UAV-based multispectral remote sensing for precision agriculture: A comparison between different cameras,” ISPRS J. Photogrammetry Remote Sens., vol. 146, pp. 124–136, 2018.
  • [40]

    S. Jiang and W. Jiang, “On-board GNSS/IMU assisted feature extraction and matching for oblique UAV images,”

    Remote Sens., vol. 9, no. 8, p. 813, 2017.
  • [41] W. Ken and H. H. Chris, “Remote sensing of the environment with small unmanned aircraft systems (UASs), part 1: a review of progress and challenges,” J. Unmanned Veh. Syst., vol. 02, no. 03, pp. 69–85, 2014.
  • [42]

    J. Lei, S. Zhang, L. Luo, J. Xiao, and H. Wang, “Super-resolution enhancement of uav images based on fractional calculus and pocs,”

    Geo-spatial Information Science, vol. 21, no. 1, pp. 56–66, 2018.
  • [43] H. Aasen, A. Burkart, A. Bolten, and G. Bareth, “Generating 3D hyperspectral information with lightweight UAV snapshot cameras for vegetation monitoring: From camera calibration to quality assurance,” ISPRS J. Photogrammetry Remote Sens., vol. 108, pp. 245–259, 2015.
  • [44] G. Wang, Q. M. J. Wu, and W. Zhang, “Kruppa equation based camera calibration from homography induced by remote plane,” Pattern Recogn. Lett., vol. 29, no. 16, pp. 2137–2144, 2008.
  • [45] C. Strecha, W. von Hansen, L. V. Gool, P. Fua, and U. Thoennessen, “On benchmarking camera calibration and multi-view stereo for high resolution imagery,” in CVPR, Anchorage, AK, USA, 2008, pp. 1–8.
  • [46] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330–1334, 2000.
  • [47] Z. Zhan, “Camera calibration based on liquid crystal display (LCD),” ISPRS Arch. Photogramm. Remote Sens. Spatial Inf. Sci., vol. 37, no. 1, pp. 15–20, 2008.
  • [48] A. Richardson, J. Strom, and E. Olson, “AprilCal: Assisted and repeatable camera calibration,” in IEEE/RSJ IROS, Tokyo, Japan, 2013, pp. 1814–1821.
  • [49] J. Kannala and S. S. Brandt, “A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 8, pp. 1335–1340, 2006.
  • [50] A. M. G. Tommaselli, M. Galo, M. V. A. de Moraes, J. Marcato, C. R. T. Caldeira, and R. F. Lopes, “Generating virtual images from oblique frames,” Remote Sens., vol. 5, no. 4, pp. 1875–1893, 2013.
  • [51] C. Hughes, P. Denny, M. Glavin, and E. Jones, “Equidistant fish-eye calibration and rectification by vanishing point extraction,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 12, pp. 2289–2296, 2010.
  • [52] H. Babapour, M. Mokhtarzade, and M. J. V. Zoej, “A novel post-calibration method for digital cameras using image linear features,” Int. J. Remote Sens., vol. 38, no. 8-10, pp. 2698–2716, 2017.
  • [53] F. Bukhari and M. N. Dailey., “Automatic radial distortion estimation from a single image,” Journal of mathematical imaging and vision, vol. 45, no. 1, pp. 31–45, 2013.
  • [54] M. Zhang, J. Yao, M. Xia, K. Li, Y. Zhang, and Y. Liu., “Line based multi-label energy optimization for fisheye image rectification and calibration,” in CVPR, 2015, pp. 4137–4145.
  • [55] J. Rong, S. Huang, Z. Shang, and X. Ying, “Radial lens distortion correction using convolutional neural networks trained with synthesized images,” in ACCV, 2016, pp. 35–49.
  • [56] X. Yin, X. Wang, J. Yu, M. Zhang, P. Fua, and D. Tao., “Fisheyerecnet: A multi-context collaborative deep network for fisheye image rectification,” in ECCV, 2018, pp. 1–16.
  • [57] Z.-C. Xue et al., “Learning to calibrate straight lines for fisheye image rectification,” CoRR, 2019.
  • [58] M. Aleman-Flores, L. Alvarez, L. Gomez, and D. Santana-Cedres, “Automatic lens distortion correction using one-parameter division models,” in IPOL, 2014, pp. 327–343.
  • [59] D. Wierzbicki, “Multi-camera imaging system for uav photogrammetry,” Sensors, vol. 18, no. 8, 2018.
  • [60] Z. Lin, G. Su, and F. Xie, “UAV borne low altitude photogrammetry system,” in ISPRS Congress, 2012, pp. 415–423.
  • [61] Z.-J. Wang and W. Li, “A solution to cooperative area coverage surveillance for a swarm of MAVs,” Int. J. of Adv. Robot. Syst., vol. 10, no. 12, pp. 398–406, 2013.
  • [62] L. Merino, F. Caballero, J. R. Martínez-De-Dios, I. Maza, and A. Ollero, “An unmanned aircraft system for automatic forest fire monitoring and measurement,” J. Intell. Robot. Syst., vol. 65, no. 1, pp. 533–548, 2012.
  • [63] Y. Lin, J. Hyyppa, T. Rosnell, A. Jaakkola, and E. Honkavaara, “Development of a UAV-MMS-Collaborative aerial-to-ground remote sensing system-a preparatory field validation,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 6, no. 4, pp. 1893–1898, 2013.
  • [64] E. N. Barmpounakis, E. I. Vlahogianni, and J. C. Golias, “Unmanned aerial aircraft systems for transportation engineering: Current practice and future challenges,” International Journal of Transportation Science and Technology, vol. 5, no. 3, pp. 111 – 122, 2016.
  • [65] J. R. Riehl, G. E. Collins, and J. P. Hespanha, “Cooperative search by uav teams: A model predictive approach using dynamic graphs,” IEEE Trans. Aerosp. Electron. Syst., vol. 47, no. 4, pp. 2637–2656, 2009.
  • [66] G. J. Lim, S. Kim, J. Cho, Y. Gong, and A. Khodaei, “Multi-uav pre-positioning and routing for power network damage assessment,” IEEE Transactions on Smart Grid, vol. 9, no. 4, pp. 3643–3651, 2018.
  • [67] Y. H. Zhang, X. Jin, and Z. J. Wang, “A new modified panoramic UAV image stitching model based on the GA-SIFT and adaptive threshold method,” Memet. Comput., vol. 9, no. 3, pp. 231–244, 2017.
  • [68] X. Zhuo, T. Koch, F. Kurz, F. Fraundorfer, and P. Reinartz, “Automatic UAV image geo-registration by matching UAV images to georeferenced image data,” Remote Sens., vol. 9, no. 4, p. 376, 2017.
  • [69] S. Jiang and W. Jiang, “Hierarchical motion consistency constraint for efficient geometrical verification in UAV image matching,” arXiv:1801.04096, 2018.
  • [70] X. Xiao, B. Guo, D. Li, L. Li, N. Yang, J. Liu, P. Zhang, and Z. Peng, “Multi-view stereo matching based on self-adaptive patch and image grouping for multiple unmanned aerial vehicle imagery,” Remote Sens., vol. 8, no. 2, p. 89, 2016.
  • [71] L. Zhou, S. Zhu, T. Shen, J. Wang, T. Fang, and L. Quan, “Progressive large scale-invariant image matching in scale space,” in ICCV, 2017, pp. 2381–2390.
  • [72] H. Hu, Q. Zhu, Z. Du, Y. Zhang, and Y. Ding, “Reliable spatial relationship constrained feature point matching of oblique aerial images,” Photogramm. Eng. Remote Sensing, vol. 81, no. 1, pp. 49–58, 2015.
  • [73] C. Wang, J. Chen, J. Chen, A. Yue, D. He, Q. Huang, and Y. Zhang, “Unmanned aerial vehicle oblique image registration using an ASIFT-based matching method,” J. Appl. Remote Sens., vol. 12, no. 2, p. 025002, 2018.
  • [74] M. Yu, H. Yang, K. Deng, and K. Yuan, “Registrating oblique images by integrating affine and scale-invariant features,” Int. J. Remote Sens., vol. 39, no. 10, pp. 3386–3405, 2018.
  • [75] Y. Sun, L. Zhao, S. Huang, L. Yan, and G. Dissanayake, “-sift: Sift feature extraction and matching for large images in large-scale aerial photogrammetry,” ISPRS J. Photogrammetry Remote Sens., vol. 91, pp. 1–16, 2014.
  • [76] M. Ai, Q. Hu, J. Li, M. Wang, H. Yuan, and S. Wang, “A robust photogrammetric processing method of low-altitude UAV images,” Remote Sens., vol. 7, no. 3, pp. 2302–2333, 2015.
  • [77] V. Balntas, K. Lenc, A. Vedaldi, and K. Mikolajczyk, “HPatch: A benchmark and evaluation of handcrafted and learned local descriptors,” in CVPR, Honolulu, Hawaii, USA, 2017, pp. 1–10.
  • [78] Y. Verdie, K. M. Yi, P. Fua, and V. Lepetit, “TILDE: A temporally invariant learned detector,” in CVPR, Boston, MA, USA, 2015, pp. 1–10.
  • [79] X. Zhang, F. X. Yu, S. Karaman, and S.-F. Chang, “Learning discriminative and transformation covariant local feature detectors,” in CVPR, Honolulu, Hawaii, USA, 2017, pp. 6818–6826.
  • [80] X. Zhang, F. X. Yu, S. Kumar, and S.-F. Chang, “Learning spread-out local feature descriptors,” in ICCV, Venice, Italy, 2017.
  • [81] Z. Luo, T. Shen, L. Zhou, S. Zhu, R. Zhang, Y. Yao, T. Fang, and L. Quan, “Geodesc: Learning local descriptors by integrating geometry constraints,” in ECCV, 2018, pp. 170–185.
  • [82] H. Altwaijry, E. Trulls, J. Hays, P. Fua, and S. Belongie, “Learning to match aerial images with deep attentive architectures,” in CVPR, Seattle, WA, USA, 2016, pp. 3539–3547.
  • [83] S. Castillo-Carrión and J. E. Guerrero-Ginel, “Autonomous 3d metric reconstruction from uncalibrated aerial images captured from uavs,” Int. J. Remote Sens., vol. 38, no. 8-10, pp. 3027–3053, 2017.
  • [84] X. Wan, J. Liu, H. Yan, and G. L. K. Morgan, “Illumination-invariant image matching for autonomous UAV localisation based on optical sensing,” ISPRS J. Photogrammetry Remote Sens., vol. 119, pp. 198–213, 2016.
  • [85] S. Qi, C. Wu, B. Curless, Y. Furukawa, C. Hernandez, and S. M. Seitz, “Accurate geo-registration by ground-to-aerial image matching,” in 3DV, Tokyo, Japan, 2014, pp. 525–532.
  • [86] M. Wolff, R. T. Collins, and Y. Liu, “Regularity-driven building facade matching between aerial and street views,” in CVPR, Seattle, WA, USA, 2016, pp. 1591–1600.
  • [87] S. Hu, M. Feng, R. M. H. Nguyen, and G. H. Lee, “CVM-Net: Cross-view matching network for image-based ground-to-aerial geo-localization,” in CVPR, 2018, pp. 7258–7267.
  • [88] G.-S. Xia, J. Delon, and Y. Gousseau, “Accurate junction detection and characterization in natural images,” Int. J. Comput. Vision, vol. 106, no. 1, pp. 31–56, 2014.
  • [89] N. Xue, G. S. Xia, X. Ba, L. Zhang, and W. Shen, “Anisotropic-scale junction detection and matching for indoor images,” IEEE Trans. Image Process., vol. 27, no. 1, pp. 78–91, 2018.
  • [90] C. Zhao and A. A. Goshtasby, “Registration of multitemporal aerial optical images using line features,” ISPRS J. Photogrammetry Remote Sens., vol. 117, pp. 149–160, 2016.
  • [91] K. M. Yi, E. Trulls, Y. Ono, V. Lepetit, M. Salzmann, and P. Fua, “Learning to find good correspondences,” in CVPR, 2018, pp. 1–9.
  • [92] W. Förstner and B. P. Wrobel, Photogrammetric Computer Vision: Statistics, Geometry, Orientation and Reconstruction. Springer, Berlin, 2016.
  • [93] R. Qin, A. Grün, and X. Huang, “UAV project-building a reality-based 3D model,” Coordinates, vol. 9, pp. 18–26, 2013.
  • [94] Y. Zhang, J. Xiong, and L. Hao, “Photogrammetric processing of low-altitude images acquired by unpiloted aerial vehicles,” Photogramm. Rec., vol. 26, no. 134, pp. 190–211, 2011.
  • [95] M. J. Westoby, J. Brasington, N. F. Glasser, M. J. Hambrey, and J. M. Reynolds, “Structure-from-motion photogrammetry: A low-cost, effective tool for geoscience applications,” Geomorphology, vol. 179, pp. 300–314, 2012.
  • [96] O. Özyeşil, V. Voroninski, R. Basri, and A. Singer, “A survey of structure from motion,” Acta Numer., vol. 26, pp. 305–364, 2017.
  • [97] H. Cui, X. Gao, S. Shen, and Z. Hu, “HSfM: Hybrid structure-from-motion,” in CVPR, Honolulu, Hawaii, USA, 2017, pp. 1–10.
  • [98] S. Zhu, T. Shen, L. Zhou, R. Zhang, J. Wang, T. Fang, and L. Quan, “Parallel structure from motion from local increment to global averaging,” in ICCV, Venice, Italy, 2017.
  • [99] S. Y. Bao, M. Bagra, Y. Chao, and S. Savarese, “Semantic structure from motion with points, regions, and objects,” in CVPR, Providence, RI, USA, 2012, pp. 2703–2710.
  • [100] Y. Chen, Y. Wang, P. Lu, Y. Chen, and G. Wang, “Large-scale structure from motion with semantic constraints of aerial images,” in PRCV, 2018, pp. 347–359.
  • [101] D. Turner, A. Lucieer, and L. Wallace, “Direct georeferencing of ultrahigh-resolution UAV imagery,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 5, pp. 2738–2745, 2014.
  • [102] Z. Yin and J. Shi, “GeoNet: Unsupervised learning of dense depth, optical flow and camera pose,” in CVPR, 2018.
  • [103] A. Kendall, M. Grimes, and R. Cipolla, “PoseNet: A convolutional network for real-time 6-DOF camera relocalization,” in ICCV, Santiago, Chile, 2015, pp. 2938–2946.
  • [104] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe, “Unsupervised learning of depth and ego-motion from video,” in CVPR, 2017.
  • [105] S. Vijayanarasimhan, S. Ricco, C. Schmid, R. Sukthankar, and K. Fragkiadaki, “SfM-Net: Learning of structure and motion from video,” CoRR, 2017.
  • [106] V. Rengarajan, A. N. Rajagopalan, and R. Aravind, “From bows to arrows: Rolling shutter rectification of urban scenes,” in CVPR, Las Vegas, NV, USA, 2016, pp. 2773–2781.
  • [107] V. Rengarajan, Y. Balaji, and A. N. Rajagopalan, “Unrolling the shutter: Cnn to correct motion distortions,” in CVPR, Honolulu, HI, USA, 2017, pp. 2345–2353.
  • [108] B. Klingner, D. Martin, and J. Roseborough, “Street view motion-from-structure-from-motion,” in ICCV, Sydney, NSW, Australia, 2013, pp. 953–960.
  • [109] S. Im, H. Ha, G. Choe, H. Jeon, K. Joo, and I. S. Kweon, “High quality structure from small motion for rolling shutter cameras,” in ICCV, Santiago, Chile, 2015, pp. 837–845.
  • [110] Y. Furukawa and C. Hernández, “Multi-view stereo: A tutorial,” Foundations and Trends in Computer Graphics and Vision, vol. 9, no. 1-2, pp. 1–148, 2015.
  • [111] S. Harwin and A. Lucieer, “Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery,” Remote Sens., vol. 4, no. 6, pp. 1573–1599, 2012.
  • [112] J. Liu, B. Guo, W. Jiang, W. Gong, and X. Xiao, “Epipolar rectification with minimum perspective distortion for oblique images,” Sensors, vol. 16, no. 11, p. 1870, 2016.
  • [113] J. Li, Y. Liu, S. Du, P. Wu, and Z. Xu, “Hierarchical and adaptive phase correlation for precise disparity estimation of UAV images,” IEEE Trans. Geosci. Remote Sens., vol. 54, no. 12, pp. 7092–7104, 2016.
  • [114] J. Zbontar and Y. LeCun, “Stereo matching by training a convolutional neural network to compare image patches,” J. Mach. Learn. Res., vol. 17, pp. 1–32, 2016.
  • [115] A. Seki and M. Pollefeys, “SGM-Nets: Semi-global matching with neural networks,” in CVPR, 2017, pp. 6640–6649.
  • [116] A. Kendall, H. Martirosyan, S. Dasgupta, and P. Henry, “End-to-end learning of geometry and context for deep stereo regression,” in ICCV, 2017, pp. 66–75.
  • [117] M. Ji, J. Gall, H. Zheng, Y. Liu, and L. Fang, “Surfacenet: An end-to-end 3d neural network for multiview stereopsis,” in ICCV, 2017, pp. 2326–2334.
  • [118] A. Kar, C. Häne, and J. Malik, “Learning a multi-view stereo machine,” in NIPS, 2017, pp. 364–375.
  • [119] P.-H. Huang, K. Matzen, J. Kopf, N. Ahuja, and J.-B. Huang, “Deepmvs: Learning multi-view stereopsis,” in CVPR, 2018, pp. 2821–2830.
  • [120] Y. Yao, Z. Luo, S. Li, T. Fang, and L. Quan, “MVSNet: Depth inference for unstructured multi-view stereo,” in ECCV, 2018.
  • [121] J. Liu, S. Ji, C. Zhang, and Z. Qin, “Evaluation of deep learning based stereo matching methods: from ground to aerial images,” in ISPRS Archives, 2018, pp. 593–597.
  • [122] S. Wu, H. Huang, T. Portenier, M. Sela, D. Cohen-Or, R. Kimmel, and M. Zwicker, “Specular-to-diffuse translation for multi-view reconstruction,” in ECCV, 2018, pp. 193–211.
  • [123] S. Kumar, Y. Dai, and H. Li, “Monocular dense 3D reconstruction of a complex dynamic scene from two perspective frames,” in ICCV, Venice, Italy, 2017.
  • [124] X. Gao, L. Hu, H. Cui, S. Shen, and Z. Hu, “Accurate and efficient ground-to-aerial model alignment,” Patt. Recog., vol. 76, no. 4, pp. 288–302, 2018.
  • [125] X. Li, N. Hui, H. Shen, Y. Fu, and L. Zhang, “A robust mosaicking procedure for high spatial resolution remote sensing images,” ISPRS J. Photogrammetry Remote Sens., vol. 109, pp. 108–125, 2015.
  • [126] J. Tian, X. Li, F. Duan, J. Wang, and Y. Ou, “An efficient seam elimination method for UAV images based on wallis dodging and gaussian distance weight enhancement,” Sensors, vol. 16, no. 5, p. 662, 2016.
  • [127] M. Song, Z. Jia, S. Huang, and J. Fu, “Mosaicking UAV orthoimages using bounded voronoi diagrams and watersheds,” Int. J. Remote Sens., pp. 1–20, 2017.
  • [128] M. R. Faraji, X. Qi, and A. Jensen, “Computer vision-based orthorectification and georeferencing of aerial image sets,” J. Appl. Remote Sens., vol. 10, no. 3, p. 036027, 2016.
  • [129] G. Zhang, Y. He, W. Chen, J. Jia, and H. Bao, “Multi-viewpoint panorama construction with wide-baseline images,” IEEE Trans. Image Process., vol. 25, no. 7, pp. 3099–3111, 2016.
  • [130] A. S. Laliberte, M. A. Goforth, C. M. Steele, and A. Rango, “Multispectral remote sensing from unmanned aircraft: Image processing workflows and applications for rangeland environments,” Remote Sens., vol. 3, no. 12, pp. 2529–2551, 2011.
  • [131] S. Bang, H. Kim, and H. Kim, “UAV-based automatic generation of high-resolution panorama at a construction site with a focus on preprocessing for image stitching,” Automat. Constr., vol. 84, pp. 70–80, 2017.
  • [132] H. Yu, J. Wang, Y. Bai, W. Yang, and G.-S. Xia, “Analysis of large-scale UAV images using a multi-scale hierarchical representation,” Geo-spatial Inform. Sci., vol. 21, no. 1, pp. 33–44, 2018.
  • [133] M. Xia, J. Yao, R. Xie, L. Li, and W. Zhang, “Globally consistent alignment for planar mosaicking via topology analysis,” Patt. Recog., vol. 66, no. SI, pp. 239–252, 2017.
  • [134] J. Zaragoza, T.-J. Chin, M. S. Brown, and D. Suter, “As-projective-as-possible image stitching with moving DLT,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 36, no. 7, pp. 1285–1298, 2014.
  • [135] J. Li, Z. Wang, S. Lai, Y. Zhai, and M. Zhang, “Parallax-tolerant image stitching based on robust elastic warping,” IEEE Trans. Multimedia, vol. 20, no. 7, pp. 1672–1687, 2018.
  • [136] Y. Xu, J. Ou, H. He, X. Zhang, and J. Mills, “Mosaicking of unmanned aerial vehicle imagery in the absence of camera poses,” Remote Sens., vol. 8, no. 3, p. 204, 2016.
  • [137] T.-Z. Xiang, G.-S. Xia, X. Bai, and L. Zhang, “Image stitching by line-guided local warping with global similarity constraint,” Patt. Recog., vol. 83, pp. 481–497, 2018.
  • [138] Y.-S. Chen and Y.-Y. Chuang, “Natural image stitching with the global similarity prior,” in ECCV, Amsterdam, The Netherlands, 2016, pp. 186–201.
  • [139] K. Lin, N. Jiang, L.-F. Cheong, M. Do, and J. Lu, “SEAGULL: Seam-guided local alignment for parallax-tolerant image stitching,” in ECCV, Amsterdam, The Netherlands, 2016, pp. 370–385.
  • [140] J. Guo, Z. Pan, B. Lei, and C. Ding, “Automatic color correction for multisource remote sensing images with wasserstein CNN,” Remote Sens., vol. 9, no. 5, p. 483, 2017.
  • [141] T. Nguyen, S. W. Chen, S. S. Shivakumar, C. J. Taylor, and V. Kumar, “Unsupervised deep homography: A fast and robust homography estimation model,” IEEE Robot. Automat. Lett., vol. 3, no. 3, pp. 2346–2353, 2018.
  • [142] M. Nagai, T. Chen, R. Shibasaki, H. Kumagai, and A. Ahmed, “UAV-borne 3-d mapping system by multisensor integration,” IEEE Trans. Geosci. Remote Sens., vol. 47, no. 3, pp. 701–708, 2009.
  • [143] M. Schmitt and X. X. Zhu, “Data fusion and remote sensing: An ever-growing relationship,” IEEE Geosci. Remote Sens. Mag., vol. 4, no. 4, pp. 6–23, 2016.
  • [144] T. Xiang, L. Yan, and R. Gao, “A fusion algorithm for infrared and visible images based on adaptive dual-channel unit-linking pcnn in nsct domain,” Infrared Physics & Technology, vol. 69, pp. 53 – 61, 2015.
  • [145] Y. S. Kim, J. H. Lee, and J. B. Ra, “Multi-sensor image registration based on intensity and edge orientation information,” Patt. Recog., vol. 41, no. 11, pp. 3356–3365, 2008.
  • [146] Y. Ye, J. Shan, L. Bruzzone, and L. Shen, “Robust registration of multimodal remote sensing images based on structural similarity,” IEEE Trans. Geosci. Remote Sens., vol. 55, no. 5, pp. 2941–2958, 2017.
  • [147] J. Han, E. J. Pauwels, and D. Z. Paul, “Visible and infrared image registration in man-made environments employing hybrid visual features,” Pattern Recogn. Lett., vol. 34, no. 1, pp. 42–51, 2013.
  • [148] S. Yahyanejad and B. Rinner, “A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs,” ISPRS J. Photogrammetry Remote Sens., pp. 189–202, 2015.
  • [149] H. Chen, N. Xue, Y. Zhang, Q. Lu, and G.-S. Xia, “Robust visible-infrared image matching by exploiting dominant edge orientations,” Pattern Recogn. Lett., 2019.
  • [150] B. Yang and C. Chen, “Automatic registration of UAV-borne sequent images and LiDAR data,” ISPRS J. Photogrammetry Remote Sens., vol. 101, pp. 262–274, 2015.
  • [151] S. Liu, X. Tong, J. Chen, X. Liu, W. Sun, H. Xie, P. Chen, Y. Jin, and Z. Ye, “A linear feature-based approach for the registration of unmanned aerial vehicle remotely-sensed images and airborne LiDAR data,” Remote Sens., vol. 8, no. 2, p. 82, 2016.
  • [152] N. Tijtgat, W. V. Ranst, B. Volckaert, T. Goedeme, and F. D. Turck, “Embedded real-time object detection for a UAV warning system,” in ICCVW, Venice, Italy, 2017, pp. 2110–2118.
  • [153] Y. Cheng, D. Wang, P. Zhou, and T. Zhang, “A survey of model compression and acceleration for deep neural networks,” arXiv preprint arXiv:1710.09282, 2017.
  • [154] C. A. Lee, S. D. Gasster, A. Plaza, C. Chang, and B. Huang, “Recent developments in high performance computing for remote sensing: A review,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 4, no. 3, pp. 508–527, 2011.
  • [155] P. Ghamisi, N. Yokoya, J. Li, W. Liao, S. Liu, J. Plaza, B. Rasti, and A. Plaza, “Advances in hyperspectral image and signal processing: A comprehensive overview of the state of the art,” IEEE Geosci. Remote Sens. Mag., vol. 5, no. 4, pp. 37–78, 2017.
  • [156] Z. Hong, X. Tong, W. Cao, S. Jiang, P. Chen, and S. Liu, “Rapid three-dimensional detection approach for building damage due to earthquakes by the use of parallel processing of unmanned aerial vehicle imagery,” J. Appl. Remote Sens., vol. 9, no. 1, pp. 1–18, 2015.
  • [157] L. Chen, Y. Ma, P. Liu, J. Wei, W. Jie, and J. He, “A review of parallel computing for large-scale remote sensing image mosaicking,” Cluster Computing, vol. 18, no. 2, pp. 517–529, 2015.
  • [158] R. Zhang, S. Zhu, T. Shen, L. Zhou, Z. Luo, T. Fang, and L. Quan, “Distributed very large scale bundle adjustment by global camera consensus,” IEEE Trans. Pattern Anal. Mach. Intell., 2018.
  • [159] K. Wilson and N. Snavely, “Robust global translations with 1DSfM,” in ECCV, vol. 8691, 2014, pp. 61–75.
  • [160] Y. Furukawa and J. Ponce, “Accurate, dense, and robust multiview stereopsis,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 8, pp. 1362–1376, 2010.
  • [161] F. Nex, M. Gerke, F. Remondino, H.-J. Przybilla, M. Bäumker, and A. Zurhorst, “ISPRS benchmark for multi-platform photogrammetry,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. II-3/W4, 2015, pp. 135–142.
  • [162] J. A. Gonçalves and R. Henriques, “UAV photogrammetry for topographic monitoring of coastal areas,” ISPRS J. Photogrammetry Remote Sens., vol. 104, pp. 101–111, 2015.
  • [163] J. Elston, B. Argrow, M. Stachura, D. Weibel, D. Lawrence, and D. Pope, “Overview of small fixed-wing unmanned aircraft for meteorological sampling,” J. of Atmos. Ocean. Technol., vol. 32, no. 1, pp. 97–115, 2015.
  • [164] H. Shakhatreh, A. Sawalmeh, A. I. Al-Fuqaha, Z. Dou, E. Almaita, I. Khalil, N. S. Othman, A. Khreishah, and M. Guizani, “Unmanned aerial vehicles: A survey on civil applications and key research challenges,” CoRR, vol. abs/1805.00881, 2018.
  • [165] A. V. Parshin, V. A. Morozov, A. V. Blinov, A. N. Kosterev, and A. E. Budyak, “Low-altitude geophysical magnetic prospecting based on multirotor UAV as a promising replacement for traditional ground survey,” Geo-spatial Information Science, vol. 21, no. 1, pp. 67–74, 2018.
  • [166] Y. Ham, K. K. Han, J. J. Lin, and M. Golparvar-Fard, “Visual monitoring of civil infrastructure systems via camera-equipped unmanned aerial vehicles (UAVs): a review of related works,” Vis. Eng., vol. 4, no. 1, 2016.
  • [167] F. Biljecki, J. Stoter, H. Ledoux, S. Zlatanova, and A. Çöltekin, “Applications of 3D city models: State of the art review,” ISPRS Int. J. Geo-Inf., vol. 4, no. 4, pp. 2842–2889, 2015.
  • [168] T. Rakha and A. Gorodetsky, “Review of unmanned aerial system (UAS) applications in the built environment: Towards automated building inspection procedures using drones,” Automat. Constr., vol. 93, pp. 252–264, 2018.
  • [169] A. Vetrivel, M. Gerke, N. Kerle, and G. Vosselman, “Identification of damage in buildings based on gaps in 3D point clouds from very high resolution oblique airborne images,” ISPRS J. Photogrammetry Remote Sens., vol. 105, pp. 61–78, 2015.
  • [170] A. Ellenberg, A. Kontsos, F. Moon, and I. Bartoli, “Bridge deck delamination identification from unmanned aerial vehicle infrared imagery,” Automat. Constr., vol. 72, pp. 155–165, 2016.
  • [171] C. M. Gevaert, C. Persello, R. Sliuzas, and G. Vosselman, “Informal settlement classification using point-cloud and image-based features from UAV data,” ISPRS J. Photogrammetry Remote Sens., vol. 125, pp. 225–236, 2017.
  • [172] Q. Feng, J. Liu, and J. Gong, “UAV remote sensing for urban vegetation mapping using random forest and texture analysis,” Remote Sens., vol. 7, no. 1, pp. 1074–1094, 2015.
  • [173] G. R. Rodríguez-Canosa, S. Thomas, J. del Cerro, A. Barrientos, and B. MacDonald, “A real-time method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera,” Remote Sens., vol. 4, no. 12, pp. 1090–1111, 2012.
  • [174] P. Skoglar, U. Orguner, D. Törnqvist, and F. Gustafsson, “Road target search and tracking with gimballed vision sensor on an unmanned aerial vehicle,” Remote Sens., vol. 4, no. 7, pp. 2076–2111, 2012.
  • [175] T. Moranduzzo and F. Melgani, “Automatic car counting method for unmanned aerial vehicle images,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 3, pp. 1635–1647, 2014.
  • [176] ——, “Detecting cars in UAV images with a catalog-based approach,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 10, pp. 6356–6367, 2014.
  • [177] K. Liu and G. Mattyus, “Fast multiclass vehicle detection on aerial images,” IEEE Geosci. Remote Sens. Lett., vol. 12, no. 9, pp. 1938–1942, 2015.
  • [178] Y. Xu, G. Yu, Y. Wang, X. Wu, and Y. Ma, “A hybrid vehicle detection method based on viola-jones and HOG+SVM from UAV images,” Sensors, vol. 16, no. 8, 2016.
  • [179] C. Fu, R. Duan, D. Kircali, and E. Kayacan, “Onboard robust visual tracking for UAVs using a reliable global-local object model,” Sensors, vol. 16, no. 9, p. 406, 2016.
  • [180] Y. Ma, X. Wu, G. Yu, Y. Xu, and Y. Wang, “Pedestrian detection and tracking from low-resolution unmanned aerial vehicle thermal imagery,” Sensors, vol. 16, no. 4, p. 446, 2016.
  • [181] M.-R. Hsieh, Y.-L. Lin, and W. H. Hsu, “Drone-based object counting by spatially regularized regional proposal networks,” in ICCV, 2017, pp. 4145–4153.
  • [182] S. Li and D.-Y. Yeung, “Visual object tracking for unmanned aerial vehicles: A benchmark and new motion models,” in AAAI, 2017, pp. 4140–4146.
  • [183] M. Y. Yang, W. Liao, X. Li, and B. Rosenhahn, “Vehicle detection in aerial images,” CoRR, vol. abs/1801.07339, 2018.
  • [184] D. Du, Y. Qi, H. Yu, Y. Yang, K. Duan, G. Li, W. Zhang, Q. Huang, and Q. Tian, “The unmanned aerial vehicle benchmark: Object detection and tracking,” in ECCV, 2018, pp. 375–391.
  • [185] J. Leitloff, D. Rosenbaum, F. Kurz, O. Meynberg, and P. Reinartz, “An operational system for estimating road traffic information from aerial images,” Remote Sens., vol. 6, no. 11, pp. 11 315–11 341, 2014.
  • [186] R. Krajewski, J. Bock, L. Kloeker, and L. Eckstein, “The highD dataset: A drone dataset of naturalistic vehicle trajectories on German highways for validation of highly automated driving systems,” in IEEE 21st International Conference on Intelligent Transportation Systems (ITSC), 2018.
  • [187] G.-S. Xia, X. Bai, J. Ding, Z. Zhu, S. Belongie, J. Luo, M. Datcu, M. Pelillo, and L. Zhang, “DOTA: A large-scale dataset for object detection in aerial images,” in CVPR, 2018.
  • [188] S. Siebert and J. Teizer, “Mobile 3D mapping for surveying work projects using an unmanned aerial vehicle (UAV) system,” Automat. Constr., vol. 41, pp. 1–14, 2014.
  • [189] T. E. Barchyn, C. H. Hugenholtz, S. Myshak, and J. Bauer, “A UAV-based system for detecting natural gas leaks,” J. Unmanned Veh. Syst., vol. 6, no. 1, pp. 18–30, 2018.
  • [190] C. Gomez and D. R. Green, “Small unmanned airborne systems to support oil and gas pipeline monitoring and mapping,” ARAB J. Geosci., vol. 10, no. 9, pp. 1–17, 2017.
  • [191] L. Matikainen, M. Lehtomäki, E. Ahokas, J. Hyyppä, M. Karjalainen, A. Jaakkola, A. Kukko, and T. Heinonen, “Remote sensing methods for power line corridor surveys,” ISPRS J. Photogrammetry Remote Sens., vol. 119, pp. 10–31, 2016.
  • [192] L. Zhu and J. Hyyppä, “Fully-automated power line extraction from airborne laser scanning point clouds in forest areas,” Remote Sens., vol. 6, no. 11, pp. 11 267–11 282, 2014.
  • [193] J. Katrašnik, F. Pernuš, and B. Likar, “A survey of mobile robots for distribution power line inspection,” IEEE Trans. Power Del., vol. 25, no. 1, pp. 485–493, 2010.
  • [194] P. Martin, J. Moore, J. Fardoulis, O. Payton, and T. Scott, “Radiological assessment on interest areas on the Sellafield nuclear site via unmanned aerial vehicle,” Remote Sens., vol. 8, no. 11, p. 913, 2016.
  • [195] A. DeMario, P. Lopez, E. Plewka, R. Wix, H. Xia, E. Zamora, D. Gessler, and A. Yalin, “Water plume temperature measurements by an unmanned aerial system (UAS),” Sensors, vol. 17, no. 2, p. 306, 2017.
  • [196] E. Honkavaara, M. A. Eskelinen, I. Pölönen, H. Saari, H. Ojanen, R. Mannila, C. Holmlund, T. Hakala, P. Litkey, T. Rosnell, N. Viljanen, and M. Pellikka, “Remote sensing of 3-D geometry and surface moisture of a peat production area using hyperspectral frame cameras in visible to short-wave infrared spectral ranges onboard a small unmanned airborne vehicle (UAV),” IEEE Trans. Geosci. Remote Sens., vol. 54, no. 9, pp. 5440–5454, 2016.
  • [197] X. Tong, X. Liu, P. Chen, S. Liu, K. Luan, L. Li, S. Liu, X. Liu, H. Xie, Y. Jin, and Z. Hong, “Integration of UAV-based photogrammetry and terrestrial laser scanning for the three-dimensional mapping and monitoring of open-pit mine areas,” Remote Sens., vol. 7, no. 6, pp. 6635–6662, 2015.
  • [198] B. Zwissler, T. Oommen, S. Vitton, and E. A. Seagren, “Thermal remote sensing for moisture content monitoring of mine tailings: laboratory study,” Environ. Eng. Geosci., vol. 23, no. 3, pp. 1078–7275, 2017.
  • [199] L. P. Koh and S. A. Wich, “Dawn of drone ecology: low-cost autonomous aerial vehicles for conservation,” Trop. Conserv. Sci., vol. 5, no. 2, pp. 121–132, 2012.
  • [200] J. C. Hodgson, S. M. Baylis, R. Mott, A. Herrod, and R. H. Clarke, “Precision wildlife monitoring using unmanned aerial vehicles,” Sci. Rep., vol. 6, no. 1, 2016.
  • [201] J. Linchant, J. Lisein, J. Semeki, P. Lejeune, and C. Vermeulen, “Are unmanned aircraft systems (UASs) the future of wildlife monitoring? A review of accomplishments and challenges,” Mammal Rev., vol. 45, no. 4, pp. 239–252, 2015.
  • [202] A. Hodgson, N. Kelly, and D. Peel, “Unmanned aerial vehicles (UAVs) for surveying marine fauna: A dugong case study,” PLoS One, vol. 8, no. 11, p. e79556, 2013.
  • [203] J. C. van Gemert, C. R. Verschoor, P. Mettes, K. Epema, L. P. Koh, and S. Wich, “Nature conservation drones for automatic localization and counting of animals,” in ECCVW, Zurich, Switzerland, 2014, pp. 255–270.
  • [204] L. Gonzalez, G. Montes, E. Puig, S. Johnson, K. Mengersen, and K. Gaston, “Unmanned aerial vehicles (UAVs) and artificial intelligence revolutionizing wildlife monitoring and conservation,” Sensors, vol. 16, no. 1, p. 97, 2016.
  • [205] A. C. Seymour, J. Dale, M. Hammill, P. N. Halpin, and D. W. Johnston, “Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery,” Sci. Rep., vol. 7, p. 45127, 2017.
  • [206] F. Korner, R. Speck, A. H. Göktoğan, and S. Sukkarieh, “Autonomous airborne wildlife tracking using radio signal strength,” in IEEE/RSJ IROS, Taipei, Taiwan, 2010, pp. 107–112.
  • [207] K. F. Flynn and S. C. Chapra, “Remote sensing of submerged aquatic vegetation in a shallow non-turbid river using an unmanned aerial vehicle,” Remote Sens., vol. 6, no. 12, pp. 12 815–12 836, 2014.
  • [208] M. Mulero-Pázmány, R. Stolper, L. D. van Essen, J. J. Negro, and T. Sassen, “Remotely piloted aircraft systems as a rhinoceros anti-poaching tool in Africa,” PLoS One, vol. 9, no. 1, p. e83873, 2014.
  • [209] K. S. Christie, S. L. Gilbert, C. L. Brown, M. Hatfield, and L. Hanson, “Unmanned aircraft systems in wildlife research: current and future applications of a transformative technology,” Front. Ecol. Environ., vol. 14, no. 5, pp. 241–251, 2016.
  • [210] Y. Chen, T. Hakala, M. Karjalainen, Z. Feng, J. Tang, P. Litkey, A. Kukko, A. Jaakkola, and J. Hyyppä, “UAV-borne profiling radar for forest research,” Remote Sens., vol. 9, no. 1, p. 58, 2017.
  • [211] L. Wallace, A. Lucieer, and C. S. Watson, “Evaluating tree detection and segmentation routines on very high resolution UAV LiDAR data,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 12, pp. 7619–7628, 2014.
  • [212] L. Wallace, R. Musk, and A. Lucieer, “An assessment of the repeatability of automatic forest inventory metrics derived from UAV-borne laser scanning data,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 11, pp. 7160–7169, 2014.
  • [213] Z. Zhang, W. Ni, G. Sun, W. Huang, K. J. Ranson, B. D. Cook, and Z. Guo, “Biomass retrieval from l-band polarimetric UAVSAR backscatter and PRISM stereo imagery,” Remote Sens. Environ., vol. 194, pp. 331–346, 2017.
  • [214] S. Saatchi, M. Marlier, R. L. Chazdon, D. B. Clark, and A. E. Russell, “Impact of spatial variability of tropical forest structure on radar estimation of aboveground biomass,” Remote Sens. Environ., vol. 115, no. 11, pp. 2836–2849, 2011.
  • [215] S. Getzin, R. Nuske, and K. Wiegand, “Using unmanned aerial vehicles (UAV) to quantify spatial gap patterns in forests,” Remote Sens., vol. 6, no. 8, pp. 6988–7004, 2014.
  • [216] F. L. Bunnell and I. Houde, “Down wood and biodiversity: implications to forest practices,” Environ. Rev., vol. 18, pp. 397–421, 2010.
  • [217] R. Näsi, E. Honkavaara, P. Lyytikäinen-Saarenmaa, M. Blomqvist, P. Litkey, T. Hakala, N. Viljanen, T. Kantola, T. Tanhuanpää, and M. Holopainen, “Using UAV-based photogrammetry and hyperspectral imaging for mapping bark beetle damage at tree-level,” Remote Sens., vol. 7, no. 12, pp. 15 467–15 493, 2015.
  • [218] O. Brovkina, E. Cienciala, P. Surový, and P. Janata, “Unmanned aerial vehicles (UAV) for assessment of qualitative classification of Norway spruce in temperate forest stands,” Geo-spatial Information Science, vol. 21, no. 1, pp. 12–20, 2018.
  • [219] C. Yuan, Y. Zhang, and Z. Liu, “A survey on technologies for automatic forest fire monitoring, detection and fighting using UAVs and remote sensing techniques,” CAN J. Forest Res., vol. 45, no. 7, pp. 783–792, 2015.
  • [220] A. S. Laliberte and A. Rango, “Texture and scale in object-based analysis of subdecimeter resolution unmanned aerial vehicle (UAV) imagery,” IEEE Trans. Geosci. Remote Sens., vol. 47, no. 3, pp. 761–770, 2009.
  • [221] A. Capolupo, L. Kooistra, C. Berendonk, L. Boccia, and J. Suomalainen, “Estimating plant traits of grasslands from UAV-acquired hyperspectral images: A comparison of statistical approaches,” ISPRS Int. J. Geo-Inf., vol. 4, no. 4, pp. 2792–2820, 2015.
  • [222] S. K. von Bueren, A. Burkart, A. Hueni, U. Rascher, M. P. Tuohy, and I. J. Yule, “Deploying four optical UAV-based sensors over grassland: challenges and limitations,” Biogeosciences, vol. 12, no. 1, pp. 163–175, 2015.
  • [223] J. Chen, S. Yi, Y. Qin, and X. Wang, “Improving estimates of fractional vegetation cover based on UAV in alpine grassland on the Qinghai-Tibetan Plateau,” Int. J. Remote Sens., vol. 37, no. 8, pp. 1922–1936, 2016.
  • [224] J. Lopatin, F. E. Fassnacht, T. Kattenborn, and S. Schmidtlein, “Mapping plant species in mixed grassland communities using close range imaging spectroscopy,” Remote Sens. Environ., vol. 201, pp. 12 – 23, 2017.
  • [225] J. Schmidt, F. E. Fassnacht, C. Neff, A. Lausch, B. Kleinschmit, M. Förster, and S. Schmidtlein, “Adapting a Natura 2000 field guideline for a remote sensing-based assessment of heathland conservation status,” Int. J. Appl. Earth Obs. Geoinf., vol. 60, pp. 61–71, 2017.
  • [226] V. Gomes, S. IJff, N. Raes, I. Amaral, et al., “Species distribution modelling: Contrasting presence-only models with plot abundance data,” Sci. Rep., vol. 8, no. 1, p. 1003, 2018.
  • [227] J. Müllerová, J. Brůna, T. Bartaloš, P. Dvořák, M. Vítková, and P. Pyšek, “Timing is important: Unmanned aircraft vs. satellite imagery in plant invasion monitoring,” Frontiers in Plant Science, vol. 8, p. 887, 2017.
  • [228] K. S. He, B. A. Bradley, A. F. Cord, D. Rocchini, M.-N. Tuanmu, S. Schmidtlein, W. Turner, M. Wegmann, and N. Pettorelli, “Will remote sensing shape the next generation of species distribution models?” Remote Sensing in Ecology and Conservation, vol. 1, no. 1, pp. 4–18, 2015.
  • [229] M. Jaud, F. Grasso, N. Le Dantec, R. Verney, C. Delacourt, J. Ammann, J. Deloffre, and P. Grandjean, “Potential of UAVs for monitoring mudflat morphodynamics (application to the Seine estuary, France),” ISPRS Int. J. Geo-Inf., vol. 5, no. 4, p. 50, 2016.
  • [230] T.-C. Su and H.-T. Chou, “Application of multispectral sensors carried on unmanned aerial vehicle (UAV) to trophic state mapping of small reservoirs: A case study of Tain-Pu reservoir in Kinmen, Taiwan,” Remote Sens., vol. 7, no. 8, pp. 10 078–10 097, 2015.
  • [231] A. Capolupo, S. Pindozzi, C. Okello, N. Fiorentino, and L. Boccia, “Photogrammetry for environmental monitoring: The use of drones and hydrological models for detection of soil contaminated by copper,” Sci. Total Environ., vol. 514, pp. 298–306, 2015.
  • [232] Z. Xu, L. Wu, Y. Shen, F. Li, Q. Wang, and R. Wang, “Tridimensional reconstruction applied to cultural heritage with the use of camera-equipped UAV and terrestrial laser scanner,” Remote Sens., vol. 6, no. 11, pp. 10 413–10 434, 2014.
  • [233] J. Fernández-Hernandez, D. González-Aguilera, P. Rodríguez-Gonzálvez, and J. Mancera-Taboada, “Image-based modelling from unmanned aerial vehicle (UAV) photogrammetry: An effective, low-cost tool for archaeological applications,” Archaeometry, vol. 57, no. 1, pp. 128–145, 2015.
  • [234] F.-J. Mesas-Carrascosa, M. D. Notario García, J. E. Meroño de Larriva, and A. García-Ferrer, “An analysis of the influence of flight parameters in the generation of unmanned aerial vehicle (UAV) orthomosaicks to survey archaeological areas,” Sensors, vol. 16, no. 11, p. 1838, 2016.
  • [235] A. Y.-M. Lin, A. Novo, S. Har-Noy, N. D. Ricklin, and K. Stamatiou, “Combining GeoEye-1 satellite remote sensing, UAV aerial imaging, and geophysical surveys in anomaly detection applied to archaeology,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 4, no. 4, pp. 870–876, 2011.
  • [236] J. A. Torres-Martínez, M. Seddaiu, P. Rodríguez-Gonzálvez, D. Hernández-López, and D. González-Aguilera, “A multi-data source and multi-sensor approach for the 3D reconstruction and web visualization of a complex archaeological site: The case study of “Tolmo de Minateda”,” Remote Sens., vol. 8, no. 7, p. 550, 2016.
  • [237] S. Chen, Q. Hu, S. Wang, and H. Yang, “A virtual restoration approach for ancient plank road using mechanical analysis with precision 3D data of heritage site,” Remote Sens., vol. 8, no. 10, p. 828, 2016.
  • [238] M. Barekatain, M. Martí, H. Shih, S. Murray, K. Nakayama, Y. Matsuo, and H. Prendinger, “Okutama-action: An aerial view video dataset for concurrent human action detection,” in CVPRW, 2017.
  • [239] A. Al-Sheary and A. Almagbile, “Crowd monitoring system using unmanned aerial vehicle (UAV),” Journal of Civil Engineering and Architecture, vol. 11, pp. 1014–1024, 2017.
  • [240] S. Sankaran, L. R. Khot, C. Z. Espinoza, S. Jarolmasjed, V. R. Sathuvalli, G. J. Vandemark, P. N. Miklas, A. H. Carter, M. O. Pumphrey, N. R. Knowles, and M. J. Pavek, “Low-altitude, high-resolution aerial imaging systems for row and field crop phenotyping: A review,” Eur. J. Agron., vol. 70, pp. 112–123, 2015.
  • [241] Y. Sun, D. Xu, D. W. K. Ng, L. Dai, and R. Schober, “Resource allocation for solar powered UAV communication systems,” IEEE International Workshop on Signal Processing Advances in Wireless Communications, pp. 1–5, 2018.
  • [242] J. Ouyang, Y. Che, J. Xu, and K. Wu, “Throughput maximization for laser-powered UAV wireless communication systems,” in IEEE International Conference on Communications Workshops, 2018, pp. 1–6.
  • [243] X. Deng, J. Chen, H. Li, P. Han, and W. Yang, “Log-cumulants of the finite mixture model and their application to statistical analysis of fully polarimetric UAVSAR data,” Geo-spatial Information Science, vol. 21, no. 1, pp. 45–55, 2018.
  • [244] Y. Yang, Z. Lin, and F. Liu, “Stable imaging and accuracy issues of low-altitude unmanned aerial vehicle photogrammetry systems,” Remote Sens., vol. 8, no. 4, p. 316, 2016.
  • [245] A. E. R. Shabayek, C. Demonceaux, O. Morel, and D. Fofi, “Vision based UAV attitude estimation: Progress and insights,” J. Intell. Robot. Syst., vol. 65, no. 1-4, pp. 295–308, 2012.
  • [246] A. E. Marcu, D. Costea, V. Licaret, M. Pirvu, E. Slusanschi, and M. Leordeanu, “SafeUAV: Learning to estimate depth and safe landing areas for UAVs from synthetic data,” in International Workshop on Computer Vision for UAVs, 2018, pp. 1–16.
  • [247] A. Giusti, J. Guzzi, D. C. Cireşan, F. L. He, J. P. Rodríguez, F. Fontana, and D. Scaramuzza, “A machine learning approach to visual perception of forest trails for mobile robots,” IEEE Robot. Automat. Lett., vol. 1, no. 2, pp. 661–667, 2016.
  • [248] M. Müller, V. Casser, N. Smith, D. Michels, and B. Ghanem, “Teaching UAVs to race: End-to-end regression of agile controls in simulation,” in International Workshop on Computer Vision for UAVs, 2018, pp. 1–17.
  • [249] Y. Lu, Z. Xue, G.-S. Xia, and L. Zhang, “A survey on vision-based UAV navigation,” Geo-spatial Information Science, vol. 21, no. 1, pp. 21–32, 2018.
  • [250] J. Li-Chee-Ming and C. Armenakis, “UAV navigation system using line-based sensor pose estimation,” Geo-spatial Information Science, vol. 21, no. 1, pp. 2–11, 2018.
  • [251] N. Smith, N. Moehrle, M. Goesele, and W. Heidrich, “Aerial path planning for urban scene reconstruction: A continuous optimization method and benchmark,” in SIGGRAPH Asia, 2018, pp. 183:1–183:15.
  • [252] G. A. Kumar, A. K. Patil, R. Patil, S. S. Park, and Y. H. Chai, “A LiDAR and IMU integrated indoor navigation system for UAVs and its application in real-time pipeline classification,” Sensors, vol. 17, no. 6, p. 1268, 2017.
  • [253] L. Zhang, L. Zhang, and B. Du, “Deep learning for remote sensing data: A technical tutorial on the state of the art,” IEEE Geosci. Remote Sens. Mag., vol. 4, no. 2, pp. 22–40, 2016.
  • [254] X.-X. Zhu, D. Tuia, L. Mou, G.-S. Xia, L. Zhang, F. Xu, and F. Fraundorfer, “Deep learning in remote sensing: A comprehensive review and list of resources,” IEEE Geosci. Remote Sens. Mag., vol. 5, no. 4, pp. 8–36, 2017.
  • [255] I. Cherabier, C. Häne, M. R. Oswald, and M. Pollefeys, “Multi-label semantic 3D reconstruction using voxel blocks,” in International Conference on 3D Vision, Stanford, CA, USA, 2016, pp. 601–610.
  • [256] K. M. Fornace, C. J. Drakeley, T. William, F. Espino, and J. Cox, “Mapping infectious disease landscapes: unmanned aerial vehicles and epidemiology,” Trends Parasitol., vol. 30, no. 11, pp. 514–519, 2014.