Rethinking Drone-Based Search and Rescue with Aerial Person Detection

by Pasi Pyrrö et al.

The visual inspection of aerial drone footage is an integral part of land search and rescue (SAR) operations today. Since this inspection is a slow, tedious and error-prone job for humans, we propose a novel deep learning algorithm to automate this aerial person detection (APD) task. We experiment with model architecture selection, online data augmentation, transfer learning, image tiling and several other techniques to improve the test performance of our method. We present the novel Aerial Inspection RetinaNet (AIR) algorithm as the combination of these contributions. The AIR detector demonstrates state-of-the-art performance on a commonly used SAR test data set in terms of both precision (a 21-percentage-point increase) and speed. In addition, we provide a new formal definition for the APD problem in SAR missions: we propose a novel evaluation scheme that ranks detectors in terms of real-world SAR localization requirements. Finally, we propose a novel postprocessing method for robust, approximate object localization: the merging of overlapping bounding boxes (MOB) algorithm. This final processing stage used in the AIR detector significantly improves its performance and usability in real-world aerial SAR missions.




1 Introduction

Search and rescue (SAR) missions have been carried out for centuries to aid those who are lost or in distress, typically in remote areas such as wilderness. With recent advances in technology, small unmanned aerial vehicles (UAVs), or drones, have been used during SAR missions for years in many countries [3, 10, 30, 38, 12]. These drones enable rapid aerial photographing of large areas with potentially difficult-to-reach terrain, which improves the safety of SAR personnel during the search operation. Moreover, drone units can outperform even several land parties in search efficiency [18]. However, there remains the issue of inspecting a vast number of aerial drone images for tiny clues about the missing person's location, which is currently a manual task in most cases. This inspection process is very slow, tedious and error-prone for most humans, and can significantly delay the entire drone search operation [3, 38, 30].

(a) Input boxes
(b) NMS results
(c) MOB results
Figure 1: Ground truth labels (blue) and three different detection results (red) from the AIR object detector: (a) raw, non-aggregated bounding boxes, (b) NMS-postprocessed boxes, and (c) MOB-postprocessed boxes. In typical object detection scenarios (e.g., the PASCAL VOC2012 challenge [6]), one tries to achieve very accurate boxes that result in (b), although (c) results frequently suffice in SAR-APD tasks. Moreover, (c) results are visually more pleasing and much easier to produce. Indeed, in this paper we analyze detection and evaluation methods that encourage results similar to (c) that overlook the difficult and irrelevant dense group separation problem in SAR-APD. It could be argued that producing and rewarding (c) results better reflects the true objective of aerial SAR searches, which is to provide the ground search teams with an approximate location of clues about the missing person.

In this paper we propose a novel object detection algorithm, called Aerial Inspection RetinaNet (AIR), that automates the visual search problem of aerial SAR missions, illustrated in Fig. 2. The AIR detector is based on the RetinaNet deep learning model by Lin et al. [24]. RetinaNet is well-suited for addressing the imbalance of foreground and background training examples [24], which can be particularly severe in aerial image data. Furthermore, AIR incorporates various modifications on top of RetinaNet to improve its performance in small object detection from high-resolution aerial images. Consequently, AIR achieves state-of-the-art performance on the difficult HERIDAL benchmark [3] both in terms of precision (a 21-percentage-point increase) and processing time, while achieving competitive recall. This performance is achieved through multiple strategies, such as selecting an appropriate convolutional neural network (CNN) backbone, conducting an ablation study and calibrating the model confidence score threshold.

Moreover, we reformulate the evaluation criteria for the problem coined search and rescue with aerial person detection (SAR-APD). This problem is associated with the HERIDAL data set, which consists of aerial search imagery in a SAR setting (see Fig. 2). The reason for this reformulation is that the HERIDAL benchmark itself does not define clear evaluation criteria for SAR-APD methods that would allow different solutions to be compared reliably. Hence, we define two algorithms for ranking this benchmark problem: VOC2012 [6] and the proposed SAR-APD evaluation scheme. We recommend using the latter for computing the main performance metrics of the HERIDAL benchmark because it focuses on detection rather than localization. This scoring trade-off is designed for SAR-APD problems, since reliable detection is more important in real-world SAR operations. Nevertheless, the conventional VOC2012 evaluation can still be used for computing additional “challenge” metrics on HERIDAL. This could be beneficial for non-SAR-related aerial person detection (APD) applications using the HERIDAL data, which might, for example, require counting the persons. Ultimately, our proposed evaluation scheme is better aligned with the real-world SAR operational requirement for localization, which is typically tens of meters in ground resolution [31].

Figure 2: An example of visual search problem in SAR context. As evident from the image, it is really difficult to spot small humans without distinctive colors from still, high-resolution images. The aerial image is taken from the HERIDAL data set [3].

Furthermore, we deem the traditional non-maximum suppression (NMS) [32] bounding box aggregation (BBA) algorithm, which is used in many modern object detectors to postprocess detection results [1], insufficient for solving the dense group separation problem of the HERIDAL test data. NMS is designed for exact delineation of individual objects; however, this is a daunting task for the small person groups of HERIDAL, as illustrated in Fig. 1(b). Therefore, we propose a different BBA strategy that merges the overlapping bounding box clusters, as shown in Fig. 1(c). This merging of overlapping bounding boxes (MOB) algorithm is well-suited for approximate, robust localization and detection of small objects in SAR-APD tasks. MOB is primarily designed to be used with our SAR-APD evaluation scheme for testing object detector performance (i.e., it is not currently used during training). Moreover, both proposed algorithms are independent of AIR and straightforward to implement in most object detector frameworks.

The rest of the paper is organized as follows. In Section 2 we explain the SAR-APD problem and highlight some related work on it. Moreover, we formalize the evaluation criteria for ranking object detector performance in SAR-APD, and define our novel approach to that. Section 3 describes the proposed MOB postprocessing algorithm. In Section 4 our AIR detector is briefly described, and Section 5 discusses our experiment results with the state-of-the-art comparison. Finally, Section 6 concludes the paper.

2 Aerial person detection problem in SAR

Aerial person detection is a very specific class of object detection problems that involves detecting people from an aerial perspective. It is a difficult problem in general due to various challenges, such as rapid platform motion, image instability, varying environments, high variability in viewing angles and distances, and the extremely small size of the objects. In the SAR context, additional constraints are added to the APD problem: a straight-down (nadir) image viewpoint (see Fig. 2), 4K image resolution and a relatively high drone flying altitude (from 40 to 65 m). These requirements are embedded in the HERIDAL data and are based on real-world SAR experience [3]. For example, the nadir viewpoint minimizes occlusion by vegetation and simplifies georeferencing of the image observations. Additionally, the SAR-APD problem definition emphasizes successful detection over accurate localization because fine-grained search can be carried out by ground teams dispatched to the clue location [10, 31].

2.1 Related Work

Despite being a relatively new and very specific research field, many works have been proposed in SAR-APD. Since the field still lacks a comprehensive theoretical foundation and methodology, these approaches vary significantly, ranging from simple thresholding rules on nearly raw pixel values [45] to complicated deep learning systems [29, 3, 49]. Due to the many recent successes of deep neural networks in image processing [21, 46, 28, 13, 36, 27, 14], our detection algorithm also falls into the latter category.

An intuitive solution is to use infrared cameras on UAVs to distinguish the human thermal signature from the background [38, 8]. Nevertheless, this approach has many limitations, especially during summer in rocky areas, where the background thermal radiation can cloak that of humans [3]. Furthermore, high-resolution thermal cameras can be too expensive for typical SAR efforts with limited budgets.

The mean shift clustering algorithm by Comaniciu and Meer [4] has seen extensive use in SAR-APD [48, 43, 8]. This general, nonparametric clustering method enables straightforward segmentation of distinctively colored regions and works adequately well in practice. However, the lack of supervised training with diverse data limits the generalizability of this approach (e.g., if the person is not wearing distinctively colored clothing).

Salient object detection is another SAR-APD research direction [11, 3], which attempts to extract visually distinct regions that stand out from the input images, inspired by the human visual ability [2]. The wavelet-transform-based saliency method [16] has shown promising results for proposing class-agnostic candidate object regions (e.g., 93% recall on HERIDAL test data [3]). Nevertheless, classifying the individual regions to eliminate the vast number of false positives remains a problem, since it is computationally expensive, and training the classifier with cropped image patches can be cumbersome.

Deep learning solutions to SAR-APD have been studied extensively as well. Marušić et al. [29] used the well-known Faster R-CNN detector [36], which achieved commendable performance on HERIDAL with little modification. Božić-Štulić et al. [3] used a VGG16 CNN classifier on top of their salient region proposals. Lastly, Vasić and Papić [49] proposed a two-stage multimodel CNN detector, which uses an ensemble approach of two deep CNNs for both region proposal and classification stages in the detector pipeline. This complex model achieves a very high recall on HERIDAL test data at the cost of notably reduced inference speed.

2.2 HERIDAL data set

The HERIDAL data set [3] is, as far as we know, currently the only widely used public SAR-APD benchmark. It is constructed using expert knowledge and statistics compiled by Koester [19] and the Croatian SAR team to ensure proper coverage of realistic SAR scenarios, according to Božić-Štulić et al. [3]. The data set contains 1647 annotated high-resolution UAV aerial images taken from the Mediterranean wilderness of Croatia and Bosnia and Herzegovina: 1546 training and 101 testing images. We use 10% of the training samples for validation. An example image of the data set is shown in Fig. 2. Furthermore, the data set contains only a single object class, “person”, instances of which are annotated with bounding boxes using the VOC XML label format [5]. Each image in HERIDAL contains at least one person annotation, and some images contain more than ten. In total, HERIDAL contains 3229 person annotations (2892 train and 337 test).

The high-resolution aerial images are taken from several locations with various drone and camera models at altitudes ranging from 40 to 65 m. Most of the images share the same pixel resolution, although a few pictures have a wider format. The aerial images have an approximate 2 cm ground resolution; that is, one pixel width corresponds to roughly 2 cm on the ground [3], which is rather accurate.

However, all objects in the images are very small, as shown in Fig. 2. Indeed, an object covers only about 0.03% of the whole image area on average. This small object size poses a considerable challenge for object detection methods [26].

2.3 Evaluation schemes

Before solving a problem, one must define what a correct solution looks like. In machine learning, this is typically achieved by calculating metrics, such as precision (PRC) and recall (RCL), with an evaluation scheme or algorithm. These metrics attempt to reflect, in a somewhat subjective sense, the true model performance in a given task. In object detection, a traditional choice for an evaluation scheme is the scoring algorithm from the PASCAL VOC2012 challenge [6], which mainly uses the average precision (AP) metric. However, the performance metric calculation depends on first classifying all predictions under evaluation into either the true positive (TP) or false positive (FP) category given the ground truth (GT) data labels.

In object detection, this classification is traditionally done by a ground truth matching algorithm, such as Algorithm 1. It is a key design choice in an object detection evaluation scheme implementation, as it decides what types of predictions are rewarded (marked as TPs) and which are penalized (marked as FPs). For example, duplicate detections are typically marked as FPs by most evaluation schemes, such as the VOC2012 algorithm [39]. What can be confusing is that most evaluation schemes report the same metrics (PRC, RCL, AP), although their ways of arriving at those metrics are fundamentally different due to the choice of the ground truth matching algorithm. We next review a couple of existing evaluation schemes, discuss their issues in the SAR-APD context, and finally compare them to our proposed evaluation scheme.

The VOC2012 evaluation scheme is a widely used method for evaluating the performance of object detectors that output bounding box predictions and their associated confidence scores [5, 6, 25, 39, 22]. It is designed for relatively large objects and requires strict alignment of the detection and label boxes, measured by the intersection over union (IoU) metric given by Eq. 4. Accordingly, VOC2012 evaluation uses a typical TP criterion of over 0.5 IoU between the two boxes [25]. Moreover, each prediction must be matched with exactly one label (a one-to-one mapping). In essence, our generalized Algorithm 1 yields the GT matching for VOC2012 evaluation when its IoU threshold is set to 0.5 and each prediction is allowed to match at most one label. These strict requirements are the root cause of the dense group separation problem in HERIDAL, as they encourage prediction postprocessing methods that produce results similar to what is depicted in Fig. 1(b). On the other hand, results like those in Fig. 1(c) receive zero points (one FP and five FNs) under VOC2012 evaluation for two reasons: firstly, the IoU requirement is not satisfied with any of the labels, and secondly, no label grouping is allowed. Nonetheless, the MOB results in Fig. 1(c) are arguably more informative and visually pleasing in the SAR visual search context.

The recent MS COCO evaluation scheme has largely replaced the aforementioned VOC2012 criteria due to the advent of the popular MS COCO object detection challenge [25]. MS COCO evaluation is very similar to VOC2012; however, it computes some additional metrics on top of the VOC2012 AP, for example by iterating the evaluation algorithm over IoU threshold values from 0.5 to 0.95 and taking the mean AP as the end result [26]. Thus, the MS COCO scheme is even stricter than VOC2012 in terms of object localization accuracy requirements, and is thereby far too strict for SAR-APD problem evaluation. Consequently, we omit a detailed discussion of MS COCO evaluation in this paper and use the VOC2012 scheme as a baseline for comparison instead.

The proposed SAR-APD evaluation scheme matches ground truth boxes such that the real-world, loose localization requirement of aerial SAR is respected. Hence, we ignore the dense group separation problem by allowing one-to-many mappings between a prediction and the label boxes. Furthermore, we significantly relax the evaluation IoU threshold from 0.5 to 0.0025, which allows much larger prediction boxes to be considered TPs (see Fig. 3). As such, the red predicted box in Fig. 1(c) would achieve five TPs under SAR-APD evaluation, as this single prediction captures all the blue label boxes. The ground truth matching method in our SAR-APD evaluation scheme corresponds to Algorithm 1 with the IoU threshold set to 0.0025 and no limit on the number of labels matched per prediction, although in our experiments we also imposed a minimum size on accepted prediction boxes. We argue this limit on minimum box size had a negligible effect on our performance metrics due to the MOB algorithm's tendency to overestimate the bounding box size, as seen in Fig. 1(c). Nonetheless, with the relaxed IoU threshold, the predicted box minimum size becomes an important evaluation scheme design consideration: barely visible, few-pixels-wide predictions should not be accepted as TPs. Moreover, all parameters of our generalized Algorithm 1 can be customized for use cases other than SAR-APD, for example by limiting the maximum prediction group size.

Input:  B      — bounding box predictions sorted by decreasing confidence score for class c from input image x.
Input:  G      — ground truth bounding box labels for class c from input image x.
Input:  t_IoU  — box IoU threshold for matching.
Input:  K_max  — maximum number of GT boxes to match with a single prediction b.
Input:  A_min  — minimum overlap value, which limits the TP prediction box size.
Output: A binary sequence of variable length indicating true and false positives.

function MatchBoxesGeneric(B, G, t_IoU, K_max, A_min)
    for b in B do                          // Iterate over predictions.
        M ← ∅                              // Initialize matched label set.
        v ← IoU(b, G)                      // GT IoU value vector (see Eq. 4).
        sort G by v in descending order    // Sort by descending IoU value.
        for g in G do
            if IoU(b, g) ≥ t_IoU and IoU(b, g) ≥ A_min and |M| < K_max then
                M ← M ∪ {g}                // Attribute one TP to this prediction.
        if M = ∅ then
            output one FP                  // Nothing matched, add one FP.
        G ← G \ M                          // Remove matched labels.
    return the output TP/FP sequence
Algorithm 1: Generalized ground truth matching method for typical object detector performance evaluation.
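For concreteness, the matching procedure above can be sketched in Python. This is a hedged illustration, not the authors' implementation: the (x1, y1, x2, y2) box format, the parameter names t_iou, k_max and a_min, and the treatment of a_min as an extra lower bound on the IoU of an accepted match are all assumptions.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def match_boxes_generic(preds, labels, t_iou=0.5, k_max=1, a_min=0.0):
    """Return a TP/FP flag sequence: one TP per matched label, one FP per
    prediction that matches nothing. Predictions are assumed sorted by
    descending confidence; matched labels are consumed."""
    labels = list(labels)
    flags = []
    for p in preds:  # iterate over predictions, best-scoring first
        scored = sorted(((iou(p, g), g) for g in labels),
                        key=lambda pair: pair[0], reverse=True)
        matched = [g for v, g in scored[:k_max] if v >= t_iou and v >= a_min]
        if matched:
            flags.extend([True] * len(matched))  # one TP per matched label
        else:
            flags.append(False)                  # nothing matched: one FP
        for g in matched:
            labels.remove(g)                     # remove matched labels
    return flags
```

With t_iou=0.5 and k_max=1 this behaves like strict VOC2012-style matching; with t_iou=0.0025 and a large k_max, a single enclosing prediction collects one TP per person it covers, mirroring the SAR-APD scheme.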

The proposed SAR-APD evaluation scheme encourages results that effectively and reliably summarize the relevant, mission-critical information for SAR operations without much clutter. This is especially important if the SAR-APD algorithm co-operates with a human supervisor, which is the most likely scenario given the potential dire consequences of any mistakes. The reason is that any unnecessary distraction or irritation in a stressful SAR situation can undermine the actions and decisions made by the said supervisor. For example, too many false positives can cause the SAR-APD algorithm suggestions to be mistrusted and ignored.

Moreover, our evaluation scheme enables research effort to be directed towards more meaningful, value-creating endeavors, such as ensuring a high true positive rate for these approximate group labels or improving the detector processing speed, instead of spending excessive time tweaking threshold parameters to near-perfectly delineate each tiny person. In addition, the proposed SAR-APD evaluation scheme works as a simple drop-in replacement for the VOC2012 criteria without the need to modify any data labels. In fact, the SAR-APD evaluation scheme can define implicit group labels with configurable size for typical object detection data sets.

Figure 3: Illustration of localization area limits given an accuracy requirement defined by a fixed evaluation IoU threshold t_IoU: (a) the average human width w_h in aerial images, (b) the maximum width w_max for a prediction bounding box, and (c) the maximum acceptable localization area width w_area for a maximum-sized prediction box. See equations (1) and (2) for calculating the values of w_max and w_area, respectively. Note that only prediction boxes with unity aspect ratio are considered here for simplicity.

The relationship between the evaluation IoU threshold t_IoU and the maximum acceptable pixel width w_max of the prediction bounding box can be formulated as

    w_max = w_h / √t_IoU,    (1)

where w_h is the estimated average human pixel width in the HERIDAL aerial images. Note that this equation only applies when the ground truth bounding box is completely inside the predicted box; otherwise the IoU will be lower. This scenario is depicted in Fig. 3. Furthermore, Fig. 3 illustrates the maximum acceptable (unity aspect ratio) area width w_area for a prediction box of width w_max to exist in, which is given by

    w_area = 2 · w_max − w_h.    (2)

Therefore, a square prediction box of width w_max is only considered a TP inside this square region of width w_area centered at the GT label, as shown in Fig. 3. With t_IoU = 0.0025, w_max corresponds to roughly 20 meters on the ground in most HERIDAL images [3]. This means the SAR-APD evaluation requires the person to be located within the maximum predicted 20 m × 20 m square area on the ground, which is still in alignment with the localization accuracy recommendation by Molina et al. [31]. Hence, the SAR-APD evaluation scheme scores object detection algorithms based on their ability to find approximate georeferences for the objects of interest.
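Assuming square boxes and a GT box fully contained in the prediction (so IoU = w_h²/w_max²), the two width limits from equations (1) and (2) can be checked numerically. The 50 px average person width used below is a hypothetical value (roughly 1 m at the ~2 cm/px ground resolution), not a figure from the paper.

```python
import math

def max_pred_width(w_h, t_iou):
    """Eq. (1): widest square prediction that still reaches IoU >= t_iou,
    assuming the square GT box of width w_h lies fully inside it."""
    return w_h / math.sqrt(t_iou)

def max_localization_width(w_h, t_iou):
    """Eq. (2): width of the square region centered at the GT label in
    which a maximum-sized prediction box can lie and still contain the GT."""
    return 2 * max_pred_width(w_h, t_iou) - w_h

# With the SAR-APD threshold of 0.0025 and a hypothetical 50 px person
# width, the maximum prediction width comes out to ~1000 px, i.e. roughly
# 20 m on the ground at ~2 cm/px.
w_max = max_pred_width(50, 0.0025)
```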

3 MOB postprocessing

In this work, we also propose a novel postprocessing method for the SAR-APD problem, called the MOB algorithm. It is a probabilistic BBA method that trades localization accuracy for detection certainty. MOB first clusters prediction boxes based on an overlap distance metric given by Eq. 3. Subsequently, MOB merges all candidate boxes in each overlap cluster into a single enclosing box by using Eq. 5. This is in contrast to the de facto NMS method, which attempts to greedily eliminate all sufficiently overlapping boxes except the max-scoring one. It could be argued that the popularity of NMS is largely based on its remarkable synergy with the VOC2012 evaluation method, which emphasizes accurate individual object delineation, and this is not the goal of SAR-APD. Conversely, MOB (with its default merge strategy) is not compatible with the VOC2012 method at all, as explained in Sec. 2.3. Therefore, we use our proposed SAR-APD scheme for measuring its performance truthfully. However, this makes a fair comparison of MOB to other BBA methods difficult at the moment.

Essentially, MOB helps the SAR personnel to focus on the relevant clues in the image, if there are any. This is achieved by eliminating clutter from the detection results and highlighting broad regions that include all the relevant objects of interest, as done in Fig. 1(c).

MOB is based on the single-linkage agglomerative clustering algorithm implemented in the scikit-learn machine learning library [34], which uses the Jaccard distance metric [20]. This metric and the corresponding pairwise distance matrix D are defined as

    d_J(b_i, b_j) = 1 − area(b_i ∩ b_j) / area(b_i ∪ b_j),    D_ij = d_J(b_i, b_j),    (3)

where b_i and b_j are two input bounding boxes, and area(·) denotes the area of a region. Intuitively, d_J(b_i, b_j) = 1 when b_i and b_j have zero overlapping area, and d_J(b_i, b_j) = 0 when they are the same box (100% overlap), as is the case with the diagonal elements of D.

The BBA IoU threshold controls whether two boxes are considered overlapping. In NMS its default value is typically 0.5. For MOB, we set the threshold close to zero, which is experimentally found to result in the merging of all overlapping bounding boxes. This clustering formulation is equivalent to finding all the connected components in the Jaccard distance graph defined by D, if the maximum distance d_J(b_i, b_j) = 1 is considered an infinite distance, meaning there is no link between the bounding boxes b_i and b_j.
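The clustering step can be sketched as finding connected components of the box overlap graph. The following dependency-free union-find sketch illustrates this formulation; it is not the scikit-learn-based implementation used by MOB, and the (x1, y1, x2, y2) box format and near-zero threshold are assumptions.

```python
def jaccard_distance(a, b):
    """Eq. (3): 1 - IoU of two axis-aligned bounding boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return 1.0 - (inter / union if union > 0 else 0.0)

def overlap_clusters(boxes, t_bba=1e-9):
    """Group box indices into connected components: i and j are linked
    whenever IoU(i, j) > t_bba (single linkage over the overlap graph)."""
    parent = list(range(len(boxes)))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if 1.0 - jaccard_distance(boxes[i], boxes[j]) > t_bba:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(boxes)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```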

After forming the overlapping bounding box clusters, we calculate the enclosing box for each cluster according to the enclose merge strategy, given by Eq. 5. This sequence of enclosing boxes is the output of one MOB iteration (i.e., one pass of the MOB algorithm). Particularly, consider a K-sized sequence of axis-aligned bounding boxes (b_1, …, b_K) in one overlap cluster, where each box b_i is defined by its minimum coordinate p_i^min and maximum coordinate p_i^max. Consequently, the enclosing bounding box b_enc of the cluster can be computed as

    b_enc = (p^min, p^max),    (5)

where p^min = (min_i x_i^min, min_i y_i^min) and p^max = (max_i x_i^max, max_i y_i^max). Moreover, the new confidence score of the enclosed box is computed as the mean of the predicted box scores in the overlap cluster. This can sometimes yield relatively low confidence scores for the resulting boxes, so MOB also offers a top-k parameter that can be used to prune some of the low-confidence outlier boxes in an overlap cluster. This pruning frequently reduces the output box size as well.

On the other hand, this enclose merge strategy, given by Eq. 5, sometimes creates new, larger overlapping boxes. Because of this, it can be desirable to repeat the MOB iteration a few times to ensure everything gets merged together. The number of these MOB iterations can be specified with an algorithm input parameter. However, if there is only one merged bounding box left after any MOB iteration, the algorithm terminates early. In our experiments, a few iterations were experimentally found to be sufficient for merging all overlapping bounding boxes in most cases.
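One step of a MOB iteration, the enclose merge of Eq. 5 with mean-score aggregation and optional top-k pruning, might look as follows. This is a minimal sketch: the (x1, y1, x2, y2, score) box format is an assumption, not the AIR implementation.

```python
def enclose_merge(cluster, top_k=None):
    """Merge one overlap cluster of (x1, y1, x2, y2, score) boxes into the
    minimal enclosing box (Eq. 5); the merged score is the mean score.
    If top_k is given, only the top_k highest-scoring boxes are merged,
    pruning low-confidence outliers (and often shrinking the output box)."""
    boxes = sorted(cluster, key=lambda b: b[4], reverse=True)
    if top_k is not None:
        boxes = boxes[:top_k]
    x1 = min(b[0] for b in boxes)
    y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes)
    y2 = max(b[3] for b in boxes)
    score = sum(b[4] for b in boxes) / len(boxes)
    return (x1, y1, x2, y2, score)
```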

However, sometimes the merged boxes can grow undesirably large, especially if the original overlapping boxes are organized in a diagonal formation. For this reason, MOB allows one to specify a maximum inflation factor λ, which limits the size of the merged bounding boxes. It sets an upper bound for the result box areas, given by

    A_max = λ · max_{b ∈ B} area(b),

where B is the sequence of input bounding boxes to the MOB algorithm, an example of which is illustrated in Fig. 1(a). In our experiments we use λ = 100, which means no output box can be larger than 100 times the largest input box. As such, if the area of any merged bounding box exceeds A_max during MOB clustering, its underlying box cluster is recursively subdivided into smaller box clusters until A_max is no longer exceeded by any merged box. The subdivision algorithm uses a simple heuristic rule that divides a box cluster into two subclusters along the maximum-length axis of the box cluster, such that each subcluster contains a roughly equal number of boxes. This functionality limits the size inflation of the merged boxes, which is useful if a certain localization accuracy needs to be met, such as the one imposed by the prediction box size limit in Algorithm 1. Choosing the inflation factor accordingly guarantees that the localization accuracy requirement of the proposed SAR-APD evaluation scheme is satisfied.
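The area bound and the subdivision heuristic described above can be sketched as follows. This approximates the described behavior (split along the cluster's longer axis into roughly equal halves); the box format and tie-breaking details are assumptions.

```python
def box_area(b):
    """Area of an axis-aligned (x1, y1, x2, y2, ...) box."""
    return (b[2] - b[0]) * (b[3] - b[1])

def merged_area_bound(input_boxes, inflation_factor=100):
    """Upper bound for merged box areas: the inflation factor times the
    largest input box area."""
    return inflation_factor * max(box_area(b) for b in input_boxes)

def split_cluster(cluster):
    """Heuristic subdivision: order boxes by center along the longer axis
    of the cluster and split into two roughly equal-sized subclusters."""
    xs = [b[0] for b in cluster] + [b[2] for b in cluster]
    ys = [b[1] for b in cluster] + [b[3] for b in cluster]
    axis = 0 if (max(xs) - min(xs)) >= (max(ys) - min(ys)) else 1
    ordered = sorted(cluster, key=lambda b: (b[axis] + b[axis + 2]) / 2)
    mid = len(ordered) // 2
    return ordered[:mid], ordered[mid:]
```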

The most obvious improvement in MOB is the improved visual representation of dense people group predictions, as shown in Fig. 1(c). Such groups are realistic, yet rare, occurrences in SAR [3, 10]. MOB also makes predictions more robust by simplifying the SAR-APD problem. That is, MOB is far less sensitive to the choice of various threshold parameters in terms of precision (when using SAR-APD evaluation). This enables the use of considerably lower threshold values, which can increase the recall metric, as depicted in Fig. 4. Therefore, fewer search targets are likely to go unnoticed during aerial drone searches. Moreover, there is less need for threshold parameter calibration on new, unseen data, which makes MOB a more general approach to SAR-APD than NMS.

Finally, it should be mentioned that the enclose method given by Eq. 5 is only one potential merge strategy. MOB also supports other similar strategies, such as calculating the cluster average box, to further customize its behavior for different potential use cases.

4 AIR detector

Next, we briefly describe our approach to this SAR-APD problem, namely the AIR detector [35]. It is built on top of the famous one-stage RetinaNet detector by Lin et al. [24]. The AIR implementation is based on the keras-retinanet Python framework by Gaiser [7].

The first aspect in designing AIR is naturally to address the small object detection problem, of which SAR-APD is essentially an instance. For this, the built-in feature pyramid network (FPN) [23] of RetinaNet is essential, since it enhances the semantic quality of the high-resolution CNN feature maps that capture the small objects. Moreover, the focal loss used in training the RetinaNet detector helps to mitigate the severe foreground-background class imbalance in the HERIDAL data [24]. According to Lin et al. [24], the focal loss solves this imbalance problem better than various sampling heuristics, such as OHEM [41]. Furthermore, we employ a variant of the image tiling strategy [33] to enable full pixel information usage of the 4K input images within typical GPU memory limits.
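The tiling idea can be illustrated with a short sketch that computes overlapping crop offsets for a high-resolution image; detections from each tile would then be shifted back to full-image coordinates. The 800 px tile size and 100 px overlap are illustrative values, not the AIR configuration.

```python
def tile_offsets(width, height, tile=800, overlap=100):
    """Top-left corners of overlapping square tiles covering a
    width x height image (assumes width >= tile and height >= tile)."""
    step = tile - overlap
    xs = list(range(0, width - tile + 1, step))
    ys = list(range(0, height - tile + 1, step))
    if xs[-1] + tile < width:   # ensure the right edge is covered
        xs.append(width - tile)
    if ys[-1] + tile < height:  # ensure the bottom edge is covered
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]
```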

Secondly, one needs to address the training data scarcity of HERIDAL. Therefore, we use CNN backbone weights pretrained on the ImageNet1000 data [39] for all our AIR experiments. This is a common strategy to mitigate the lack of big data in learning the low-level feature representations necessary for most computer vision tasks [40, 44, 50]. In addition, we adopt the online, random data augmentation scheme by Zoph et al. [50] with slight modifications, such as adding white Gaussian noise to the images. The idea is to incorporate label invariance to small geometric perturbations and lighting changes into the training data [40] by augmenting it with various geometric transformations and color operations. Furthermore, random data augmentation can also regularize model training [21].

Lastly, we naturally employ the novel MOB algorithm, discussed in Sec. 3, for robust AIR output postprocessing during test time. However, we also use conventional NMS postprocessing as a performance benchmark, and for comparing our results to other notable SAR-APD work in Tab. 3.

5 Experiments and results

A total of six major experiments were conducted in order to create the AIR detector: a general hyperparameter search, a backbone architecture selection (see Tab. 1), an ablation study (see Tab. 2), a random trial experiment, a model calibration study (see Fig. 4), and a tiling performance study. For full details of these experiments, see the thesis work by Pyrrö et al. [35]. In this paper, we focus on the most relevant results that can help to improve SAR-APD performance in general.

Backbone               Train set results (%)   Test set results (%)
                       PRC    RCL    AP        PRC    RCL    AP
VGG16 [42]             74.4   98.9   89.1      79.8   79.8   76.5
ResNet50 [13]          74.6   91.8   77.8      83.5   72.1   69.6
ResNet101 [13]         76.9   94.9   77.4      79.3   75.1   72.3
ResNet152 [13]         81.8   95.8   81.7      85.0   80.7   78.1
DenseNet201 [15]       77.7   42.6   35.4      73.9   28.5   26.3
SeResNeXt101 [14]      -      -      -         77.6   69.7   65.8
SeResNet152 [14]       78.8   84.3   70.8      76.9   66.2   63.1
EfficientNetB4 [47]    66.9   82.2   57.9      79.5   58.8   54.5
Table 1: CNN backbone architecture selection experiment results on the HERIDAL data set. ResNet152 is the clear winner. Train set evaluation was not possible for SeResNeXt101 due to a corrupted model file.

One of the most important experiments is the CNN backbone selection for SAR-APD, since the choice of feature extractor is a fundamental aspect of solving any object detection problem [17, 9, 26]. Therefore, we test fine-tuning eight different ImageNet-pretrained backbone architectures on HERIDAL; the results are shown in Tab. 1. Here, we report the PRC, RCL and AP metrics computed using the VOC2012 evaluation scheme. As evident, ResNet152 outperforms all the other candidates on the test set and is thus selected as the CNN backbone for later experiments. Surprisingly, the simplest VGG16 network yields results comparable to ResNet152, while the last four most sophisticated models in Tab. 1, according to ImageNet classification score [26], perform relatively poorly on HERIDAL.
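For reference, the area-under-curve computation behind the reported AP metric can be sketched as follows; this is a simplified rendering of the VOC2012 procedure [6], not the exact development-kit code:

```python
# Simplified VOC2012-style average precision: interpolate precision to be
# monotonically non-increasing, then integrate over recall.
def average_precision(recall, precision):
    """recall/precision: matched lists sorted by increasing recall."""
    r = [0.0] + list(recall) + [1.0]
    p = [0.0] + list(precision) + [0.0]
    # Each precision value becomes the max of itself and all later values.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas where recall increases.
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))
```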

       Test results (%)
Row    PRC    RCL    AP
1      57.7    4.5    2.9
2      76.5   57.9   52.2
3      87.8   72.4   69.3
4      85.0   80.7   78.1
5      82.8   81.3   77.7
6      57.7   84.3   81.7
7      17.2   81.9   79.2
8      49.5   79.5   77.3
9      17.7   89.0   86.9
10     90.1   86.1   84.6
Table 2: Ablation study results with the selected ResNet152 CNN backbone. The HERIDAL test set performance metrics are evaluated for each different type of added feature (ablation): (a) anchor boxes optimized for small objects, (b) improved hyperparameters, (c) image tiling, (d) pretraining on Stanford Drone data set [37], (e) online data augmentation pipeline, (f) decreased training example IoU threshold, (g) removing the two topmost layers from FPN, (h) best seed from random trial experiment and (i) calibrated score threshold parameter from model calibration study. The final model uses a combination of six ablations out of the nine tested.

With the CNN backbone selected, we test several other tricks (or ablations) to improve AIR test performance, which are collected in Tab. 2. We only keep a tested trick if it improves the HERIDAL test AP metric; otherwise, it is discarded from further experiments. The last calibration study ablation is an exception to this rule, as it primarily optimizes PRC instead. Table 2 includes our most important modifications for improving SAR-APD performance: (a) adjusting the image region sampling grid, (b) tuning hyperparameters (e.g., the learning rate), (c) using image tiling instead of resizing, (e) using data augmentation, and, interestingly, (h) finding an appropriate random initialization for training.

Due to the PRC sensitivity of NMS, it is beneficial to perform confidence score threshold calibration as the last experiment (see Tab. 2). To accomplish this, we test AIR with 10 different threshold values from 0.05 to 0.5 under both the VOC2012 and SAR-APD evaluation schemes. Moreover, we calibrate both the NMS and MOB postprocessing methods, and the results are shown in Fig. 4. For a balanced trade-off between PRC and RCL, we choose a separate threshold value for NMS and for MOB.
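The calibration sweep itself amounts to recomputing precision and recall at each candidate threshold; a minimal sketch with hypothetical detection data:

```python
# Sketch of the score-threshold sweep; the detection data in the usage
# example below is hypothetical, only the 0.05-0.5 grid matches the text.
def precision_recall_at(threshold, scores, is_true_positive, num_gt):
    """Precision/recall when keeping detections with score >= threshold."""
    kept = [tp for s, tp in zip(scores, is_true_positive) if s >= threshold]
    if not kept:
        return 1.0, 0.0  # no detections kept: vacuous precision, zero recall
    tp = sum(kept)
    return tp / len(kept), tp / num_gt

def sweep(scores, is_true_positive, num_gt):
    grid = [round(0.05 * k, 2) for k in range(1, 11)]  # 0.05 .. 0.5
    return {t: precision_recall_at(t, scores, is_true_positive, num_gt)
            for t in grid}
```

Picking the operating point from such a sweep is exactly the precision-recall trade-off discussed above: a higher threshold favors PRC, a lower one favors RCL.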

Given the similarity of the corresponding subplots in Fig. 4, we can conclude that the choice of evaluation method affects the NMS-based detector relatively little in this experiment. On the other hand, comparing the NMS and MOB subplots in Fig. 4, it is evident that MOB is far less sensitive to the choice of the score threshold than NMS. This is a compelling property of the MOB algorithm, which can in some cases eliminate the need for configuring this parameter altogether. Furthermore, MOB enables setting a much lower threshold to increase recall without significantly sacrificing precision.

Figure 4: AIR detector model calibration study results on the HERIDAL test set with different evaluation and bounding box aggregation methods. The choice of the score threshold calibration parameter yields different trade-offs between precision and recall.

Figure 5: Precision-recall curves for different evaluation experiments with the AIR detector on the HERIDAL test set. The red dot corresponds to a precision-recall pair at a certain fixed confidence score threshold value. As such, by moving the dot along the curve, different trade-offs between precision and recall can be made. Notice that MOB enables moving the dot much further to the right without the precision metric plummeting. This encourages placing more emphasis on recall, which is critical for the SAR mission success rate.

With the score thresholds fixed, we analyze NMS and MOB performance further under both evaluation methods. For this, we plot the precision-recall curves shown in Fig. 5. A similar phenomenon to Fig. 4 can be seen here as well: SAR-APD evaluation affects NMS performance very little (see Fig. 5), most likely due to the relaxed localization criterion. However, adding MOB to the equation significantly increases all measured performance metrics in Fig. 5. This can likely be attributed to the relaxation of the dense group separation problem, and thus to the lessened need for eliminating boxes via score thresholding to keep PRC satisfactory, which in turn is reflected in the increased RCL metric. This underlines the importance of using evaluation metrics that match the application.

Model                            PRC (%)   RCL (%)   AP (%)   ATI (s)
VOC2012 evaluation
  Mean shift clustering [48]     18.7      74.7      -        -
  Saliency guided VGG16 [3]      34.8      88.9      -        -
  Faster R-CNN (2019) [3]        58.1      85.0      -        -
  Faster R-CNN (2018) [29]       67.3      88.3      86.1     1
  Multimodel CNN [49]            68.9      94.7      -        10
  SSD [49]                        4.3      94.4      -        -
  AIR with NMS (ours)            90.1      86.1      84.6     1
SAR-APD evaluation (ours)
  AIR with NMS (ours)            90.5      87.8      86.5     1
  AIR with MOB (ours)            94.9      92.9      91.7     1
Table 3: Comparison with the state-of-the-art in SAR-APD on the HERIDAL test set using two different evaluation methods. The abbreviation ATI refers to average time per image, which is a very crude estimate due to software and hardware differences in the reported test environments [35]. Under the traditional VOC2012 evaluation, AIR achieves state-of-the-art results in both PRC and ATI. MOB boosts all accuracy metrics under SAR-APD evaluation.

The comparison of the AIR detector to the state-of-the-art SAR-APD methods is shown in Tab. 3. Both evaluation methods are included; however, the SAR-APD scheme results cannot be directly compared to other work. Our strongest competitor is the Multimodel CNN by Vasić and Papić [49], which achieves a very high RCL. Nevertheless, this model is an order of magnitude slower due to its very high complexity (an ensemble of four deep CNNs), as shown by its average time per image (ATI) metric in Tab. 3. This can hinder its practical adoption in SAR missions. Furthermore, it lacks end-to-end training capability. Some of the RCL gap can likely be attributed to better solving of the less important dense person group problem, as indicated by our MOB results in Tab. 3. As for the other strong competitor, Faster R-CNN [29], we argue AIR can achieve similar recall and AP while still being 11 percentage points more precise by choosing a lower score threshold, as shown in Fig. 4.

Lastly, we highlight that our final MOB results in Tab. 3 far exceed the average human performance on the same task. According to Pyrrö et al. [35], the average human achieves roughly 59% PRC, 68% RCL and 33 s ATI in SAR-APD, when inspecting fewer than roughly 200 images. This is a fraction of the average real-world number [12]. Moreover, human performance is demonstrated to drop with the number of images inspected due to accumulated fatigue [35]. Therefore, the tireless AIR detector and its innovations can significantly improve drone-based SAR efficiency in the future.

6 Conclusions

Current aerial drone searches in SAR lack efficient means for visual footage inspection. Our deep learning solution to this problem showcased state-of-the-art performance (a 21 percentage point increase in PRC) on the difficult HERIDAL benchmark. Moreover, we redefined the related SAR-APD problem by presenting a novel evaluation and BBA method for it, in order to better capture the real-world requirements of aerial SAR. We also contributed extensive SAR-APD experimental results to the scientific literature. Finally, we identified important directions for future work: the use of MOB and SAR-APD evaluation for model validation during training, an extensive study of different MOB merge strategies, and the pursuit of real-time AIR inference performance.


  • [1] Navaneeth Bodla, Bharat Singh, Rama Chellappa, and Larry S Davis. Soft-NMS – Improving object detection with one line of code. In Proceedings of the IEEE International Conference on Computer Vision, pages 5561–5569, 2017.
  • [2] Ali Borji, Ming-Ming Cheng, Qibin Hou, Huaizu Jiang, and Jia Li. Salient object detection: A survey. Computational Visual Media, 5(2):117–150, 2019.
  • [3] Dunja Božić-Štulić, Željko Marušić, and Sven Gotovac. Deep learning approach in aerial imagery for supporting land search and rescue missions. International Journal of Computer Vision, 127(9):1256–1278, 2019.
  • [4] Dorin Comaniciu and Peter Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603–619, 2002.
  • [5] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
  • [6] Mark Everingham and John Winn. The pascal visual object classes challenge 2012 (VOC2012) development kit. Pattern Analysis, Statistical Modelling and Computational Learning, Tech. Rep, 8, 2012.
  • [7] Hans Gaiser. Keras implementation of the RetinaNet object detector, 2019. Accessed 20 March 2021.
  • [8] Anna Gaszczak, Toby P Breckon, and Jiwan Han. Real-time people and vehicle detection from UAV imagery. In Intelligent Robots and Computer Vision XXVIII: Algorithms and Techniques, volume 7878, page 78780B. International Society for Optics and Photonics, 2011.
  • [9] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT Press, 2016.
  • [10] Michael A Goodrich, Bryan S Morse, Damon Gerhardt, Joseph L Cooper, Morgan Quigley, Julie A Adams, and Curtis Humphrey. Supporting wilderness search and rescue using a camera-equipped mini UAV. Journal of Field Robotics, 25(1-2):89–110, 2008.
  • [11] Sven Gotovac, Vladan Papić, and Željko Marušić. Analysis of saliency object detection algorithms for search and rescue operations. In 2016 24th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), pages 1–6. IEEE, 2016.
  • [12] Sven Gotovac, Danijel Zelenika, Željko Marušić, and Dunja Božić-Štulić. Visual-based person detection for search-and-rescue with uas: Humans vs. machine learning algorithm. Remote Sensing, 12(20):3295, 2020.
  • [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
  • [14] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7132–7141, 2018.
  • [15] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700–4708, 2017.
  • [16] Nevrez Imamoglu, Weisi Lin, and Yuming Fang. A saliency detection model using low-level features based on wavelet transform. IEEE Transactions on Multimedia, 15(1):96–105, 2012.
  • [17] Alexander Jung. Machine learning: The basics. arXiv:1805.05052, 2021.
  • [18] Jeremiah Karpowicz. UAVs for law enforcement, first response & search and rescue (SAR). Commercial UAV Expo, 2016.
  • [19] Robert J Koester. Lost person behavior: A search and rescue guide on where to look for land, air and water. dbS Productions LLC, Charlottesville, VA, 2008.
  • [20] Sven Kosub. A note on the triangle inequality for the Jaccard distance. Pattern Recognition Letters, 120:36–38, 2019.
  • [21] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Citeseer, 2009.
  • [22] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. The Open Images Dataset V4. International Journal of Computer Vision, pages 1956–1981, 2020.
  • [23] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2117–2125, 2017.
  • [24] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2980–2988, 2017.
  • [25] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
  • [26] Li Liu, Wanli Ouyang, Xiaogang Wang, Paul Fieguth, Jie Chen, Xinwang Liu, and Matti Pietikäinen. Deep learning for generic object detection: A survey. International Journal of Computer Vision, 128(2):261–318, 2020.
  • [27] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision, pages 21–37. Springer, 2016.
  • [28] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
  • [29] Željko Marušić, Dunja Božić-Štulić, Sven Gotovac, and Tonćo Marušić. Region proposal approach for human detection on aerial imagery. In 2018 3rd International Conference on Smart and Sustainable Technologies (SpliTech), pages 1–6. IEEE, 2018.
  • [30] Željko Marušić, Danijel Zelenika, Tonćo Marušić, and Sven Gotovac. Visual search on aerial imagery as support for finding lost persons. In 2019 8th Mediterranean Conference on Embedded Computing, MECO 2019 - Proceedings. Institute of Electrical and Electronics Engineers Inc., 2019.
  • [31] Pere Molina, Ismael Colomina, T Victoria, Jan Skaloud, W Kornus, R Prades, and C Aguilera. Searching lost people with UAVs: The system and results of the CLOSE-SEARCH project. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 39(CONF):441–446, 2012.
  • [32] A. Neubeck and L. Van Gool. Efficient non-maximum suppression. In 18th International Conference on Pattern Recognition (ICPR’06), volume 3, pages 850–855, 2006.
  • [33] F Ozge Unel, Burak O Ozkalayci, and Cevahir Cigla. The power of tiling for small object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 582–591, 2019.
  • [34] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
  • [35] Pasi Pyrrö, Hassan Naseri, and Alexander Jung. AIR: Aerial inspection retinanet for land search and rescue missions. Master’s thesis, Aalto University, School of Science, 2021.
  • [36] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 28:91–99, 2015.
  • [37] A Robicquet, A Sadeghian, A Alahi, and S Savarese. Learning social etiquette: Human trajectory prediction in crowded scenes. In European Conference on Computer Vision (ECCV), 2016.
  • [38] Piotr Rudol and Patrick Doherty. Human body detection and geolocalization for UAV search and rescue missions using color and thermal imagery. In 2008 IEEE Aerospace Conference, pages 1–8. IEEE, 2008.
  • [39] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
  • [40] Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):60, 2019.
  • [41] Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick. Training region-based object detectors with online hard example mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 761–769, 2016.
  • [42] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
  • [43] Jan Sokalski, Toby P Breckon, and Ian Cowling. Automatic salient object detection in UAV imagery. Proc. of the 25th Int. Unmanned Air Vehicle Systems, pages 1–12, 2010.
  • [44] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
  • [45] Jingxuan Sun, Boyang Li, Yifan Jiang, and Chih-yung Wen. A camera-based target detection and positioning UAV system for search and rescue (SAR) purposes. Sensors, 16(11):1778, 2016.
  • [46] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
  • [47] Mingxing Tan and Quoc V Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pages 6105–6114. PMLR, 2019.
  • [48] Hrvoje Turić, Hrvoje Dujmić, and Vladan Papić. Two-stage segmentation of aerial images for search and rescue. Information Technology and Control, 39(2), 2010.
  • [49] Mirela Kundid Vasić and Vladan Papić. Multimodel deep learning for person detection in aerial images. Electronics, 9(9):1459, 2020.
  • [50] Barret Zoph, Ekin D Cubuk, Golnaz Ghiasi, Tsung-Yi Lin, Jonathon Shlens, and Quoc V Le. Learning data augmentation strategies for object detection. In European Conference on Computer Vision, pages 566–583. Springer, 2020.