A Two-Stream Symmetric Network with Bidirectional Ensemble for Aerial Image Matching

02/04/2020 ∙ Jae-Hyun Park et al. ∙ Korea University

In this paper, we propose a novel method to precisely match two aerial images that were obtained in different environments via a two-stream deep network. By internally augmenting the target image, the network processes the original and augmented pairs through a two-stream architecture with three input images, and reflects the additional augmented pair in training. As a result, the training process of the deep network is regularized and the network becomes robust to the variance of aerial images. Furthermore, we introduce an ensemble method based on a bidirectional network, which is motivated by the isomorphic nature of the geometric transformation. We obtain two sets of global transformation parameters without any additional network or parameters, which alleviates asymmetric matching results and enables a significant improvement in performance by fusing the two outcomes. For the experiments, we adopt aerial images from Google Earth and the International Society for Photogrammetry and Remote Sensing (ISPRS). To quantitatively assess our results, we apply the probability of correct keypoints (PCK) metric, which measures the degree of matching. The qualitative and quantitative results show a sizable performance gap over conventional methods for matching aerial images. All code, our trained model, and the dataset are available online.


1 Introduction

1.1 Motivation

Aerial image matching is a geometric process of aligning a source image with a target image. Both images display the same scene but are obtained in different environments, such as different times, viewpoints, and sensors. It is also a prerequisite for a variety of aerial image tasks, such as change detection, image fusion, and image stitching. Since it can have a significant impact on the performance of these subsequent tasks, it is an extremely important step. As shown in Figure 1, different environments produce considerable visual differences in land coverage, weather, and objects. This variance in the aerial images degrades matching precision. In conventional computer vision approaches, correspondences between two images are computed by hand-crafted algorithms (such as SIFT Lowe04distinctiveimage, SURF Bay_surf:speeded, HOG Dalal05histogramsof, and ASIFT Morel:2009:ANF:1658384.1658390), followed by estimating the global geometric transformation using RANSAC Fischler:1981:RSC:358669.358692 or the Hough transform Leibe_08_ijcv; Lamdan_88_cvpr. However, these approaches are not very successful for aerial images due to their high resolution, computational costs, large-scale transformations, and variation in capturing environments.

Figure 1: Variance in the aerial image data. We captured images that were obtained at different times, viewpoints and by different sensors. These images represent the same place but are visually different, which causes degradation in performance.

Another problem with aerial image matching is the asymmetric result. As mentioned above, there are numerous aerial image matching methods Lowe04distinctiveimage; Bay_surf:speeded; Dalal05histogramsof; Morel:2009:ANF:1658384.1658390; Fischler:1981:RSC:358669.358692; Leibe_08_ijcv; Lamdan_88_cvpr. Nevertheless, these methods have overlooked the consistency of the matching flow, i.e., most of them consider only one direction of matching (from source to target). This causes asymmetric matching results and degrades the overall performance. Figure 2 illustrates a failure case that occurs when the source image and the target image are swapped.

Figure 2: Asymmetric matching result. When image 1 and image 2 are used as the source and the target, respectively, the matching process succeeds. In the opposite case, however, it completely fails.

Many computer vision tasks have been applied and developed in real life Dihua:2002; face_tracking; Roh:2010; Roh:2007; Nighttime; Kang:2014; SUK20103059; Jung:2004; Hwang:2000; Park:2007; Maeng:2011; PARK:2005; PARK:2004767; SUK:2011; SONG1996329; ROH:2000; book. Because deep neural networks (DNNs) have shown impressive performance in real-world computer vision tasks Alex_12_NIPS; Girshick_15_FAST; Long_2015_CVPR; Goodfellow_14_NIPS, several approaches apply DNNs to overcome the limitations of traditional computer vision methods for matching images. The Siamese network Koch_2015_SiameseNN; Chopra_05_learninga; Altwaijry_2016_CVPR; Melekhov2016SiameseNF has been extensively applied to extract important features and to match image-patch pairs Simo-Serra_2015_ICCV; Zagoruyko_2015_CVPR; Han_2015_CVPR. Furthermore, several works Rocco_2017_CVPR; Rocco_2018_CVPR; Seo_2018_ECCV apply an end-to-end manner to the geometric matching area. While numerous matching tasks have been actively explored with deep learning, few approaches utilize DNNs in the aerial image matching domain.

In this work, we utilize a deep end-to-end trainable matching network and design a two-stream architecture to address the variance in aerial images obtained in diverse environments. By internally augmenting the target image and considering the three inputs, we regularize the training process, which produces a more generalized deep network. Furthermore, our method is designed as a bidirectional network with an efficient ensemble scheme, inspired by the isomorphic nature of the geometric transformation. We apply this ensemble in the inference procedure without any additional networks or parameters; it also helps alleviate the variance between the transformation parameters estimated from both directions. Figure 3 illustrates an overview of our proposed method.

Figure 3: Overview of the proposed network. Our network directly estimates the outcomes $(\theta_{ST}, \theta_{TS}, \theta_{ST'}, \theta_{T'S})$, where $\theta_{ST}$ and $\theta_{TS}$ are the global transformation parameters that transform the source image $I_S$ to the target image $I_T$ and vice versa, and $(\theta_{ST'}, \theta_{T'S})$ are those between $I_S$ and the augmented target image $I_{T'}$. Subsequently, the outcomes are employed for the backpropagation in the training procedure. In the inference procedure, we warp $I_S$ to $I_T$ using the final ensembled parameters.

1.2 Contributions

To sum up, our contributions are three-fold:


  • For aerial image matching, we propose a deep end-to-end trainable network with a two-stream architecture. The three inputs are constructed by internal augmentation of the target image, which regularizes the training process and overcomes the shortcomings of the aerial images due to various capturing environments.

  • We introduce a bidirectional training architecture and an ensemble method, inspired by the isomorphism of the geometric transformation, which alleviate the asymmetric results of image matching. The proposed ensemble method helps the deep network become robust to the variance between the transformation parameters estimated from both directions and improves the evaluation performance without any additional network or parameters.

  • Our method shows more stable and precise matching results in both the qualitative and quantitative assessments. To the best of our knowledge, we are the first to apply the probability of correct keypoints (PCK) metric pck in the aerial image matching domain to objectively assess quantitative performance with a large volume of aerial images. Our dataset, model, and source code are available at https://github.com/jaehyunnn/DeepAerialMatching.

1.3 Related Works

In general, the image matching problem has been addressed in two types of methods: area-based methods and feature-based methods Brown:1992:SIR:146370.146374; Zitova03imageregistration. The former methods investigate the correspondence between two images using pixel intensities. However, these methods are vulnerable to noise and variation in illumination. The latter methods extract the salient features from the images to solve these drawbacks.

Most classical pipelines for matching two images consist of three stages: (1) feature extraction, (2) feature matching, and (3) regression of transformation parameters. As conventional matching methods, hand-crafted algorithms Lowe04distinctiveimage; Bay_surf:speeded; Dalal05histogramsof; Morel:2009:ANF:1658384.1658390 are extensively used to extract local features. However, these methods often fail under large changes in imaging conditions, which is attributed to their lack of generality across tasks and image domains.

Convolutional neural networks (CNNs) have shown tremendous strength in extracting high-level features to solve various computer vision tasks, such as semantic segmentation Long_2015_CVPR; Chen_2018_ECCV, object detection Girshick_15_FAST; SNIPER_18, classification Alex_12_NIPS; Hu_2018_CVPR, human action recognition P-S; Yang:2007, and matching. In the field of matching, E. Simo-Serra et al. Simo-Serra_2015_ICCV learned local features from image patches with a Siamese network and used the L2 distance as the loss function. X. Han et al. Han_2015_CVPR proposed a feature network and a metric network to match two image patches. S. Zagoruyko et al. Zagoruyko_2015_CVPR expanded the Siamese network into two streams: a surround stream and a central stream. K.-M. Yi et al. Yi_2016_ECCV proposed a framework that includes detection, orientation estimation, and description by mimicking SIFT Lowe04distinctiveimage. H. Altwaijry et al. Altwaijry_2016_CVPR performed ultra-wide baseline aerial image matching with a deep network and a spatial transformer module Max_2015_STN. H. Altwaijry et al. Altwaijry_2016_BMVC also proposed a deep triplet architecture that learns to detect and match keypoints with 3-D keypoint ground-truth extracted by VisualSFM Wu_2011_CVPR; Wu_2013_SFM. I. Rocco et al. Rocco_2017_CVPR first proposed a deep network architecture for geometric matching and demonstrated the advantage of a deep end-to-end network by achieving a 57% PCK score in semantic alignment. This method constructs a dense-correspondence map from two image features and directly regresses the transformation parameters. These researchers further proposed a weakly-supervised approach that does not require any additional ground-truth for training Rocco_2018_CVPR. P. Seo et al. Seo_2018_ECCV applied an attention mechanism with an offset-aware correlation (OAC) kernel based on Rocco_2017_CVPR and achieved a 68% PCK score.

Although these works show meaningful results, their accuracy or computational costs for aerial image matching require improvement. Therefore, we compose a matching network that is suitable for aerial images by pruning the factors that degrade performance.

2 Materials and Methods

We propose a deep end-to-end trainable network with a two-stream architecture and bidirectional ensemble method for aerial image matching. Our proposed network focuses on addressing the variance in the aerial images and asymmetric matching results. The steps for predicting transformation are listed as follows: (1) internal augmentation, (2) feature extraction with the backbone network, (3) correspondence matching, (4) regression of transformation parameters, and (5) application of ensemble to the multiple outcomes. In Figure 4, we present the overall architecture of the proposed network.

Figure 4: Overall architecture of the proposed network. The architecture has four stages: internal augmentation, feature extraction, matching, and regression. First, the target image is augmented using random color-jittering. Subsequently, the source, target, and augmented images are passed through the backbone networks, which share their weights, followed by the matching operations, which produce the correspondence maps. The regression networks, which also share their weights, simultaneously output the geometric transformation parameters of the original pair $(\theta_{ST}, \theta_{TS})$ and the augmented pair $(\theta_{ST'}, \theta_{T'S})$. We fuse the transformation parameters for inference, or compute the losses with the balance parameters $\lambda_1$, $\lambda_2$, and $\lambda_3$ for training.

2.1 Internal Augmentation for Regularization

The network takes two aerial images (a source image $I_S$ and a target image $I_T$) with different temporal and geometric properties as input. When only this original pair is used in the training process, the deep network is trained by considering the relation of only two images obtained in different environments. However, this is insufficient for addressing the variance in aerial images, and collecting many additional pair sets to solve the problem is expensive. To address this issue, we augment the target image by internally jittering its color during the training procedure, producing the augmented target $I_{T'}$. The network can be trained with various image pairs since the color of the target image is randomly jittered in every training iteration, as shown in Figure 5. This step regularizes the training process, which produces a more generally trained network. The constructed three inputs are passed through the deep network, which then directly and simultaneously estimates global geometric transformation parameters for the original pair and the augmented pair. Note that the internal augmentation is performed only in the training procedure. In the inference procedure, we utilize a single-stream architecture without the internal augmentation process for computational efficiency.

Figure 5: Internally augmented samples. In every training iteration, the target image is augmented using random color-jittering. Therefore, in every iteration, the network considers a different augmented training pair.
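For illustration, the internal augmentation step can be sketched in Python/PyTorch as follows; the jitter ranges and the helper name augment_target are illustrative assumptions rather than the exact settings of our implementation.

import torch
from torchvision import transforms

# Illustrative jitter ranges; the exact values are not specified here.
color_jitter = transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1)

def augment_target(target_batch: torch.Tensor) -> torch.Tensor:
    """Internally augment a batch of target images (B, 3, H, W) with values in [0, 1].

    A new random jitter is sampled for every image and every call, so each
    training iteration sees a different augmented pair.
    """
    return torch.stack([color_jitter(img) for img in target_batch])

# Sketch of usage inside the training loop:
#   I_T_aug = augment_target(I_T)            # augmented target I_T'
#   outputs = model(I_S, I_T, I_T_aug)       # two-stream forward pass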

2.2 Feature Extraction with Backbone Network

Given the input images $I_S, I_T, I_{T'} \in \mathbb{R}^{H \times W \times D}$, we extract their feature maps by passing them through a fully-convolutional backbone network $\mathcal{F}$, which is expressed as follows:

$f_S = \mathcal{F}(I_S), \quad f_T = \mathcal{F}(I_T), \quad f_{T'} = \mathcal{F}(I_{T'}), \qquad f_S, f_T, f_{T'} \in \mathbb{R}^{h \times w \times d}$   (1)

where $H$, $W$, and $D$ denote the heights, widths, and dimensions of the input images, and $h$, $w$, and $d$ are those of the extracted features, respectively.

We investigate various models for the backbone network, as shown in Section 3. SE-ResNeXt101 Hu_2018_CVPR adds the Squeeze-and-Excitation (SE) block, a channel-attention module, to ResNeXt101 Xie_2017_CVPR and demonstrated its superiority by winning the ILSVRC 2017 classification task. Figure 6 shows the SE block. We therefore leverage SE-ResNeXt101 as the backbone network and empirically show that it plays an important role in improving performance compared with other backbone networks. We utilize the image features extracted from layer 3 of the backbone network and apply L2-normalization to the extracted features.

Figure 6: Squeeze-and-Excitation (SE) block. The input feature map is processed by global average pooling (GAP), followed by a multi-layer perceptron (MLP). The input feature map is then multiplied elementwise by the resulting channel scores.
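For reference, a minimal SE block matching the description above (global average pooling, a small MLP, and elementwise channel rescaling) might look as follows; the class name and the reduction ratio of 16 are assumptions, not details of the exact configuration used in our backbone.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: GAP -> MLP -> channel-wise rescaling."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        scores = x.mean(dim=(2, 3))                 # squeeze: global average pooling -> (B, C)
        scores = self.mlp(scores).view(b, c, 1, 1)  # excitation: per-channel scores in (0, 1)
        return x * scores                           # elementwise (broadcast) multiplication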

2.3 Correspondence Matching

As a method for computing a dense-correspondence map between two feature maps Rocco_2017_CVPR, the matching function is expressed as follows:

$c_{ST}(i, j, k) = f_T(i, j)^{\top} f_S(i_k, j_k)$   (2)

where $c_{ST} \in \mathbb{R}^{h \times w \times (h \times w)}$ is the dense-correspondence map that matches the source feature map $f_S$ to the target feature map $f_T$. $(i, j)$ and $(i_k, j_k)$ indicate the coordinates of each feature point in the feature maps. Each element in $c_{ST}$ refers to the similarity score between two points.

We construct the dense-correspondence map of the original pair and augmented pair. To consider only positive values for ease of training, the negative scores in the dense-correspondence map are removed by ReLU non-linearity, followed by L2-normalization.
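The matching operation of Equation (2), including the ReLU and L2-normalization described above, can be sketched as follows; the (B, d, h, w) tensor layout and the function name are implementation assumptions.

import torch
import torch.nn.functional as F

def correlation_map(f_src: torch.Tensor, f_tgt: torch.Tensor) -> torch.Tensor:
    """Dense-correspondence map between two L2-normalized feature maps.

    f_src, f_tgt: (B, d, h, w). Returns a tensor of shape (B, h*w, h, w) whose
    channel k at position (i, j) is the similarity between f_tgt[:, :, i, j]
    and the k-th spatial position of f_src.
    """
    b, d, h, w = f_src.shape
    src = f_src.view(b, d, h * w)                          # (B, d, h*w)
    tgt = f_tgt.view(b, d, h * w).transpose(1, 2)          # (B, h*w, d)
    corr = torch.bmm(tgt, src)                             # (B, h*w, h*w) similarity scores
    corr = corr.view(b, h, w, h * w).permute(0, 3, 1, 2)   # (B, h*w, h, w)
    corr = F.relu(corr)                                    # keep only positive scores
    return F.normalize(corr, p=2, dim=1)                   # channel-wise L2-normalization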

2.4 Regression of Transformation Parameters

The regression step is for predicting the transformation parameters. When the dense-correspondence maps are passed through the regression network $\mathcal{R}$, the network directly estimates the geometric transformation parameters as follows:

$\hat{\theta} = \mathcal{R}(c_{ST}), \qquad c_{ST} \in \mathbb{R}^{h \times w \times (h \times w)}, \ \hat{\theta} \in \mathbb{R}^{\mathrm{DoF}}$   (3)

where $h$ and $w$ indicate the heights and widths of the feature maps, and $\mathrm{DoF}$ means the degrees of freedom of the transformation model.

We adopt the affine transformation, which has 6 degrees of freedom (DoF) and preserves straight lines. In the semantic alignment domain Rocco_2017_CVPR; Rocco_2018_CVPR; Seo_2018_ECCV, the thin-plate spline (TPS) transformation TPS_89, which has 18 DoF, is used to improve performance. However, it is not suitable in the aerial image matching domain because it produces large distortions of straight lines (such as roads and building boundaries). Therefore, we infer the six parameters that define the affine transformation.
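To make the 6-DoF parameterization concrete, the sketch below warps an image with six affine parameters via PyTorch grid sampling; it illustrates only the transformation model, not the regression network itself.

import torch
import torch.nn.functional as F

def warp_affine(image: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Warp `image` (B, C, H, W) with 6-DoF affine parameters `theta` (B, 6).

    theta is reshaped into the 2x3 matrix [[a11, a12, tx], [a21, a22, ty]] in
    normalized coordinates. Note that F.affine_grid treats this matrix as the
    mapping from output (warped) coordinates back to input coordinates.
    """
    b, c, h, w = image.shape
    mat = theta.view(b, 2, 3)
    grid = F.affine_grid(mat, size=(b, c, h, w), align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)

# Example: the identity parameters leave the image unchanged.
#   theta_id = torch.tensor([[1., 0., 0., 0., 1., 0.]])
#   warped = warp_affine(torch.rand(1, 3, 256, 256), theta_id)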

2.5 Ensemble Based on Bidirectional Network

The affine transformation is invertible due to its isomorphic nature. We take advantage of this characteristic to design a bidirectional network and apply an ensemble approach. Applying the ensemble method alleviates the variance in the aerial images and improves the matching performance without any additional networks or models.

2.5.1 Bidirectional Network

Inspired by this isomorphic nature, we expand the base architecture by adding a branch that symmetrically estimates the transformation in the opposite direction. The network yields the transformation parameters in both directions for each pair, i.e., $(\theta_{ST}, \theta_{TS})$ and $(\theta_{ST'}, \theta_{T'S})$. To infer the parameters of the additional branch, we compute the dense-correspondence map in the opposite direction using the same method as in Section 2.3. All dense-correspondence maps are passed through the identical regression network $\mathcal{R}$. Since we utilize a single shared regression network for all cases, no additional parameters are needed in this procedure. The proposed bidirectional network adds only a small amount of computational overhead compared with the base architecture.

2.5.2 Ensemble

In general, the ensemble technique requires several additional different architectures and consumes additional time to train the models differently. We introduce an efficient ensemble method without any additional architectures or models by utilizing the isomorphism of the affine transformation. Figure 7 illustrates an overview of the ensemble procedure. $\theta_{TS}^{-1}$, which is the inverse of $\theta_{TS}$, can be expressed as another set of transformation parameters in the direction from $I_S$ to $I_T$. To compute $\theta_{TS}^{-1}$, we convert $\theta_{TS}$ into the homogeneous form:

$A_{TS} = \begin{bmatrix} a_{11} & a_{12} & t_{x} \\ a_{21} & a_{22} & t_{y} \\ 0 & 0 & 1 \end{bmatrix}$   (4)

In the affine transformation parameters $\theta_{TS} = (a_{11}, a_{12}, a_{21}, a_{22}, t_{x}, t_{y})$, $a_{11}, a_{12}, a_{21}, a_{22}$ represent the scale, rotation angle, and tilt angle, and $(t_{x}, t_{y})$ denotes the $x$-axis and $y$-axis translation. We compute $\theta_{TS}^{-1}$ by inverting this homogeneous form, as shown in Equation (4). This inverse matrix denotes another affine transformation from $I_S$ to $I_T$. As a result, we fuse the two sets of affine transformation parameters as follows:

$\theta^{*} = \mathcal{M}\left(\theta_{ST}, \theta_{TS}^{-1}\right)$   (5)

where $\mathcal{M}$ denotes the mean function for fusing the two sets of parameters. In our experiments, we apply three types of mean: the arithmetic mean, the harmonic mean, and the geometric mean. Empirically, the arithmetic mean shows the best performance. In the inference process, $\theta^{*}$ warps the source image into the target image. Note that we fuse only the parameters that correspond to the original pair, since only the original pair is passed through the network in the inference procedure, and we do not utilize the ensembled parameters in the training procedure in order to maximize the ensemble effect.

Figure 7: Ensemble process of affine parameters. The outcomes that correspond to the original pair are the transformation parameters in the two possible directions. Since the affine transformation is isomorphic, we can use the inverse of $\theta_{TS}$ to warp the source image to the target image. Therefore, the final transformation parameters $\theta^{*}$ are obtained by fusing these two sets of parameters.
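Using the notation above, the ensemble of Equations (4) and (5) with the arithmetic mean can be sketched as follows; the function names are illustrative.

import torch

def to_homogeneous(theta: torch.Tensor) -> torch.Tensor:
    """Convert (B, 6) affine parameters into (B, 3, 3) homogeneous matrices."""
    b = theta.shape[0]
    top = theta.view(b, 2, 3)
    bottom = torch.tensor([[[0., 0., 1.]]], dtype=theta.dtype, device=theta.device).expand(b, 1, 3)
    return torch.cat([top, bottom], dim=1)

def ensemble_affine(theta_st: torch.Tensor, theta_ts: torch.Tensor) -> torch.Tensor:
    """Fuse the source-to-target parameters with the inverse of the target-to-source ones."""
    A_st = to_homogeneous(theta_st)
    A_ts_inv = torch.inverse(to_homogeneous(theta_ts))  # also maps source -> target
    A_fused = 0.5 * (A_st + A_ts_inv)                   # arithmetic mean
    return A_fused[:, :2, :].reshape(-1, 6)             # back to (B, 6) parameters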

2.6 Loss Function

In the training procedure, we adopt the transformed grid loss Rocco_2017_CVPR as the baseline loss function. Given the predicted transformation $\theta$ and the ground-truth $\theta_{GT}$, the baseline loss function is obtained by the following:

$\mathcal{L}(\theta, \theta_{GT}) = \frac{1}{N} \sum_{i=1}^{N} \left\| \mathcal{T}_{\theta}(g_i) - \mathcal{T}_{\theta_{GT}}(g_i) \right\|_{2}^{2}$   (6)

where $N$ is the number of grid points $\{g_i\}$, and $\mathcal{T}_{\theta}$ and $\mathcal{T}_{\theta_{GT}}$ are the transforming operations parameterized by $\theta$ and $\theta_{GT}$, respectively. To achieve bidirectional learning, we add a term for training the additional branch to the baseline loss function. Formally, we define the proposed bidirectional loss of the original pair, $\mathcal{L}_{org}$, as follows:

$\mathcal{L}_{org} = \mathcal{L}(\theta_{ST}, \theta_{GT}) + \mathcal{L}(\theta_{TS}, \theta_{GT}^{-1})$   (7)

Note that additional ground-truth information for the opposite direction is not required due to the isomorphism of the affine transformation. For regularization of the training, we add two terms utilizing the augmented pair:

$\mathcal{L}_{aug} = \mathcal{L}(\theta_{ST'}, \theta_{GT}) + \mathcal{L}(\theta_{T'S}, \theta_{GT}^{-1})$   (8)
$\mathcal{L}_{idt} = \mathcal{L}(\theta_{ST}, \theta_{ST'}) + \mathcal{L}(\theta_{TS}, \theta_{T'S})$   (9)

The augmented pair also shares the ground-truth, since its geometric relation between the two images is equivalent to that of the original pair. The identity term in Equation (9) induces the training to ensure that the prediction values from the original pair and the augmented pair are equal. Our proposed final loss function is defined by the following:

$\mathcal{L}_{final} = \lambda_{1}\mathcal{L}_{org} + \lambda_{2}\mathcal{L}_{aug} + \lambda_{3}\mathcal{L}_{idt}$   (10)

where $\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$ are the balance parameters of each loss term. In our experiments, we set these parameters to (0.5, 0.3, 0.2), respectively.
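As a sketch of Equations (6)–(10), assuming a uniformly spaced grid of points in normalized coordinates (the 20×20 resolution is an assumption), the losses could be implemented as follows.

import torch

def make_grid(n: int = 20) -> torch.Tensor:
    """Uniform grid of n*n points in normalized coordinates [-1, 1]^2 -> (N, 2)."""
    xs = torch.linspace(-1, 1, n)
    y, x = torch.meshgrid(xs, xs, indexing="ij")
    return torch.stack([x.reshape(-1), y.reshape(-1)], dim=1)

def transform_points(theta: torch.Tensor, pts: torch.Tensor) -> torch.Tensor:
    """Apply (B, 6) affine parameters to (N, 2) points -> (B, N, 2)."""
    mat = theta.view(-1, 2, 3)
    pts_h = torch.cat([pts, torch.ones(pts.shape[0], 1)], dim=1)  # homogeneous (N, 3)
    return torch.einsum("bij,nj->bni", mat, pts_h)

def grid_loss(theta_pred: torch.Tensor, theta_gt: torch.Tensor, pts: torch.Tensor) -> torch.Tensor:
    """Transformed grid loss: mean squared distance between transformed grid points."""
    diff = transform_points(theta_pred, pts) - transform_points(theta_gt, pts)
    return (diff ** 2).sum(dim=2).mean()

# Combined objective of Equation (10), with the balance parameters given in the text:
#   loss = 0.5 * loss_org + 0.3 * loss_aug + 0.2 * loss_idt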

3 Results

In this section, we present the implementation details, experiment settings, and results. For the quantitative evaluation, we compare the proposed method with other methods for aerial image matching. We further experiment with various backbone networks to obtain more suitable features for our work. We show the contributions of each proposed component in the ablation study section and the qualitative results of the proposed network compared with other networks.

3.1 Implementation Details

We implemented the proposed network using PyTorch paszke2017automatic and trained our model with the ADAM optimizer kingma_2015_ICLR, using a fixed learning rate and a batch size of 10. We further performed data augmentation by generating random affine transformations as the ground-truth. All input images were resized to a fixed resolution.

Figure 8: Process of generating the training pairs. In the training procedure, given a multi-temporal aerial image pair, we transform the second image using the randomly generated ground-truth transformation.

3.2 Experimental Settings

3.2.1 Training

We generated the training input pairs by applying random affine transformations to multi-temporal aerial image pairs captured from Google Earth. Since no datasets were annotated with completely correct transformation parameters between two images, we built a training dataset of 9000 multi-temporal aerial image pairs and the corresponding ground-truths. The multi-temporal image pairs consist of images taken at different times (2019, 2017, and 2015) and by different sensors (Landsat-7, Landsat-8, WorldView, and QuickBird). The process of annotating the ground-truth is as follows: (1) we employed multi-temporal image pairs with the same region and viewpoint; (2) the first images in the multi-temporal aerial image pairs were center-cropped; (3) the second images were transformed by the randomly generated affine transformation, which was used as the ground-truth, and were subsequently center-cropped; (4) the center-crop process was performed to exclude the black area that acts as noise after the transformation. Figure 8 illustrates the process of generating the training pairs and ground-truths. The training procedure is detailed in Algorithm 1; its complexity is linear in the number of training pairs. We trained our model for two days on a single NVIDIA Titan V GPU.

Input : Training aerial image dataset D; randomly initialized model weights W
Output : Trained model weights W
for number of epochs do
       for each multi-temporal pair (I_1, I_2) in D do
             # Construct three inputs
             θ_GT ← randomly generated affine transformation;
             I_S ← center-cropped image of I_1;
             I_T ← center-cropped image of I_2 transformed by θ_GT;
             I_T' ← color-jittered image of I_T;
             # Feed-forward
             (θ_ST, θ_TS, θ_ST', θ_T'S) ← forward pass of the model on (I_S, I_T, I_T');
             # Compute loss
             L_final ← λ1·L_org + λ2·L_aug + λ3·L_idt (Equation (10));
             # Backpropagation and update weights
             W ← W − η·∇_W L_final;
       end for
end for
Algorithm 1 Training procedure.
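A simplified, self-contained sketch of the pair-generation step of Figure 8 and Algorithm 1 is given below; the sampling ranges, crop size, and jitter strengths are illustrative assumptions.

import math
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF
from torchvision import transforms

color_jitter = transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4)

def random_affine_theta(max_angle: float = 30.0, max_scale: float = 0.2, max_trans: float = 0.2) -> torch.Tensor:
    """Sample hypothetical 6-DoF affine ground-truth parameters, shape (1, 6)."""
    ang = torch.empty(1).uniform_(-max_angle, max_angle) * math.pi / 180.0
    s = 1.0 + torch.empty(1).uniform_(-max_scale, max_scale)
    tx = torch.empty(1).uniform_(-max_trans, max_trans)
    ty = torch.empty(1).uniform_(-max_trans, max_trans)
    cos, sin = torch.cos(ang), torch.sin(ang)
    return torch.stack([s * cos, -s * sin, tx, s * sin, s * cos, ty], dim=1)

def make_training_pair(img1: torch.Tensor, img2: torch.Tensor, crop: int = 256):
    """img1, img2: (3, H, W) multi-temporal pair of the same region and viewpoint."""
    theta_gt = random_affine_theta()
    grid = F.affine_grid(theta_gt.view(1, 2, 3), size=(1, *img2.shape), align_corners=False)
    warped = F.grid_sample(img2.unsqueeze(0), grid, align_corners=False).squeeze(0)
    I_S = TF.center_crop(img1, [crop, crop])      # source image
    I_T = TF.center_crop(warped, [crop, crop])    # target image, black border cropped away
    I_T_aug = color_jitter(I_T)                   # internally augmented target
    return I_S, I_T, I_T_aug, theta_gt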

3.2.2 Evaluation

To demonstrate the superiority of our method quantitatively, we evaluated our model using PCK pck, which has been extensively applied in other matching tasks Ham_2016_CVPR; Han_2017_ICCV; Rocco_2017_CVPR; Rocco_2018_CVPR; Seo_2018_ECCV; Kim_2017_ICCV; Kim_2017_CVPR. The PCK metric is defined as follows:

$\mathrm{PCK} = \frac{\sum_{i=1}^{M} \mathbb{1}\left[ \left\| \mathcal{T}_{\theta}(p_i^{S}) - p_i^{T} \right\|_{2} \le \alpha \cdot \max(H, W) \right]}{M}$   (11)

where $p_i^{S} = (x_i, y_i)$ is the $i$-th annotated keypoint in the source image, $p_i^{T}$ is its corresponding keypoint in the target image, and $\alpha$ refers to the tolerance term relative to the image size $H \times W$. Intuitively, the numerator denotes the number of correctly matched keypoints and the denominator $M$ denotes the number of all annotated keypoints. The PCK metric shows how successful the matching is globally, according to the given tolerance $\alpha$, over a large set of test images. In this evaluation, we assess PCK at several values of $\alpha$; a greater value of $\alpha$ measures the degree of matching more globally. To adopt the PCK metric, we annotated keypoints and ground-truth transformations for 500 multi-temporal aerial image pairs. The multi-temporal pairs were captured from Google Earth and are composed of major administrative districts in South Korea, like the training image pairs. The annotation process is as follows: (1) we extracted the keypoints of the multi-temporal aerial image pairs using SIFT Lowe04distinctiveimage, and (2) picked the overlapping keypoints between each image pair. We annotated 20 keypoints per image pair, which yields a total of 10k keypoints for the quantitative assessment. This approach provides a fair demonstration of quantitative performance. In the evaluation and inference procedures, we used the two-stream network without the augmented branch, as shown in Algorithm 2.

Input : Source and target images I_S, I_T; trained model
Output : Transformed source image
# Feed-forward
(θ_ST, θ_TS) ← forward pass of the model on (I_S, I_T);
# Ensemble
θ* ← M(θ_ST, θ_TS⁻¹) (Equation (5));
# Transform source image to target image
Î_S ← T_θ*(I_S)
Algorithm 2 Inference procedure.
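For reference, the PCK of Equation (11) can be computed for a single annotated pair as in the following sketch; the image size and tolerance value are illustrative, and the warped keypoints are assumed to be given in pixel coordinates.

import torch

def pck(pred_kpts: torch.Tensor, gt_kpts: torch.Tensor,
        img_size=(256, 256), alpha: float = 0.05) -> float:
    """Probability of correct keypoints for one image pair.

    pred_kpts: (M, 2) source keypoints warped by the estimated transformation.
    gt_kpts:   (M, 2) corresponding annotated target keypoints.
    A keypoint counts as correct if its distance to the ground truth is at
    most alpha * max(H, W).
    """
    h, w = img_size
    dists = torch.norm(pred_kpts - gt_kpts, dim=1)
    correct = (dists <= alpha * max(h, w)).float()
    return correct.mean().item()

# Averaging pck(...) over all annotated pairs gives the dataset-level score
# reported in the tables.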

3.3 Results

3.3.1 Quantitative results

Aerial Image Dataset

Table 1 shows quantitative comparisons with the conventional computer vision methods (SURF Bay_surf:speeded, SIFT Lowe04distinctiveimage, ASIFT Morel:2009:ANF:1658384.1658390 + RANSAC Fischler:1981:RSC:358669.358692, and OA-Match SONG2019317) and CNNGeo Rocco_2017_CVPR on aerial image data with large transformations. The conventional computer vision methods Bay_surf:speeded; Lowe04distinctiveimage; Morel:2009:ANF:1658384.1658390; Fischler:1981:RSC:358669.358692; SONG2019317 produced quite a number of critical global failures. As shown in Table 1, the conventional methods show low PCK performance at the loosest tolerance. At the tightest tolerance, however, they showed less degradation of performance compared with the deep learning based methods. This result implies that conventional methods enable finer matching when the matching procedure does not fail entirely. Although CNNGeo fine-tuned on aerial images shows somewhat tolerable performance, our method considerably outperforms it at all tolerance values. Furthermore, we investigated various backbone networks to demonstrate the importance of feature extraction. Since the backbone network substantially affects the total performance, we experimentally adopted the best backbone network.

Methods                                      PCK (%), from the loosest to the tightest tolerance
SURF Bay_surf:speeded                        26.7   23.1   15.3
SIFT Lowe04distinctiveimage                  51.2   45.9   33.7
ASIFT Morel:2009:ANF:1658384.1658390         64.8   57.9   37.9
OA-Match SONG2019317                         64.9   57.8   38.2
CNNGeo Rocco_2017_CVPR (pre-trained)         17.8   10.7    2.5
CNNGeo (fine-tuned)                          90.6   76.2   27.6
Ours; ResNet101 He_2016_CVPR                 93.8   82.5   35.1
Ours; ResNeXt101 Xie_2017_CVPR               94.6   85.9   43.2
Ours; DenseNet169 huang_2017_densely         95.6   88.4   44.0
Ours; SE-ResNeXt101 Hu_2018_CVPR             97.1   91.1   48.0
Table 1: Comparison of the probability of correct keypoints (PCK) on the aerial images. CNNGeo is evaluated in two versions: the pre-trained model provided in Rocco_2017_CVPR and the model fine-tuned on the aerial images. Both models use ResNet101 as the backbone network.

Ablation Study

The proposed method combines two distinct techniques: (1) internal augmentation and (2) the bidirectional ensemble. We analyze the contribution and effect of each proposed component and compare our models with CNNGeo Rocco_2017_CVPR; '+ Int. Aug.' and '+ Bi-En.' signify the addition of internal augmentation and the bidirectional ensemble, respectively. As shown in Table 2, every model extended with our proposed components improves the performance of CNNGeo at all tolerance values while maintaining the number of parameters. We further compare the proposed two-stream architecture with a single-stream architecture to which the proposed components (internal augmentation, bidirectional ensemble) are added. Table 3 shows the superiority of the proposed two-stream architecture over the single-stream architecture, which implies that the regularization terms introduced by the two-stream architecture are reasonable.

Methods                                      PCK (%), from the loosest to the tightest tolerance
CNNGeo Rocco_2017_CVPR 90.6 76.2 27.6
CNNGeo + Int. Aug. 90.9 76.6 28.4
CNNGeo + Bi-En. 92.1 79.5 31.8
CNNGeo + Int. Aug. + Bi-En. (Ours) 93.8 82.5 35.1
Table 2: Results of models with different additional components. We analyzed the contributions of each component with ResNet-101 backbone.
Methods                                      PCK (%), from the loosest to the tightest tolerance
Single-stream (with Int. Aug. and Bi-En.) 92.4 79.7 33.5
Two-stream (Ours) 93.8 82.5 35.1
Table 3: Comparison of single-stream and two-stream architecture. We analyzed the effectiveness of the two-stream based regularization with ResNet-101 backbone.

3.3.2 Qualitative Results

Global Matching Performance

We performed a qualitative evaluation using the Google Earth dataset (Figure 9) and the ISPRS dataset (Figure 10). The ISPRS dataset is a real-world aerial image dataset obtained from different viewpoints. Although our model was trained on synthetically transformed aerial image pairs, it succeeds on real-world data. In Figures 9 and 10, the samples consist of challenging pairs with numerous difficulties, such as differences in time, occlusion, changes in vegetation, and large-scale transformations between the source images and the target images. Our method correctly aligned the image pairs and yielded more accurate matching results compared with the other methods Morel:2009:ANF:1658384.1658390; Fischler:1981:RSC:358669.358692; SONG2019317; Rocco_2017_CVPR, as shown in Figures 9 and 10.

Figure 9: Qualitative results for Google Earth data (columns: source, ASIFT Morel:2009:ANF:1658384.1658390 + RANSAC Fischler:1981:RSC:358669.358692, OA-Match SONG2019317, CNNGeo Rocco_2017_CVPR, ours, target). These sample pairs are captured from Google Earth under different environments (viewpoints, times, and sensors).
Figure 10: Qualitative results for the ISPRS dataset (columns: source, ASIFT Morel:2009:ANF:1658384.1658390 + RANSAC Fischler:1981:RSC:358669.358692, OA-Match SONG2019317, CNNGeo Rocco_2017_CVPR, ours, target). These samples are released by the ISPRS ISPRS.

Localization Performance

We visualized the matched keypoints to compare the localization performance with CNNGeo Rocco_2017_CVPR. It is also important how finely the source and target images are matched within the successful cases. As shown in Figure 11, we intuitively compared the localization performance. The X marks and the O marks on the images indicate the keypoints of the source images and the target images, respectively. Both models (Rocco_2017_CVPR and ours) successfully estimated the global transformation; however, looking at the distances between matched keypoints, our results were better localized.

Figure 11: Visualization of the matched keypoints. Rows are each as follows: (1) source images, (2) results of CNNGeo Rocco_2017_CVPR, (3) results of our method, (4) target images.

4 Discussion

4.1 Robustness for the Variance of Aerial Image

Furthermore, we examined the robustness to the variance of aerial images, as shown in Figure 12. The source images were taken in 2004, 2006, 2015, 2016, and 2019, respectively, while the target image was identical in all cases. Our method showed more stable results across all time points. In particular, the source images taken in 2004 and 2006 contain large differences in the objects they include compared with the target image. The results show that our method is more robust to the variance of the aerial images, whereas the baseline Rocco_2017_CVPR is significantly influenced by these differences.

4.2 Limitations and Analysis of Failure Cases

We describe the limitations of our method and analyze the cases in which the proposed method fails. As shown in Section 3.3.1, our method quantitatively achieves state-of-the-art performance. However, comparing the results at the loosest and the tightest tolerance reveals a substantial difference in performance: our method is weak in detailed matching even though it successfully estimates the global transformation in most cases. This weakness could be addressed by an additional fine-grained transformation as post-processing.

Our proposed method failed in several cases. We have determined that it fails mostly in wooded areas or largely changed areas, as shown in Figures 13 and 14. In mostly wooded areas, repetitive patterns hinder the network from focusing on a salient region. In largely changed areas, massive differences in buildings, vegetation, and land coverage between the source image and the target image lead to degradation of performance. To address these limitations, a method that can aggregate local context to suppress repetitive patterns is required.

Figure 12: Results for various source images taken at different times. Rows are each as follows: (1) source images, (2) results of CNNGeo Rocco_2017_CVPR, (3) results of our method, (4) target images.
Figure 13: Failure cases, which primarily consist of wooded areas. Although there are objects that could be focused on, matching fails completely.
Figure 14: Failure cases in largely changed areas. Since the changed area is too large, matching fails completely.

5 Conclusions

We propose a novel approach based on a deep end-to-end network for aerial image matching. To become robust to the variance of aerial images, we introduce a two-stream architecture using internal augmentation and show its efficacy in covering various image pairs. An augmented image can be seen as an image taken in a different environment (brightness, contrast, saturation), and training it together with the original target image regularizes the deep network. Furthermore, by training and inferring in the two possible directions, we apply an efficient ensemble method without any additional networks or parameters, which accounts for the variance between the transformation parameters estimated from both directions and substantially improves performance. In the experimental section, we show stable matching results with a large volume of aerial images. However, our method also has some limitations, as discussed in Section 4.2. To overcome these limitations, we plan to study the localization problem and attention mechanisms. Moreover, studies applying Structure from Motion (SfM) and 3D reconstruction to image matching are very interesting and could further improve the performance of image matching, so we also plan to pursue this direction in future work.

Author contributions: conceptualization, J.-H.P., W.-J.N. and S.-W.L.; data curation, J.-H.P. and W.-J.N.; formal analysis, J.-H.P. and W.-J.N.; funding acquisition, S.-W.L.; investigation, J.-H.P. and W.-J.N.; methodology, J.-H.P. and W.-J.N.; project administration, S.-W.L.; resources, S.-W.L.; software, J.-H.P. and W.-J.N.; supervision, S.-W.L.; validation, J.-H.P., W.-J.N. and S.-W.L.; visualization, J.-H.P. and W.-J.N.; writing—original draft, J.-H.P. and W.-J.N.; writing—review and editing, J.-H.P., W.-J.N. and S.-W.L. All authors have read and agreed to the published version of the manuscript.

This work was supported by the Agency for Defense Development (ADD) and the Defense Acquisition Program Administration (DAPA) of Korea (UC160016FD).

Acknowledgements.
The authors would like to thank the anonymous reviewers for their valuable suggestions to improve the quality of this paper. The authors declare no conflicts of interest. The following abbreviations are used in this manuscript:
DNNs Deep Neural Networks
CNNs Convolutional Neural Networks
ReLU Rectified Linear Unit
TPS Thin-Plate Spline
PCK Probability of Correct Keypoints
ADAM ADAptive Moment estimation

Bi-En. Bidirectional Ensemble
Int. Aug. Internal Augmentation
ISPRS International Society for Photogrammetry and Remote Sensing
References
