Remote sensing has benefited greatly from deep learning in the past few years, mainly thanks to progress achieved in the computer vision community on natural RGB images. Indeed, most deep learning architectures designed for multimedia vision can be used on remote sensing optical images. This has resulted in significant improvements in many remote sensing tasks such as vehicle detection chen_vehicle_2014 , semantic labeling audebert_semantic_2016 ; marmanis_semantic_2016 ; maggiori_fully_2016 ; sherrah_fully_2016 and land cover/use classification penatti_deep_2015 ; nogueira_towards_2016 . However, these improvements have been mostly restricted to traditional three-channel RGB images, which are the main focus of the computer vision community.
On the other hand, Earth Observation data is rarely limited to this kind of optical sensor. Additional data, either from the same sensor (e.g. multispectral bands) or from another one (e.g. a Lidar point cloud), is sometimes available. However, adapting vision-based deep networks to these richer data is not trivial, as it requires working with new data structures that do not share the same underlying physical and numerical properties. Nonetheless, all these sources provide complementary information that should be used jointly to maximize labeling accuracy.
In this work we present how to build a comprehensive deep learning model to leverage multi-modal high-resolution remote sensing data, with the example of semantic labeling of Lidar and multispectral data over urban areas. Our contributions are the following:
We investigate early fusion of multi-modal remote sensing data based on the FuseNet principle hazirbas_fusenet:_2016 . We show that while early fusion significantly improves semantic segmentation by allowing the network to jointly learn stronger multi-modal features, it also induces a higher sensitivity to missing or noisy data.
We investigate late fusion of multi-modal remote sensing data based on the residual correction strategy audebert_semantic_2016 . We show that, although it does not perform as well as early fusion, residual correction improves semantic labeling and makes it possible to recover some critical errors on hard pixels.
We successfully validate our methods on the ISPRS Semantic Labeling Challenge datasets of Vaihingen and Potsdam cramer_dgpf_2010 , with results placing our methods amongst the best of the state-of-the-art.
2 Related Work
Semantic labeling of remote sensing data relates to the dense pixel-wise classification of images, which is called either “semantic segmentation” or “scene understanding” in the computer vision community. Deep learning has proved itself to be both effective and popular on this task, especially since the introduction of Fully Convolutional Networks (FCN) long_fully_2015 . By replacing the standard fully connected layers of traditional Convolutional Neural Networks (CNN) with convolutional layers, it was possible to densify the single-vector output of the CNN to achieve a dense classification at 1:8 resolution. The first FCN model was quickly improved and adapted into several variants. Some improvements have been based on convolutional auto-encoders with a symmetrical architecture, such as SegNet badrinarayanan_segnet:_2015 and DeconvNet noh_learning_2015 . Both use a bottleneck architecture in which the feature maps are upsampled to match the original input resolution, therefore performing pixel-wise predictions at 1:1 resolution. These models have however been outperformed on multimedia images by more sophisticated approaches, such as removing the pooling layers from standard CNN and using dilated convolutions yu_multi-scale_2015 to preserve most of the input spatial information, which resulted in models such as the multi-scale DeepLab chen_deeplab:_2016 that performs predictions at several resolutions using separate branches and produces 1:8 predictions. Finally, the rise of residual networks he_deep_2016 was soon followed by new architectures derived from ResNet Pohlen_2017_CVPR ; Zhao_2017_CVPR . These architectures leverage the state-of-the-art effectiveness of residual learning for image classification by adapting it to semantic segmentation, again at a 1:8 resolution. All these architectures were shown to perform especially well on popular semantic segmentation benchmarks of natural images such as Pascal VOC everingham_pascal_2014 and COCO lin_microsoft_2014 .
On the other hand, deep learning has also been investigated for multi-modal data processing. Using dual-stream autoencoders, ngiam_multimodal_2011 successfully processed audio-video data with an architecture comprised of two branches, one for audio and one for video, that merge in the middle of the network. Moreover, processing RGB-D (or 2.5D) data is of significant interest to the computer vision and robotics communities, as many embedded sensors can capture both optical and depth information. Relevant architectures include two parallel CNN merging in the same fully connected layers eitel_multimodal_2015 (for RGB-D data classification) and two CNN streams merging in the middle guo_two-stream_2016 (for fingertip detection). FuseNet hazirbas_fusenet:_2016 extended this idea to fully convolutional networks for semantic segmentation of RGB-D data by integrating an early fusion scheme into the SegNet architecture. Finally, the recent work of Park_2017_ICCV builds on the FuseNet architecture to incorporate residual learning and multiple stages of refinement to obtain high resolution multi-modal predictions on RGB-D data. These models can be used to learn jointly from several heterogeneous data sources, although they focus on multimedia images.
As deep learning significantly improved computer vision tasks, remote sensing adopted these techniques and deep networks have often been used for Earth Observation. Since the first successful use of patch-based CNN for roads and buildings extraction mnih_learning_2010 , many models have been built upon the deep learning pipeline to process remote sensing data. For example, saito_multiple_2016 performed multiple label prediction (i.e. both roads and buildings) in a single CNN. vakalopoulou_building_2015 extended the approach to multispectral images including visible and infrared bands. Although successful, the patch-based classification approach only produces coarse maps, as an entire patch gets associated with only one label. Dense maps can be obtained by sliding a window over the entire input, but this is an expensive and slow process. Therefore, for urban scenes with dense labeling in very high resolution, superpixel-based classification campos-taberner_processing_2016 of urban remote sensing images was a successful approach that classified homogeneous regions to produce dense maps, as it combines the patch-based approach with an unsupervised pre-segmentation. By concatenating the features fed to an SVM classifier, audebert_how_2016 ; lagrange_benchmarking_2015 extended this framework to multi-scale processing using a superpixel-based pyramidal approach. Other approaches for semantic segmentation included patch-based prediction with mixed deep and expert features paisitkriangkrai_effective_2015 , which used prior knowledge and feature engineering to improve the deep network predictions. Multi-scale CNN predictions have been investigated by liu_learning_2016 with a pyramid of images used as input to an ensemble of CNN for land cover use classification, while chen_vehicle_2014 used several convolutional blocks to process multiple scales. Lately, semantic labeling of aerial images has moved to FCN models sherrah_fully_2016 ; maggiori_fully_2016 ; volpi_dense_2016 . Indeed, Fully Convolutional Networks such as SegNet or DeconvNet, which directly perform pixel-wise classification, are very well suited for semantic mapping of Earth Observation data, as they can capture the spatial dependencies between classes without the need for pre-processing such as a superpixel segmentation, and they produce high resolution predictions. These approaches have again been extended for sophisticated multi-scale processing in marmanis_classification_2016 , using both the expensive pyramidal approach with an FCN and the multiple resolution outputs inspired from chen_deeplab:_2016 . Multiple scales allow the model to capture spatial relationships for objects of different sizes, from large arrangements of buildings to individual trees, allowing for a better understanding of the scene.
To enforce a better spatial regularity, probabilistic graphical models such as Conditional Random Fields (CRF) have been used as post-processing to model relationships between neighboring pixels and integrate these priors in the prediction lin_efficient_2015 ; sherrah_fully_2016 ; Liu_2017_CVPR_Workshops , although this adds expensive computations that significantly slow the inference. On the other hand, marmanis_classification_2016 proposed a network that learns both the semantic labeling and the explicit inter-class boundaries to improve the spatial structure of the predictions. However, these explicit spatial regularization schemes are expensive. In this work, we aim to show that they are not necessary to obtain semantic labeling results that are competitive with the state-of-the-art.
Previously, works investigated fusion of multi-modal data for remote sensing. Indeed, complementary sensors can be used on the same scene to measure several properties that give different insights on the semantics of the scene. Therefore, data fusion strategies can help obtain better models that can use these multiple data modalities. To this end, paisitkriangkrai_effective_2015
fused optical and Lidar data by concatenating deep and expert features as inputs to random forests. Similarly, Liu_2017_CVPR_Workshops integrated expert features from the ancillary data (Lidar and NDVI) into their higher-order CRF to improve the main optical classification network. The work of audebert_semantic_2016 investigated late fusion of Lidar and optical data for semantic segmentation using prediction fusion that required no feature engineering, by combining two classifiers with a deep learning end-to-end approach. This was also investigated in Audebert_2017_CVPR_Workshops to fuse optical and OpenStreetMap data for semantic labeling. During the Data Fusion Contest (DFC) 2015, lagrange_benchmarking_2015
proposed an early fusion scheme of Lidar and optical data based on a stack of deep features for superpixel-based classification of urban remote sensed data. In the DFC 2016, mou_spatiotemporal_2016 performed land cover classification and traffic analysis by fusing multispectral and video data at a late stage. Our goal is to thoroughly study end-to-end deep learning approaches for multi-modal data fusion and to compare early and late fusion strategies for this task.
3 Method description
3.1 Semantic segmentation of aerial images
Semantic labeling of aerial images requires a dense pixel-wise classification of the images. We can therefore use FCN architectures for this task, using the same techniques that are effective on natural images. We choose the SegNet badrinarayanan_segnet:_2015 model as the base network in this paper. SegNet is based on an encoder-decoder architecture that produces an output with the same resolution as the input, as illustrated in Fig. 1. This is a desirable property, as we want to label the data at the original image resolution, therefore producing maps at 1:1 resolution compared to the input. SegNet is well suited to this task, as its decoder is able to upsample the feature maps using the unpooling operation. We also compare this base network to a modified version of the ResNet-34 network he_deep_2016 adapted for semantic segmentation.
The encoder from SegNet is based on the convolutional layers from VGG-16 simonyan_very_2014 . It has 5 convolution blocks, each containing 2 or 3 convolutional layers with 3×3 kernels, followed by batch normalization ioffe_batch_2015 and ReLU activations. Each convolution block is followed by a 2×2 max-pooling layer. Therefore, at the end of the encoder, the feature maps each have a resolution of W/32 × H/32 where the original image has a resolution of W × H.
The decoder performs both the upsampling and the classification. It learns how to restore the full spatial resolution while transforming the encoded feature maps into the final labels. Its structure is symmetrical with respect to the encoder. Pooling layers are replaced by unpooling layers as described in zeiler_visualizing_2014 . The unpooling relocates the activations from the smaller feature maps into a zero-padded upsampled map. The activations are relocated at the indices computed at the pooling stages, i.e. the argmax positions from the max-pooling (cf. Fig. 2). This unpooling aligns the highly-abstracted features of the decoder with the saliency points of the low-level geometrical feature maps of the encoder. This is especially effective on small objects that would otherwise be misplaced or misclassified. After the unpooling, the convolution blocks densify the sparse feature maps. This process is repeated until the feature maps reach the input resolution.
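For illustration, the pooling/unpooling pair described above can be sketched in a few lines of NumPy. This is our own minimal single-channel sketch, not the paper's implementation; the function names and the fixed 2×2 window are our assumptions:

```python
import numpy as np

def max_pool_with_indices(x, size=2):
    """2x2 max-pooling that also returns the argmax indices, as in SegNet."""
    h, w = x.shape
    pooled = np.zeros((h // size, w // size), dtype=x.dtype)
    indices = np.zeros((h // size, w // size), dtype=np.int64)
    for i in range(0, h, size):
        for j in range(0, w, size):
            window = x[i:i + size, j:j + size]
            k = np.argmax(window)  # flat argmax within the window
            pooled[i // size, j // size] = window.flat[k]
            # store the flat index of the max in the original map
            indices[i // size, j // size] = (i + k // size) * w + (j + k % size)
    return pooled, indices

def unpool(pooled, indices, out_shape):
    """Relocate activations at the memorized argmax positions, zeros elsewhere."""
    out = np.zeros(out_shape, dtype=pooled.dtype)
    out.flat[indices.ravel()] = pooled.ravel()
    return out
```

In the real network the unpooled (sparse) maps are then densified by the decoder's convolution blocks.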
According to he_deep_2016 , residual learning helps train deeper networks, and achieved new state-of-the-art classification performance on ImageNet as well as state-of-the-art semantic segmentation results on the COCO dataset. Consequently, we also compare our methods applied to the ResNet-34 architecture. The ResNet-34 model uses four residual blocks. Each block is comprised of 2 or 3 convolutions with 3×3 kernels, and the input of the block is summed into its output using a skip connection. As in SegNet, convolutions are followed by Batch Normalization and ReLU activation layers. The skip connection can be either the identity if the tensor shapes match, or a 1×1 convolution that projects the input feature maps into the same space as the output ones if the number of convolution planes changed. In our case, to keep most of the spatial resolution, we keep the initial max-pooling but reduce the stride of all convolutions to 1. Therefore, the output of the ResNet-34 model is a 1:2 prediction map. To upsample this map back to full resolution, we perform an unpooling followed by a standard convolutional block.
Finally, both networks use a softmax layer to compute the multinomial logistic loss, averaged over the whole patch:

$$\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{k} y_i^{(j)} \log\left(\mathrm{softmax}(z_i)^{(j)}\right)$$

where $N$ is the number of pixels in the input image, $k$ the number of classes and, for a specified pixel $i$, $y_i$ and $z_i$ denote its (one-hot encoded) label and prediction vector, respectively. This means that we only minimize the average pixel-wise classification loss without any spatial regularization, as this will be learnt by the network during training. We do not use any post-processing, e.g. a CRF, as it would significantly slow down the computations for little to no gain.
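As a sanity check, the averaged multinomial logistic loss can be sketched in NumPy. The helper names are ours and merely stand in for the corresponding framework layers:

```python
import numpy as np

def softmax(z):
    # subtract the max logit for numerical stability
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multinomial_logistic_loss(logits, labels):
    """Average pixel-wise cross-entropy over a patch.

    logits: (N, k) array, one prediction vector per pixel
    labels: (N,) array of integer class labels
    """
    probs = softmax(logits)
    n = logits.shape[0]
    # pick the predicted probability of the true class for each pixel
    return -np.mean(np.log(probs[np.arange(n), labels]))
```

With uniform logits over two classes, the loss is log 2, as expected from the formula above.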
3.2 Multi-scale aspects
Multi-scale processing is often addressed using a pyramidal approach: different context sizes and different resolutions are fed as parallel inputs to one or multiple classifiers. Our first contribution is the study of an alternative approach, which consists in branching our deep network to generate output predictions at several resolutions. Each output has its own loss which is backpropagated to earlier layers of the network, in the same way as when performing deep supervision lee_deeply-supervised_2014 . This is the approach that has been used for the DeepLab chen_deeplab:_2016 architecture.
Therefore, considering our SegNet model, we not only predict one semantic map at full resolution, but we also branch the model earlier in the decoder to predict maps at smaller resolutions. After the p-th convolutional block of the decoder, we add a convolution layer that projects the feature maps into the label space at the corresponding downscaled resolution, as illustrated in Fig. 3. Those smaller maps are then interpolated to full resolution and averaged to obtain the final full resolution semantic map.
Let $P_{full}$ denote the full resolution prediction, $P_{1/f}$ the prediction at downscale factor $f$ and $u_f$ the bilinear interpolation that upsamples a map by a factor $f$. We can then aggregate our multi-resolution predictions using a simple summation (with equal weights), e.g. if we use four scales:

$$P = P_{full} + u_2(P_{1/2}) + u_4(P_{1/4}) + u_8(P_{1/8})$$
During backpropagation, each branch will receive two contributions:
The contribution coming from the loss of the average prediction.
The contribution coming from its own downscaled loss.
This ensures that earlier layers still have a meaningful gradient, even when the global optimization is converging. As argued in lin_refinenet:_2016 , deeper layers now only have to learn how to refine the coarser predictions from the lower resolutions, which helps the overall learning process.
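The aggregation of the multi-resolution outputs can be sketched as follows. This is our own NumPy illustration; for brevity it uses nearest-neighbor upsampling in place of the bilinear interpolation, and the function names are assumptions:

```python
import numpy as np

def upsample(p, factor):
    """Nearest-neighbor stand-in for the bilinear upsampling u_f."""
    return np.kron(p, np.ones((factor, factor, 1)))

def aggregate(p_full, downscaled):
    """Sum the full-resolution prediction with the upsampled branch outputs.

    p_full:     (H, W, k) full-resolution prediction map
    downscaled: list of (factor, (H/factor, W/factor, k)) branch outputs
    """
    out = p_full.astype(float).copy()
    for factor, p in downscaled:
        out = out + upsample(p, factor)
    return out
```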
3.3 Early fusion
In the computer vision community, RGB-D images are often called 2.5D images. Integrating this data into deep learning models has proved itself to be quite challenging as the naive stacking approach does not perform well in practice. Several data fusion schemes have been proposed to work around this obstacle. The FuseNet hazirbas_fusenet:_2016 approach uses the SegNet architecture in a multi-modal context. As illustrated in Fig. 3(a)
, it jointly encodes both the RGB and depth information using two encoders whose contributions are summed after each convolutional block. Then, a single decoder upsamples the encoded joint representation back into the label probability space. This data fusion approach can also be adapted to other deep neural networks, such as residual networks, as illustrated in Fig. 3(b).
However, in this architecture the depth data is treated as secondary. Indeed, the two branches are not exactly symmetrical: the depth branch works only with depth-related information, whereas the optical branch actually deals with a mix of depth and optical data. Moreover, in the upsampling process, only the indices from the main branch are used. Therefore, one needs to choose which data source will be the primary one and which will be the auxiliary data (cf. Fig. 4(a)). There is a conceptual imbalance in the way the two sources are dealt with. We suggest an alternative architecture with a third “virtual” branch that removes this imbalance, which might improve performance.
Instead of computing the sum of the two sets of feature maps, we suggest an alternative fusion process to obtain the multi-modal joint features. We introduce a third encoder that does not correspond to any real modality, but instead to a virtual fused data source. At stage i, the virtual encoder takes as input its previous activations concatenated with both activations from the other encoders. These feature maps are passed through a convolutional block to learn a residual that is summed with the average feature maps from the other encoders. This is illustrated in Fig. 4(b)
. This strategy makes FuseNet symmetrical and therefore relieves us of the choice of the main source, which would be an additional hyperparameter to tune. This architecture will be named V-FuseNet in the rest of the paper, for Virtual-FuseNet.
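One fusion stage of the virtual branch can be sketched as follows. This is a NumPy illustration under our own naming; `conv_block` is a placeholder for the learned convolutional block, not the actual trained layers:

```python
import numpy as np

def virtual_fusion_stage(a_opt, a_aux, a_virt, conv_block):
    """One V-FuseNet fusion stage (sketch).

    The virtual branch concatenates its previous activations with those of
    the two real encoders, learns a residual through a convolutional block,
    and sums it with the average of the two real branches.
    """
    stacked = np.concatenate([a_virt, a_opt, a_aux], axis=-1)
    residual = conv_block(stacked)  # stands in for the learned conv layers
    return 0.5 * (a_opt + a_aux) + residual
```

Note the symmetry: swapping `a_opt` and `a_aux` leaves the output unchanged, which is the point of the virtual branch.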
3.4 Late fusion
One caveat of the FuseNet approach is that both streams are expected to be topologically compatible in order to fuse the encoders. However, this might not always be the case, especially when dealing with data that does not share the same structure (e.g. 2D images and a 3D point cloud). Therefore, we propose an alternative fusion technique that relies only on the late feature maps and makes no assumption on the models. Specifically, instead of investigating fusion at the data level, we work around the data heterogeneity by trying to achieve prediction fusion. This process was investigated in audebert_semantic_2016 , where a residual correction module was introduced. This module consists in a residual convolutional neural network that takes as input the last feature maps from two deep networks. Those networks can be topologically identical or not. In our case, each deep network is a Fully Convolutional Network that has been trained on either the optical or the auxiliary data source. Each FCN generates a prediction. First, we average the two predictions to obtain a smooth classification map. Then, we re-train the correction module in a residual fashion. The residual correction network therefore learns a small offset to apply to each pixel's probabilities. This is illustrated in Fig. 5(a) for the SegNet architecture and Fig. 5(b) for the ResNet architecture.
Let $n$ be the number of outputs on which to perform residual correction, $Y$ the ground truth, $Z_i$ the $i$-th prediction and $\epsilon_i = Y - Z_i$ the error term of $Z_i$ w.r.t. the ground truth. We predict $Z$, the sum of the averaged predictions and the correction term $c$ which is inferred by the fusion network:

$$Z = \frac{1}{n} \sum_{i=1}^{n} Z_i + c$$

As our residual correction module is optimized to minimize the loss, we enforce:

$$Z \simeq Y$$

which translates into a constraint on $c$ and the $\epsilon_i$:

$$c \simeq \frac{1}{n} \sum_{i=1}^{n} \epsilon_i$$
As this offset is learnt in a supervised way, the network can infer which input to trust depending on the predicted classes. For example, if the auxiliary data is better for vegetation detection, the residual correction will attribute more weight to the prediction coming out of the auxiliary SegNet. This module can be generalized to $n$ inputs, even with different network architectures. This architecture will be denoted SegNet-RC (for SegNet-Residual Correction) in the rest of the paper.
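The late fusion step can be sketched as follows. This is our own NumPy illustration; `correction_net` is a placeholder for the learned residual correction module:

```python
import numpy as np

def residual_correction(predictions, correction_net):
    """Late fusion sketch: average the n prediction maps, add a learned offset.

    predictions:    list of n (H, W, k) class-probability maps
    correction_net: learned module inferring the offset c (placeholder here)
    """
    z_avg = np.mean(predictions, axis=0)
    c = correction_net(np.concatenate(predictions, axis=-1))
    return z_avg + c
```

With an untrained (zero) correction module, this reduces to simple prediction averaging; training moves the output toward the ground truth by learning the average error term.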
3.5 Class balancing
The remote sensing datasets (later described in Section 4.1) we consider have unbalanced semantic classes. Indeed, the relevant structures in urban areas do not occupy the same surface (i.e. the same number of pixels) in the images. Therefore, when performing semantic segmentation, the class frequencies can be very inhomogeneous. To improve the class average accuracy, we balance the loss using the inverse class frequencies. However, as one of the considered classes is a reject class (“clutter”) that is also very rare, we do not use the inverse class frequency for it. Instead, we apply to this class the lowest weight found among all the other classes. This takes into account the fact that the clutter class is an ill-posed problem anyway.
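The weighting scheme can be sketched as follows (a NumPy illustration with names of our own choosing):

```python
import numpy as np

def balanced_weights(class_counts, clutter_index):
    """Inverse-frequency class weights, with the clutter class capped.

    Instead of its (very large) inverse frequency, the rare clutter class
    receives the smallest weight found among the other classes.
    """
    freqs = class_counts / class_counts.sum()
    weights = 1.0 / freqs
    others = np.delete(weights, clutter_index)
    weights[clutter_index] = others.min()
    return weights
```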
Table 1: Cross-validated results on the Vaihingen dataset.

| Model | Overall accuracy | Average F1 |
|---|---|---|
| SegNet (IRRG) | 90.2 ± 1.4 | 89.3 ± 1.2 |
| SegNet (composite) | 88.3 ± 0.9 | 81.6 ± 0.8 |
| SegNet-RC | 90.6 ± 1.4 | 89.2 ± 1.2 |
| FuseNet | 90.8 ± 1.4 | 90.1 ± 1.2 |
| V-FuseNet | 91.1 ± 1.5 | 90.3 ± 1.2 |
| ResNet-34 (IRRG) | 90.3 ± 1.0 | 89.1 ± 0.7 |
| ResNet-34 (composite) | 88.8 ± 1.1 | 83.4 ± 1.3 |
| ResNet-34-RC | 90.8 ± 1.0 | 89.1 ± 1.1 |
| FusResNet | 90.6 ± 1.1 | 89.3 ± 0.7 |
Table 2: Results of the multi-scale approach on the Vaihingen dataset (F1 scores per class and overall accuracy).

| Number of branches | imp. surf. | buildings | low veg. | trees | cars | Overall |
|---|---|---|---|---|---|---|
| No branch | 92.2 | 95.5 | 82.6 | 88.1 | 88.2 | 90.2 ± 1.4 |
| 1 branch | 92.4 | 95.7 | 82.3 | 87.9 | 88.5 | 90.3 ± 1.5 |
| 2 branches | 92.5 | 95.8 | 82.4 | 87.8 | 87.6 | 90.3 ± 1.4 |
| 3 branches | 92.7 | 95.8 | 82.6 | 88.1 | 88.1 | 90.5 ± 1.5 |
| Method | imp. surf. | buildings | low veg. | trees | cars | Overall |
|---|---|---|---|---|---|---|
| FCN + fusion + boundaries marmanis_classification_2016 | 92.3 | 95.2 | 84.1 | 90.0 | 79.3 | 90.3 |
| Method | imp. surf. | buildings | low veg. | trees | cars | Overall |
|---|---|---|---|---|---|---|
| FCN + CRF + expert features Liu_2017_CVPR_Workshops | 91.2 | 94.6 | 85.1 | 85.1 | 92.8 | 88.4 |
4.1 Datasets

We validate our method on the two image sets of the ISPRS 2D Semantic Labeling Challenge (http://www2.isprs.org/commissions/comm3/wg4/semantic-labeling.html). These datasets are comprised of very high resolution aerial images over two cities in Germany: Vaihingen and Potsdam. The goal is to perform semantic labeling of the images on six classes: buildings, impervious surfaces (e.g. roads), low vegetation, trees, cars and clutter. Two online leaderboards (one for each city) are available and report test metrics obtained on held-out test images.
The Vaihingen dataset has a resolution of 9 cm/pixel with tiles of approximately 2500 × 2000 pixels. There are 33 tiles, of which 16 have a public ground truth. Tiles consist in Infrared-Red-Green (IRRG) images and DSM data extracted from the Lidar point cloud. We also use the normalized DSM (nDSM) from gerke_use_2015 .
The Potsdam dataset has a resolution of 5 cm/pixel with tiles of 6000 × 6000 pixels. There are 38 tiles, of which 24 have a public ground truth. Tiles consist in Infrared-Red-Green-Blue (IRRGB) multispectral images and DSM data extracted from the Lidar point cloud. nDSMs computed with two different methods are also included in the dataset.
4.2 Experimental setup
For each optical image, we compute the NDVI from the infrared (IR) and red (R) channels using the following formula:

$$NDVI = \frac{IR - R}{IR + R}$$
We then build a composite image comprised of the stacked DSM, nDSM and NDVI.
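These two pre-processing steps can be sketched as follows (our own helper names; the epsilon guard against division by zero is an implementation detail we add):

```python
import numpy as np

def ndvi(ir, r):
    """NDVI = (IR - R) / (IR + R), guarding against division by zero."""
    return (ir - r) / np.maximum(ir + r, 1e-8)

def composite(dsm, ndsm, ndvi_map):
    """Stack DSM, nDSM and NDVI into a single 3-channel composite image."""
    return np.stack([dsm, ndsm, ndvi_map], axis=-1)
```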
As the tiles are very high resolution, we cannot process them directly in our deep networks. We use a sliding window approach to extract patches. The stride of the sliding window also defines the size of the overlapping regions between two consecutive patches. At training time, a smaller stride allows us to extract more training samples and acts as data augmentation. At testing time, a smaller stride allows us to average predictions on the overlapping regions, which reduces border effects and improves the overall accuracy. The stride values are chosen separately for training and testing on each dataset.
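The sliding window inference with overlap averaging can be sketched as follows. This is a single-channel NumPy illustration of our own; `predict` is a placeholder for the trained network:

```python
import numpy as np

def sliding_windows(h, w, patch, stride):
    """Top-left corners of all patches covering an (h, w) tile."""
    return [(x, y)
            for x in range(0, h - patch + 1, stride)
            for y in range(0, w - patch + 1, stride)]

def predict_tile(tile, patch, stride, predict):
    """Average overlapping patch predictions to smooth border effects."""
    h, w = tile.shape[:2]
    votes = np.zeros((h, w))   # accumulated (single-channel) predictions
    counts = np.zeros((h, w))  # number of patches covering each pixel
    for x, y in sliding_windows(h, w, patch, stride):
        votes[x:x + patch, y:y + patch] += predict(tile[x:x + patch, y:y + patch])
        counts[x:x + patch, y:y + patch] += 1
    return votes / np.maximum(counts, 1)
```

A smaller stride increases the per-pixel vote count, which is exactly the averaging effect described above.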
Models are implemented using the Caffe framework. We train all our models using Stochastic Gradient Descent (SGD) with a base learning rate of 0.01, a momentum of 0.9, a weight decay of 0.0005 and a batch size of 10. For SegNet-based architectures, the weights of the encoder are initialized with those of VGG-16 trained on ImageNet, while the decoder weights are randomly initialized using the policy from he_delving_2015 . We divide the learning rate by 10 after 5, 10 and 15 epochs. For ResNet-based models, the four convolutional blocks are initialized using weights from ResNet-34 trained on ImageNet, the other weights being initialized using the same policy. We divide the learning rate by 10 after 20 and 40 epochs. In both cases, the learning rate of the pre-initialized weights is set to half the learning rate of the new weights, as suggested in audebert_semantic_2016 .
Results are cross-validated on each dataset using a 3-fold split. Final models for testing on the held-out data are re-trained on the whole training set.
(white: roads, blue: buildings, cyan: low vegetation, green: trees, yellow: cars)
Table 1 details the cross-validated results of our methods on the Vaihingen dataset. We show the pixel-wise accuracy and the average F1 score over all classes. The F1 score over a class $i$ is defined by:

$$F1_i = 2\,\frac{precision_i \times recall_i}{precision_i + recall_i} = \frac{2\,tp_i}{C_i + P_i}$$

where $tp_i$ is the number of true positives for class $i$, $C_i$ the number of pixels belonging to class $i$, and $P_i$ the number of pixels attributed to class $i$ by the model. As per the evaluation instructions from the challenge organizers, these metrics are computed after eroding the borders by a 3px radius circle and discarding those pixels.
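The per-class F1 computation (without the border erosion) can be sketched as:

```python
import numpy as np

def f1_scores(pred, gt, num_classes):
    """Per-class F1 = 2*tp / (C + P), harmonic mean of precision and recall."""
    scores = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))  # true positives for class c
        P = np.sum(pred == c)                 # pixels attributed to class c
        C = np.sum(gt == c)                   # pixels belonging to class c
        scores.append(2.0 * tp / max(C + P, 1))
    return scores
```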
Table 2 details the results of the multi-scale approach. “No branch” denotes the reference single-scale SegNet model. The first branch was added after the 4th convolutional block of the decoder (downscale = 2), the second branch after the 3rd (downscale = 4) and the third branch after the 2nd (downscale = 8).
5.1 Baselines and preliminary experiments
As a baseline, we train standard SegNets and ResNets on the IRRG and composite versions of the Vaihingen and Potsdam datasets. These models are already competitive with the state-of-the-art as is, with a significant advantage for the IRRG version. In particular, the car class has an average F1 score of 59.0% on the composite images, whereas it reaches 85.0% on the IRRG tiles. Nonetheless, we know that the composite tiles contain DSM information that could help on challenging boundaries such as roads/buildings and low vegetation/trees.
As illustrated in Table 1, ResNet-34 performs slightly better in overall accuracy and obtains more stable results compared to SegNet. This is probably due to a better generalization capacity of ResNet, which makes the model less prone to overfitting. Overall, ResNet and SegNet obtain similar results, with ResNet being more stable. However, ResNet requires significantly more memory than SegNet, especially when using the fusion schemes. Notably, we were not able to use the V-FuseNet scheme with ResNet-34 due to the memory limitation (12 GB) of our GPUs. Nonetheless, these results show that the investigated data fusion strategies can be applied to several flavors of Fully Convolutional Networks, and that our findings should generalize to other base networks from the state-of-the-art.
5.2 Effects of the multi-scale strategy
The gain from the multi-scale approach is small, although it is virtually free: it only requires a few additional convolution parameters to extract downscaled maps from the lower layers. As could be expected, large structures such as roads and buildings benefit from the downscaled predictions, while cars are slightly less well detected at lower resolutions. We assume that vegetation is not structured, so the multi-scale approach does not help here but instead increases the confusion between low and arboreal vegetation. Increasing the number of branches improves the overall classification, but by a smaller margin each time, which is to be expected as the downscaled predictions become very coarse at 1:16 or 1:32 resolution. Finally, although the quantitative improvements are small, a visual assessment of the inferred maps shows that the qualitative improvement is non-negligible. As illustrated in Fig. 7, the multi-scale prediction regularizes and reduces the noise in the predictions. This makes subsequent human interpretation or post-processing easier, such as vectorization or shapefile generation, especially on man-made structures.
As a side effect of this investigation, our tests showed that the downscaled outputs were still quite accurate. For example, the prediction downscaled by a factor of 8 was on average only 0.5% below the full resolution prediction in accuracy, with the difference mostly residing in the “car” class. This is unsurprising, as cars are usually around 30 px long in the full resolution tile and therefore cover only 3-4 pixels in the downscaled prediction, which makes them harder to see. Still, the good average accuracy of the downscaled outputs seems to indicate that the decoder from SegNet could be reduced to its first convolutional block without losing too much accuracy. This technique could be used to reduce the inference time when small objects are irrelevant, while maintaining a good accuracy on the other classes.
5.3 Effects of the fusion strategies
As expected, both fusion methods improve the classification accuracy on the two datasets, as illustrated in Fig. 8. We show some examples of misclassified patches that are corrected by the fusion process in Fig. 11. In Figs. 10(b) and 10(a), SegNet is confused by the material of the building and the presence of cars. However, FuseNet uses the nDSM to decide that the structure is a building and to ignore the cars, while the late fusion mainly manages to recover the cars. This is similar to Fig. 10(c), in which SegNet confuses the building with a road, while FuseNet and the residual correction recover the information thanks to the nDSM, both for the road and the trees in the top row. One advantage of early fusion is that the complementarity between the multiple modalities is leveraged more efficiently: it requires fewer parameters, yet achieves a better classification accuracy for all classes. Conversely, late fusion with residual correction improves the overall accuracy at the price of less balanced predictions. Indeed, the increase mostly affects the “building” and “impervious surface” classes, while all the other F1 scores decrease slightly.
However, on the Potsdam dataset, the residual correction strategy slightly decreases the model accuracy. Indeed, the late fusion is mostly useful to combine strong predictions that are complementary. For example, as illustrated in Fig. 10(b), the composite SegNet has a strong confidence in its building prediction while the IRRG SegNet has a strong confidence in its car predictions. Therefore, the residual correction is able to leverage those predictions and to fuse them to alleviate the uncertainty around the cars in the rooftop parking lot. This works well on Vaihingen, as both the IRRG and composite sources achieve a global accuracy higher than 85%. However, on Potsdam, the composite SegNet is less informative and achieves only 79% accuracy, as the annotations are more precise and the dataset overall more challenging for a data source that relies only on Lidar and NDVI. Therefore, the residual correction fails to make the most of the two data sources. This analysis is supported by the fact that, on the Vaihingen validation set, the residual correction achieves a better global accuracy with ResNets than with SegNets, thanks to the stronger ResNet-34 trained on the composite source.
Meanwhile, the FuseNet architecture learns a joint representation of the two data sources, but faces the same pitfall as the standard SegNet model: edge cases such as cars in rooftop parking lots disappear. However, the joint features are significantly stronger, and the decoder can perform a better classification using this multi-modal representation, thereby improving the global accuracy of the model.
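The fusion-by-summation principle at the core of FuseNet can be illustrated with a minimal sketch: after each encoder stage, the auxiliary branch's activations are added element-wise into the main branch, so every subsequent stage operates on joint multi-modal features. The stages below are toy numpy stand-ins for convolutional blocks, not the actual layers:

```python
import numpy as np

def fusenet_encoder(x_main, x_aux, main_stages, aux_stages):
    """FuseNet-style early fusion: after each encoder stage, the
    auxiliary branch activations are summed into the main branch."""
    for main_stage, aux_stage in zip(main_stages, aux_stages):
        x_main = main_stage(x_main)
        x_aux = aux_stage(x_aux)
        x_main = x_main + x_aux   # fusion by element-wise summation
    return x_main

# Toy stand-ins for conv stages: ReLU of a fixed scalar multiplication.
relu = lambda x: np.maximum(x, 0.0)
stage = lambda w: (lambda x: relu(w * x))
main_stages = [stage(1.0), stage(0.5)]
aux_stages = [stage(2.0), stage(0.5)]

x_irrg = np.ones((1, 4, 4))   # optical input patch
x_ndsm = np.ones((1, 4, 4))   # composite (nDSM-based) input patch
features = fusenet_encoder(x_irrg, x_ndsm, main_stages, aux_stages)
assert features.shape == (1, 4, 4)
```

Because the fusion happens inside the encoder, the decoder only ever sees the joint representation, which explains both the stronger features and the loss of modality-specific edge cases described above.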
In conclusion, the two fusion strategies address different use cases. Late fusion by residual correction is better suited to combining several strong classifiers that are confident in their predictions, while the FuseNet early fusion scheme is better suited to integrating weaker ancillary data into the main learning pipeline.
On the held-out testing set, the V-FuseNet strategy does not perform as well as expected. Its global accuracy is marginally below that of the original FuseNet model, although the F1 scores on smaller and harder classes improve, especially “clutter”, which rises from 49.3% to 51.0%. As the “clutter” class is ignored in the dataset metrics, this is not reflected in the final accuracy.
5.4 Robustness to uncertainties and missing data
As with all datasets, the ISPRS semantic labels in the ground truth suffer from some limitations. These can lead to unfair mislabeling errors due to objects missing from the ground truth or to sharp transitions that do not reflect the true image (cf. Fig. 8(a)).
However, even the raw data (optical and DSM) can be deceptive. Indeed, geometrical artifacts from the stitching process also negatively impact the segmentation, as our model overfits on those deformed pixels (cf. Fig. 8(b)).
Finally, due to limitations and noise in the Lidar point cloud, such as missing or aberrant points, the DSM, and subsequently the nDSM, present some artifacts. As reported in marmanis_classification_2016 , some buildings vanish in the nDSM and the relevant pixels are falsely attributed a height of 0. This causes significant misclassifications in the composite image that are poorly handled by both fusion methods, as illustrated in Fig. 10. marmanis_classification_2016 worked around this problem by manually correcting the nDSM, although this method does not scale to larger datasets. Making the method robust to impure data and artifacts would therefore be helpful, e.g. by using hallucination networks hoffman_learning_2016 to infer the missing modality, as proposed in kampffmeyer_semantic_2016 . We think that the V-FuseNet architecture could be adapted for this purpose by using the virtual branch to encode missing data. Moreover, recent work on generative models might help alleviate overfitting and improve robustness by training on synthetic data, as proposed in Xie_2017_ICCV .
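As a sanity check of the hallucination idea (not part of our pipeline), the objective of hoffman_learning_2016 can be sketched as a feature-matching loss: a branch fed only the available modality is trained to reproduce the features of the missing modality's branch, so it can substitute for it at test time. All names below are hypothetical, and plain gradient descent stands in for the real training loop:

```python
import numpy as np

def hallucination_loss(halluc_features, real_features):
    """L2 feature-matching loss: the hallucination branch, fed only the
    available modality, learns to mimic the missing modality's features."""
    return np.mean((halluc_features - real_features) ** 2)

rng = np.random.default_rng(0)
real_ndsm_feat = rng.normal(size=(64,))  # features of the (sometimes missing) nDSM branch
halluc_feat = np.zeros(64)               # hallucination branch output, initialised flat

# A few plain gradient-descent steps on the matching loss.
for _ in range(200):
    grad = 2.0 * (halluc_feat - real_ndsm_feat)
    halluc_feat -= 0.1 * grad

final_loss = hallucination_loss(halluc_feat, real_ndsm_feat)
assert final_loss < 1e-6  # the branch now reproduces the target features
```

In the V-FuseNet setting, the virtual branch could be trained with such a loss so that its output replaces the nDSM features wherever the Lidar data is corrupted or missing.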
In this work, we investigate deep neural networks for semantic labeling of multi-modal very high-resolution urban remote sensing data. In particular, we show that fully convolutional networks are well suited to the task and obtain excellent results. We present a simple deep supervision trick that extracts semantic maps at multiple resolutions, which helps train the network and improves the overall classification. We then extend our work to non-optical data by integrating digital surface models extracted from Lidar point clouds. We study two methods for processing multi-modal remote sensing data with deep networks: early fusion with FuseNet and late fusion using residual correction. We show that both methods can efficiently leverage the complementarity of heterogeneous data, albeit for different use cases. While early fusion allows the network to learn stronger joint features, late fusion can recover errors on hard pixels that are missed by all other models. We validated our findings on the ISPRS 2D Semantic Labeling datasets of Potsdam and Vaihingen, on which we obtained results competitive with the state of the art.
The Vaihingen dataset was provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) cramer_dgpf_2010 : http://www.ifp.uni-stuttgart.de/dgpf/DKEP-Allg.html. The authors thank the ISPRS for making the Vaihingen and Potsdam datasets available and organizing the semantic labeling challenge. Nicolas Audebert’s work is supported by the Total-ONERA research project NAOMI.
- (1) X. Chen, S. Xiang, C. L. Liu, C. H. Pan, Vehicle Detection in Satellite Images by Hybrid Deep Convolutional Neural Networks, IEEE Geoscience and Remote Sensing Letters 11 (10) (2014) 1797–1801.
- (2) N. Audebert, B. Le Saux, S. Lefèvre, Semantic Segmentation of Earth Observation Data Using Multimodal and Multi-scale Deep Networks, in: Asian Conference on Computer Vision (ACCV16), Taipei, Taiwan, 2016.
- (3) D. Marmanis, J. D. Wegner, S. Galliani, K. Schindler, M. Datcu, U. Stilla, Semantic Segmentation of Aerial Images with an Ensemble of CNNs, ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences 3 (2016) 473–480.
- (4) E. Maggiori, Y. Tarabalka, G. Charpiat, P. Alliez, Convolutional neural networks for large-scale remote-sensing image classification, IEEE Transactions on Geoscience and Remote Sensing 55 (2) (2017) 645–657.
- (5) J. Sherrah, Fully Convolutional Networks for Dense Semantic Labelling of High-Resolution Aerial Imagery, arXiv preprint arXiv:1606.02585.
- (6) O. Penatti, K. Nogueira, J. Dos Santos, Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, USA, 2015, pp. 44–51.
- (7) K. Nogueira, O. A. Penatti, J. A. dos Santos, Towards better exploiting convolutional neural networks for remote sensing scene classification, Pattern Recognition 61 (2017) 539 – 556.
- (8) V. Badrinarayanan, A. Kendall, R. Cipolla, SegNet: A deep convolutional encoder-decoder architecture for image segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (12) (2017) 2481–2495.
- (9) K. He, X. Zhang, S. Ren, J. Sun, Deep Residual Learning for Image Recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016.
- (10) C. Hazirbas, L. Ma, C. Domokos, D. Cremers, FuseNet: Incorporating Depth into Semantic Segmentation via Fusion-based CNN Architecture, in: Proceedings of the Asian Conference on Computer Vision, Vol. 2, Taipei, Taiwan, 2016.
- (11) M. Cramer, The DGPF test on digital aerial camera evaluation – overview and test design, Photogrammetrie – Fernerkundung – Geoinformation 2 (2010) 73–82.
- (12) J. Long, E. Shelhamer, T. Darrell, Fully Convolutional Networks for Semantic Segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015, pp. 3431–3440.
- (13) H. Noh, S. Hong, B. Han, Learning Deconvolution Network for Semantic Segmentation, in: Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 2015, pp. 1520–1528.
- (14) F. Yu, V. Koltun, Multi-Scale Context Aggregation by Dilated Convolutions, in: Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico, 2016.
- (15) L.-C. Chen, J. T. Barron, G. Papandreou, K. Murphy, A. L. Yuille, Semantic Image Segmentation with Task-Specific Edge Detection Using CNNs and a Discriminatively Trained Domain Transform, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016.
- (16) T. Pohlen, A. Hermans, M. Mathias, B. Leibe, Full-resolution residual networks for semantic segmentation in street scenes, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, USA, 2017.
- (17) H. Zhao, J. Shi, X. Qi, X. Wang, J. Jia, Pyramid scene parsing network, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, USA, 2017.
- (18) M. Everingham, S. M. A. Eslami, L. V. Gool, C. K. I. Williams, J. Winn, A. Zisserman, The Pascal Visual Object Classes Challenge: A Retrospective, International Journal of Computer Vision 111 (1) (2014) 98–136.
- (19) T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, C. L. Zitnick, Microsoft COCO: Common Objects in Context, in: D. Fleet, T. Pajdla, B. Schiele, T. Tuytelaars (Eds.), Computer Vision – ECCV 2014, no. 8693 in Lecture Notes in Computer Science, Springer International Publishing, 2014, pp. 740–755.
- (20) J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, A. Y. Ng, Multimodal deep learning, in: Proceedings of the 28th International Conference on Machine Learning (ICML-11), Washington, USA, 2011, pp. 689–696.
- (21) A. Eitel, J. T. Springenberg, L. Spinello, M. Riedmiller, W. Burgard, Multimodal deep learning for robust RGB-D object recognition, in: Proceedings of the International Conference on Intelligent Robots and Systems, IEEE, Hamburg, Germany, 2015, pp. 681–687.
- (22) H. Guo, G. Wang, X. Chen, Two-stream convolutional neural network for accurate RGB-D fingertip detection using depth and edge information, in: Image Processing (ICIP), 2016 IEEE International Conference on, IEEE, Phoenix, USA, 2016, pp. 2608–2612.
- (23) S.-J. Park, K.-S. Hong, S. Lee, Rdfnet: Rgb-d multi-level residual feature fusion for indoor semantic segmentation, in: The IEEE International Conference on Computer Vision (ICCV), 2017.
- (24) V. Mnih, G. E. Hinton, Learning to Detect Roads in High-Resolution Aerial Images, in: K. Daniilidis, P. Maragos, N. Paragios (Eds.), Computer Vision – ECCV 2010, no. 6316 in Lecture Notes in Computer Science, Springer Berlin Heidelberg, 2010, pp. 210–223.
- (25) S. Saito, T. Yamashita, Y. Aoki, Multiple object extraction from aerial imagery with convolutional neural networks, Electronic Imaging 2016 (10) (2016) 1–9.
- (26) M. Vakalopoulou, K. Karantzalos, N. Komodakis, N. Paragios, Building detection in very high resolution multispectral data with deep learning features, in: Geoscience and Remote Sensing Symposium (IGARSS), 2015 IEEE International, IEEE, Milan, Italy, 2015, pp. 1873–1876.
- (27) M. Campos-Taberner, A. Romero-Soriano, C. Gatta, G. Camps-Valls, A. Lagrange, B. Le Saux, A. Beaupère, A. Boulch, A. Chan-Hon-Tong, S. Herbin, H. Randrianarivo, M. Ferecatu, M. Shimoni, G. Moser, D. Tuia, Processing of Extremely High-Resolution LiDAR and RGB Data: Outcome of the 2015 IEEE GRSS Data Fusion Contest Part A: 2-D Contest, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing PP (99) (2016) 1–13.
- (28) N. Audebert, B. Le Saux, S. Lefèvre, How useful is region-based classification of remote sensing images in a deep learning framework?, in: 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 2016, pp. 5091–5094.
- (29) A. Lagrange, B. Le Saux, A. Beaupere, A. Boulch, A. Chan-Hon-Tong, S. Herbin, H. Randrianarivo, M. Ferecatu, Benchmarking classification of earth-observation data: From learning explicit features to convolutional networks, in: IEEE International Geosciences and Remote Sensing Symposium (IGARSS), 2015, pp. 4173–4176.
- (30) S. Paisitkriangkrai, J. Sherrah, P. Janney, A. Van Den Hengel, Effective semantic pixel labelling with convolutional networks and Conditional Random Fields, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, USA, 2015, pp. 36–43.
- (31) Q. Liu, R. Hang, H. Song, Z. Li, Learning Multi-Scale Deep Features for High-Resolution Satellite Image Classification, arXiv preprint arXiv:1611.03591.
- (32) M. Volpi, D. Tuia, Dense semantic labeling of subdecimeter resolution images with convolutional neural networks, IEEE Transactions on Geoscience and Remote Sensing 55 (2) (2017) 881–893.
- (33) D. Marmanis, K. Schindler, J. D. Wegner, S. Galliani, M. Datcu, U. Stilla, Classification With an Edge: Improving Semantic Image Segmentation with Boundary Detection, arXiv preprint arXiv:1612.01337.
- (34) G. Lin, C. Shen, A. Van Den Hengel, I. Reid, Efficient piecewise training of deep structured models for semantic segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015.
- (35) Y. Liu, S. Piramanayagam, S. T. Monteiro, E. Saber, Dense semantic labeling of very-high-resolution aerial imagery and LiDAR with fully-convolutional neural networks and higher-order crfs, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, USA, 2017.
- (36) N. Audebert, B. Le Saux, S. Lefèvre, Joint learning from Earth Observation and OpenStreetMap data to get faster better semantic maps, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, USA, 2017.
- (37) L. Mou, X. X. Zhu, Spatiotemporal scene interpretation of space videos via deep neural network and tracklet analysis, in: IEEE International Geoscience and Remote Sensing Symposium (IGARSS), IEEE, Beijing, China, 2016, pp. 1823–1826.
- (38) K. Simonyan, A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, arXiv preprint arXiv:1409.1556.
- (39) S. Ioffe, C. Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, in: Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015, pp. 448–456.
- (40) M. D. Zeiler, R. Fergus, Visualizing and understanding convolutional networks, in: Computer Vision–ECCV 2014, Springer, 2014, pp. 818–833.
- (41) C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, Z. Tu, Deeply-supervised nets, in: Artificial Intelligence and Statistics, 2015, pp. 562–570.
- (42) G. Lin, A. Milan, C. Shen, I. Reid, RefineNet: Multi-Path Refinement Networks with Identity Mappings for High-Resolution Semantic Segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, 2017.
- (43) M. Gerke, Use of the Stair Vision Library within the ISPRS 2d Semantic Labeling Benchmark (Vaihingen), Tech. rep., International Institute for Geo-Information Science and Earth Observation (2015).
- (44) K. He, X. Zhang, S. Ren, J. Sun, Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, in: Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1026–1034.
- (45) J. Hoffman, S. Gupta, T. Darrell, Learning with side information through modality hallucination, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016, pp. 826–834.
- (46) M. Kampffmeyer, A.-B. Salberg, R. Jenssen, Semantic Segmentation of Small Objects and Modeling of Uncertainty in Urban Remote Sensing Images Using Deep Convolutional Neural Networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, USA, 2016, pp. 1–9.
- (47) C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, A. Yuille, Adversarial examples for semantic segmentation and object detection, in: The IEEE International Conference on Computer Vision (ICCV), 2017.