RiFCN: Recurrent Network in Fully Convolutional Network for Semantic Segmentation of High Resolution Remote Sensing Images

05/05/2018 ∙ by Lichao Mou, et al. ∙ DLR

Semantic segmentation of high resolution remote sensing images is a fundamental yet challenging task. Convolutional neural networks (CNNs), such as the fully convolutional network (FCN) and SegNet, have shown outstanding performance in many segmentation tasks. One key pillar of these successes is mining useful information from features in convolutional layers to produce high resolution segmentation maps. For example, FCN nonlinearly combines high-level features extracted from the last convolutional layers, whereas SegNet utilizes a deconvolutional network that takes as input only the coarse, high-level feature maps of the last convolutional layer. However, how to better fuse multi-level convolutional feature maps for semantic segmentation of remote sensing images remains underexplored. In this work, we propose a novel bidirectional network called recurrent network in fully convolutional network (RiFCN), which is end-to-end trainable. It has a forward stream and a backward stream. The former is a classification CNN architecture for feature extraction, which takes an input image and produces multi-level convolutional feature maps from shallow to deep. In the latter, to achieve accurate boundary inference and semantic segmentation, boundary-aware, high resolution feature maps from shallower layers and high-level but low-resolution features are recursively embedded into the learning framework (from deep to shallow) to generate a fused feature representation that draws a holistic picture of not only high-level semantic information but also low-level, fine-grained details. Experimental results on two widely used high resolution remote sensing data sets for semantic segmentation, ISPRS Potsdam and the Inria Aerial Image Labeling Data Set, demonstrate the competitive performance of the proposed methodology compared to other studied approaches.


1 Introduction

Fig 1: An example of semantic segmentation produced with the proposed network, over a scene taken from the ISPRS Potsdam data set.

Along with the launch of satellites and the widespread availability of aeroplanes and unmanned aerial vehicles (UAVs), high resolution remote sensing images are now accessible at a reasonable cost. Automatic interpretation of such high resolution data is a task of primary importance for a wide range of practical applications, to name a few, land cover mapping Marmanis17 ; beyongrgb ; Marcos18 ; Maggiori17 , urban planning, and traffic monitoring Liukang ; segbeforedet . One crucial step towards understanding a high resolution remote sensing image is to perform semantic segmentation, which consists in labeling every pixel in the image with the semantic category of the object it belongs to. The result of semantic segmentation (cf. Fig. 1) answers the following two questions: 1) What land cover categories are observed in the image? 2) Where do they appear? Semantic segmentation can thus be considered a comprehensive task that combines the traditional problems of multi-label recognition, detection, and segmentation in a single process.

1.1 The Challenges of Semantic Segmentation for High Resolution Images

In comparison with hyper- and multi-spectral data, images at high resolution (GSD 5-30 cm) have quite different characteristics, which brings challenges for semantic segmentation. On the one hand, intricate spatial details (e.g., roof tiles, road markings, shadows of buildings, windows of vehicles, and branches of trees) emerge, which leads to large differences in visual appearance within an object class. On the other hand, the spectral resolution of high spatial resolution sensors is usually limited to four (R-G-B-IR) or three (R-G-B) bands, so the available spectral signatures are less discriminative. For example, some roofs appear quite similar to roads in the color channels; the same holds for low plants and trees. Hence, an effective feature representation is a matter of great importance for a semantic segmentation system for high resolution remote sensing images.

1.2 Semantic Segmentation Using Feature Engineering

Earlier efforts have focused on extracting useful low-level, hand-crafted visual features and/or modeling mid-level semantic features on local portions of images (e.g., patches and superpixels¹); subsequently, a supervised classifier is employed to learn a mapping from the features to semantic categories. For example, in Mura10 , the authors propose a morphological attribute profile (AP), which is capable of extracting effective features from the spatial domain, for semantic segmentation of high resolution remote sensing images. Tokarczyk et al. Tokarczyk15 use a boosting classifier to directly choose an optimal, comprehensive feature bank from a vast, randomized, quasi-exhaustive set of low-level feature candidates, in order to avoid manual feature selection.

¹ A superpixel can be defined as a set of locally connected similar pixels that preserves detailed edge structures for a fine segmentation.

1.3 Semantic Segmentation Using Deep Networks

The aforementioned methods mainly rely on manual feature engineering to build a semantic segmentation system. Recently, deep neural networks, especially convolutional neural networks (CNNs), have become the state-of-the-art models for many computer vision Hinton12 ; VGG ; densenet ; rcnn ; fastrcnn ; fasterrcnn ; FCN ; SegNet ; Moutnnls and remote sensing problems Zhu17DLinRS ; Marcos18 ; DFC16 ; beyongrgb ; Moudfc16 ; Marmanis17 ; Mou18 ; Maggiori17 ; DLinRS ; Moujurse17 , as they are able to automatically extract high-level features from raw images for visual analysis tasks in an end-to-end fashion. Semantic segmentation tasks in remote sensing data are also approached by means of CNNs. Sherrah Sherrah16 uses a fully convolutional network (FCN) FCN trained on natural images as a pre-trained model and fine-tunes it on high resolution remote sensing images for semantic segmentation tasks. To make use of both color image and digital surface model (DSM) data as input, while respecting their different statistical properties, Marmanis et al. Marmanis16 employ a late-fusion approach with two structurally identical, parallel FCNs. In Kampffmeyer16 , the authors focus on small-object (e.g., car) segmentation by quantifying the uncertainty of FCNs at the pixel level. By doing so, they achieve high overall accuracy while still obtaining good accuracy for small objects. Recently, Maggiori et al. Maggiori17 introduce a multilayer perceptron (MLP) on top of a base FCN to learn how to effectively combine intermediate features for a better segmentation result. In Audebert16 , Audebert et al. investigate the use of another network architecture, SegNet SegNet ; SegNet2 , for semantic segmentation of high resolution aerial images; in addition, they use residual correction to perform data fusion from heterogeneous data (i.e., optical image and DSM). Later, in beyongrgb , they systematically study different network architectures for semantic segmentation of multimodal high resolution remote sensing data; more specifically, they find that late fusion makes it possible to recover errors stemming from ambiguous data, while early fusion allows for better joint feature learning, at the cost of higher sensitivity to missing data. Volpi and Tuia Volpi17 compare a SegNet architecture with a standard CNN performing patch classification for semantic segmentation purposes. Marcos et al. Marcos18 propose a segmentation network architecture called rotation equivariant vector field network (RotEqNet), which encodes rotation equivariance in the network itself. By doing so, the network is confronted with a simpler task, as it does not have to learn specific weights to address rotated versions of the same object class. Marmanis et al. Marmanis17 propose a two-step model that trains a CNN to separately output edge likelihoods at multiple scales from color-infrared (CIR) and height data. The boundaries detected in each source are then added as an extra channel to that source, and an FCN or SegNet is trained for semantic segmentation. The intuition behind this work is that using predicted boundaries helps to achieve sharper segmentation maps.

1.4 The Motivation of This Work

As our survey of related work shows, most state-of-the-art CNN architectures for semantic segmentation of aerial images mainly focus on the non-linear combination of high-level features extracted from the last convolutional layers. These networks, however, tend to blur object boundaries and visually degrade results due to the lack of the low-level visual information that exists in shallower layers. Although, in the computer vision field, some works attempt to mitigate the poor localization of object boundaries, either by using dilated convolutions or by adding skip connections from early to deep layers of a network, they do not work well enough for remote sensing data Marmanis17 . From the above discussion, we note that 1) finding an effective strategy to fuse multi-level features and 2) preserving object boundaries are the most intrinsic problems in semantic segmentation of remote sensing images.

To address these problems, in this paper we propose a novel network architecture, recurrent network in fully convolutional network (RiFCN), which fuses all-level features of a classification network (e.g., VGG-16 VGG ) in a recurrent manner, while preserving boundaries as far as possible. Our work contributes to the literature in the following respects:

  • We propose an end-to-end trainable, bidirectional network architecture, which is composed of a forward stream and a backward stream, for the generation and fusion of multi-level convolutional features. Learning such a network architecture for pixel-wise annotations in remote sensing data has not been investigated yet to the best of our knowledge.

  • A deep structure in the form of a recurrent network is proposed to realize the backward stream. It embeds high-level features into low-level ones, layer by layer, and finally incorporates boundary-aware feature maps from the shallowest layer to achieve more accurate object boundary inference and semantic segmentation. The whole network architecture can be trained end-to-end by gradient learning, since all components are differentiable.

  • We theoretically analyze and discuss the bidirectional network learning, i.e., the backward gradient pass of the proposed network. This helps us better understand how the network learns and updates its weights.

The paper is organized as follows. After the introductory Section 1 detailing semantic segmentation of remote sensing images, Section 2 is dedicated to a brief review of representative networks for semantic segmentation tasks in computer vision. Section 3 then describes details of the proposed network. The experimental results are provided in Section 4. Finally, Section 5 concludes the paper.

2 Representative Networks for Semantic Segmentation

In this section, we briefly review two representative network architectures for semantic segmentation in computer vision, namely FCN-based and encoder-decoder architectures, both of which are also widely used in semantic segmentation of remote sensing images.

2.1 FCN-based Architecture

Long et al. FCN first proposed the FCN for semantic segmentation tasks, which is both efficient and effective. The key insight of FCN is that the fully connected layers of an image classification network can be considered convolutions with kernels that cover their entire input region. This is equivalent to evaluating the original classification network on overlapping patches; since computations are shared across the overlapping regions, FCN is more efficient. After convolutionalizing the fully connected layers of a classification network pre-trained on natural images, the final feature maps need to be upsampled because of the pooling operations in the network. In the original FCN FCN , the authors enhance the output feature maps with features from intermediate layers, which enables FCN to make finer predictions. Later, several extensions of FCN were proposed to improve segmentation performance. For example, Chen et al. deeplab remove some of the max-pooling layers and, accordingly, introduce atrous convolutions into the FCN, which expand the field of view without increasing the number of parameters. Furthermore, structured prediction has been studied with integrated structured models such as the conditional random field (CRF). Better classification network architectures also provide new insights, e.g., ResNet ResNet -based FCN resfcn . In pspnet , the authors propose a pyramid pooling module and apply it to a ResNet-based network architecture. The intuition behind this model is that global parsing matters because it provides clues on the distribution of semantic categories, and the pyramid pooling module captures this information by utilizing large-kernel pooling layers.
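The convolutionalization argument above can be made concrete with a toy 1-D example. The numbers and helper names (`dense`, `conv1d_valid`) are purely illustrative, not code from any cited work: applying a fully connected layer to one patch yields exactly one entry of the map obtained by sliding the same weights over the whole input.

```python
def dense(patch, weights, bias):
    """Fully connected layer applied to one flattened patch."""
    return sum(p * w for p, w in zip(patch, weights)) + bias

def conv1d_valid(signal, kernel, bias):
    """Slide the same weights over the whole signal ('valid' convolution)."""
    k = len(kernel)
    return [dense(signal[i:i + k], kernel, bias) for i in range(len(signal) - k + 1)]

signal = [1.0, 2.0, 3.0, 4.0, 5.0]
weights, bias = [0.5, -1.0, 0.25], 0.1

# The classifier's answer on a single patch...
single = dense(signal[0:3], weights, bias)
# ...is one entry of the fully convolutional output over the whole "image".
dense_map = conv1d_valid(signal, weights, bias)
assert dense_map[0] == single
```

Because overlapping patches share most of their computation, evaluating the convolution once is cheaper than re-running the classifier on every patch, which is the efficiency gain FCN exploits.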

2.2 Encoder-Decoder Architecture

Inspired by probabilistic auto-encoders autoencoder ; resconvdeconv , the encoder-decoder paradigm has been introduced into semantic segmentation. A clear example of this branch is SegNet, proposed by Badrinarayanan et al. SegNet ; SegNet2 , where the encoder is a vanilla CNN (e.g., VGG-16 VGG ) trained to classify images, while the decoder upsamples the output of the encoder. The latter is composed of a set of upsampling and convolutional layers, followed at last by a softmax layer to predict pixel-wise labels. Each upsampling layer in the decoder corresponds to a max-pooling layer in the encoder, and the upsampled feature maps are then convolved with a set of filters to produce denser features with finer resolution. Once the feature maps have been restored to the desired full resolution, they are fed to the softmax layer to produce the final segmentation map. In DeconvNet , the authors propose DeconvNet, which shares a similar idea with SegNet. Moreover, U-Net unet can be considered an extension of SegNet that introduces concatenations between corresponding encoder and decoder layers. RefineNet refinenet , a recent network, adopts a structure similar to U-Net but introduces several residual convolutional units in both the encoder and decoder.
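A minimal sketch of the SegNet-style unpooling idea, in 1-D and with made-up values: the encoder's max-pooling records the argmax positions, and the decoder places each pooled value back at its recorded position, leaving zeros elsewhere (the subsequent convolutions that densify the maps are omitted here).

```python
def max_pool_1d(x):
    """Non-overlapping window-2 max pooling; also return argmax positions."""
    pooled, indices = [], []
    for i in range(0, len(x), 2):
        window = x[i:i + 2]
        j = max(range(len(window)), key=lambda k: window[k])
        pooled.append(window[j])
        indices.append(i + j)
    return pooled, indices

def max_unpool_1d(pooled, indices, size):
    """SegNet-style unpooling: place each value back at its argmax position."""
    out = [0.0] * size
    for v, i in zip(pooled, indices):
        out[i] = v
    return out

x = [3.0, 1.0, 2.0, 5.0]
p, idx = max_pool_1d(x)             # keeps the maxima and where they were
up = max_unpool_1d(p, idx, len(x))  # sparse map at the original resolution
assert up == [3.0, 0.0, 0.0, 5.0]
```

Reusing the pooling indices is what lets the decoder restore values to their original spatial positions, which helps preserve edge locations without learning extra upsampling weights.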

3 Methodology

3.1 An Observation

Fig 2: Feature responses from different convolutional stages of a VGG model, in which class activation maps (CAMs) generated from shallower layers present an explicit view of low-level features and CAMs from deeper layers highlight coarse discriminative regions. The CAMs generated from the fused features using Hua18 draw a holistic picture of not only where discriminative regions are, but also how the regions appear in detail.

Recently, several studies Zeiler14 ; Mahendran15 that attempt to reveal what is learned by CNNs using gradient-guided techniques show that deeper layers use their filters to grasp global, high-level information, while shallower layers capture local, low-level details such as object boundaries and edges. A work in this direction for remote sensing images can be found in Hua18 , where the authors make use of class activation maps (CAMs) CAM to visualize the learned feature maps of a CNN and come to almost the same conclusion, i.e., CAMs generated from shallower layers present an explicit view of low-level features, while CAMs from deeper layers highlight coarse discriminative regions. In addition, they design a network that fuses only the feature maps of the first and the last stage, and such a network achieves a significant improvement in classification accuracy. The CAMs generated from the fused features draw a holistic picture of not only where discriminative regions are, but also how these regions appear in detail (cf. Fig. 2). This work gives us an incentive to design a network that is capable of exploiting all the features available along the forward process of a CNN to generate a holistic feature representation for semantic segmentation of aerial images. In this way, shallower layers that capture fine-grained features can be directly refined with high-level semantic features from deeper layers. To this end, we need a solution that can sequentially and progressively embed high-level features into low-level features. Recurrent neural networks (RNNs) have gained significant attention for solving many challenging problems involving sequential data analysis and have recently been shown to be successful in several remote sensing applications Lyu16 ; Mournn ; Russwurm17 ; recnn ; Russwurm18 ; Lyu18 . Therefore, in this work, we make use of the idea of RNNs to achieve the sequential fusion of all-level features in our network.

3.2 Network Architecture

The proposed bidirectional network, RiFCN, has a forward stream and a backward stream, followed by a final pixel-wise classification layer. The forward stream is a CNN for feature extraction, which takes an input image and produces multi-level convolutional feature maps; while the backward stream exploits all the features available along the forward stream to enable high resolution prediction using recurrent connections. Fig. 3 shows the overall architecture of the proposed RiFCN.

Fig 3: Overall architecture of the proposed recurrent network in fully convolutional network (RiFCN) for semantic segmentation of aerial images. RiFCN is a bidirectional network, which has a forward stream and a backward stream, followed by a final pixel-wise classification layer. The forward stream is a CNN for feature extraction, which takes an input image and produces multi-level convolutional feature maps; while the backward stream incorporates autoregressive recurrent connections to hierarchically and progressively absorb abstract, high-level features and render pixel-wise, high resolution prediction. The latter can be considered a reverse feature fusion process.

3.2.1 Forward Stream

The forward stream of our network is mainly inspired by the philosophy of VGG-16 VGG , which is well known for its elegance and simplicity and, at the same time, yields nearly state-of-the-art features for image classification and good generalization properties. More specifically, the forward stream consists of 5 convolutional blocks (2 convolutional layers per block). Note that we do not initialize the training of the network from weights trained for classification on a large natural image data set such as ImageNet, as such pre-trained models are not suitable for multi-channel images (e.g., R-G-B-IR). In addition, there are no fully connected layers, in order to significantly reduce the number of trainable parameters and, at the same time, retain higher resolution feature maps.

We make use of convolutional filters with a very small receptive field of 3×3, rather than larger ones such as 5×5 or 7×7. That is because, as reported in VGG , 3×3 convolutional filters are the smallest kernels that can capture patterns in various directions (e.g., center, up/down, and left/right); they also increase the nonlinearity inside the network and thus make it more discriminative compared to larger filters. The spatial padding of the convolutional layers is chosen such that the spatial resolution of the feature maps is preserved after convolution, i.e., it is 1 pixel in our network; the convolution stride is fixed to 1 pixel. Spatial pooling is carried out by 4 max-pooling layers, which follow the first four convolutional blocks. Max-pooling is performed over a 2×2 pixel window with stride 2.

In the forward stream of RiFCN, convolutional layers within the same block have the same number of filters, while the number of filters increases in deeper blocks, roughly doubling after each max-pooling layer; this is meant to preserve the time complexity per layer as far as possible. We use 64 filters for the first two convolutional layers, 128 for the following two, 256 for the fifth and sixth, 512 for the seventh and eighth, and 1024 for the last two convolutional layers. All convolutional layers in the forward stream are followed by a rectified linear unit (ReLU) nonlinearity. In addition, note that the forward stream of our network is extremely flexible, in that it can be replaced and modified in various ways, for example, by using other classification network architectures (VGG-19 VGG , ResNet ResNet , etc.).
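Assuming a hypothetical 256×256 input (any size divisible by 16 works the same way), the block/pooling layout described above can be traced as follows; the deepest feature maps come out 16 times smaller than the input, since pooling happens four times.

```python
# Filters per conv layer in each of the five blocks, as given in the text.
blocks = [64, 128, 256, 512, 1024]
h = w = 256  # illustrative input resolution, not prescribed by the paper

shapes = []
for level, channels in enumerate(blocks, start=1):
    shapes.append((level, h, w, channels))
    if level < len(blocks):      # 2x2/stride-2 pooling after the first four blocks
        h, w = h // 2, w // 2

# Four poolings halve the resolution four times: 256 / 2**4 = 16.
assert shapes[-1] == (5, 16, 16, 1024)
```

Doubling the channel count at each halving of resolution keeps the per-layer cost roughly constant, which is the "preserve time complexity" rationale borrowed from VGG.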

Fig 4: Details of the backward stream of RiFCN based on autoregressive recurrent connection. During the backward pass, in each level, it takes feature maps generated by the forward pass and fused feature maps from the previous level as inputs to produce new features of the current level.

3.2.2 Backward Stream

The forward stream is used to extract features of images at different levels by interleaving convolutional and max-pooling layers, i.e., spatially shrinking the feature maps. Pooling is necessary to allow the aggregation of information over large areas of the feature maps and, more fundamentally, to make network training computationally feasible. It, however, reduces the resolution of the high-level feature maps in deeper layers. Therefore, to provide dense pixel-wise predictions, we need a way to refine those coarse, pooled feature maps.

A straightforward idea is to deconvolve all the feature maps to the desired full resolution and stack them together, resulting in a concatenated feature representation that can be used to predict segmentation maps FCN . Although this kind of approach is capable of semantically segmenting images using features from different levels, the inner connections between different-level features are missing. Furthermore, in this approach, fusing earlier layers easily results in diminishing returns, with respect to both visual and quantitative improvements FCN . Another way SegNet is to use a deconvolutional network that takes as input only the coarse, high-level feature maps. This approach does not exploit the low-level features that help to generate sharp, detailed boundaries for high resolution prediction.

In this work, we propose a novel feature fusion architecture based on a recurrent structure to hierarchically and progressively absorb high-level semantic features and render pixel-wise, high resolution predictions. It incorporates autoregressive recurrent connections into predictions from deep to shallow layers, which is opposite to the forward stream. Fig. 4 shows the detailed process of the backward stream in our RiFCN. Formally, given an input image of size $W \times H$, the output feature maps of the forward stream have size $W/16 \times H/16$, i.e., the output features are reduced by a factor of 16. Let $W_l \times H_l$ be the resolution of the feature maps at feature level $l$, and let $\bm{F}_l$ denote the 3D tensor of feature maps generated by the $l$-th convolutional block of the forward stream. During the backward stream, at each level $l$, it takes $\bm{F}_l$ and the fused feature maps $\bm{H}_{l+1}$ from the previous level as inputs to produce the new features $\bm{H}_l$ of the current level as

$$\bm{H}_l = \phi(\bm{F}_l, \bm{H}_{l+1}), \quad (1)$$

where $\phi(\cdot)$ is a function for fusing different feature maps at different resolutions. We define it as follows:

$$\phi(\bm{F}_l, \bm{H}_{l+1}) = f\big(\bm{W}_l \ast \bm{F}_l + \tilde{\bm{W}}_l \circledast_s \bm{H}_{l+1}\big), \quad (2)$$

where $\ast$ represents the convolution operation and $\circledast_s$ denotes the deconvolution operation with stride $s$ (cf. Fig. 5). $\bm{W}_l$ and $\tilde{\bm{W}}_l$ are the weights of the convolution and deconvolution, respectively, and $f(\cdot)$ is a nonlinear function, for which we use ReLU in this work. Note that Eq. (2) is a general expression; the first term can reduce to $\bm{F}_l$ itself, in which case $\bm{W}_l$ is a kernel whose center element is 1 and all other elements are 0.

From Eq. (2), we can clearly see that multiple autoregressive recurrent connections ensure that final fused feature maps have multiple paths from deep to shallow layers, which facilitates effective information exchanges. In addition, this top-down backward stream is able to propagate semantic information back to fine-grained details for the final segmentation prediction.
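One fusion step of the backward stream can be sketched in 1-D with illustrative numbers. This is a deliberate simplification, not the trained operation: the learned stride-2 deconvolution is replaced by nearest-neighbour upsampling, and the learned convolution on the forward features by the identity kernel mentioned above.

```python
def upsample2(h):
    """Stand-in for a learned deconvolution with stride 2 (nearest neighbour)."""
    return [v for v in h for _ in range(2)]

def relu(x):
    return [max(0.0, v) for v in x]

def fuse(f_l, h_next):
    """Simplified fusion step: H_l = ReLU(F_l + upsample(H_{l+1}))."""
    up = upsample2(h_next)
    return relu([a + b for a, b in zip(f_l, up)])

f_l = [0.5, -2.0, 1.0, 0.0]   # fine, boundary-aware features at level l
h_next = [1.0, -1.0]          # coarse, semantic features at level l+1
h_l = fuse(f_l, h_next)       # fused features at the finer resolution
assert len(h_l) == len(f_l)
```

Chaining this step from the deepest level to the shallowest is what lets high-level semantics flow back into the boundary-aware shallow maps before the final classification layer.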

Fig 5: Deconvolution used in the recurrent connection of the backward stream.

3.3 Bidirectional Network Learning

Denote by $\mathcal{D} = \{(\bm{X}_n, \bm{Y}_n)\}_{n=1}^{N}$ the training set with $N$ sample pairs, where $\bm{X}_n$ and $\bm{Y}_n$ are the input image and the corresponding ground truth with $P$ pixels, respectively. For notational simplicity, we subsequently drop the subscript $n$ and consider each image independently. We denote by $\bm{\theta}$ the parameters of the forward stream. Thus, the loss function of RiFCN can be expressed as

$$\mathcal{L}(\bm{\theta}, \bm{W}, \tilde{\bm{W}}) = -\sum_{p=1}^{P} \sum_{k=1}^{K} \mathbb{1}[y_p = k] \log s_k(p), \quad (3)$$

where $y_p \in \{1, \dots, K\}$ and $K$ is the number of classes, $\{1, \dots, K\}$ represents the label set, and $s_k(p)$ is the confidence score of the prediction, measuring how likely pixel $p$ belongs to the $k$-th class. Note that $\bm{W}$ and $\tilde{\bm{W}}$ are both parameters of the backward stream. Since network learning in the forward stream (i.e., the updating of $\bm{\theta}$) is similar to that of a CNN for classification tasks, in this section we mainly focus on the network inference of the backward stream.

The learning of the backward stream starts with computing the gradient of the loss function with respect to the output $\bm{H}_1$. This gradient is then propagated backwards, level by level, from output to input, to update the parameters of the network. The recurrence used to propagate this gradient through the network can be written as

$$\frac{\partial \mathcal{L}}{\partial \bm{H}_{l+1}} = \frac{\partial \mathcal{L}}{\partial \bm{H}_l} \frac{\partial \bm{H}_l}{\partial \bm{H}_{l+1}}, \quad (4)$$

and starts at

$$\frac{\partial \mathcal{L}}{\partial \bm{H}_1} = \frac{\partial \mathcal{L}}{\partial \bm{s}} \frac{\partial \bm{s}}{\partial \bm{H}_1}. \quad (5)$$

From Eq. (4), we can see that only the parameters that connect $\bm{H}_{l+1}$ to $\bm{H}_l$ play a role in propagating the error down the network.

The gradients of the loss function with respect to the parameters can then be obtained by summing the parameter gradients of each level (or accumulating them while propagating the error):

$$\frac{\partial \mathcal{L}}{\partial \bm{W}} = \sum_{l} \frac{\partial \mathcal{L}}{\partial \bm{H}_l} \frac{\partial \bm{H}_l}{\partial \bm{W}_l}, \qquad \frac{\partial \mathcal{L}}{\partial \tilde{\bm{W}}} = \sum_{l} \frac{\partial \mathcal{L}}{\partial \bm{H}_l} \frac{\partial \bm{H}_l}{\partial \tilde{\bm{W}}_l}. \quad (6)$$

The momentum method is commonly used to help accelerate stochastic gradient descent in the relevant direction and dampen oscillations by adding a fraction $\gamma$ of the previous update to the current one. When updating the weights $\bm{W}$ and $\tilde{\bm{W}}$ with the momentum method, the update rules can be written as

$$\bm{V}_{t+1} = \gamma \bm{V}_t - \eta \frac{\partial \mathcal{L}}{\partial \bm{W}_t}, \qquad \bm{W}_{t+1} = \bm{W}_t + \bm{V}_{t+1}, \quad (7)$$

where $\eta$ is the learning rate and $\gamma$ is the momentum.
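As a numerical illustration of the momentum update, consider a 1-D quadratic loss L(w) = 0.5 w², whose gradient is simply w; the values of η and γ below are hand-picked for the demo, not the ones used in our experiments.

```python
eta, gamma = 0.1, 0.9   # illustrative learning rate and momentum
w, v = 1.0, 0.0         # weight and its velocity

history = []
for _ in range(100):
    grad = w                     # dL/dw for L(w) = 0.5 * w**2
    v = gamma * v - eta * grad   # accumulate a decaying average of gradients
    w = w + v                    # take the smoothed step
    history.append(w)

# The iterates spiral in towards the minimum at w = 0.
assert abs(w) < abs(history[0])
```

With momentum, successive gradients pointing in the same direction reinforce each other, while oscillating components partially cancel, which is the acceleration/damping behaviour described above.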

Fig 6: An excerpt from the ISPRS Potsdam data set for semantic segmentation. Legend – white: impervious surfaces, blue: buildings, cyan: low vegetation, green: trees, yellow: cars, red: clutter/background.
Fig 7: Two tiles (left) and their corresponding ground truths (right) from the Inria Aerial Image Labeling data set for semantic segmentation. The images in this data set convey dissimilar urban settlements, ranging from densely populated areas to alpine towns.

4 Experiments

4.1 Data Sets

4.1.1 ISPRS Potsdam

The ISPRS Potsdam Semantic Labeling data set is an open benchmark provided online² for semantic segmentation of high resolution remote sensing images. The data set consists of 38 ortho-rectified aerial IRRGB image tiles (6000 × 6000 px) with a 5 cm spatial resolution, together with corresponding DSMs generated by dense image matching, taken over the city of Potsdam, Germany. A comprehensive, manually annotated, pixel-wise segmentation mask is provided as ground truth for 24 tiles, which are the tiles we work on. We randomly selected 6 tiles (tile IDs: 2_11, 3_11, 4_12, 5_12, 7_10, 7_12) from the 24 annotated images and used them as the test set in our experiments. The input to the networks contains both IRRG and nDSM, and all results reported on this data set refer to the aforementioned test set. Fig. 6 shows an excerpt from the ISPRS Potsdam data set.

² http://www2.isprs.org/commissions/comm3/wg4/2d-sem-label-potsdam.html

4.1.2 Inria Aerial Image Labeling Data Set

The Inria Aerial Image Labeling Data Set has been proposed recently and is specially designed for advancing technologies in automatic pixel-wise labeling of aerial imagery. It comprises 360 ortho-rectified aerial RGB images (5000 × 5000 px) at 30 cm spatial resolution; each tile covers a surface of 1500 × 1500 m. The images cover ten cities and an overall area of 810 km². They convey dissimilar urban settlements, ranging from densely populated areas (e.g., Chicago, USA) to alpine towns (e.g., Tyrol, Austria). Manually annotated ground truth is provided for only five cities, namely Austin, Chicago, Kitsap County, Western Tyrol, and Vienna. The ground truth is binary and indicates whether a pixel belongs to the building or the non-building class. For comparability, as suggested by the authors of the data set, we use images 6 to 36 of each city for training and images 1 to 5 for testing. Two examples from the Inria Aerial Image Labeling Data Set are exhibited in Fig. 7.

4.2 Network Training

Network training is based on the TensorFlow framework. We chose Nesterov Adam nadam2 ; nadam1 as the optimizer to train the network since, for this task, it shows much faster convergence than standard stochastic gradient descent (SGD) with momentum sgd or Adam adam . We fixed almost all parameters of Nesterov Adam as recommended in nadam2 , i.e., $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$, and a schedule decay of 0.004, and used a fairly small learning rate. All network weights are initialized with a Glorot uniform initializer Glorot_normal , which draws samples from a uniform distribution. We utilize softmax and sigmoid as the activation function of the last convolutional layer for multi-class and binary semantic segmentation, respectively. We make use of data augmentation to increase the number of training samples: the patches and the corresponding pixel-wise ground truth are transformed by horizontally and vertically flipping three-quarters of the patches. We train the network for 30 epochs and use early stopping to avoid overfitting; to monitor overfitting during training, we randomly select 10% of the training samples as the validation set. Furthermore, we exploit fairly small mini-batches of 8 image pairs because, in a sense, every pixel is a training sample. Finally, we train our network on a single NVIDIA GeForce GTX TITAN with 12 GB of GPU memory.

Table 1: Numerical results on the ISPRS Potsdam data set (per-class F1 scores, overall accuracy (OA), and mean F1, in %).

Method      | Imp Surf | Building | Low Veg | Tree  | Car   | Clutter | OA    | Mean
FCN         | 88.46    | 92.28    | 78.33   | 73.10 | 82.83 | 69.55   | 84.39 | 80.76
SegNet      | 88.53    | 91.90    | 79.68   | 76.04 | 86.51 | 61.16   | 84.68 | 80.64
RiFCN       | 90.10    | 92.23    | 81.94   | 79.29 | 88.91 | 69.71   | 86.59 | 83.70
FCN [e]     | 90.32    | 93.16    | 80.03   | 75.78 | 89.26 | 72.23   | 86.26 | 83.46
SegNet [e]  | 90.41    | 92.77    | 81.65   | 78.77 | 92.41 | 63.61   | 86.58 | 83.27
RiFCN [e]   | 91.74    | 93.02    | 83.71   | 81.90 | 93.73 | 72.18   | 88.30 | 86.05

  • [e] means evaluation on ground truths with eroded boundaries.
Fig 8: Example predictions of different models on the ISPRS Potsdam data set. Legend – white: impervious surfaces, blue: buildings, cyan: low vegetation, green: trees, yellow: cars, red: clutter/background.

4.3 ISPRS Potsdam Results

To evaluate the performance of different methods for semantic segmentation of aerial images, the F1 score and the overall accuracy are used as evaluation criteria. The F1 score can be calculated as follows:

$$F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}, \quad (8)$$

and

$$\mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN}, \quad (9)$$

where $TP$, $FP$, and $FN$ are the numbers of true positives, false positives, and false negatives, respectively. These metrics can be calculated from pixel-based confusion matrices per tile, or from an accumulated confusion matrix. The overall accuracy is the normalized trace of the confusion matrix.
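A worked example of these definitions on illustrative counts (not numbers from our experiments):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall (Eqs. (8)-(9))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy counts: 80 true positives, 20 false positives, 30 false negatives.
tp, fp, fn = 80, 20, 30
score = f1_score(tp, fp, fn)

# Equivalently, F1 = 2*TP / (2*TP + FP + FN) = 160/210 = 16/21.
assert abs(score - 2 * tp / (2 * tp + fp + fn)) < 1e-12
```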

Fig 9: Full prediction for tile ID 3_11. The mean F1 scores achieved by FCN, SegNet, and RiFCN are 82.49%, 80.36%, and 87.31%, respectively; the overall accuracies of these methods on this tile are 85.07%, 85.14%, and 87.31%, respectively. Legend – white: impervious surfaces, blue: buildings, cyan: low vegetation, green: trees, yellow: cars, red: clutter/background.

To verify the effectiveness of the proposed network, we perform comparisons against two state-of-the-art semantic segmentation networks, FCN and SegNet, which are the two most widely used models in semantic segmentation of aerial images. Note that we do not compare RiFCN with other networks from computer vision (e.g., PSPNet pspnet ), as they make use of some techniques (fully connected CRF, ResNet, etc.) that would lead to an unfair comparison. Table 1 presents results on the ISPRS Potsdam data set, where we can see that RiFCN significantly outperforms the other methods in both mean F1 score and overall accuracy. Compared to FCN, the proposed RiFCN increases the mean F1 score and overall accuracy by 2.94% and 2.20%, respectively; in comparison with SegNet, the increments in mean F1 score and overall accuracy are 3.06% and 1.91%, respectively. Moreover, it is worth noting that the proposed network achieves the best accuracy on small objects (e.g., cars). These comparisons indicate that the good performance of RiFCN can be ascribed to the proposed top-down backward stream, which effectively fuses multi-level features using autoregressive recurrent connections. Some semantic segmentation results on the ISPRS Potsdam data set are shown in Fig. 8, where we can see an improvement in visual quality from FCN and SegNet to RiFCN.

Fig 10: Confusion matrices of FCN (left), SegNet (middle), and RiFCN (right) for the ISPRS Potsdam data set.

In addition, following the evaluation protocol of the ISPRS Potsdam benchmark, we also report results based on an alternative ground truth (cf. the last three rows in Table 1), in which the borders have been eroded by a circle of 3 px radius so that the evaluation is tolerant to small errors at object edges. In Fig. 9, the full segmentation of image tile 3_11 is given, summarizing the classification of an entire tile.
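The border-tolerant evaluation amounts to ignoring every pixel that lies within a small radius of a class boundary. A possible sketch with NumPy and SciPy follows; the helper name `boundary_tolerant_mask` and the 4-neighbour boundary test are our own illustration, not the official ISPRS evaluation script:

```python
import numpy as np
from scipy import ndimage

def boundary_tolerant_mask(gt, radius=3):
    """Return a boolean mask that is True where pixels still count for
    evaluation, i.e. farther than `radius` px from any class boundary."""
    # A pixel lies on a boundary if a 4-neighbour carries a different label.
    boundary = np.zeros(gt.shape, dtype=bool)
    diff_v = gt[:-1, :] != gt[1:, :]
    diff_h = gt[:, :-1] != gt[:, 1:]
    boundary[:-1, :] |= diff_v
    boundary[1:, :] |= diff_v
    boundary[:, :-1] |= diff_h
    boundary[:, 1:] |= diff_h
    # Grow the boundary with a disk of the given radius and invert.
    yy, xx = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = yy ** 2 + xx ** 2 <= radius ** 2
    return ~ndimage.binary_dilation(boundary, structure=disk)
```

Masking both prediction and ground truth with the returned array before accumulating the confusion matrix reproduces the boundary-eroded scores.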

Confusion matrices of the different networks for the ISPRS Potsdam data set can be found in Fig. 10. We can clearly see that the proposed RiFCN differentiates similar classes better than FCN and SegNet.

| Method | Metric | Austin | Chicago | Kitsap County | Western Tyrol | Vienna | Overall |
|---|---|---|---|---|---|---|---|
| FCN | IoU | 47.66 | 53.62 | 33.70 | 46.86 | 60.60 | 53.82 |
| | Acc. | 92.22 | 88.59 | 98.58 | 95.83 | 88.72 | 92.79 |
| FCN-Skip | IoU | 57.87 | 61.13 | 46.43 | 54.91 | 70.51 | 62.97 |
| | Acc. | 93.85 | 90.54 | 98.84 | 96.47 | 91.48 | 94.24 |
| FCN-MLP | IoU | 61.20 | 61.30 | 51.50 | 57.95 | 72.13 | 64.67 |
| | Acc. | 94.20 | 90.43 | 98.92 | 96.66 | 91.87 | 94.42 |
| SegNet | IoU | 74.81 | 52.83 | 68.06 | 65.68 | 72.90 | 70.14 |
| | Acc. | 92.52 | 98.65 | 97.28 | 91.36 | 96.04 | 95.17 |
| Multi-task SegNet* | IoU | 76.76 | 67.06 | 73.30 | 66.91 | 76.68 | 73.00 |
| | Acc. | 93.21 | 99.25 | 97.84 | 91.71 | 96.61 | 95.73 |
| Mask R-CNN | IoU | 65.63 | 48.07 | 54.38 | 70.84 | 64.40 | 59.53 |
| | Acc. | 94.09 | 85.56 | 97.32 | 98.14 | 87.40 | 92.49 |
| RiFCN | IoU | 76.84 | 67.45 | 63.95 | 73.19 | 79.18 | 74.00 |
| | Acc. | 96.50 | 91.76 | 99.14 | 97.75 | 93.95 | 95.82 |

* This method uses extra supervision information for network training.

Table 2: Numerical Results on the Inria Aerial Image Labeling Data Set.

4.4 Inria Aerial Image Labeling Data Set Results

To quantify performance, two evaluation measures are considered on this data set: intersection over union (IoU) and overall accuracy. The IoU, also known as the Jaccard index, is defined as follows:

$$\mathrm{IoU} = \frac{TP}{TP+FP+FN}. \quad (10)$$

The IoU measures how close two regions are to each other on a scale between 0 and 1 – a value of 0 means the regions do not overlap, and a value of 1 means the regions are identical. We mainly focus on IoU in our experiments, as it has become the standard evaluation criterion for binary semantic segmentation tasks. In addition, given that the category distribution is imbalanced (a large fraction of the image area belongs to the background/non-building class), overall accuracy is not informative enough, since the building category can be largely ignored while the score remains high.
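For binary building masks, Eq. (10) reduces to a few lines of NumPy. This is a minimal sketch; the convention of returning 1.0 when both masks are empty is our choice:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union (Jaccard index) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: define the regions as identical
    return np.logical_and(pred, gt).sum() / union
```

The class-imbalance argument is easy to see with this helper: predicting only background on a tile that is 95% background yields 95% overall accuracy but an IoU of 0 for the building class, which is why IoU is the more telling criterion here.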

Fig 11: Segmentation results of two large-scale regions in Chicago (top) and Kitsap County (bottom). Colored areas denote building footprints, and different colors indicate different building instances, making the performance of the network visible at the instance level.

We compare the proposed RiFCN with the state of the art in the literature [4, 15, 16, 28, 54, 55, 56] on the Inria Aerial Image Labeling Data Set (cf. Table 2). As on the ISPRS Potsdam data set, FCN [15] and SegNet [16, 28] are included. Moreover, [4] introduced FCN-MLP, which upsamples and concatenates all intermediate feature maps of the convolutional component of an FCN and makes use of an MLP to reduce the concatenated features for predicting segmentation maps. The authors of [4] also provide results for FCN-Skip, which creates multiple segmentation maps from different convolutional layers (at different resolutions), interpolates them to match the highest resolution, and adds the results to create the final semantic segmentation map. Bischke et al. [54] propose a cascaded multi-task learning SegNet (which we will call Multi-task SegNet hereafter) that addresses the problem of building segmentation by exploiting not only semantic segmentation masks but also geometric information (i.e., signed distance), aiming at preserving semantic boundaries in segmentation maps as far as possible. It is worth noting that, compared to our RiFCN, Multi-task SegNet uses extra supervision information. He et al. [55] propose a general framework, called Mask Region-based CNN (Mask R-CNN), which is capable of efficiently detecting objects in an image while simultaneously generating a segmentation mask for each detected instance. Mask R-CNN has proved its efficiency in computer vision; later, the author of [56] made use of Mask R-CNN for building segmentation on satellite images.

Table 2 shows the results of the aforementioned methods on the Inria Aerial Image Labeling Data Set. The proposed RiFCN outperforms FCN, FCN-Skip, and FCN-MLP, with increments in overall IoU of 20.18%, 11.03%, and 9.33%, respectively. This indicates that our feature fusion strategy (i.e., the backward stream of our network) is more powerful and effective. In addition, compared with SegNet, the improvement in overall IoU achieved by RiFCN is 3.86%. It is noteworthy that our network can even outperform Multi-task SegNet, which uses more supervision information for network learning. When comparing RiFCN against the recently proposed Mask R-CNN, we observe an improvement of 14.47% in IoU. Overall, the results show that the way a high resolution segmentation map is produced plays a crucial role in semantic segmentation tasks, and our method, which makes use of autoregressive recurrent connections in a bidirectional network architecture, offers better results than FCN-based and encoder-decoder methods. Fig. 11 shows segmentation results of two large-scale regions in Chicago and Kitsap County, where different colors indicate different building instances.

Furthermore, in our experiments, we noticed some inaccuracies in the ground truth data, such as those shown in Fig. 12. These inaccuracies hinder an accurate evaluation of segmentation methods.

Fig 12: Examples of ground truth labeling errors in the Inria Aerial Image Labeling Data Set.

5 Conclusion

In this paper, we propose a novel network architecture, RiFCN, for semantic segmentation of high resolution remote sensing data. The proposed network is composed of two parts, namely a forward stream and a backward stream. The forward stream is responsible for extracting multi-level convolutional feature maps from the input, while the backward stream, a reverse process, uses a series of autoregressive recurrent connections to hierarchically and progressively absorb high-level semantic features and render pixel-wise, high resolution predictions. In this way, boundary-aware feature maps and high-level features are orderly embedded into the framework. Experiments demonstrate that the feature fusion strategy of the proposed RiFCN performs favorably against others (e.g., FCN, FCN-Skip, and FCN-MLP). In addition, compared to other network architectures such as SegNet and Mask R-CNN, the proposed network offers better segmentation results for high resolution aerial imagery.

Acknowledgements

The authors would like to thank the ISPRS for making the Potsdam data set available. They would also like to thank Inria Sophia Antipolis – Méditerranée for providing the Inria Aerial Image Labeling Data Set.

This work is jointly supported by the China Scholarship Council, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No [ERC-2016-StG-714087], Acronym: So2Sat), and the Helmholtz Association under the framework of the Young Investigators Group "SiPEO" (VH-NG-1018, www.sipeo.bgu.tum.de).

References


  • (1) D. Marmanis, K. Schindler, J. D. Wegner, S. Galliani, M. Datcu, U. Stilla, Classification with an edge: Improving semantic image segmentation with boundary detection, ISPRS Journal of Photogrammetry and Remote Sensing 135 (January) (2018) 158–172.
  • (2) N. Audebert, B. L. Saux, S. Lefèvre, Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks, ISPRS Journal of Photogrammetry and Remote Sensing 140 (June) (2018) 20–32.
  • (3) D. Marcos, M. Volpi, B. Kellenberger, D. Tuia, Land cover mapping at very high resolution with rotation equivariant CNNs: Towards small yet accurate models, ISPRS Journal of Photogrammetry and Remote Sensing.
  • (4) E. Maggiori, Y. Tarabalka, G. Charpiat, P. Alliez, High-resolution aerial image labeling with convolutional neural networks, IEEE Transactions on Geoscience and Remote Sensing 55 (12) (2017) 7092–7103.
  • (5) K. Liu, G. Mattyus, Fast multiclass vehicle detection on aerial images, IEEE Geoscience and Remote Sensing Letters 12 (9) (2015) 1938–1942.
  • (6) N. Audebert, B. L. Saux, S. Lefèvre, Segment-before-detect: Vehicle detection and classification through semantic segmentation of aerial images, Remote Sensing 9 (4) (2017) 368.
  • (7) M. D. Mura, J. A. Benediktsson, B. Waske, L. Bruzzone, Morphological attribute profiles for the analysis of very high resolution images, IEEE Transactions on Geoscience and Remote Sensing 48 (10) (2010) 3747–3762.
  • (8) P. Tokarczyk, J. D. Wegner, S. Walk, K. Schindler, Features, color spaces, and boosting: New insights on semantic classification of remote sensing images, IEEE Transactions on Geoscience and Remote Sensing 53 (1) (2015) 280–295.
  • (9) A. Krizhevsky, I. Sutskever, G. E. Hinton, ImageNet classification with deep convolutional neural networks, in: Advances in Neural Information Processing Systems (NIPS), 2012.
  • (10) K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv:1409.1556.
  • (11) G. Huang, Z. Liu, L. van der Maaten, K. Q. Weinberger, Densely connected convolutional networks, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (12) R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • (13) R. Girshick, Fast R-CNN, in: IEEE International Conference on Computer Vision (ICCV), 2015.
  • (14) S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, in: Advances in Neural Information Processing Systems (NIPS), 2015.
  • (15) J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for semantic segmentation, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • (16) V. Badrinarayanan, A. Handa, R. Cipolla, SegNet: A deep convolutional encoder-decoder architecture for robust semantic pixel-wise labelling, arXiv:1505.07293.
  • (17) Y. Yuan, L. Mou, X. Lu, Scene recognition by manifold regularized deep learning architecture, IEEE Transactions on Neural Networks and Learning Systems 26 (10) (2015) 2222–2233.
  • (18) X. X. Zhu, D. Tuia, L. Mou, G.-S. Xia, L. Zhang, F. Xu, F. Fraundorfer, Deep learning in remote sensing: A comprehensive review and list of resources, IEEE Geoscience and Remote Sensing Magazine 5 (4) (2017) 8–36.
  • (19) L. Mou, X. X. Zhu, M. Vakalopoulou, K. Karantzalos, N. Paragios, B. L. Saux, G. Moser, D. Tuia, Multitemporal very high resolution from space: Outcome of the 2016 IEEE GRSS Data Fusion Contest, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 10 (8) (2017) 3435–3447.
  • (20) L. Mou, X. X. Zhu, Spatiotemporal scene interpretation of space videos via deep neural network and tracklet analysis, in: IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2016.
  • (21) L. Mou, X. X. Zhu, IM2HEIGHT: Height estimation from single monocular imagery via fully residual convolutional-deconvolutional network, arXiv:1802.10249.
  • (22) X. X. Zhu, D. Tuia, L. Mou, G.-S. Xia, L. Zhang, F. Xu, F. Fraundorfer, Deep learning in remote sensing: A review, arXiv:1710.03959.
  • (23) L. Mou, M. Schmitt, Y. Wang, X. X. Zhu, A CNN for the identification of corresponding patches in SAR and optical imagery of urban scenes, in: Joint Urban Remote Sensing Event (JURSE), 2017.
  • (24) J. Sherrah, Fully convolutional networks for dense semantic labelling of high-resolution aerial imagery, arXiv:1606.02585.
  • (25) D. Marmanis, J. D. Wegner, S. Galliani, K. Schindler, M. Datcu, U. Stilla, Semantic segmentation of aerial images with an ensemble of CNNs, in: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2016.
  • (26) M. Kampffmeyer, A.-B. Salberg, R. Jenssen, Semantic segmentation of small objects and modeling of uncertainty in urban remote sensing images using deep convolutional neural networks, in: IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2016.
  • (27) N. Audebert, B. L. Saux, S. Lefèvre, Semantic segmentation of earth observation data using multimodal and multi-scale deep networks, in: Asian Conference on Computer Vision (ACCV), 2016.
  • (28) V. Badrinarayanan, A. Kendall, R. Cipolla, SegNet: A deep convolutional encoder-decoder architecture for image segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (12) (2017) 2481–2495.
  • (29) M. Volpi, D. Tuia, Dense semantic labeling of subdecimeter resolution images with convolutional neural networks, IEEE Transactions on Geoscience and Remote Sensing 55 (2) (2017) 881–893.
  • (30) L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. L. Yuille, DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs, arXiv:1606.00915.
  • (31) K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • (32) Z. Wu, C. Shen, A. van den Hengel, High-performance semantic segmentation using very deep fully convolutional networks, arXiv:1604.04339.
  • (33) H. Zhao, J. Shi, X. Qi, X. Wang, J. Jia, Pyramid scene parsing network, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (34) P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, P. Manzagol, Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion, Journal of Machine Learning Research 11 (2010) 3371–3408.
  • (35) L. Mou, P. Ghamisi, X. X. Zhu, Unsupervised spectral–spatial feature learning via deep residual conv–deconv network for hyperspectral image classification, IEEE Transactions on Geoscience and Remote Sensing 56 (1) (2018) 391–406.
  • (36) S. Hong, H. Noh, B. Han, Decoupled deep neural network for semi-supervised semantic segmentation, in: Advances in Neural Information Processing Systems (NIPS), 2015.
  • (37) O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2015.
  • (38) G. Lin, A. Milan, C. Shen, I. Reid, RefineNet: Multi-path refinement networks for high-resolution semantic segmentation, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • (39) Y. Hua, L. Mou, X. X. Zhu, LAHNet: A convolutional neural network fusing low- and high-level features for aerial scene classification, in: IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2018.
  • (40) M. D. Zeiler, R. Fergus, Visualizing and understanding convolutional networks, in: European Conference on Computer Vision (ECCV), 2014.
  • (41) A. Mahendran, A. Vedaldi, Understanding deep image representations by inverting them, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  • (42) B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, A. Torralba, Learning deep features for discriminative localization, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • (43) H. Lyu, H. Lu, L. Mou, Learning a transferable change rule from a recurrent neural network for land cover change detection, Remote Sensing 8 (6) (2016) 506.
  • (44) L. Mou, P. Ghamisi, X. Zhu, Deep recurrent neural networks for hyperspectral image classification, IEEE Transactions on Geoscience and Remote Sensing 55 (7) (2017) 3639–3655.
  • (45) M. Russwurm, M. Körner, Temporal vegetation modelling using long short-term memory networks for crop identification from medium-resolution multi-spectral satellite images, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017.
  • (46) L. Mou, L. Bruzzone, X. X. Zhu, Learning spectral-spatial-temporal features via a recurrent convolutional neural network for change detection in multispectral imagery, arXiv:1803.02642.
  • (47) M. Russwurm, M. Körner, Multi-temporal land cover classification with sequential recurrent encoders, ISPRS International Journal of Geo-Information 7 (4) (2018) 129.
  • (48) H. Lyu, H. Lu, L. Mou, W. Li, J. Wright, X. Li, X. Li, X. X. Zhu, J. Wang, L. Yu, P. Gong, Long-term annual mapping of four cities on different continents by applying a deep information learning method to landsat data, Remote Sensing 10 (3) (2018) 471.
  • (49) T. Dozat, Incorporating Nesterov momentum into Adam, http://cs229.stanford.edu/proj2015/054_report.pdf, online.
  • (50) I. Sutskever, J. Martens, G. Dahl, G. Hinton, On the importance of initialization and momentum in deep learning, in: International Conference on Machine Learning (ICML), 2013.
  • (51) Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, L. Jackel, Backpropagation applied to handwritten zip code recognition, Neural Computation 1 (4) (1989) 541–551.
  • (52) D. P. Kingma, J. L. Ba, Adam: A method for stochastic optimization, in: International Conference on Learning Representations (ICLR), 2015.
  • (53) X. Glorot, Y. Bengio, Understanding the difficulty of training deep feedforward neural networks, in: International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
  • (54) B. Bischke, P. Helber, J. Folz, D. Borth, A. Dengel, Multi-task learning for segmentation of building footprints with deep neural networks, arXiv:1709.05932.
  • (55) K. He, G. Gkioxari, P. Dollár, R. Girshick, Mask R-CNN, in: IEEE International Conference on Computer Vision (ICCV), 2017.
  • (56) S. Ohleyer, Building segmentation on satellite images, https://project.inria.fr/aerialimagelabeling/files/2018/01/fp_ohleyer_compressed.pdf, online.