Accurate building maps play an important role in a wide range of applications, such as urban planning and 3D city modeling. Nowadays, the large amounts of increasingly available remote sensing (RS) images with very high resolution (VHR) up to half a meter provide abundant data sources to generate such accurate building maps. However, manually delineating buildings from huge volumes of VHR-RS images is unfeasible; hence, there is an urgent demand for automatic approaches to detecting buildings in VHR-RS images.
Over the past years, many studies have been devoted to automatic building detection, e.g., [1, 2, 3, 4, 5, 6, 7]. Among them, one main stream exploits the discriminative properties of buildings in RS images, e.g., from the aspects of spectrum [1], texture [2, 3] and local structural or morphological features [4, 5, 6]. These methods perform well on detecting buildings from mid-/high-resolution RS images, but dramatically lose their efficiency on RS images of half-meter resolution. The performance decrease is largely due to the fact that, in VHR-RS images, textural and spectral information lacks the discriminative power to distinguish buildings. Moreover, most of these approaches are incapable of providing accurate boundaries of buildings, which are particularly desirable for the precise mapping of buildings. Another stream of state-of-the-art building detection approaches attempts to detect buildings by learning an off-the-shelf parameterized model, e.g., convolutional neural networks (CNNs), with manually labeled samples [7, 8]. Despite the high performance of learning-based methods, especially those based on CNNs, their performance heavily relies on a considerable amount of well-annotated training samples, and thus they have very limited generalization capability beyond the training domain.
This paper presents a new method for accurately detecting buildings in VHR-RS images, by computing the geometric saliency of building structures. Our work is inspired by the observation that, in VHR-RS images, buildings are always more distinguishable in geometry (both local and global) than in texture or spectrum. More precisely, we first propose to represent VHR-RS images with a mid-level geometrical representation, by exploiting junctions that locally depict anisotropic geometrical structures of images. We then derive the saliency of geometric structures on buildings, by considering both the probability of each junction, which measures its saliency relative to its surroundings, and the relationships between junctions. This stage encodes both local and semi-global geometric saliency of buildings in images. Finally, the geometric building index (GBI) of the whole image is measured by integrating the derived geometric saliency.
In contrast to existing building indexes, e.g., [1, 2, 3, 4, 5, 6], our method results in fewer redundant non-building areas and can provide accurate contours of buildings, thanks to the geometric saliency computed from a mid-level geometrical representation. As we shall see in Section 3, our method achieves state-of-the-art performance on building detection (all results are available at http://captain.whu.edu.cn/project/geosay.html) and meanwhile shows promising generalization power across different datasets, especially in comparison with learning-based approaches.
2.1 A mid-level geometric representation of images
Let $u$ denote an $N$-channel VHR-RS image defined on the image grid $\Omega$. For imagery in panchromatic format, all the geometrical information is contained in the single-channel image. For a multi-spectral image, the main geometrical structures can be computed from its energy image or from its first PCA component. In this work, we concentrate on satellite images with (R,G,B) channels, so the analysis of geometrical information is based on the luminance channel.
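As a minimal sketch of this preprocessing step, the luminance channel can be obtained by collapsing the three color channels; the exact weighting is not specified above, so a plain channel average is assumed here:

```python
import numpy as np

def luminance(img_rgb):
    """Collapse an (H, W, 3) RGB image to a single luminance channel.

    The geometric analysis in the paper operates on luminance only; the
    plain average used here is an assumption, as the exact weighting of
    the (R, G, B) channels is not given in the text.
    """
    img_rgb = np.asarray(img_rgb, dtype=np.float64)
    return img_rgb.mean(axis=2)
```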
This work proposes to use a mid-level geometric representation of VHR-RS images. For an image $u$, let $\mathcal{J} = \{j_1, \ldots, j_n\}$ denote all detected junctions, where each junction $j$ is encoded as $j = (\mathbf{p}, \{\theta_m\}_{m=1}^{M}, \{l_m\}_{m=1}^{M}, \varepsilon)$: $\mathbf{p}$ is the location of $j$, $\theta_m$ and $l_m$ are the orientations and lengths of its branches respectively, and $\varepsilon$ is the significance measured by the number of false alarms (NFA) associated with the junction $j$. These junctions can be well detected by the anisotropic-scale junction (ASJ) detector [10]. An example of detected junctions is displayed in Fig. 1 (b).
Observe that the mid-level geometric description includes junctions with different numbers of branches, e.g., L-, Y-, and X-junctions with 2, 3, and 4 branches respectively. According to empirical studies, junctions with more than 4 branches rarely occur on buildings, and we decompose all junctions into L-junctions to develop a building-centric geometric representation. Thus, we rewrite the junction format as
$$j = (\mathbf{p}, b_1, b_2, \varepsilon),$$
where $b_1$ and $b_2$ are the two branches of the L-junction, with $b_i = (\theta_i, l_i)$ for $i = 1, 2$. $\mathbf{p}$ is the center of the L-junction, and the significance $\varepsilon$ is inherited from the original junction. Fig. 1 (c) displays all the L-junctions, illustrating their centers with red dots.
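The decomposition above can be sketched as follows: a junction with $M \geq 2$ branches yields one L-junction per pair of branches, each keeping the parent's center and significance. The dictionary field names (`p`, `branches`, `eps`) are illustrative; the ASJ detector's actual output format may differ.

```python
from itertools import combinations

def decompose_to_L(junction):
    """Split a junction with M >= 2 branches into all pairwise L-junctions.

    `junction` is a dict with keys 'p' (center), 'branches' (a list of
    (theta, length) tuples) and 'eps' (significance).  Each resulting
    L-junction keeps the parent's center and significance, as described
    in the text.
    """
    return [
        {'p': junction['p'], 'b1': b1, 'b2': b2, 'eps': junction['eps']}
        for b1, b2 in combinations(junction['branches'], 2)
    ]
```

For example, a Y-junction (3 branches) decomposes into 3 L-junctions and an X-junction (4 branches) into 6.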
2.2 Computing geometric saliency in VHR-RS images
In order to detect buildings, we need to derive geometric saliency from the mid-level geometric representation of VHR-RS images, so as to highlight geometric features inside buildings and suppress those outside buildings. To this end, we exploit the significance of both single geometrical primitives and pair-wise junctions.
First-order geometric saliency: For an image $u$, the significance $\varepsilon$ of each junction detected by the ASJ detector indicates the reliability of the junction appearing in $u$: the smaller $\varepsilon$ is, the more salient the detected junction will be. In addition, all detected junctions can be divided into two subsets, i.e., the junctions inside buildings, $\mathcal{J}_{in}$, and those outside buildings, $\mathcal{J}_{out}$. Given a junction $j$ with parameters $\omega = (\theta_1, \theta_2, l_1, l_2)$, the posterior probability, measuring the possibility of the event that a junction parameterized by $\omega$ is inside buildings, is derived by
$$P(j \in \mathcal{J}_{in} \mid \omega) = \frac{P(\omega \mid j \in \mathcal{J}_{in})\, P(j \in \mathcal{J}_{in})}{\sum_{s \in \{in,\, out\}} P(\omega \mid j \in \mathcal{J}_s)\, P(j \in \mathcal{J}_s)},$$
where the prior probabilities $P(j \in \mathcal{J}_{in})$, $P(j \in \mathcal{J}_{out})$ and the likelihoods $P(\omega \mid j \in \mathcal{J}_{in})$, $P(\omega \mid j \in \mathcal{J}_{out})$ can be estimated from a given dataset of buildings, e.g., the Spacenet-65 dataset as we shall see in Section 3. The first-order geometric saliency of a junction is then computed from this posterior probability.
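The posterior above is a direct application of Bayes' rule; a minimal sketch, assuming the likelihoods have already been evaluated at the junction's parameters:

```python
def posterior_inside(likelihood_in, likelihood_out, prior_in):
    """Bayes posterior that a junction with observed parameters lies
    inside a building.

    likelihood_in / likelihood_out: P(omega | inside) and P(omega | outside),
    evaluated at the junction's parameters omega.
    prior_in: P(inside), estimated from a labelled dataset such as
    Spacenet-65 (the complementary prior is 1 - prior_in).
    """
    prior_out = 1.0 - prior_in
    num = likelihood_in * prior_in
    den = num + likelihood_out * prior_out
    return num / den if den > 0 else 0.0
```

In practice the likelihoods would be estimated from histograms of junction parameters over annotated building masks; that estimation step is omitted here.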
Pairwise geometric saliency: When many junctions have centers very close to each other in a region, the probability that a building exists there (building saliency) is higher. Thus, pair-wise relationships of junctions are useful cues for deriving geometric saliency. In contrast with first-order saliency, pair-wise saliency encodes more global geometric information in images. Here, we use nearest neighbors to compute pair-wise saliency. For a junction $j$, its $K$-nearest neighbors ($K$-NN), denoted by $\mathcal{N}_K(j)$, are defined as the set of at most $K$ junctions whose centers lie within a distance $l_{\min}(j)$ of the center of $j$, where $l_{\min}(j)$ represents the minimal length of the branches of the junction $j$. An example of the $K$-NN graph for junctions is displayed in Fig. 1 (c). The pair-wise geometric saliency of a junction $j$ is then defined by aggregating the saliency of its neighbors, normalized by the number of neighbors $|\mathcal{N}_K(j)|$.
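The neighborhood rule above can be sketched as follows, assuming junction centers are given as 2-D points; the function and parameter names are illustrative:

```python
import numpy as np

def knn_neighbors(centers, i, k, l_min):
    """Indices of the K-nearest neighbours of junction i, restricted to
    junctions whose centre lies within l_min (the shortest branch length
    of junction i), following the neighbourhood rule described above."""
    centers = np.asarray(centers, dtype=float)
    d = np.linalg.norm(centers - centers[i], axis=1)
    d[i] = np.inf                      # exclude the junction itself
    order = np.argsort(d)              # nearest first
    return [int(j) for j in order[:k] if d[j] <= l_min]
```

The distance cap $l_{\min}(j)$ makes the neighborhood scale-adaptive: junctions with short branches (small structures) only pool saliency from very close neighbors.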
The geometric saliency of a VHR-RS image can thus be computed by combining the first-order and pair-wise saliency at each junction; an example is shown in Fig. 1 (d).
2.3 Geometric building index and building detection
Note that, given an L-junction $j$, its two branches uniquely define a parallelogram $\mathcal{P}(j)$. Our geometric building index (GBI) associates each pixel with a saliency measuring the possibility that the pixel belongs to a building, computed as the sum of the saliency of all junctions whose parallelograms contain the pixel. Thus, for a pixel $\mathbf{x}$ in $\Omega$, its corresponding GBI is calculated by:
$$\mathrm{GBI}(\mathbf{x}) = \sum_{j \in \mathcal{J}} S(j)\, \mathbb{1}\!\left[\mathbf{x} \in \mathcal{P}(j)\right],$$
where $\mathcal{J}$ is the list of junctions detected by the ASJ detector in the image, $S(j)$ is the geometric saliency of junction $j$, and $\mathbb{1}[\cdot]$ is an indicator function, which equals 1 if the pixel $\mathbf{x}$ is inside the parallelogram of junction $j$ and 0 otherwise. An illustration of the proposed GBI is shown in Fig. 1 (e).
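The accumulation step can be sketched as follows. Each junction is represented here by its center `p`, its two branch vectors `v1`, `v2` (assumed non-collinear), and its saliency `s`; these field names are illustrative. A pixel lies inside the parallelogram exactly when its coordinates in the branch basis both fall in $[0, 1]$.

```python
import numpy as np

def gbi_map(shape, junctions):
    """Accumulate junction saliency into a per-pixel geometric building
    index.  Each junction adds its saliency `s` to every pixel inside
    the parallelogram spanned by branch vectors `v1`, `v2` at center `p`.
    """
    H, W = shape
    gbi = np.zeros((H, W))
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    for j in junctions:
        A = np.column_stack([j['v1'], j['v2']])    # 2x2 branch basis
        ab = np.linalg.solve(A, (pix - j['p']).T)  # coords in that basis
        inside = np.all((ab >= 0) & (ab <= 1), axis=0)
        gbi += j['s'] * inside.reshape(H, W)
    return gbi
```

A production implementation would rasterize each parallelogram directly instead of testing every pixel against every junction, but the logic is the same.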
3 Experiments and Discussions
This section evaluates the proposed method and compares it with state-of-the-art methods [3, 6, 11, 8] on three public datasets that are used for validating building detection algorithms:
Spacenet-65 Dataset (https://amazonaws-china.com/cn/public-datasets/spacenet/) consists of 65 images extracted from WorldView-2 satellite imagery, with a spatial resolution of 0.5 m. This dataset covers buildings in both urban and rural areas.
Potsdam Dataset (http://www2.isprs.org/commissions/comm3/wg4/2d-sem-label-potsdam.html) contains images of 6000 x 6000 pixels with a spatial resolution of 0.05 m. Due to the very high resolution, buildings in Potsdam appear large and are distributed dispersively.
Massachusetts Buildings Dataset (https://www.cs.toronto.edu/~vmnih/data/) contains images of 1500 x 1500 pixels in the test subset, with a spatial resolution of 1 m.
To demonstrate the effectiveness of using geometric saliency in building detection, we compare our method with several state-of-the-art methods, including texture-based BASI [3], morphology-based MBI [6], local geometry-based PBI [11] and learning-based HF-FCN [8]. Note that BASI and PBI are designed for built-up area detection, while the others aim to detect the accurate shape of buildings. For HF-FCN, we directly use the model provided by the authors. For quantitative evaluation, as in [8, 12], the mean Average Precision (mAP) and F-score (also known as F-measure) are employed to measure the accuracy of detection.
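For reference, the F-score used in the comparisons combines precision and recall computed from detection counts; a minimal sketch:

```python
def f_score(tp, fp, fn, beta=1.0):
    """F-measure from true-positive, false-positive and false-negative
    counts; beta=1 gives the usual F1 score reported alongside mAP."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```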
3.1 Results and analysis
All results on the three datasets and detailed comparisons with different methods are available at http://captain.whu.edu.cn/project/geosay.html. Table 1 shows the mAP and F-score of the different building detection methods. It can be noted that the proposed GBI achieves the best performance in both mAP and F-score on the Spacenet-65 and Potsdam datasets, in the cases without training. When training samples are available, i.e., on the Massachusetts dataset, HF-FCN outperforms all the other methods, since the model is fully trained on that dataset. But the model severely overfits: it substantially loses its efficiency on the Spacenet-65 and Potsdam datasets, achieving very low mAP and F-scores there. This questions the generalization capability of learning-based methods. By contrast, although the prior probabilities of junctions are estimated from the Spacenet-65 dataset, the high performance on both the Potsdam and Massachusetts datasets indicates the strong generality of our method. Even under a significant change of resolution (varying from 0.5 m to 0.05 m), our method still performs better than the others.
Fig. 2 illustrates the building detection results on two sample images. The first image shows a case where buildings are distributed dispersively and many non-building objects exist. The two built-up area detection methods, PBI and BASI, extract not only the buildings but also their surroundings, and produce many failures. The texture-based BASI results in numerous false detections in textured regions such as roads and forest, and the local geometry-based PBI produces many false detections around buildings. Other methods such as MBI face the same problem, confusing rural roads with buildings. These phenomena suggest that the building indexes defined by these methods are not suitable for describing buildings in VHR-RS images. For the second image, BASI misses large parts of the three highlighted buildings with low-texture roofs, which indicates that texture-based methods are inappropriate for buildings with little texture. MBI detects most of the buildings but fails to extract the whole shape of the central building due to the imbalanced luminance of its roof. By contrast, such cases do not hamper the performance of our method, since junctions are located at the corners of buildings regardless of the texture or luminance of the buildings.
The proposed GBI is based on the geometric saliency in VHR-RS images, requires no annotated training samples for its computation, and is capable of preserving the whole geometric shapes of buildings with high performance. Such results are promising for mapping buildings in VHR-RS images. One limitation of the GBI is that, in VHR-RS images, some other man-made structures or objects (e.g., cars) may also exhibit salient geometrical structures, which may lead to false detections. To address this, prior information in the images, such as the ratio between object size and image resolution, can be used to suppress false alarms. In addition, it is also of great interest to incorporate different kinds of information to improve the detection accuracy of the positions and whole boundaries of buildings.
This paper proposes a geometric saliency-based method for detecting buildings in VHR-RS images. Compared with traditional saliency-based methods, our method measures the geometric saliency of buildings by leveraging meaningful geometric features that are specialized for describing buildings; compared with learning-based methods, our method is totally unsupervised and free of any training. Experiments on three public datasets demonstrate that the proposed method not only achieves a substantial performance improvement, but also generalizes well to data from broad domains. Moreover, the buildings detected by our method have clearer boundaries and fewer redundant cluttered areas than those of existing methods.
-  Y. Zha, J. Gao, and S. Ni, “Use of normalized difference built-up index in automatically mapping urban areas from TM imagery,” International Journal of Remote Sensing, vol. 24, no. 3, pp. 583–594, 2003.
-  M. Pesaresi, A. Gerhardinger, and F. Kayitakire, “A robust built-up area presence index by anisotropic rotation-invariant textural measure,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 1, no. 3, pp. 180–192, 2008.
-  Z. Shao, Y. Tian, and X. Shen, “BASI: A new index to extract built-up areas from high-resolution remote sensing images by visual attention model,” Remote Sensing Letters, vol. 5, no. 4, pp. 305–314, 2014.
-  B. Sirmacek and C. Unsalan, “Urban area detection using gabor features and spatial voting,” in 2009 IEEE 17th Signal Processing and Communications Applications Conference, 2009, pp. 812–815.
-  B. Sirmacek and C. Unsalan, “Urban area detection using local feature points and spatial voting,” IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 1, pp. 146–150, 2010.
-  X. Huang and L. Zhang, “Morphological building/shadow index for building extraction from high-resolution imagery over urban areas,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 1, pp. 161–172, 2012.
-  S. Saito, T. Yamashita, and Y. Aoki, “Multiple object extraction from aerial imagery with convolutional neural networks,” Electronic Imaging, vol. 2016, no. 10, pp. 1–9, 2016.
-  T. Zuo, J. Feng, and X. Chen, “HF-FCN: Hierarchically fused fully convolutional network for robust building extraction,” in Asian Conference on Computer Vision, 2016, pp. 291–302.
-  G.-S. Xia, W. Yang, J. Delon, Y. Gousseau, H. Sun, and H. Maître, “Structural High-resolution Satellite Image Indexing,” in ISPRS TC VII Symposium - 100 Years ISPRS, Vienna, Austria, 2010, vol. XXXVIII, pp. 298–303.
-  N. Xue, G. Xia, X. Bai, L. Zhang, and W. Shen, “Anisotropic-scale junction detection and matching for indoor images,” IEEE Trans. Image Processing, vol. PP, no. 99, pp. 1–1, 2017.
-  G. Liu, G. Xia, X. Huang, W. Yang, and L. Zhang, “A perception-inspired building index for automatic built-up area detection in high-resolution satellite images,” in IGARSS, 2013, pp. 3132–3135.
-  A. O. Ok, C. Senaras, and B. Yuksel, “Automated detection of arbitrarily shaped buildings in complex environments from monocular vhr optical satellite imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 3, pp. 1701–1717, 2013.