So2Sat LCZ42: A Benchmark Dataset for Global Local Climate Zones Classification

12/19/2019, by Xiao Xiang Zhu, et al. (DLR)

Access to labeled reference data is one of the grand challenges in supervised machine learning endeavors. This is especially true for an automated analysis of remote sensing images on a global scale, which would enable us to address global challenges such as urbanization and climate change using state-of-the-art machine learning techniques. To meet these pressing needs, especially in urban research, we provide open access to a valuable benchmark dataset named "So2Sat LCZ42," which consists of local climate zone (LCZ) labels of about half a million Sentinel-1 and Sentinel-2 image patches in 42 urban agglomerations (plus 10 additional smaller areas) across the globe. This dataset was labeled by 15 domain experts following a carefully designed labeling workflow and evaluation process over a period of six months. As is rarely done for labeled remote sensing datasets, we conducted a rigorous quality assessment by domain experts; the dataset achieved an overall confidence of 85%. This dataset is a first step towards an unbiased, globally distributed dataset for urban growth monitoring using machine learning methods, because LCZs provide a rather objective measure, in contrast to many other semantic land use and land cover classifications: they capture the morphology, compactness, and height of urban areas, properties that are less dependent on human interpretation and culture. The dataset can be accessed at http://doi.org/10.14459/2018mp1483140.


I Introduction

The production of land use/land cover (LULC) maps at large or even global scale is an essential task in the field of remote sensing. These maps can provide valuable input to a large number of societal questions, such as understanding human poverty or climate change, supporting the conservation of biodiversity and ecosystems, and providing stakeholder information for disaster management and sustainable urban development [38]. Urbanization is undoubtedly one of the most important mega-trends of the 21st century, alongside climate change. Currently, half of humanity, 3.5 billion people, lives in cities; shockingly, 1 billion of them still live in slums. Therefore, sustainable urban development has become one of the 17 sustainable development goals (SDGs) of the United Nations. Today, sustainable development increasingly depends on the successful management of urban growth, especially in developing countries where the pace of urbanization is projected to be the fastest, according to World Urbanization Prospects: The 2018 Revision [39]. LULC maps enable us to describe, track, and manage urban growth in an objective and consistent manner.

Examples of global LULC products created by the remote sensing community include the Global Urban Footprint [11, 10], produced from synthetic aperture radar (SAR) data acquired by the TanDEM-X mission; the Global Human Settlement Layer, produced from global, multi-temporal archives of fine-scale satellite imagery, census data, and volunteered geographic information [25]; and the Finer Resolution Observation and Monitoring of Global Land Cover and GlobeLand30 datasets, produced from 30m-resolution Landsat data [2]. This list is not exhaustive. These products provide semantic labels of urban/non-urban areas, or even finer classes. Such semantic labels are, however, often subjective (open to human interpretation) and culture-dependent. For example, the definition of urban and non-urban areas might be drastically different in Europe and Africa, and from person to person.

I-A Relevance of LCZ in global urban mapping

For a consistent analysis of urban areas across the globe, an objective and culture-independent classification scheme of urban areas is pressingly needed. After extensive research, we turned to Local Climate Zones (LCZs). LCZs were originally developed for metadata communication in observational urban heat island studies [36]. There are a total of 17 classes in the LCZ classification scheme, of which 10 are built classes and 7 are natural classes. They are based on climate-relevant surface properties on a local scale, which are mainly related to the 3D surface structure (e.g., height and density of buildings and trees), the surface cover (e.g., vegetation or paving), and anthropogenic parameters (such as human-generated heat output). A schematic drawing of the 17 classes is shown on the left of Fig. 1. As can be seen, the 10 urban classes describe the morphology of the area, including the density and height of the buildings, as well as the percentage of impervious surface. The urban classes are mostly coded in red, with decreasing intensity as the building density and height decrease from compact high-rise to open low-rise. The middle of the figure shows the LCZ classification of Vancouver, Canada, created by the authors. The dark red part marked by the yellow rectangle is downtown Vancouver, where most of Vancouver's high-rise buildings are located. The light red part of the classification map is mostly low-rise residential houses. As a reference, the Google image of this area is shown on the right of Fig. 1.

Because the LCZ classes are defined by their physical properties, they are generic and applicable to cities all over the world, offering the potential to compare different areas of different cities with trenchant distinctions representing the heterogeneous thermal behavior within an urban environment [4]. Beyond their increasing impact on worldwide climatological studies, such as the cooling effect of green infrastructure and micro-climatic effects on town peripheries [35, 34, 12, 30, 31, 23, 14, 22, 21, 15], researchers have recently started to use the LCZ approach to classify the internal structure of urban areas, providing promising information for various applications such as infrastructure planning, disaster mitigation, health and green space planning, and population assessment [40, 18]. The remote sensing community has also paid particular attention to this topic by organizing the 2017 IEEE data fusion contest around the goal of LCZ classification [46].

Fig. 1: Left: the schematic drawing of the 17 LCZ classes; middle: the LCZ classification map of Vancouver, Canada, created by the authors; and right: the Google image of downtown Vancouver, where most of the high-rise buildings are located. The yellow rectangle in the LCZ map marks the downtown area. The left subfigure was modified from WUDAPT [42].

I-B Related work in LCZ classification

Community-based LCZ mapping

A significant part of the existing development of LCZ classification is community-based large-scale LCZ mapping using freely available Landsat data and software [24, 6, 17]. The World Urban Database and Access Portal Tools (WUDAPT) [42], a community-driven initiative, was organized by researchers to produce high-quality LCZ maps worldwide. Within this framework, almost 100 cities worldwide have currently been mapped with moderate quality, providing sufficient detail for certain model applications [3]. LCZ maps of tens of cities, after undergoing quality assessment and generation of metadata, are now openly available in the WUDAPT portal. More recently, the LCZs of Europe are being mapped as part of the WUDAPT project, with data including Sentinel-1, Sentinel-2, and the Defense Meteorological Satellite Program (DMSP) Operational Linescan System (OLS) night-time lights product [9].

These community-based efforts mark the first step towards a more synergetic cooperation among researchers. Yet, multiple studies have reported that the quality of the produced LCZ maps is inconsistent [5, 32], as the procedures strongly rely on the knowledge of individual volunteers. For example, the procedures of community-based LCZ mapping mainly consist of 1) labeling ground truth data in Google Earth, and 2) classification using shallow learning algorithms such as random forests in GIS software, a process that is detailed in [4].

Algorithmic development

Significant development is still required before global LCZ mapping becomes feasible, because of the lack of high-quality labels and of transferable classifiers for global deployment. Various promising classifiers for LCZs have recently been proposed by different research groups, including random forests, support vector machines [45], canonical correlation forests [28, 19], rotation forests [46], gradient boosting machines [37], and ensembles of multiple classifiers [47]. The data used are mainly satellite data in the optical and microwave range, such as Landsat, Sentinel-1, and Sentinel-2 images. Recently, fusing multi-source data such as satellite images and Google Street View has also been investigated for LCZ classification [44]. Deep learning has certainly played an important role in LULC mapping with remote sensing data [49]. Multiple algorithms based on convolutional neural networks, such as residual networks and ResNeXt [29, 27, 26, 48, 44, 13, 20], have been developed. These approaches are able to provide satisfying results for specific areas. However, according to [4, 8, 3], regional variations in vegetation and artificial materials, as well as significant variations in cultural and physical environmental factors, cause large intra-class variability of spectral signatures. One of the existing efforts to further improve LCZ classification results is developing more robust machine learning models with high generalization ability, to facilitate efficient up-scaling in a reasonable time frame [9, 8]. Deep learning based models have been shown to generalize better, and can thus be better exploited for LCZ classification [49, 48].

Despite the active algorithmic development, the global transferability of a machine learning LCZ model requires, as a first step, a large quantity of globally distributed and reliable reference data. Such a dataset did not exist in the community. This gap is addressed in this article.

I-C Contribution of this paper

The dataset

To answer the pressing need for LCZ training datasets, we carefully selected and labeled 42 urban agglomerations plus 10 additional smaller areas across all the inhabited continents around the globe. Their geographic distribution can be seen in Fig. 2. A large number of polygons in those cities were manually labeled by the authors. By projecting these labels onto the corresponding coregistered Sentinel-1 and Sentinel-2 images, we obtained pairs of Sentinel-1 SAR and Sentinel-2 multi-spectral image patches with LCZ labels. An impression of the Sentinel image patch pairs in the dataset is given in Fig. 3; note that the actual patches in the dataset cover 320m by 320m, which is smaller than the scenes visualized in Fig. 3. In this paper, we provide open access to this high-quality So2Sat LCZ42 dataset to the research community. It is meant to foster the development of fully automatic classification pipelines based on modern machine learning approaches, and to support the accelerated use of LCZ mapping at a global scale.

Improved labeling workflow

We found that merely following the definition of LCZs in [35] and the labeling process described in WUDAPT is not optimal for a joint labeling activity by a group of people, because of the vague definition of some LCZ classes. To ensure the highest possible quality of the result, we designed a rigorous labeling workflow and decision rules, shown in Fig. 4 and in Fig. 6 in the appendix, respectively. Meetings were conducted before and during the labeling process to calibrate our understanding of the definitions of the 17 classes. Afterwards, the labeling results from each member of the labeling crew were visually inspected by a different person to spot obvious errors. Last but not least, we conducted a quantitative evaluation of the label quality. The whole labeling process took approximately 15 person-months.

Rigorous label quality assessment

Similar to any remote sensing product, reference labels must also come with error bars indicating their trustworthiness, yet such quality measures rarely appear in datasets of remote sensing image labels. As mentioned in the previous paragraph, we conducted a rigorous quantitative evaluation of 10 cities in the dataset by having a group of remote sensing experts cast 10 independent votes on each labeled polygon, in order to identify possible errors and assess the human labeling accuracy. "Human confusion matrices" were created per polygon and per pixel, from which the confidence of individual classes can be read. In general, our human labels achieve 85% confidence. This confidence number can serve as a reference accuracy for machine learning models trained on this dataset.

Fig. 2: The locations of the 42 main cities (green dots) plus the 10 additional cities (orange dots) included in the So2Sat LCZ42 dataset.
Fig. 3: Examples of the Sentinel-1 and Sentinel-2 image scenes of the 17 LCZ classes. For each LCZ, the upper image is the intensity (in dB) of the Sentinel-1 scene, the middle one is the corresponding Sentinel-2 scene in RGB, and the lower image is a high-resolution aerial image from Google as a reference. This figure shows the typical urban morphology of each LCZ class, as well as the content observable by Sentinel-1 and Sentinel-2. For visualization purposes, the image scenes are much larger than the actual patches (32×32 pixels) in the So2Sat LCZ42 dataset.

II So2Sat LCZ42 Dataset Creation

A four-phase labeling process was designed to maximize label consistency and minimize human error. The four phases are: learning, labeling, visual validation, and quantitative validation; they are shown in Fig. 4 as blocks A, B, C, and D, respectively. The detailed procedures of each phase are introduced in this section. We also prepared the corresponding Sentinel-1 and Sentinel-2 images of the 52 areas, on which proper preprocessing procedures were performed.

Fig. 4: Flowchart of four-phase labeling project. Block A: Learning phase; Block B: Labeling phase; Block C: First validation phase; Block D: Second validation phase.
Class (code)             Building surface  Pervious surface  Impervious surface  Height above
                         fraction [%]      fraction [%]      fraction [%]        ground [m]
Compact high-rise (1)    40-60             0-10              40-60               25
Compact mid-rise (2)     40-70             0-20              30-50               10-25
Compact low-rise (3)     40-70             0-30              20-50               2-10
Open high-rise (4)       20-40             30-40             30-40               25
Open mid-rise (5)        20-40             20-40             30-50               10-25
Open low-rise (6)        20-40             30-60             20-50               2-10
Lightweight low-rise (7) 60-90             0-30              0-20                2-10
Large low-rise (8)       30-50             0-20              40-50               2-10
Sparsely built (9)       10-20             60-80             0-20                2-10
Heavy industry (10)      20-30             40-50             20-40               2-10
Dense trees (A)          0-10              90-100            0-10                3
Scattered trees (B)      0-10              90-100            0-10                3
Bush, scrub (C)          0-10              90-100            0-10                1-2
Low plants (D)           0-10              90-100            0-10                1
Bare rock or paved (E)   0-10              0-10              90-100              0
Bare soil or sand (F)    0-10              90-100            0-10                0
Water (G)                0-10              90-100            0-10                0
TABLE I: Fractions of building surface, pervious surface, and impervious surface (in %) for each class [35], as well as the height above ground in meters.
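For illustration, the ranges of Table I can be read as a crude screening rule. The sketch below (an illustrative excerpt only, not the authors' decision rule) returns every class whose four ranges contain a site's observed surface properties; because the ranges overlap, more than one candidate can be returned. Treating the bare "25" height entries of the high-rise classes as open-ended lower bounds is our assumption.

```python
# Hypothetical screening of a site against an excerpt of Table I.
# Each entry: (class, building %, pervious %, impervious %, height range [m]).
# A height upper bound of None means "open-ended" (our reading of "25").
TABLE_I = [
    ("1 compact high-rise", (40, 60), (0, 10), (40, 60), (25, None)),
    ("2 compact mid-rise",  (40, 70), (0, 20), (30, 50), (10, 25)),
    ("3 compact low-rise",  (40, 70), (0, 30), (20, 50), (2, 10)),
    ("6 open low-rise",     (20, 40), (30, 60), (20, 50), (2, 10)),
    ("D low plants",        (0, 10),  (90, 100), (0, 10), (0, 1)),
]

def in_range(x, lo_hi):
    lo, hi = lo_hi
    return x >= lo and (hi is None or x <= hi)

def candidate_lcz(building, pervious, impervious, height):
    """Return every class of (this excerpt of) Table I whose four ranges
    all contain the observed surface properties."""
    return [name for name, b, p, i, h in TABLE_I
            if in_range(building, b) and in_range(pervious, p)
            and in_range(impervious, i) and in_range(height, h)]

print(candidate_lcz(45, 5, 45, 30))  # -> ['1 compact high-rise']
```

Since the full 17-class table is not disjoint, such a screen yields candidate sets rather than unique labels, which is exactly why the learning phase below was needed.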

II-A Creating the labels

II-A1 Learning phase

The learning phase aims at creating a common standard for the colleagues who would conduct the labeling (hereinafter referred to as "the labeling crew"). The reasons are twofold. First, the definitions of the LCZ classes (given in [35] and listed in Table I) are not mutually disjoint (e.g., class 3 compact low-rise and class 8 large low-rise), and their union also does not cover the whole Earth surface. That is to say, some areas do not fall into any of the LCZ classes, and some can be labeled as multiple classes. Second, different persons still interpret the definitions differently.

The labeling crew started by building a visual impression of the different LCZ classes by viewing aerial images on Google Earth, and then moved towards a quantitative understanding of each class. As a result, we constructed a quantitative labeling decision rule according to the literal definitions, shown in Fig. 6 in the appendix. An "examination" of the labeling learning course was conducted before the actual labeling started, in which everyone in the labeling crew cast a vote on a number of selected scenes. Ambiguous scenes were singled out and discussed, in order to calibrate everyone's understanding.

II-A2 Labeling phase

The labeling phase follows a standard procedure defined in the WUDAPT project [42]. First, each member of the labeling crew claimed a few cities among the 52 cities, and defined a region of interest (ROI) within each selected city by drawing a rectangle of approximately 50 km by 50 km around the city center in Google Earth. Second, polygons enclosing different LCZ classes were manually delineated in Google Earth. These polygons are the preliminary labels. Afterwards, Landsat 8 images covering the ROI were prepared.

After the abovementioned preparation, a random forest classifier was trained using the Landsat 8 images and the preliminary LCZ labels, in order to produce an LCZ classification map of the specific city. This classification map and the satellite image on Google Earth served as auxiliary data to cross-check the correctness and completeness of the LCZ labels. The details are explained as follows.


  • Correctness: the crew visually inspected the discrepancies between the classification map and the label of the polygons. If a label mismatch was found for a labeled polygon, the crew inspected the satellite image on Google Earth, and corrected the given label if necessary. This process was repeated until no noticeable discrepancy between the classification map and the label was found.

  • Completeness: the labeling crew cross-checked the classification result with the satellite image on Google Earth in unlabeled areas, in order to find negative samples. For example, dense forest might be classified as water because it lacked the dense forest label. The labeling crew then labeled those negative samples of dense forest and included them in the whole label dataset. This hard negative mining procedure was carried out iteratively until no noticeable discrepancies between the classification map and the Google Earth image in unlabeled areas were found.

It is important to point out that the classification maps produced during the manual labeling process were only employed to provide guidance to the labeling crew, and were not used in the final data. All LCZ labels in the final provided reference data fully relied on manual human annotation.
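The assistance loop described above can be sketched as follows. The synthetic features and the scikit-learn random forest are stand-ins for the Landsat 8 bands and the GIS-software classifier actually used; the flagged pixels correspond to the discrepancies the labeling crew would re-inspect.

```python
# Sketch (illustrative only): fit a random forest on hand-labeled pixels,
# classify them back, and flag disagreements for manual re-inspection.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 500 labeled pixels, 6 spectral features,
# 3 LCZ-like classes derived from the features.
X_labeled = rng.normal(size=(500, 6)) + rng.integers(0, 3, 500)[:, None]
y_labeled = X_labeled.mean(axis=1).round().clip(0, 2).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_labeled, y_labeled)

pred = rf.predict(X_labeled)
flagged = np.flatnonzero(pred != y_labeled)  # candidates for re-inspection
print(f"{len(flagged)} of {len(y_labeled)} labeled pixels disagree with the map")
```

In the real workflow this loop was iterated, with labels corrected and negative samples added, until no noticeable discrepancies remained.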

II-A3 Visual quality control phase

Despite a clear quantitative definition agreed upon by the labeling crew in the learning phase, personal biases and outliers still existed in the labeling results. A manual inspection was thus required before the quantitative validation, to adjust for personal biases and to reduce inevitable human mistakes. Therefore, after the labeling phase, two persons other than the one who labeled the polygons sequentially and independently validated the labels, as demonstrated in block C of Fig. 4. The two persons were responsible for visually inspecting two types of signals in the classification map: 1) obvious outliers, such as water being classified as dense high-rise buildings, and 2) the normal compactness-centric pattern of urban areas, i.e., the compactness of urban buildings decreasing from the city center towards the suburbs. If obvious outliers covered a comparatively large area, a polygon with the correct label was added. If an abnormal compactness pattern appeared, a detailed inspection followed, which often led to adding polygons or correcting the labels of existing polygons. We found that the visual validation already gave us a significant indication of label quality.

II-A4 Label post-processing

After obtaining the labeled LCZ polygons, we discovered the following post-processing procedures were necessary:


  • Polygon shrinking: Although all the polygons were correctly labeled, some polygons of a given LCZ class were drawn in close proximity to another LCZ class. This might cause erroneous labels on the pixels close to the borders of the polygon when the polygon is rasterized, especially when using a large ground sampling distance (GSD). For example, the GSD of an LCZ label map defined in our research is 100 meters; a pixel in the label map that is too close to the boundary of two LCZs may cover both LCZ classes. To avoid this, the polygons of all non-urban LCZ classes except water (i.e., A to F) were shrunk by 160m. We chose a distance of 160m because it corresponds to half of the patch size (16 pixels) of the Sentinel-1 and Sentinel-2 image patches in the So2Sat LCZ42 dataset. For class G (water), the shrinking distance is only 10m, given that the width of many rivers is on the order of hundreds of meters.

  • Class balancing: To use these vector-format polygon labels in machine learning on Earth observation images, they need to be rasterized into image format in a geographic coordinate system; we used GeoTIFF and local UTM coordinates. However, the polygons of the non-urban LCZ classes (i.e., classes A to G) tend to be much larger in area than those of the urban classes, because the percentage of non-urban areas is naturally larger, and they are much easier for humans to label. This results in many more pixels (samples) for non-urban classes. In order to balance the number of samples among all the LCZ classes, for each city we reduced the number of samples of each of the non-urban classes A to G to n_max, where n_max is the maximum number of samples among the urban classes (i.e., classes 1 to 10). If the number of samples of a certain non-urban class was less than n_max, that class remained untouched. The samples of the urban classes were not reduced, because they are difficult to label. In this way, we were able to balance the different LCZ classes.
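Both post-processing steps are easy to sketch. The snippet below assumes Shapely polygons in a metric (UTM) coordinate system and per-city sample counts kept in a plain dict; all names and numbers are illustrative.

```python
# Sketch of the two post-processing steps under stated assumptions.
from shapely.geometry import Polygon

# Polygon shrinking: a negative buffer pulls the boundary inward.
# 160 m is half the 320 m patch size; water (class G) would use 10 m.
poly = Polygon([(0, 0), (1000, 0), (1000, 1000), (0, 1000)])  # 1 km square
shrunk = poly.buffer(-160)  # bounds become (160, 160, 840, 840)

# Class balancing: cap every non-urban class at the largest urban count.
def balance(counts, urban_classes, nonurban_classes):
    n_max = max(counts[c] for c in urban_classes)
    return {c: (min(n, n_max) if c in nonurban_classes else n)
            for c, n in counts.items()}

counts = {"2": 800, "6": 1200, "D": 9000, "G": 400}
balanced = balance(counts, urban_classes={"2", "6"},
                   nonurban_classes={"D", "G"})
# "D" is capped at 1200; "G", already below the cap, stays untouched.
```

In practice the balancing acts on rasterized pixel samples per city, not on a toy dict, but the capping rule is the same.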

II-A5 Quantitative quality control and validation phase

It is known that the maximum accuracy achievable by any supervised learning procedure depends not only on the chosen algorithm, but also on the quality of the training data. Therefore, we conducted a quantitative evaluation of 10 European cities in the dataset by having a group of remote sensing experts cast 10 independent votes on each labeled polygon, in order to assess the human labeling accuracy and identify possible remaining errors. Despite the huge labor cost, we believe it is essential for Earth observation data and products to provide an error bar to the users. This label evaluation procedure is discussed in detail in Section III.

II-B Preparing the Sentinel-1 data

The Sentinel-1 mission provides an open access global SAR dataset. We accessed the Sentinel-1 VV-VH dual-Pol single look complex (SLC) Level-1 data via the Copernicus Open Access Hub (https://scihub.copernicus.eu/) using an automatic script developed by the authors based on SentinelSat (https://github.com/sentinelsat/sentinelsat).

A series of preprocessing steps were applied to the Sentinel-1 data using the graph processing tool in the ESA SNAP toolbox. The detailed configurations of the preprocessing are listed as follows.


  • Apply orbit file: This module downloads the latest released orbit file so that a precisely geocoded product can be achieved.

  • Radiometric calibration: Radiometric calibration computes the backscatter intensity using the sensor calibration parameters in the metadata. The output is set to a complex-valued image, in order to preserve the relative phase between the VV and VH channels.

  • TOPSAR deburst: For each polarization channel, the Sentinel-1 IW product has three swaths. Each swath image consists of a series of bursts. The TOPSAR deburst merges all these bursts and swaths into a single SLC image.

  • Polarimetric speckle reduction: Speckle reduction was conducted using the SNAP-integrated refined Lee filter. An unfiltered version of the data is also included in the dataset.

  • Terrain correction: Terrain correction eliminates the distortion introduced by topographic variation. To accomplish the correction, the SRTM was used as the DEM to provide height information. The data was resampled to a 10m GSD by nearest-neighbor interpolation and geocoded into the WGS84/UTM coordinate system of the corresponding city.
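The preprocessing chain above maps naturally onto a graph for SNAP's `gpt` command-line tool. The following is a sketch only: the input file name is a placeholder, parameters are abbreviated, and operator options should be checked against the SNAP documentation before use.

```xml
<graph id="S1_Preprocessing">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <parameters><file>S1_IW_SLC.zip</file></parameters>
  </node>
  <node id="Orbit">
    <operator>Apply-Orbit-File</operator>
    <sources><sourceProduct refid="Read"/></sources>
  </node>
  <node id="Cal">
    <operator>Calibration</operator>
    <sources><sourceProduct refid="Orbit"/></sources>
    <parameters><outputImageInComplex>true</outputImageInComplex></parameters>
  </node>
  <node id="Deburst">
    <operator>TOPSAR-Deburst</operator>
    <sources><sourceProduct refid="Cal"/></sources>
  </node>
  <node id="Speckle">
    <operator>Speckle-Filter</operator>
    <sources><sourceProduct refid="Deburst"/></sources>
    <parameters><filter>Refined Lee</filter></parameters>
  </node>
  <node id="TC">
    <operator>Terrain-Correction</operator>
    <sources><sourceProduct refid="Speckle"/></sources>
    <parameters>
      <demName>SRTM 3Sec</demName>
      <imgResamplingMethod>NEAREST_NEIGHBOUR</imgResamplingMethod>
      <pixelSpacingInMeter>10.0</pixelSpacingInMeter>
    </parameters>
  </node>
</graph>
```

Such a graph would be run as `gpt graph.xml`; the unfiltered variant of the dataset would simply omit the Speckle-Filter node.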

To summarize, the Sentinel-1 data in the So2Sat LCZ42 dataset contain the following 8 real-valued bands:

  1. the real part of the unfiltered VH channel,

  2. the imaginary part of the unfiltered VH channel,

  3. the real part of the unfiltered VV channel,

  4. the imaginary part of the unfiltered VV channel,

  5. the intensity of the refined Lee filtered VH channel,

  6. the intensity of the refined Lee filtered VV channel,

  7. the real part of the refined Lee filtered covariance matrix off-diagonal element, and

  8. the imaginary part of the refined Lee filtered covariance matrix off-diagonal element.
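As a sketch, the 8 real-valued bands can be stacked from the complex SLC channels as below. Taking the covariance off-diagonal as VH times the conjugate of VV is our assumption (the ordering is not stated), and the random arrays merely stand in for real data.

```python
# Sketch: assemble the 8 real-valued Sentinel-1 bands from complex data.
# vh_u/vv_u: unfiltered channels; vh_f/vv_f: refined-Lee-filtered channels.
import numpy as np

rng = np.random.default_rng(1)
shape = (32, 32)
vh_u, vv_u, vh_f, vv_f = (
    rng.normal(size=shape) + 1j * rng.normal(size=shape) for _ in range(4))

off_diag = vh_f * np.conj(vv_f)  # C12 element of the 2x2 covariance matrix
bands = np.stack([
    vh_u.real, vh_u.imag,           # bands 1-2: unfiltered VH
    vv_u.real, vv_u.imag,           # bands 3-4: unfiltered VV
    np.abs(vh_f) ** 2,              # band 5: filtered VH intensity
    np.abs(vv_f) ** 2,              # band 6: filtered VV intensity
    off_diag.real, off_diag.imag,   # bands 7-8: covariance off-diagonal
], axis=-1)
print(bands.shape)  # (32, 32, 8)
```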

II-C Preparing the Sentinel-2 data

We employed Google Earth Engine (GEE) to create the cloud-free Sentinel-2 images [16]. The overall workflow, based on the GEE Python API, consisted of the following three main steps.


  • The querying step for loading Sentinel-2 images from the catalogue,

  • The scoring step for the calculation of a cloud-related quality score for each loaded image, and

  • The mosaicking step for compositing the selected images based on the meta-information generated in the preceding steps.

More details can be found in [33].

Sentinel-2 images contain bands B2, B3, B4, B8 with 10m GSD, bands B5, B6, B7, B8a, B11, B12 with 20m GSD, and bands B1, B9, B10 with 60m GSD. In the So2Sat LCZ42 dataset, the 20m bands were upsampled to 10m GSD, and the bands B1, B9, and B10 were discarded because they mostly contain data related to the atmosphere and thus bear little relevance to LCZ classification. To summarize, the Sentinel-2 data in the So2Sat LCZ42 dataset contain the following 10 real-valued bands:
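The upsampling of the 20m bands can be illustrated with a small nearest-neighbor sketch. The interpolation method actually used for the dataset is not stated here; pixel replication is shown only as the simplest example of a 20m-to-10m resampling.

```python
# Sketch: nearest-neighbor upsampling of a 20 m band to the 10 m grid
# by replicating each pixel 2x2.
import numpy as np

band20 = np.arange(4, dtype=float).reshape(2, 2)  # toy 2x2-pixel 20 m band
band10 = np.repeat(np.repeat(band20, 2, axis=0), 2, axis=1)
print(band10.shape)  # (4, 4)
```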

  1. Band B2, 10m GSD

  2. Band B3, 10m GSD

  3. Band B4, 10m GSD

  4. Band B5, upsampled to 10m from 20m GSD

  5. Band B6, upsampled to 10m from 20m GSD

  6. Band B7, upsampled to 10m from 20m GSD

  7. Band B8, 10m GSD

  8. Band B8a, upsampled to 10m from 20m GSD

  9. Band B11, upsampled to 10m from 20m GSD

  10. Band B12, upsampled to 10m from 20m GSD

II-D Content of the So2Sat LCZ42 dataset

By projecting the labels onto the coregistered Sentinel-1 and Sentinel-2 images, we can extract Sentinel-1 and Sentinel-2 image patch pairs with the corresponding LCZ labels. We define the dimension of the image patches in the So2Sat LCZ42 dataset as 32 by 32 pixels, which corresponds to a physical dimension of 320m by 320m. In order to create non-overlapping patches, we sampled the labeled polygons with a 320m by 320m grid, where the grid nodes are the centers of the image patches. We obtained 400,673 pairs of Sentinel image patches. The volume of the whole dataset is about 56GB.
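The non-overlapping patch extraction can be sketched as a stride-32 window over a 10m GSD raster; the toy array below stands in for a real scene, in which patches would only be cut around grid nodes falling inside labeled polygons.

```python
# Sketch: cut non-overlapping 32x32-pixel (320 m x 320 m) patches on a
# regular grid from a 10 m GSD image array.
import numpy as np

img = np.zeros((320, 320, 10))   # toy Sentinel-2-like scene, 10 bands
patch, step = 32, 32             # 32 px * 10 m = 320 m, stride = patch size

patches = [img[r:r + patch, c:c + patch]
           for r in range(0, img.shape[0] - patch + 1, step)
           for c in range(0, img.shape[1] - patch + 1, step)]
print(len(patches))  # 100 non-overlapping patches from this toy scene
```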

For machine learning purposes, the dataset was split into a training set, a testing set, and a validation set. They consist of 352,366, 24,188, and 24,119 pairs of image patches, respectively. The training set comprises all the image patches of 32 cities plus the 10 add-on areas in the city list (please see Appendix B for the full list of cities). The remaining 10 cities are distributed across all the continents and cultural regions of the world. For each of them, we split the labels of each LCZ class into the west and east halves of the city, to form the testing and validation sets, respectively. Therefore, all three subsets are geographically separated from each other, even though the testing and validation sets are drawn from the same list of cities.
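A minimal sketch of the per-city west/east split follows. We assume patch centers given as longitudes and the split meridian at the city center; the exact split point, and the fact that the split is applied per LCZ class, are simplified away here.

```python
# Sketch: split one city's patches into geographically disjoint halves.
def split_city(patches, split_lon):
    """West half -> testing set, east half -> validation set."""
    test = [p for p in patches if p["lon"] < split_lon]
    val = [p for p in patches if p["lon"] >= split_lon]
    return test, val

# Toy patch centers around a hypothetical split meridian of 11.5 deg E.
patches = [{"lon": 11.4}, {"lon": 11.6}, {"lon": 11.7}]
test_set, val_set = split_city(patches, split_lon=11.5)
print(len(test_set), len(val_set))  # 1 2
```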

III Label Evaluation

It is well known that the maximum accuracy achievable by any supervised learning procedure depends not only on the chosen algorithm, but also on the quality of the training data [7]. In the context of the HUMINEX experiment, Bechtel et al. [5] have recently shown the difficulties associated with the assignment of LCZ classes by human experts. Therefore, evaluating the labels as a result of human expert knowledge is of vital importance for further use of the dataset in the training of classification algorithms for large-scale automatic LCZ mapping.

III-A The Evaluation Set

For the evaluation, we have chosen a subset of 10 European cities (shown in Table II) from the group of cities we labeled. The choice was based on the following three rationales:


  • All our labeling experts have lived in Europe for a significant number of years. This ensures familiarity with the general morphological appearance of European cities.

  • Google Earth provides detailed 3D models for the 10 cities, which is of great help in determining the approximate height of urban objects. This is necessary to be able to distinguish between low-rise, mid-rise, and high-rise classes.

  • As previously mentioned, LCZ labeling is very labor-intensive. Reducing the evaluation set to 10 cities allowed us to generate more individual votes per polygon for better statistics.

Unfortunately, not many European cities contain LCZ class 7 (light-weight low-rise), which mostly describes informal settlements (e.g., slums). Therefore, we included the polygons of class 7 for an additional 9 cities that are representative of the 9 major non-European geographical regions of the world (see Table III).

City Country
Amsterdam The Netherlands
Berlin Germany
Cologne Germany
London United Kingdom
Madrid Spain
Milan Italy
Munich Germany
Paris France
Rome Italy
Zurich Switzerland
TABLE II: 10 European cities selected for the quantitative label evaluation.
City Geographic Region
Guangzhou, China East Asia
Islamabad, Pakistan Middle East
Jakarta, Indonesia South-East Asia
Los Angeles, USA North America
Melbourne, Australia Oceania
Moscow, Russia Eastern Europe
Mumbai, India Indian Subcontinent
Nairobi, Kenya Sub Saharan Africa
Rio de Janeiro, Brazil Latin America
TABLE III: Additional 9 cities whose polygons of class 7 (light-weight low-rise) were used for the evaluation.

Iii-B Evaluation Strategy and Results

For the evaluation experiment, 10 remote sensing experts (hereafter referred to as the label validation crew), who were already trained in applying the LCZ scheme to annotate urban areas, were provided with .kml-files containing the polygons of the original So2Sat LCZ42 dataset, but without labels. They were then asked to reassign an LCZ class to every polygon, using Google Earth as the labeling environment. After all the relabeled .kml-files were submitted, both a polygon-wise and a pixel-wise evaluation between the original labels and the votes newly cast by the label validation crew was carried out in the form of confusion matrices, which combine the validation results of the 10 European validation cities (cf. Table II) and the slum areas of the additional 9 non-European validation cities (cf. Table III). These confusion matrices are displayed in Fig. 5(a) and (b).
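The polygon-wise assessment can be sketched as accumulating all validator votes against the original labels into a row-normalized confusion matrix; class names and votes below are toy data, not the actual evaluation results.

```python
# Sketch: build a polygon-wise "human confusion matrix" (values in %)
# from 10 validator votes per polygon.
import numpy as np

classes = ["2", "3", "6"]
idx = {c: i for i, c in enumerate(classes)}

# (original label, 10 validator votes) per polygon -- toy data
polygons = [
    ("2", ["2"] * 9 + ["3"]),
    ("3", ["3"] * 7 + ["2"] * 2 + ["6"]),
]

cm = np.zeros((len(classes), len(classes)))
for label, votes in polygons:
    for v in votes:
        cm[idx[label], idx[v]] += 1

# Row-normalize to percentages; classes with no polygons keep a zero row.
denom = np.maximum(cm.sum(axis=1, keepdims=True), 1)
cm_pct = 100 * cm / denom
print(np.round(cm_pct, 1))
```

The diagonal of such a matrix is the per-class confidence; a pixel-wise matrix would simply weight each polygon by its pixel count.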



Fig. 5: Confusion matrices (values in %) of the original labels and the final labels (refined by majority voting) vs. the votes cast by the label validation crew for the polygons of the evaluation cities listed in Tables II and III: (a) original labels, polygon-wise; (b) original labels, pixel-wise; (c) final labels, polygon-wise; and (d) final labels, pixel-wise.

In addition, majority voting was carried out for each polygon, i.e., each polygon was reassigned to the class for which a majority of the label validation crew had voted, although we kept the original label in case there was a draw between this original class and another major class. The polygon-wise and pixel-wise confusion matrices between these final labels and the votes of the label validation crew can be seen in Fig. 5(c) and (d).
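The majority-vote refinement, including the stated tie rule, can be sketched as follows. How a draw between two classes other than the original would be resolved is not stated, so this sketch falls back to the original label in that case as well (our assumption).

```python
# Sketch: refine a polygon's label by majority vote over validator votes.
from collections import Counter

def refine(original, votes):
    """Majority vote; a draw involving the original class keeps it, and a
    draw among other classes also falls back to the original (assumption)."""
    counts = Counter(votes)
    top = max(counts.values())
    winners = [c for c, n in counts.items() if n == top]
    if original in winners or len(winners) > 1:
        return original
    return winners[0]

print(refine("3", ["2"] * 6 + ["3"] * 4))  # clear majority overturns: 2
print(refine("3", ["2"] * 5 + ["3"] * 5))  # draw keeps the original: 3
```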

Iii-C Interpretation of the Evaluation Results

The confusion matrices in Fig. 5 show that:


  • There is no significant difference between the polygon-wise and the pixel-wise results, which indicates that the polygons are evenly distributed with respect to size.

  • The majority voting step helped to slightly improve the label confidences: Before the refinement, 11 of the 17 LCZ classes provided a confidence of more than 80%; after the refinement, this confidence level held for 13 classes.

  • In general, confusion among the urban classes is slightly higher than among the non-urban classes.

  • The most confident classes are 8 (large low-rise), A (dense trees), D (low plants), and G (water), with classes 2 (compact mid-rise) and E (bare rock/paved) following close behind.

  • The least confident classes are classes 3 (compact low-rise), 7 (lightweight low-rise), and C (bush, scrub), with classes 4 (open high-rise) and 9 (sparsely built) following behind. The main sources of confusion for these classes are summarized in Table IV.

Low-confidence class      Major confusion classes
3 (compact low-rise)      2 (compact mid-rise), 6 (open low-rise)
4 (open high-rise)        5 (open mid-rise)
7 (lightweight low-rise)  3 (compact low-rise)
9 (sparsely built)        6 (open low-rise)
C (bush, scrub)           B (scattered trees), D (low plants)
TABLE IV: Main sources of confusion for the less confident LCZ classes.

These experiences go hand-in-hand with the findings of [5], who also found that LCZ classes A (dense trees), D (low plants), G (water), 2 (compact mid-rise), 6 (open low-rise), and 8 (large low-rise) were recognized consistently well by all operators, while classes 9 (sparsely built) and B (scattered trees) were reported as difficult to classify. Classes 1 (compact high-rise), 4 (open high-rise), 7 (lightweight low-rise), and C (bush, scrub) were not present in most of their study cities and thus not discussed in detail.

Looking at the major sources of confusion as summarized in Table IV, all these confusions appear fairly reasonable: Apparently, it is difficult even for human experts to distinguish the vaguely defined characteristics open and compact, as well as mid-rise and high-rise. In addition, sparsely built environments are understandably frequently confused with open low-rise neighborhoods, as is bush/scrubland with scattered trees and low plants.

Given the accordance with the findings of [5], the semantic subtleties of the LCZ classification scheme, and a mean class confidence of about 80% before refinement by majority voting and 85% after refinement, the So2Sat LCZ42 dataset can be considered a reliable source of labels for the training of machine learning procedures aiming at automated LCZ mapping at a larger scale.

Iv Baseline Classification Accuracy

In order to provide a baseline for the achievable LCZ classification accuracy, we performed classification on the So2Sat LCZ42 dataset using popular classifiers, including classical random forests (RF), support vector machines (SVM) [45], and an attention-based ResNeXt as proposed in [43] and [41]. The employed RF consists of 200 trees with max_depth set to 10 and all other parameters left at their defaults. A radial basis function kernel is chosen for the SVM in the experiment. The ResNeXt has a depth of 29, and the Convolutional Block Attention Module is plugged into each of the residual blocks. For RF and SVM, the pixel values of the patches are converted into feature vectors using the statistical measures (maximum, minimum, standard deviation, and mean) of each band. All classifiers are trained on the training set and tested on the validation set.
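The RF baseline and its hand-crafted features can be sketched as follows; the patch shape, band count, and random data are placeholders, while the classifier settings (200 trees, max_depth of 10, defaults otherwise) follow the text.

```python
# Sketch of the RF baseline: each patch is reduced to per-band statistics
# (maximum, minimum, standard deviation, mean) and fed to a random forest
# with 200 trees and max_depth=10, as described above. Data is random.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(patches):
    """patches: (n, height, width, bands) -> (n, 4 * bands) feature matrix."""
    return np.concatenate([patches.max(axis=(1, 2)),
                           patches.min(axis=(1, 2)),
                           patches.std(axis=(1, 2)),
                           patches.mean(axis=(1, 2))], axis=1)

rng = np.random.default_rng(0)
X = patch_features(rng.random((100, 32, 32, 10)))  # e.g. 10 Sentinel-2 bands
y = rng.integers(0, 17, 100)                       # 17 LCZ classes

clf = RandomForestClassifier(n_estimators=200, max_depth=10, random_state=0)
clf.fit(X, y)
```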

The resulting accuracies based on the Sentinel-2 images in the So2Sat LCZ42 dataset can be seen in Table V. The accuracy measures include overall accuracy (OA), average accuracy (AA), and the kappa coefficient. In addition, the weighted accuracy (WA) introduced in [5] is also considered, because it assigns user-defined weights to confusions between different classes. For example, misclassifying compact high-rise as compact mid-rise is less critical than confusing compact high-rise with water, and should thus be penalized less.

OA WA AA Kappa
RF 0.51 0.87 0.31 0.46
SVM 0.54 0.88 0.36 0.49
ResNeXt-CBAM 0.61 0.92 0.51 0.58
TABLE V: Classification accuracy from three baseline methods, with the Sentinel-2 images in the proposed dataset.
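The weighted accuracy can be illustrated with a toy weight matrix; the weights below are invented for this example and are not the ones used in [5].

```python
# Weighted accuracy (WA): a user-defined weight matrix W scores the
# confusion matrix C, with 1 on the diagonal, weights near 1 for tolerable
# confusions, and weights near 0 for severe ones. Weights are illustrative.
import numpy as np

def weighted_accuracy(C, W):
    """C: confusion matrix of counts; W: weight matrix of the same shape."""
    return float((C * W).sum() / C.sum())

# toy 3-class case: confusing classes 0 and 1 is tolerable (weight 0.8),
# confusing either of them with class 2 is severe (weight 0.0)
C = np.array([[8, 2, 0],
              [1, 9, 0],
              [0, 0, 10]])
W = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
```

With these numbers, plain OA is 27/30 = 0.90, while WA credits the tolerable confusions and rises to 29.4/30 = 0.98, which is why the WA column in Table V sits far above OA.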

V Discussion

The goal of this paper is to provide documentation about a large benchmark dataset for LCZ classification from Sentinel-1 and Sentinel-2 satellite data. Since the Sentinel data is openly available for the whole globe, the main intention of the dataset is to enable the training of models that generalize to any unseen areas across the world. This is ensured by sampling the data from altogether 52 cities located on all inhabited continents. In spite of these promising characteristics, two major challenges have to be noted:


  • LCZs are sometimes hard to distinguish
    As the label validation results in Section III illustrate, it is extremely hard to distinguish some of the LCZ classes, even when human experts investigate several data sources (such as high-resolution optical imagery and the 3D building models available in Google Earth). This holds especially for the distinction of different height levels in compact areas, but also for open areas, which comprise both open land/vegetation and building structures. This has to be acknowledged as a natural limitation when tackling LCZ mapping with remote sensing data, and it can possibly only be overcome by combining remote sensing data with other data sources, e.g., information from social media.

  • Learning a generic LCZ prediction model is challenging
    As described in Section II-D, the test set and the training set are completely disjoint, with the test cities distributed across the ten major cultural regions of the inhabited world. Therefore, results achieved on this dataset can be considered a good measure of how well a trained model would generalize to completely unseen data. In this regard, overall accuracies between 50% and 60% can already be considered promising – especially for a target scheme comprised of 17 difficult-to-distinguish classes. Nevertheless, there is still room for improvement, as an accuracy of at least about 85% to 90% is usually required for land cover mapping purposes according to [1].

We hope that the community is eager to tackle those challenges and puts the So2Sat LCZ42 dataset to good use in order to achieve significant progress in the global mapping of cities into LCZs.

Vi Conclusion and Outlook

This paper introduces a unique dataset that contains manually labeled LCZ reference data, as well as coregistered Sentinel-1 and Sentinel-2 image patch pairs over 42 cities plus 10 smaller areas across the six inhabited continents. The paper describes the carefully designed labeling process and a rigorous evaluation procedure that ensures the quality of the dataset. Although each LCZ class is quantitatively defined in the original paper, we discovered that several LCZ classes can easily be confused with each other, because the height and the percentage of pervious surface of these classes cannot easily be distinguished by the human eye from aerial images during labeling. This renders the whole labeling process highly labor-intensive. Still, we were able to achieve an average class confidence of 85% through our human evaluation procedure with independent voting by 10 experts. Hence, this dataset is a reliable source of labels for the training of machine learning procedures, and can be considered a challenging, large-scale data fusion and classification benchmark for cutting-edge machine learning method development. Examples of possible research directions include:


  • Since we have provided the label confusion matrix, how to introduce such prior knowledge into machine learning models, deep learning models in particular, is an interesting direction;

  • Due to the culture-induced diversity in the data, transferability of the models will be key to achieving good classification results on a global scale;

  • Radar and optical data possess completely different yet partially complementary characteristics. Developing methods to fuse them in an optimal way or select appropriate features from such diverse data sources is of general interest to the remote sensing community;

  • Thanks to the large scale of the proposed benchmark data set, it can serve as a test bed for the development of efficient training techniques.
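As one hypothetical illustration of the first direction above, the published confusion matrix could be used to soften the one-hot training targets, a confusion-aware variant of label smoothing; this is a sketch of one option, not a method from the paper, and the 2-class confusion matrix and alpha are invented.

```python
# Confusion-aware label smoothing: mix each one-hot target with the
# row-normalized human confusion distribution of its class, so that the
# model is penalized less for confusions humans also make.
import numpy as np

def confusion_smoothed_targets(labels, confusion, alpha=0.1):
    """labels: int array; confusion: (k, k) count matrix; returns (n, k)."""
    P = confusion / confusion.sum(axis=1, keepdims=True)
    one_hot = np.eye(confusion.shape[0])[labels]
    return (1 - alpha) * one_hot + alpha * P[labels]

conf = np.array([[90.0, 10.0],
                 [20.0, 80.0]])
targets = confusion_smoothed_targets(np.array([0, 1]), conf, alpha=0.1)
```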

Our vision for the near future is to produce a global LCZ classification map from multi-sensor remote sensing images, which will be made available to the community. Such geographic information may seem trivial for developed countries, but it is still very scarce on a global scale. For example, the city of Lagos, Nigeria (population 21 million) does not have a quality 3D city model. Therefore, a quality LCZ classification map will provide firsthand information on urban building volume and distribution. A global LCZ map will strongly boost urban geographic research and help us develop a better understanding of global urbanization. For this purpose, we invite everybody to contribute by using this dataset and developing new, sophisticated algorithms.

Appendix A Decision rule of the LCZ labeling

Please see Fig. 6.

Fig. 6:

Flowchart of the labeling decision rule, which labels one scene with seven decisions: A: Is it homogeneous for at least five pixels of 100-by-100 meters? B: Is the building footprint large? C: Does any obvious industrial feature exist (such as oil tanks, cranes, or conveyor belts)? D1: Buildings with up to three floors; D2: Buildings with three to ten floors; D3: Buildings with ten floors and higher; E1: Building surface fraction between 20% and 40%; E2: Building surface fraction smaller than 20%; E3: Building surface fraction between 40% and 70%; E4: Built of lightweight material with a surface fraction larger than 60%; F: Is the building surface fraction larger than 40%? G: Is the building surface fraction larger than 40%? The percentages are estimated by experts with a 100-by-100-meter polygon drawn on Google Earth. The building height is determined by experts using any available information, such as a 3D model, satellite images, or photos.
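Part of this flowchart can be written as a function over the expert-estimated attributes. The sketch below covers only the built-up branch (decisions B, C, D1 to D3, F), uses the thresholds stated in the caption, and maps outcomes to LCZ classes according to the standard Stewart and Oke scheme, which is an assumption about the flowchart's unlabeled arrows.

```python
# Illustrative sketch of the built-up branch of the labeling decision rule:
# footprint/industry checks (B, C), floor counts (D1-D3), and the compact
# vs. open split at a 40% building surface fraction (F). Class assignments
# follow the standard LCZ scheme and are an assumption here.
def lcz_built(footprint_large, industrial, floors, building_fraction):
    if footprint_large:                         # decisions B and C
        return "10 (heavy industry)" if industrial else "8 (large low-rise)"
    compact = building_fraction > 0.40          # decision F
    if floors >= 10:                            # D3: ten floors and higher
        return "1 (compact high-rise)" if compact else "4 (open high-rise)"
    if floors > 3:                              # D2: three to ten floors
        return "2 (compact mid-rise)" if compact else "5 (open mid-rise)"
    return "3 (compact low-rise)" if compact else "6 (open low-rise)"
```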

Appendix B City list of the So2Sat LCZ42 dataset

Training: Amsterdam, Beijing, Berlin, Bogota (addon), Buenos Aires (addon), Cairo, Cape Town, Caracas (addon), Changsha, Chicago (addon), Cologne, Dhaka (addon), Dongying, Hong Kong, Islamabad, Istanbul, Karachi (addon), Kyoto, Lima (addon), Lisbon, London, Los Angeles, Madrid, Manila (addon), Melbourne, Milan, Nanjing, New York, Paris, Philadelphia (addon), Qingdao, Rio De Janeiro, Rome, Salvador (addon), Sao Paulo, Shanghai, Shenzhen, Tokyo, Vancouver, Washington D.C., Wuhan, Zurich

Testing and validation: Guangzhou, Jakarta, Moscow, Mumbai, Munich, Nairobi, San Francisco, Santiago de Chile, Sydney, Tehran

Acknowledgment

This research was funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program with the grant number ERC-2016-StG-714087 (Acronym: So2Sat, project website: www.so2sat.eu), and the Helmholtz Association under the framework of the Young Investigators Group “Signal Processing in Earth Observation (SiPEO)” with the grant number VH-NG-1018 (project website: www.sipeo.bgu.tum.de).

References

  • [1] J. R. Anderson (1971) Land-use classification schemes – used in selected recent geographic applications of remote sensing. Photogrammetric Engineering 37 (4), pp. 379–387. Cited by: item 2).
  • [2] Y. Ban, P. Gong, and C. Gini (2015) Global land cover mapping using earth observation satellite data: Recent progresses and challenges. ISPRS journal of photogrammetry and remote sensing (Print) 103 (1), pp. 1–6. Cited by: §I.
  • [3] B. Bechtel, P. J. Alexander, C. Beck, J. Böhner, O. Brousse, J. Ching, M. Demuzere, C. Fonte, T. Gál, J. Hidalgo, et al. (2019) Generating wudapt level 0 data–current status of production and evaluation. Urban climate 27, pp. 24–45. Cited by: §I-B, §I-B.
  • [4] B. Bechtel, P. J. Alexander, J. Böhner, J. Ching, O. Conrad, J. Feddema, G. Mills, L. See, and I. Stewart (2015) Mapping local climate zones for a worldwide database of the form and function of cities. ISPRS International Journal of Geo-Information 4 (1), pp. 199–219. Cited by: §I-A, §I-B, §I-B.
  • [5] B. Bechtel, M. Demuzere, P. Sismanidis, D. Fenner, O. Brousse, C. Beck, F. Van Coillie, O. Conrad, I. Keramitsoglou, A. Middel, et al. (2017) Quality of crowdsourced data on urban morphology–The human influence experiment (HUMINEX). Urban Science 1 (2), pp. 15. Cited by: §I-B, §III-C, §III-C, §III, §IV.
  • [6] B. Bechtel, M. Foley, G. Mills, J. Ching, L. See, P. Alexander, M. O’Connor, T. Albuquerque, M. de Fatima Andrade, M. Brovelli, et al. (2015) CENSUS of cities: LCZ classification of cities (Level 0)–Workflow and initial results from various cities. In Proceedings of the 9th International Conference on Urban Climate, Toulouse, France, Cited by: §I-B.
  • [7] C. E. Brodley and M. A. Friedl (1999) Identifying mislabeled training data. Journal of Artificial Intelligence Research 11, pp. 131–167. Cited by: §III.
  • [8] M. Demuzere, B. Bechtel, and G. Mills (2019) Global transferability of local climate zone models. Urban Climate 27, pp. 46–63. Cited by: §I-B.
  • [9] M. Demuzere, B. Bechtel, A. Middel, and G. Mills (2019) Mapping europe into local climate zones. PloS one 14 (4), pp. e0214474. Cited by: §I-B, §I-B.
  • [10] T. Esch, M. Marconcini, A. Felbier, A. Roth, W. Heldens, M. Huber, M. Schwinger, H. Taubenböck, A. Müller, and S. Dech (2013) Urban footprint processor–Fully automated processing chain generating settlement masks from global data of the TanDEM-X mission. IEEE Geoscience and Remote Sensing Letters 10 (6), pp. 1617–1621. Cited by: §I.
  • [11] T. Esch, H. Taubenböck, A. Roth, W. Heldens, A. Felbier, M. Schmidt, A. A. Mueller, M. Thiel, and S. W. Dech (2012) TanDEM-X mission–new perspectives for the inventory and monitoring of global settlement patterns. Journal of Applied Remote Sensing 6 (1), pp. 061702. Cited by: §I.
  • [12] D. Fenner, F. Meier, B. Bechtel, M. Otto, and D. Scherer (2017) Intra and inter local climate zone variability of air temperature as observed by crowdsourced citizen weather stations in Berlin, Germany. Meteorologische Zeitschrift 26, pp. 525–547. Cited by: §I-A.
  • [13] Y. Fu, K. Liu, Z. Shen, J. Deng, M. Gan, X. Liu, D. Lu, and K. Wang (2019) Mapping impervious surfaces in town–rural transition belts using china’s gf-2 imagery and object-based deep cnns. Remote Sensing 11 (3), pp. 280. Cited by: §I-B.
  • [14] J. Geletič, M. Lehnert, S. Savić, and D. Milošević (2019) Inter-/intra-zonal seasonal variability of the surface urban heat island based on local climate zones in three central european cities. Building and Environment 156, pp. 21–32. Cited by: §I-A.
  • [15] J. Geletič, M. Lehnert, P. Dobrovolnỳ, and M. Žuvela-Aloise (2019) Spatial modelling of summer climate indices based on local climate zones: expected changes in the future climate of brno, czech republic. Climatic Change 152 (3-4), pp. 487–502. Cited by: §I-A.
  • [16] N. Gorelick, M. Hancher, M. Dixon, S. Ilyushchenko, D. Thau, and R. Moore (2017) Google earth engine: planetary-scale geospatial analysis for everyone. Remote Sensing of Environment 202, pp. 18–27. Cited by: §II-C.
  • [17] J. Hidalgo, G. Dumas, V. Masson, G. Petit, B. Bechtel, E. Bocher, M. Foley, R. Schoetter, and G. Mills (2019) Comparison between local climate zones maps derived from administrative datasets and satellite observations. Urban Climate 27, pp. 64–89. Cited by: §I-B.
  • [18] H. C. Ho, K. K. Lau, R. Yu, D. Wang, J. Woo, T. C. Y. Kwok, and E. Ng (2017) Spatial variability of geriatric depression risk in a high-density city: A data-driven socio-environmental vulnerability mapping approach. International journal of environmental research and public health 14 (9), pp. 994. Cited by: §I-A.
  • [19] J. Hu, P. Ghamisi, and X. Zhu (2018) Feature extraction and selection of Sentinel-1 dual-pol data for global-scale local climate zone classification. ISPRS International Journal of Geo-Information 7 (9), pp. 379. Cited by: §I-B.
  • [20] H. Jing, Y. Feng, W. Zhang, Y. Zhang, S. Wang, K. Fu, and K. Chen (2019) Effective classification of local climate zones based on multi-source remote sensing data. In IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, pp. 2666–2669. Cited by: §I-B.
  • [21] C. B. Koc, P. Osmond, A. Peters, and M. Irger (2017) Mapping local climate zones for urban morphology classification based on airborne remote sensing data. In 2017 Joint Urban Remote Sensing Event (JURSE), pp. 1–4. Cited by: §I-A.
  • [22] C. B. Koc, P. Osmond, A. Peters, and M. Irger (2018) Understanding land surface temperature differences of local climate zones based on airborne remote sensing data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 11 (8), pp. 2724–2730. Cited by: §I-A.
  • [23] R. Kotharkar and A. Bagade (2018) Evaluating urban heat island in the critical local climate zones of an Indian city. Landscape and Urban Planning 169, pp. 92–104. Cited by: §I-A.
  • [24] G. Mills, J. Ching, L. See, B. Bechtel, and M. Foley (2015) An introduction to the WUDAPT project. In Proceedings of the 9th International Conference on Urban Climate, Toulouse, France, pp. 20–24. Cited by: §I-B.
  • [25] M. Pesaresi, G. Huadong, X. Blaes, D. Ehrlich, S. Ferri, L. Gueguen, M. Halkia, M. Kauffmann, T. Kemper, L. Lu, et al. (2013) A global human settlement layer from optical HR/VHR RS data: Concept and first results. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 6 (5), pp. 2102–2131. Cited by: §I.
  • [26] C. Qiu, L. Mou, M. Schmitt, and X. X. Zhu (2019) LCZ-based urban land cover classification from multi-seasonal Sentinel-2 images with a recurrent residual network. ISPRS J. Photogramm. Remote Sens. 154, pp. 151–162. Cited by: §I-B.
  • [27] C. Qiu, M. Schmitt, P. Ghamisi, L. Mou, and X. X. Zhu (2018) Feature importance analysis of Sentinel-2 imagery for large-scale urban local climate zone classification. In Geoscience and Remote Sensing Symposium (IGARSS), 2018 IEEE International, Note: in press Cited by: §I-B.
  • [28] C. Qiu, M. Schmitt, P. Ghamisi, and X. X. Zhu (2018) Effect of the training set configuration on Sentinel-2-based urban local climate zone classification. In The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Note: in press Cited by: §I-B.
  • [29] C. Qiu, M. Schmitt, L. Mou, and X. X. Zhu (2018) Urban local climate zone classification with a residual convolutional neural network and multi-seasonal Sentinel-2 images. In 10th IAPR Workshop on Pattern Recognition in Remote Sensing, Note: in press Cited by: §I-B.
  • [30] S. J. Quan, F. Dutt, E. Woodworth, Y. Yamagata, and P. P. Yang (2017) Local climate zone mapping for energy resilience: a fine-grained and 3D approach. Energy Procedia 105, pp. 3777–3783. Cited by: §I-A.
  • [31] J. A. Quanz, S. Ulrich, D. Fenner, A. Holtmann, and J. Eimermacher (2018) Micro-scale variability of air temperature within a local climate zone in Berlin, Germany, during summer. Climate 6 (1), pp. 5. Cited by: §I-A.
  • [32] C. Ren, R. Wang, M. Cai, Y. Xu, Y. Zheng, and E. Ng (2016) The accuracy of lcz maps generated by the world urban database and access portal tools (WUDAPT) method: a case study of Hong Kong. In 4th Int. Conf. Countermeasure Urban Heat Islands, Singapore, Cited by: §I-B.
  • [33] M. Schmitt, L. H. Hughes, C. Qiu, and X. X. Zhu (2019) Aggregating cloud-free Sentinel-2 images with google earth engine. Note: to appear Cited by: §II-C.
  • [34] I. D. Stewart, T. R. Oke, and E. S. Krayenhoff (2014) Evaluation of the ‘local climate zone’ scheme using temperature observations and model simulations. International Journal of Climatology 34 (4), pp. 1062–1080. Cited by: §I-A.
  • [35] I. D. Stewart and T. R. Oke (2012) Local climate zones for urban temperature studies. Bulletin of the American Meteorological Society 93 (12), pp. 1879–1900. Cited by: §I-A, §I-C, §II-A1, TABLE I.
  • [36] I. D. Stewart (2011) Local climate zones: origins, development, and application to urban heat island studies. Paper presented at the Annual Meeting of the American Association of Geographers. Cited by: §I-A.
  • [37] S. Sukhanov, I. Tankoyeu, J. Louradour, R. Heremans, D. Trofimova, and C. Debes (2017) Multilevel ensembling for local climate zones classification. In Geoscience and Remote Sensing Symposium (IGARSS), 2017 IEEE International, pp. 1201–1204. Cited by: §I-B.
  • [38] H. Taubenböck, T. Esch, A. Felbier, M. Wiesner, A. Roth, and S. Dech (2012) Monitoring urbanization in mega cities from space. Remote sensing of Environment 117, pp. 162–176. Cited by: §I.
  • [39] United Nations (2018) World Urbanization Prospects: 2018 Revision. United Nation. External Links: ISBN 978-92-1-151517-6 Cited by: §I.
  • [40] A. Wicki and E. Parlow (2017) Attribution of local climate zones using a multitemporal land use/land cover classification scheme. Journal of Applied Remote Sensing 11 (2), pp. 026001. Cited by: §I-A.
  • [41] S. Woo, J. Park, J. Lee, and I. So Kweon (2018) CBAM: Convolutional block attention module. In Proc. European Conference on Computer Vision, pp. 3–19. Cited by: §IV.
  • [42] WUDAPT project website. Note: http://www.wudapt.org/lcz/lcz-framework/ Accessed on: 2018-11-16. Cited by: Fig. 1, §I-B, §II-A2.
  • [43] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He (2017) Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1492–1500. Cited by: §IV.
  • [44] G. Xu, X. Zhu, N. Tapper, and B. Bechtel (2019) Urban climate zone classification using convolutional neural network and ground-level images. Progress in Physical Geography: Earth and Environment, pp. 0309133319837711. Cited by: §I-B.
  • [45] Y. Xu, C. Ren, M. Cai, N. Y. Y. Edward, and T. Wu (2017) Classification of local climate zones using ASTER and Landsat data for high-density cities. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 10 (7), pp. 3397–3405. Cited by: §I-B, §IV.
  • [46] N. Yokoya, P. Ghamisi, J. Xia, S. Sukhanov, R. Heremans, I. Tankoyeu, B. Bechtel, B. L. Saux, G. Moser, and D. Tuia (2018, DOI: 10.1109/JSTARS.2018.2799698) Open data for global multimodal land use classification: Outcome of the 2017 IEEE GRSS data fusion contest. IEEE J-STARS. Cited by: §I-A, §I-B.
  • [47] N. Yokoya, P. Ghamisi, and J. Xia (2017) Multimodal, multitemporal, and multisource global data fusion for local climate zones classification based on ensemble learning. In Geoscience and Remote Sensing Symposium (IGARSS), 2017 IEEE International, pp. 1197–1200. Cited by: §I-B.
  • [48] C. Yoo, D. Han, J. Im, and B. Bechtel (2019) Comparison between convolutional neural networks and random forest for local climate zone classification in mega urban areas using landsat images. ISPRS Journal of Photogrammetry and Remote Sensing 157, pp. 155–170. Cited by: §I-B.
  • [49] X. X. Zhu, D. Tuia, L. Mou, G. Xia, L. Zhang, F. Xu, and F. Fraundorfer (2017) Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine 5 (4), pp. 8–36. Cited by: §I-B.