Learning from Multimodal and Multitemporal Earth Observation Data for Building Damage Mapping

by Bruno Adriano et al.

Earth observation (EO) technologies, such as optical imaging and synthetic aperture radar (SAR), provide excellent means to continuously monitor ever-growing urban environments. Notably, in the case of large-scale disasters (e.g., tsunamis and earthquakes), in which a response is highly time-critical, images from both data modalities can complement each other to accurately convey the full damage condition in the disaster's aftermath. However, because of factors such as weather and satellite coverage, it is often uncertain which data modality will be the first available for rapid disaster response efforts. Hence, novel methodologies that can utilize all accessible EO datasets are essential for disaster management. In this study, we developed a global multisensor and multitemporal dataset for building damage mapping. It includes building damage characteristics from three disaster types, namely earthquakes, tsunamis, and typhoons, and covers three building damage categories. The dataset contains high-resolution optical imagery and high-to-moderate-resolution multiband SAR data acquired before and after each disaster. Using this comprehensive dataset, we analyzed five data modality scenarios for damage mapping: single-mode (optical-only and SAR-only), cross-modal (pre-disaster optical and post-disaster SAR), and mode fusion scenarios. We defined a damage mapping framework for the semantic segmentation of damaged buildings based on a deep convolutional neural network and compared our approach against a state-of-the-art baseline model for damage mapping. The results indicate that our dataset, together with a deep learning network, enables acceptable predictions for all data modality scenarios.
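The data modality scenarios described above differ mainly in how the input tensor presented to the segmentation network is assembled from the co-registered pre- and post-event images. The following is a minimal illustrative sketch (not the authors' code; the tile size, band counts, and scenario names are assumptions) of channel-wise stacking for the single-mode, cross-modal, and fusion cases:

```python
import numpy as np

H, W = 256, 256  # assumed tile size, for illustration only

# Hypothetical co-registered inputs for one image tile
pre_optical  = np.random.rand(3, H, W)  # optical bands before the disaster
post_optical = np.random.rand(3, H, W)  # optical bands after the disaster
pre_sar      = np.random.rand(1, H, W)  # single SAR band before the disaster
post_sar     = np.random.rand(1, H, W)  # single SAR band after the disaster

def build_input(scenario):
    """Assemble the network input for one data modality scenario."""
    if scenario == "optical":      # single-mode: pre/post optical
        parts = [pre_optical, post_optical]
    elif scenario == "sar":        # single-mode: pre/post SAR
        parts = [pre_sar, post_sar]
    elif scenario == "cross":      # cross-modal: pre optical + post SAR
        parts = [pre_optical, post_sar]
    elif scenario == "fusion":     # mode fusion: all modalities stacked
        parts = [pre_optical, post_optical, pre_sar, post_sar]
    else:
        raise ValueError(f"unknown scenario: {scenario}")
    return np.concatenate(parts, axis=0)  # stack along the channel axis

# Channel counts per scenario: optical 6, SAR 2, cross-modal 4, fusion 8
for name, channels in [("optical", 6), ("sar", 2), ("cross", 4), ("fusion", 8)]:
    assert build_input(name).shape == (channels, H, W)
```

Only the first convolutional layer of the segmentation network needs to change between scenarios, since it must accept the corresponding number of input channels.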




