XAI Model for Accurate and Interpretable Landslide Susceptibility

01/18/2022
by Khaled Youssef et al.

Landslides are notoriously difficult to predict. Deep neural network (DNN) models are more accurate than statistical models, but they are uninterpretable, making it difficult to extract mechanistic information about landslide controls in the modeled region. We developed an explainable AI (XAI) model for assessing landslide susceptibility that is computationally simple yet highly accurate, and validated it on three regions of the eastern Himalaya that are highly susceptible to landslides. The underlying networks (SNNs) are computationally far simpler than DNNs, yet achieve similar performance while offering insight into the relative importance of landslide control factors in each region. Our analysis highlighted the importance of (1) the product of slope and precipitation rate and (2) topographic aspects that contribute to high susceptibility. These identified controls suggest that strong slope-climate couplings, along with microclimates, play dominant roles in eastern Himalayan landslides. The model outperforms both physically based stability models and statistical models.


