Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience

02/07/2022
by Antonios Mamalakis, et al.

Convolutional neural networks (CNNs) have recently attracted great attention in geoscience for their ability to capture non-linear system behavior and extract predictive spatiotemporal patterns. Given their black-box nature, however, and the importance of prediction explainability, methods of explainable artificial intelligence (XAI) are gaining popularity as a means to explain the CNN decision-making strategy. Here, we establish an intercomparison of some of the most popular XAI methods and investigate their fidelity in explaining CNN decisions for geoscientific applications. Our goal is to raise awareness of the theoretical limitations of these methods and to gain insight into their relative strengths and weaknesses to help guide best practices. The considered XAI methods are first applied to an idealized attribution benchmark, in which the ground truth of the network's explanation is known a priori, so that their performance can be assessed objectively. Second, we apply XAI in a climate-related prediction setting, namely to explain a CNN that is trained to predict the number of atmospheric rivers in daily snapshots of climate simulations. Our results highlight several important issues of XAI methods (e.g., gradient shattering, inability to distinguish the sign of attribution, ignorance of zero input) that have previously been overlooked in our field and that, if not treated cautiously, may lead to a distorted picture of the CNN decision-making strategy. We envision that our analysis will motivate further investigation into XAI fidelity and will encourage a cautious implementation of XAI in geoscience, which in turn can lead to further exploitation of CNNs and deep learning for prediction problems.
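To make the benchmarking idea concrete, the sketch below illustrates the general recipe the abstract describes: build a synthetic task whose ground-truth attribution is known a priori, train a small network, and score an attribution method against that ground truth. This is not the authors' code or benchmark; it assumes PyTorch, uses a toy linear data-generating function (so the true attribution of input i for a sample x is simply w_i * x_i), and uses gradient-times-input as a stand-in for the family of XAI methods the paper intercompares. All names and design choices here are illustrative.

```python
# Hedged sketch, NOT the paper's benchmark: a synthetic regression task with a
# known, per-input ground-truth attribution, used to score one XAI method.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n_samples, n_inputs = 5000, 20
w_true = rng.normal(size=n_inputs)                      # known weights
X = rng.normal(size=(n_samples, n_inputs)).astype(np.float32)
y = (X * w_true).sum(axis=1, keepdims=True).astype(np.float32)

# A small fully connected network stands in for the CNN.
model = nn.Sequential(nn.Linear(n_inputs, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X_t, y_t = torch.from_numpy(X), torch.from_numpy(y)
for _ in range(500):                                    # short training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_t), y_t)
    loss.backward()
    opt.step()

# Gradient-times-input attribution for one sample.
x = X_t[:1].clone().requires_grad_(True)
model(x).backward()
attr = (x.grad * x).detach().numpy().ravel()            # estimated attribution
truth = w_true * X[0]                                   # known ground truth
print("correlation with ground truth:", np.corrcoef(attr, truth)[0, 1])
```

Because the data-generating function is linear, the ground-truth attribution is unambiguous, which is what allows an objective fidelity score; the paper's idealized benchmark follows the same logic with a more realistic, nonlinear construction.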


Related research

08/02/2022
Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning
Artificial intelligence holds great promise in medical imaging, especial...

08/19/2022
Carefully choose the baseline: Lessons learned from applying XAI attribution methods for regression tasks in geoscience
Methods of eXplainable Artificial Intelligence (XAI) are used in geoscie...

07/26/2022
From Interpretable Filters to Predictions of Convolutional Neural Networks with Explainable Artificial Intelligence
Convolutional neural networks (CNN) are known for their excellent featur...

12/12/2022
Utilizing Mutations to Evaluate Interpretability of Neural Networks on Genomic Data
Even though deep neural networks (DNNs) achieve state-of-the-art results...

10/01/2020
Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation
As an emerging field in Machine Learning, Explainable AI (XAI) has been ...

03/18/2021
Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
Despite the increasingly successful application of neural networks to ma...

03/01/2023
Finding the right XAI method – A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science
Explainable artificial intelligence (XAI) methods shed light on the pred...
