Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset

03/18/2021
by Antonios Mamalakis, et al.

Despite the increasingly successful application of neural networks to many problems in the geosciences, their complex and nonlinear structure makes the interpretation of their predictions difficult, which limits model trust and prevents scientists from gaining physical insight into the problem at hand. Many methods have been introduced in the emerging field of eXplainable Artificial Intelligence (XAI) that aim to attribute a network's prediction to specific features in the input domain. XAI methods are usually assessed either with benchmark datasets (such as MNIST or ImageNet for image classification) or through deletion/insertion techniques. In either case, however, an objective, theoretically derived ground truth for the attribution is lacking, making the assessment of XAI methods largely subjective. Moreover, benchmark datasets for problems in the geosciences are rare. Here, we provide a framework, based on the use of additively separable functions, for generating attribution benchmark datasets for regression problems in which the ground truth of the attribution is known a priori. We generate a large benchmark dataset and train a fully connected network to learn the underlying function used for simulation. We then compare the attribution heatmaps estimated by different XAI methods to the ground truth in order to identify examples where specific XAI methods perform well or poorly. We believe that attribution benchmarks such as the ones introduced here are of great importance for the further application of neural networks in the geosciences and for the accurate implementation of XAI methods, which will increase model trust and assist in discovering new science.
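The key idea of the abstract — that an additively separable function yields a known ground-truth attribution — can be sketched in a few lines. The following is a minimal illustration, not the paper's actual benchmark: the component functions, dimensions, and sampling here are arbitrary choices made for demonstration. Because the target is y = Σᵢ fᵢ(xᵢ), each term fᵢ(xᵢ) is exactly the contribution of feature i to the output, and so serves as the a priori ground truth against which an XAI heatmap can be compared.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N samples with d input features.
N, d = 1000, 5
X = rng.normal(size=(N, d))

# Additively separable target: y = sum_i f_i(x_i).
# These local functions are illustrative, not those used in the paper.
funcs = [np.sin, np.tanh, np.square, np.cos, np.abs]

# contributions[n, i] = f_i(x_i) for sample n: the exact, a priori
# ground-truth attribution of feature i for that sample.
contributions = np.stack([f(X[:, i]) for i, f in enumerate(funcs)], axis=1)
y = contributions.sum(axis=1)

# A network trained to map X -> y can then have its estimated attribution
# heatmaps compared against `contributions`, sample by sample.
print(contributions.shape)   # (1000, 5)
```

By construction, summing the per-feature contributions recovers the regression target exactly, which is what makes the attribution ground truth objective rather than heuristic.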


