Evaluating Adversarial Robustness for Deep Neural Network Interpretability using fMRI Decoding

04/23/2020
by Patrick McClure, et al.

While deep neural networks (DNNs) are increasingly used to make predictions from high-dimensional, complex data, they are widely seen as uninterpretable "black boxes", since it can be difficult to discover what input information is used to make a prediction. The ability to identify this information is particularly important for applications in cognitive neuroscience and neuroinformatics. A saliency map is a common approach for producing interpretable visualizations of the relative importance of input features for a prediction. However, many methods for creating these maps fail because they focus too heavily on the raw input or are extremely sensitive to small input noise. It is also challenging to quantitatively evaluate how well saliency maps correspond to the truly relevant input information. In this paper, we develop two quantitative evaluation procedures for saliency methods, using the fact that the Human Connectome Project (HCP) dataset contains functional magnetic resonance imaging (fMRI) data from multiple tasks per subject to create ground truth saliency maps. We then introduce an adversarial training method that makes DNNs robust to small input noise, and use these evaluations to demonstrate that it greatly improves interpretability.
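The abstract does not specify the exact saliency method or training procedure, so the following is only a minimal sketch of the two ingredients it names: a gradient-based saliency map and an adversarial training step that perturbs inputs with small noise. The framework (PyTorch), the model and optimizer handles, and the epsilon value are assumptions for illustration, not details from the paper.

```python
# Sketch (not the authors' code): an input-gradient saliency map and a single
# FGSM-style adversarial training step with a small L-infinity perturbation.
import torch
import torch.nn.functional as F


def gradient_saliency(model, x, target):
    """Absolute input gradient of the target-class score: a basic saliency map."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[:, target].sum()
    score.backward()
    return x.grad.detach().abs()


def adversarial_training_step(model, optimizer, x, y, epsilon=0.01):
    """One robust-training step: nudge inputs by a small step in the direction
    of the loss gradient, then train on the perturbed batch."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = (x + epsilon * grad.sign()).detach()

    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

The intuition, consistent with the paper's claim, is that a model trained to be robust to small input perturbations tends to have input gradients that vary less under such noise, so its saliency maps more stably reflect the features the model actually relies on.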

