CHALLENGER: Training with Attribution Maps

05/30/2022
by Christian Tomani, et al.

We show that utilizing attribution maps for training neural networks can improve the regularization of models and thus increase performance. Regularization is key in deep learning, especially when training complex models on relatively small datasets. In order to understand the inner workings of neural networks, attribution methods such as Layer-wise Relevance Propagation (LRP) have been extensively studied, particularly for interpreting the relevance of input features. We introduce Challenger, a module that leverages the explanatory power of attribution maps to manipulate particularly relevant input patterns, thereby exposing and subsequently resolving regions of ambiguity in separating classes on the ground-truth data manifold, an issue that arises particularly when training models on rather small datasets. Our Challenger module increases model performance by building more diverse filters within the network and can be applied to any input data domain. We demonstrate that our approach results in substantially better classification as well as calibration performance on datasets ranging from only a few samples up to thousands of samples. In particular, we show that our generic, domain-independent approach yields state-of-the-art results in vision, natural language processing, and on time series tasks.
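The abstract describes Challenger only at a high level, so the sketch below is a loose illustration of attribution-guided input manipulation, not the authors' algorithm: it uses gradient-times-input as a simple stand-in for LRP and suppresses the most relevant input features to produce "challenged" training samples. The function name challenger_batch, the mask_fraction parameter, and the masking strategy are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

def challenger_batch(model, x, y, mask_fraction=0.1):
    """Create 'challenged' inputs by suppressing the most relevant features.

    Attribution here is plain gradient * input, used as a simple stand-in
    for LRP; mask_fraction controls how many features are perturbed.
    This is an illustrative sketch, not the method from the paper.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    relevance = (grad * x).abs().flatten(1)      # per-sample attribution map
    k = max(1, int(mask_fraction * relevance.shape[1]))
    topk = relevance.topk(k, dim=1).indices      # most relevant feature indices
    mask = torch.ones_like(relevance)
    mask.scatter_(1, topk, 0.0)                  # zero out the top-k features
    return (x * mask.view_as(x)).detach()

# Hypothetical usage inside a training loop: train on a mix of original
# and challenged samples so the network must resolve ambiguous regions.
# for x, y in loader:
#     x_chal = challenger_batch(model, x, y)
#     loss = nn.functional.cross_entropy(model(torch.cat([x, x_chal])),
#                                        torch.cat([y, y]))
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```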


research · 09/19/2019
Testing the robustness of attribution methods for convolutional neural networks in MRI-based Alzheimer's disease classification
Attribution methods are an easy to use tool for investigating and valida...

research · 10/14/2020
Learning Propagation Rules for Attribution Map Generation
Prior gradient-based attribution-map methods rely on handcrafted propaga...

research · 01/11/2023
Padding Module: Learning the Padding in Deep Neural Networks
During the last decades, many studies have been dedicated to improving t...

research · 10/01/2020
Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation
As an emerging field in Machine Learning, Explainable AI (XAI) has been ...

research · 10/14/2020
FAR: A General Framework for Attributional Robustness
Attribution maps have gained popularity as tools for explaining neural n...

research · 02/23/2022
Training Characteristic Functions with Reinforcement Learning: XAI-methods play Connect Four
One of the goals of Explainable AI (XAI) is to determine which input com...

research · 07/01/2021
Combining Feature and Instance Attribution to Detect Artifacts
Training the large deep neural networks that dominate NLP requires large...
