Saliency Learning: Teaching the Model Where to Pay Attention

02/22/2019 · by Reza Ghaeini, et al.

Deep learning has emerged as a compelling solution to many NLP tasks, achieving remarkable performance. However, due to their opacity, such models are hard to interpret and trust. Recent work on explaining deep models has introduced approaches that provide insight into a model's behavior and predictions, which is helpful for judging how reliable a given prediction is. However, such methods do not actually improve the model's reliability. In this paper, we teach our models to make the right prediction for the right reason by providing an explanation training signal and ensuring that the model's explanation aligns with the ground-truth explanation. Experimental results on multiple tasks and datasets demonstrate the effectiveness of the proposed method, which produces more reliable predictions while delivering better results than traditionally trained models.
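
To make the idea concrete, here is a minimal sketch of how such an explanation training signal could be combined with an ordinary classification loss. It assumes a PyTorch setup, gradient-based token saliency, binary ground-truth rationale masks, and a simple penalty that discourages saliency on tokens the annotation marks as irrelevant; these are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of saliency-guided training. Assumptions (not from the paper):
# PyTorch, a text classifier whose logits are computed from input embeddings
# with gradient tracking, binary ground-truth saliency masks over tokens, and
# a simple L1-style alignment penalty.
import torch


def saliency_training_loss(embeddings, logits, targets, saliency_mask,
                           task_loss_fn, alpha=1.0):
    """Combine the usual task loss with an explanation-alignment term.

    embeddings:    (batch, seq_len, dim) input embeddings, requires_grad=True
    logits:        (batch, num_classes) outputs computed from `embeddings`
    targets:       (batch,) gold labels
    saliency_mask: (batch, seq_len), 1 where a token should matter, 0 elsewhere
    """
    task_loss = task_loss_fn(logits, targets)

    # Gradient-based saliency: how strongly each token's embedding influences
    # the score of the gold class.
    gold_scores = logits.gather(1, targets.unsqueeze(1)).sum()
    grads = torch.autograd.grad(gold_scores, embeddings, create_graph=True)[0]
    token_saliency = grads.norm(dim=-1)  # (batch, seq_len)

    # Explanation training signal: penalize saliency on tokens that the
    # ground-truth explanation marks as irrelevant, pushing the model's
    # explanation toward the annotated one.
    alignment_penalty = (token_saliency * (1.0 - saliency_mask)).mean()

    return task_loss + alpha * alignment_penalty
```

In such a setup, `task_loss_fn` would typically be cross-entropy, and `alpha` trades off prediction accuracy against explanation alignment.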

Related research

12/07/2021 · Training Deep Models to be Explained with Fewer Examples
Although deep models achieve high predictive performance, it is difficul...

05/26/2019 · Why do These Match? Explaining the Behavior of Image Similarity Models
Explaining a deep learning model can help users understand its behavior ...

06/24/2022 · Robustness of Explanation Methods for NLP Models
Explanation methods have emerged as an important tool to highlight the f...

05/29/2021 · EDDA: Explanation-driven Data Augmentation to Improve Model and Explanation Alignment
Recent years have seen the introduction of a range of methods for post-h...

08/29/2019 · Human-grounded Evaluations of Explanation Methods for Text Classification
Due to the black-box nature of deep learning models, methods for explain...

10/04/2022 · Explanation-by-Example Based on Item Response Theory
Intelligent systems that use Machine Learning classification algorithms ...

12/16/2020 · Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration
Explaining the predictions of AI models is paramount in safety-critical ...
