RES: A Robust Framework for Guiding Visual Explanation

06/27/2022
by Yuyang Gao, et al.

Despite the rapid progress of explanation techniques for modern Deep Neural Networks (DNNs), where the main focus has been on how to generate explanations, more advanced research questions that examine the quality of the explanations themselves (e.g., whether the explanations are accurate) and improve that quality (e.g., how to adjust the model to generate more accurate explanations when they are inaccurate) remain relatively under-explored. To guide models toward better explanations, techniques for explanation supervision, which add supervision signals on the model's explanations, have started to show promising effects on improving both the generalizability and the intrinsic interpretability of DNNs. However, research on supervising explanations, especially in vision-based applications where explanations are represented as saliency maps, is still in an early stage due to several inherent challenges: 1) the inaccurate boundaries of human explanation annotations, 2) the incomplete regions of human explanation annotations, and 3) the inconsistent data distributions between human annotations and model explanation maps. To address these challenges, we propose a generic RES framework for guiding visual explanation, built on a novel objective that handles inaccurate boundaries, incomplete regions, and inconsistent distributions of human annotations, together with a theoretical justification of model generalizability. Extensive experiments on two real-world image datasets demonstrate the effectiveness of the proposed framework in enhancing both the reasonability of the explanations and the performance of the backbone DNN models.
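
To make the idea of explanation supervision concrete, here is a minimal PyTorch sketch of a training objective that combines the usual task loss with a loss aligning the model's saliency map to a human annotation mask. The input-gradient saliency, the loss weight alpha, and the convention of ignoring unannotated pixels are illustrative assumptions only; the RES objective proposed in the paper is more elaborate and additionally models boundary noise and the distribution mismatch between human annotations and model explanation maps.

```python
import torch
import torch.nn.functional as F


def saliency_map(model, x):
    # Input-gradient saliency: |d score / d x|, averaged over color channels.
    # create_graph=True so the explanation loss can be backpropagated through.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.max(dim=1).values.sum()  # score of each sample's top class
    (grad,) = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs().mean(dim=1)  # shape (B, H, W)


def explanation_supervised_loss(model, x, y, human_mask, alpha=0.5):
    # human_mask: (B, H, W) with 1 = annotated important, 0 = annotated
    # unimportant, -1 = unlabeled. Ignoring unlabeled pixels is one simple
    # way to cope with incomplete annotation regions (an assumption here,
    # not the paper's exact treatment).
    task_loss = F.cross_entropy(model(x), y)
    sal = saliency_map(model, x)
    sal = sal / (sal.amax(dim=(1, 2), keepdim=True) + 1e-8)  # scale to [0, 1]
    labeled = human_mask >= 0
    exp_loss = F.binary_cross_entropy(sal[labeled], human_mask[labeled].float())
    return task_loss + alpha * exp_loss
```

In training, one would call explanation_supervised_loss(model, images, labels, masks) in place of the plain cross-entropy loss; alpha trades off prediction accuracy against explanation alignment.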


Related research

09/08/2021 - Diagnostics-Guided Explanation Generation
Explanations shed light on a machine learning model's rationales and can...

06/09/2023 - Overcoming Adversarial Attacks for Human-in-the-Loop Applications
Including human analysis has the potential to positively affect the robu...

12/04/2018 - Learning to Explain with Complemental Examples
This paper addresses the generation of explanations with visual examples...

09/11/2023 - Distance-Aware eXplanation Based Learning
eXplanation Based Learning (XBL) is an interactive learning approach tha...

07/01/2020 - Unifying Model Explainability and Robustness via Machine-Checkable Concepts
As deep neural networks (DNNs) get adopted in an ever-increasing number ...

03/20/2018 - Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Issues regarding explainable AI involve four components: users, laws & r...

08/16/2023 - Interpretability Benchmark for Evaluating Spatial Misalignment of Prototypical Parts Explanations
Prototypical parts-based networks are becoming increasingly popular due ...
