Reflective-Net: Learning from Explanations

11/27/2020
by Johannes Schneider, et al.

Humans possess a remarkable capability to make fast, intuitive decisions, but also to self-reflect, i.e., to explain decisions to themselves, and to learn efficiently from explanations given by others. This work provides first steps toward mimicking this process by capitalizing on explanations produced by existing explanation methods, e.g., Grad-CAM. Learning from such explanations, combined with conventional labeled data, yields significant improvements in classification accuracy and training time.
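As a point of reference for the explanation method the abstract names, the following is a minimal NumPy sketch of the core Grad-CAM computation: each feature map of a convolutional layer is weighted by the spatial average of the class-score gradient for that channel, the weighted maps are summed, and a ReLU keeps only positively contributing regions. The inputs `feature_maps` and `gradients` are assumed (hypothetically) to come from a CNN's last convolutional layer; obtaining them is framework-specific and omitted here.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap.

    feature_maps: array of shape (C, H, W) -- activations of a conv layer.
    gradients:    array of shape (C, H, W) -- gradients of the target class
                  score w.r.t. those activations.
    Returns an (H, W) map normalized to [0, 1].
    """
    # One importance weight per channel: global average pooling of gradients.
    weights = gradients.mean(axis=(1, 2))               # shape (C,)
    # Weighted sum of the feature maps across channels.
    cam = np.tensordot(weights, feature_maps, axes=1)   # shape (H, W)
    # ReLU: keep only regions with a positive influence on the class score.
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam = cam / cam.max()                           # normalize to [0, 1]
    return cam
```

In Reflective-Net's setting, such heatmaps would serve as an additional training signal alongside the ordinary class labels.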
