
Reflective-Net: Learning from Explanations

11/27/2020
by   Johannes Schneider, et al.

Humans possess a remarkable capability to make fast, intuitive decisions, but also to self-reflect, i.e., to explain their decisions to themselves, and to learn efficiently from explanations given by others. This work takes a first step toward mimicking this process by capitalizing on explanations generated by an existing explanation method, Grad-CAM. Learning from these explanations, combined with conventional labeled data, yields significant improvements in classification accuracy and training time.
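The explanations the abstract refers to are Grad-CAM heatmaps: each class-activation map is a ReLU over a weighted sum of convolutional feature maps, where each channel's weight is the global average of the gradient of the target class score with respect to that channel. Below is a minimal, framework-free sketch of that computation; the function name `grad_cam` and the toy list-of-lists inputs are illustrative assumptions, not code from the paper, and a real pipeline would extract activations and gradients from a trained CNN.

```python
def grad_cam(activations, gradients):
    """Sketch of the Grad-CAM heatmap computation.

    activations: K feature maps, each an H x W list of lists (last conv layer).
    gradients:   gradients of the target class score w.r.t. those maps,
                 same shape as `activations`.
    Returns an H x W heatmap: ReLU(sum_k alpha_k * A_k), where
    alpha_k is the global-average-pooled gradient of channel k.
    """
    k_channels = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # Channel importance weights alpha_k: global average pooling of gradients.
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    # Weighted combination of feature maps, then ReLU to keep positive evidence.
    return [[max(0.0, sum(alphas[k] * activations[k][i][j]
                          for k in range(k_channels)))
             for j in range(w)]
            for i in range(h)]
```

In the paper's setting, such heatmaps would be fed back as an additional training signal alongside the usual labels.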
