Introspective Learning: A Two-Stage Approach for Inference in Neural Networks

09/17/2022
by Mohit Prabhushankar et al.

In this paper, we advocate for two stages in a neural network's decision-making process. The first is the existing feed-forward inference framework, in which patterns in the given data are sensed and associated with previously learned patterns. The second stage is a slower reflection stage, in which we ask the network to reflect on its feed-forward decision by considering and evaluating all available choices. Together, we term the two stages introspective learning. We use gradients of trained neural networks as a measurement of this reflection. A simple three-layered Multi-Layer Perceptron is used as the second stage, predicting based on all extracted gradient features. We perceptually visualize the post-hoc explanations from both stages to provide a visual grounding for introspection. For the application of recognition, we show that an introspective network is 4% more robust and 42% less prone to calibration errors when generalizing to noisy data. We also illustrate the value of introspective networks in downstream tasks that require generalizability and calibration, including active learning, out-of-distribution detection, and uncertainty estimation. Finally, we ground the proposed machine introspection in human introspection for the application of image quality assessment.
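The two-stage pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses a hypothetical untrained linear classifier with manually derived gradients in place of a trained deep network, and a random, untrained MLP as the second stage, purely to show how gradient features for every candidate label become the input to the introspective predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: feed-forward inference (hypothetical tiny linear classifier) ---
n_classes, n_features = 4, 8
W = rng.normal(size=(n_classes, n_features))  # stand-in for trained weights
x = rng.normal(size=n_features)               # one input sample

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = W @ x
p = softmax(logits)
feedforward_pred = int(np.argmax(p))

# --- Stage 2: reflection via gradients ---
# For every candidate label c, ask "what would have to change for the
# network to decide c?" and record the gradient of the cross-entropy
# loss w.r.t. the weights. Concatenating the per-label gradients gives
# the introspective feature vector.
grad_feats = []
for c in range(n_classes):
    onehot = np.zeros(n_classes)
    onehot[c] = 1.0
    dz = p - onehot          # d(cross-entropy)/d(logits) for label c
    dW = np.outer(dz, x)     # d(loss)/dW for the linear layer
    grad_feats.append(dW.ravel())
introspective_features = np.concatenate(grad_feats)

# A small MLP (random, untrained weights here) maps the gradient
# features to the final introspective prediction.
h = np.tanh(rng.normal(size=(16, introspective_features.size)) @ introspective_features)
out = rng.normal(size=(n_classes, 16)) @ h
introspective_pred = int(np.argmax(out))
```

In the paper's setting, the gradient features come from a trained deep network and the second-stage MLP is itself trained on those features; the sketch only fixes the data flow of the two stages.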


Related research

- Contrastive Reasoning in Neural Networks (03/23/2021): Neural networks represent data as projections on trained weights in a hi...
- A Supervised Modified Hebbian Learning Method On Feed-forward Neural Networks (12/11/2019): In this paper, we present a new supervised learning algorithm that is ba...
- Feed-forward Uncertainty Propagation in Belief and Neural Networks (03/28/2018): We propose a feed-forward inference method applicable to belief and neur...
- Transformer Feed-Forward Layers Are Key-Value Memories (12/29/2020): Feed-forward layers constitute two-thirds of a transformer model's param...
- Stochastic Surprisal: An inferential measurement of Free Energy in Neural Networks (02/11/2023): This paper conjectures and validates a framework that allows for action ...
- Depth Uncertainty in Neural Networks (06/15/2020): Existing methods for estimating uncertainty in deep learning tend to req...
- PARTIME: Scalable and Parallel Processing Over Time with Deep Neural Networks (10/17/2022): In this paper, we present PARTIME, a software library written in Python ...
