Contrastive Reasoning in Neural Networks

by Mohit Prabhushankar, et al.

Neural networks represent data as projections on trained weights in a high-dimensional manifold. The trained weights act as a knowledge base consisting of causal class dependencies. Inference built on features that identify these dependencies is termed feed-forward inference. Such inference mechanisms are justified by classical cause-to-effect inductive reasoning models. Inductive reasoning-based feed-forward inference is widely used due to its mathematical simplicity and operational ease. Nevertheless, feed-forward models do not generalize well to untrained situations. To alleviate this generalization challenge, we propose using an effect-to-cause inference model that reasons abductively. Here, the features represent the change from existing weight dependencies given a certain effect. We term this change as contrast and the ensuing reasoning mechanism as contrastive reasoning. In this paper, we formalize the structure of contrastive reasoning and propose a methodology to extract a neural network's notion of contrast. We demonstrate the value of contrastive reasoning in two stages of a neural network's reasoning pipeline: in inferring and visually explaining decisions for the application of object recognition. We illustrate the value of contrastively recognizing images under distortions by reporting an improvement of 3.47% in accuracy under the proposed contrastive framework on the CIFAR-10C, noisy STL-10, and VisDA datasets.
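To make the notion of contrast concrete, the following is a minimal illustrative sketch (not the authors' implementation): for a linear softmax classifier, the "contrast" toward a class q is taken here as the gradient of the loss that would make the network answer q instead of its feed-forward prediction, backpropagated to the trained weights. The weight matrix, input, and `contrast` helper are all hypothetical stand-ins for this toy setting.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))      # stand-in for trained weights: 3 classes, 4 features
x = rng.normal(size=4)           # one input sample

# Feed-forward (inductive, cause-to-effect) inference
logits = W @ x
probs = np.exp(logits - logits.max())
probs /= probs.sum()
p = int(np.argmax(probs))        # feed-forward prediction

def contrast(q):
    """Gradient of the cross-entropy loss with target class q w.r.t. W:
    the change to the knowledge base needed for the answer to be q."""
    y = np.zeros(3)
    y[q] = 1.0
    return np.outer(probs - y, x)   # d(loss)/dW for softmax + cross-entropy

# Abductive (effect-to-cause) reading: the class requiring the smallest
# change to the weights is the one the network already believes in,
# so the minimum-norm contrast recovers the feed-forward prediction.
norms = [np.linalg.norm(contrast(q)) for q in range(3)]
print(p, int(np.argmin(norms)))
```

In this toy setting the two inference routes agree by construction, since the gradient norm ‖(probs − y_q) ⊗ x‖ is minimized exactly at the predicted class; the paper's contribution is in using the contrast features themselves, rather than the forward activations, for recognition and explanation.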


