Contrastive Reasoning in Neural Networks
Neural networks represent data as projections onto trained weights in a high-dimensional manifold. The trained weights act as a knowledge base consisting of causal class dependencies. Inference built on features that identify these dependencies is termed feedforward inference. Such inference mechanisms are justified by classical cause-to-effect inductive reasoning models. Inductive-reasoning-based feedforward inference is widely used due to its mathematical simplicity and operational ease. Nevertheless, feedforward models do not generalize well to untrained situations. To alleviate this generalization challenge, we propose an effect-to-cause inference model that reasons abductively. Here, the features represent the change from existing weight dependencies required to produce a given effect. We term this change contrast, and the ensuing reasoning mechanism contrastive reasoning. In this paper, we formalize the structure of contrastive reasoning and propose a methodology to extract a neural network's notion of contrast. We demonstrate the value of contrastive reasoning in two stages of a neural network's reasoning pipeline: inferring and visually explaining decisions for the application of object recognition. We illustrate the value of contrastively recognizing images under distortion by reporting an accuracy improvement of 3.47% under the proposed contrastive framework on the CIFAR-10-C, noisy STL-10, and VisDA datasets.
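The contrast idea above can be sketched in a toy setting: instead of reading the class off the feedforward logits, hypothesize each class as the "effect" and measure how much the weights would have to change (here, the norm of the cross-entropy gradient) to support that hypothesis, then pick the class requiring the least change. This is a minimal illustrative sketch, not the paper's method: the linear model, the gradient-norm measure of contrast, and all function names are assumptions made for illustration.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the logit vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def feedforward_predict(W, x):
    # Classical cause-to-effect inference: argmax of the projection W @ x.
    return int(np.argmax(W @ x))

def contrastive_predict(W, x):
    # Effect-to-cause sketch: for each hypothesized class c, the "contrast"
    # is the size of the weight update that asserting c would demand,
    # measured as the Frobenius norm of the cross-entropy gradient dL/dW.
    p = softmax(W @ x)
    contrasts = []
    for c in range(W.shape[0]):
        target = np.zeros_like(p)
        target[c] = 1.0
        grad = np.outer(p - target, x)  # dL/dW for cross-entropy with target c
        contrasts.append(np.linalg.norm(grad))
    # The class demanding the smallest change is the abductive best explanation.
    return int(np.argmin(contrasts))

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))  # toy 3-class linear "network"
x = rng.normal(size=5)
print(feedforward_predict(W, x), contrastive_predict(W, x))
```

For a single linear layer the two rules agree (the gradient norm is minimized by the most probable class); the paper's point is that in deep networks under distortion the extracted contrast can carry information the feedforward projection misses.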