
Locally Linear Attributes of ReLU Neural Networks
A ReLU neural network determines a continuous piecewise linear map fr...

Neural Speed Reading via Skim-RNN
Inspired by the principles of speed reading, we introduce Skim-RNN, a re...

Input-to-Output Gate to Improve RNN Language Models
This paper proposes a reinforcing method that refines the output layers ...

Training Input-Output Recurrent Neural Networks through Spectral Methods
We consider the problem of training input-output recurrent neural networ...

Attention-based Conditioning Methods for External Knowledge Integration
In this paper, we present a novel approach for incorporating external kn...

Survey on the attention based RNN model and its applications in computer vision
Recurrent neural networks (RNNs) can be used to solve the sequence to...

On Attribution of Recurrent Neural Network Predictions via Additive Decomposition
RNN models have achieved state-of-the-art performance in a wide rang...

Input Switched Affine Networks: An RNN Architecture Designed for Interpretability
There exist many problem domains where the interpretability of neural network models is essential for deployment. Here we introduce a recurrent architecture composed of input-switched affine transformations; in other words, an RNN without any explicit nonlinearities, but with input-dependent recurrent weights. This simple form allows the RNN to be analyzed via straightforward linear methods: we can exactly characterize the linear contribution of each input to the model predictions; we can use a change-of-basis to disentangle input, output, and computational hidden unit subspaces; we can fully reverse-engineer the architecture's solution to a simple task. Despite this ease of interpretation, the input-switched affine network achieves reasonable performance on a text modeling task, and allows greater computational efficiency than networks with standard nonlinearities.
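The update and its exact per-input attribution can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the vocabulary size, hidden dimension, random weights, and the helper names `isan_forward` and `contributions` are all assumptions made up for the example. The state update is `h = W[x] @ h + b[x]` with one affine pair `(W[x], b[x])` per input symbol, so by linearity the final state decomposes exactly into the propagated initial state plus one additive term per input token.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 5, 4  # hypothetical vocabulary size and hidden dimension

# One affine transform (W, b) per input symbol; no nonlinearity anywhere.
W = rng.normal(scale=0.3, size=(V, d, d))
b = rng.normal(scale=0.1, size=(V, d))

def isan_forward(tokens, h0):
    """Run the input-switched affine update over a token sequence."""
    h = h0
    for x in tokens:
        h = W[x] @ h + b[x]  # recurrent weights selected by the input
    return h

def contributions(tokens, h0):
    """Exact linear decomposition of the final state.

    Returns the initial-state term and, for each token t, the bias
    b[x_t] propagated through all later transforms W[x_{t+1}]..W[x_T].
    """
    T = len(tokens)
    terms = []
    for t in range(T):
        m = np.eye(d)
        for s in range(t + 1, T):
            m = W[tokens[s]] @ m
        terms.append(m @ b[tokens[t]])
    m0 = np.eye(d)
    for x in tokens:
        m0 = W[x] @ m0
    return m0 @ h0, terms

tokens = [2, 0, 3, 1]
h0 = rng.normal(size=d)
h_final = isan_forward(tokens, h0)
init_term, terms = contributions(tokens, h0)
# By linearity: h_final == init_term + sum(terms), exactly.
```

Because the composition of affine maps is affine, the decomposition holds with no approximation; this is the "exactly characterize the linear contribution of each input" property the abstract describes.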