Neural Network based on Automatic Differentiation Transformation of Numeric Iterate-to-Fixedpoint

10/30/2021
by Mansura Habiba, et al.

This work proposes a Neural Network model that can control its depth using an iterate-to-fixed-point operator. The architecture starts with a standard layered network but adds connections from the current layer back to earlier layers, together with a gate that keeps them inactive under most circumstances. These “temporal wormhole” connections create a shortcut that allows the network to use information available at deeper layers and redo earlier computations with modulated inputs. End-to-end training is accomplished using the appropriate calculations for a numeric iterate-to-fixed-point operator. In the typical case, where the “wormhole” connections are inactive, this is inexpensive; when they are active, the network takes longer to settle and the gradient calculation becomes more laborious, with an effect similar to making the network deeper. In contrast to the existing skip-connection concept, the proposed technique enables information to flow both up and down the network. Furthermore, this flow of information appears analogous to the afferent and efferent flow of information through layers of processing in the brain. We evaluate models that use this novel mechanism on different long-term dependency tasks. The results are competitive with other studies and show that the proposed model contributes significantly to overcoming the vanishing gradient problem of traditional deep learning models. At the same time, training time is significantly reduced, as “easy” input cases are processed more quickly than “difficult” ones.
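To make the mechanism concrete, below is a minimal JAX sketch of a gated iterate-to-fixed-point layer with implicit differentiation through the fixed point. It follows the standard fixed-point custom-VJP recipe rather than the authors' implementation; the step function `gated_step`, the parameter names (`W_in`, `W_fb`, `w_gate`, `b`), the convergence tolerance, and the fixed backward iteration budget are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a gated iterate-to-fixed-point
# layer trained end-to-end via implicit differentiation of the fixed point.
from functools import partial

import jax
import jax.numpy as jnp
from jax import lax


def gated_step(params, x, h):
    """One pass through the layered network.

    `h` is the deeper-layer activation fed back through the "wormhole"
    connection; a sigmoid gate keeps that feedback near zero for easy
    inputs, so the iteration settles almost immediately.
    """
    W_in, W_fb, w_gate, b = params
    gate = jax.nn.sigmoid(h @ w_gate)            # ~0 => feedback inactive
    return jnp.tanh(x @ W_in + gate * (h @ W_fb) + b)


@partial(jax.custom_vjp, nondiff_argnums=(0,))
def fixed_point(f, params, x, h0):
    """Iterate h <- f(params, x, h) until the activations stop changing."""
    def cond(carry):
        h_prev, h = carry
        return jnp.max(jnp.abs(h - h_prev)) > 1e-5   # assumed tolerance

    def body(carry):
        _, h = carry
        return h, f(params, x, h)

    _, h_star = lax.while_loop(cond, body, (h0, f(params, x, h0)))
    return h_star


def fixed_point_fwd(f, params, x, h0):
    h_star = fixed_point(f, params, x, h0)
    return h_star, (params, x, h_star)


def fixed_point_bwd(f, res, h_bar):
    # Implicit differentiation: solve u = h_bar + (df/dh)^T u by iteration,
    # then push u back through the parameter and input derivatives.
    params, x, h_star = res
    _, vjp_h = jax.vjp(lambda h: f(params, x, h), h_star)
    u = h_bar
    for _ in range(50):                              # fixed budget for the sketch
        u = h_bar + vjp_h(u)[0]
    _, vjp_px = jax.vjp(lambda p, xx: f(p, xx, h_star), params, x)
    p_bar, x_bar = vjp_px(u)
    return p_bar, x_bar, jnp.zeros_like(h_star)      # no gradient w.r.t. h0


fixed_point.defvjp(fixed_point_fwd, fixed_point_bwd)
```

Differentiating through the solved fixed point, rather than through the unrolled iterations, keeps the backward pass independent of how many iterations the forward pass needed, which matches the idea that “easy” inputs (gate closed, fast convergence) stay cheap while “difficult” inputs effectively deepen the network. A short illustrative usage, with made-up shapes:

```python
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
d_in, d_h = 8, 16
params = (0.1 * jax.random.normal(k1, (d_in, d_h)),  # W_in
          0.1 * jax.random.normal(k2, (d_h, d_h)),   # W_fb  (wormhole weights)
          0.1 * jax.random.normal(k3, (d_h,)),       # w_gate
          jnp.zeros(d_h))                            # b
x = jnp.ones(d_in)
loss = lambda p: jnp.sum(fixed_point(gated_step, p, x, jnp.zeros(d_h)) ** 2)
grads = jax.grad(loss)(params)                       # flows through the fixed point
```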


