Faster Training of Very Deep Networks Via p-Norm Gates

08/11/2016
by Trang Pham et al.

A major contributing factor to the recent advances in deep neural networks is structural units that let sensory information and gradients propagate easily. Gating is one such structure, acting as a flow control. Gates are employed in many recent state-of-the-art recurrent models such as LSTM and GRU, and in feedforward models such as Residual Nets and Highway Networks. Gating enables learning in very deep networks with hundreds of layers and helps achieve record-breaking results in vision (e.g., ImageNet with Residual Nets) and NLP (e.g., machine translation with GRU). However, there is limited work analysing the role of gating in the learning process. In this paper, we propose a flexible p-norm gating scheme, which allows user-controllable flow and, as a consequence, improves the learning speed. This scheme subsumes other existing gating schemes, including those in GRU, Highway Networks and Residual Nets, as special cases. Experiments on large sequence and vector datasets demonstrate that the proposed gating scheme improves the learning speed significantly without extra overhead.
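The abstract does not spell out the gating equations, so the following NumPy sketch is an assumption based on its description: standard highway-style gating combines a transform h and the input x as y = g*h + (1-g)*x, where the two gate values sum to 1 (a 1-norm constraint); a p-norm relaxation instead keeps the gates on the p-norm sphere, g^p + c^p = 1, so the carry gate is c = (1 - g^p)^(1/p). With p = 1 this reduces to Highway/GRU-style gating, and for large p the carry gate approaches 1, resembling a residual connection. The layer shapes and weight names (`W`, `Wg`, `bg`) are illustrative, not from the paper.

```python
import numpy as np

def p_norm_gate_layer(x, W, Wg, bg, p=2.0):
    """One hypothetical p-norm gated feedforward layer (a sketch).

    Assumed formulation:
        y = g * h + c * x,  with  g**p + c**p = 1,
    so the carry gate is c = (1 - g**p)**(1 / p).
    p = 1 recovers standard highway gating; larger p lets more of
    the input flow through unchanged, closer to a residual shortcut.
    """
    h = np.tanh(x @ W)                          # candidate transform
    g = 1.0 / (1.0 + np.exp(-(x @ Wg + bg)))    # sigmoid transform gate in (0, 1)
    c = (1.0 - g ** p) ** (1.0 / p)             # p-norm carry gate
    return g * h + c * x
```

Because g is strictly inside (0, 1), the carry gate c is well defined for any p > 0, and raising p only changes how much unmodified input is carried through, not the number of parameters, which matches the abstract's claim of no extra overhead.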

Related research

- Wide and deep volumetric residual networks for volumetric image classification (09/18/2017): 3D shape models that directly classify objects from 3D information have ...
- Training Very Deep Networks (07/22/2015): Theoretical and empirical evidence indicates that the depth of neural ne...
- Deep Residual Learning for Image Recognition (12/10/2015): Deeper neural networks are more difficult to train. We present a residua...
- Deep Networks with Stochastic Depth (03/30/2016): Very deep convolutional networks with hundreds of layers have led to sig...
- Residual LSTM: Design of a Deep Recurrent Architecture for Distant Speech Recognition (01/10/2017): In this paper, a novel architecture for a deep recurrent neural network,...
- Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets (04/06/2023): Deep networks have achieved impressive results on a range of well-curate...
- Bio-inspired Min-Nets Improve the Performance and Robustness of Deep Networks (01/06/2022): Min-Nets are inspired by end-stopped cortical cells with units that outp...
