Improving Neural Network Training in Low Dimensional Random Bases

11/09/2020
by   Frithjof Gressmann, et al.

Stochastic Gradient Descent (SGD) has proven to be remarkably effective in optimizing deep neural networks that employ ever-larger numbers of parameters. Yet, improving the efficiency of large-scale optimization remains a vital and highly active area of research. Recent work has shown that deep neural networks can be optimized in randomly-projected subspaces of much smaller dimensionality than their native parameter space. While such training is promising for more efficient and scalable optimization schemes, its practical application is limited by inferior optimization performance. Here, we improve on recent random subspace approaches as follows: Firstly, we show that keeping the random projection fixed throughout training is detrimental to optimization. We propose re-drawing the random subspace at each step, which yields significantly better performance. We realize further improvements by applying independent projections to different parts of the network, making the approximation more efficient as network dimensionality grows. To implement these experiments, we leverage hardware-accelerated pseudo-random number generation to construct the random projections on-demand at every optimization step, allowing us to distribute the computation of independent random directions across multiple workers with shared random seeds. This yields significant reductions in memory and is up to 10 times faster for the workloads in question.
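As a rough illustration of the re-drawn random subspace idea described above, the sketch below projects the full gradient onto a small set of random directions that are regenerated from a seed at every step. This is an assumption-laden NumPy sketch, not the authors' implementation (which uses hardware-accelerated pseudo-random number generation and per-layer projections); the function name random_bases_step, the seeding scheme, and all hyperparameters are illustrative.

```python
import numpy as np

def random_bases_step(theta, grad_fn, step, lr=0.1, subspace_dim=32, shared_seed=0):
    """One optimization step restricted to a freshly drawn random subspace.

    Illustrative sketch only: the projection is re-generated every step from
    a seed derived from (shared_seed, step), so workers holding the same seed
    could recreate identical random directions on demand instead of storing
    or communicating the projection matrix.
    """
    rng = np.random.default_rng((shared_seed, step))
    full_dim = theta.size
    # Fresh random basis: re-drawing it each step (rather than keeping one
    # fixed projection for the whole run) is the change the abstract reports
    # as beneficial for optimization.
    basis = rng.standard_normal((full_dim, subspace_dim)) / np.sqrt(subspace_dim)
    grad = grad_fn(theta)                  # full-dimensional gradient
    coords = basis.T @ grad                # gradient coordinates in the low-dimensional basis
    return theta - lr * (basis @ coords)   # update confined to the random subspace

# Toy usage: minimize f(theta) = 0.5 * ||theta||^2 in a 16-dimensional subspace
theta = np.ones(1_000)
for step in range(200):
    theta = random_bases_step(theta, grad_fn=lambda t: t, step=step, subspace_dim=16)
```

Because the basis can be reconstructed from (shared_seed, step) alone, distributed workers would only need to exchange low-dimensional coordinates rather than the full projection matrix, which is the property the abstract exploits to reduce memory use; applying independent projections to different parts of the network would amount to running this kind of step per layer with its own random directions.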


Related research

11/22/2018  Enhanced Expressive Power and Fast Training of Neural Networks by Random Projections
  Random projections are able to perform dimension reduction efficiently f...

09/06/2023  Learning Active Subspaces for Effective and Scalable Uncertainty Quantification in Deep Neural Networks
  Bayesian inference for neural networks, or Bayesian deep learning, has t...

03/20/2021  Train Deep Neural Networks in 40-D Subspaces
  Although there are massive parameters in deep neural networks, the train...

05/13/2023  Convergence and scaling of Boolean-weight optimization for hardware reservoirs
  Hardware implementation of neural network are an essential step to imple...

01/17/2014  An Analysis of Random Projections in Cancelable Biometrics
  With increasing concerns about security, the need for highly secure phys...

06/20/2021  Better Training using Weight-Constrained Stochastic Dynamics
  We employ constraints to control the parameter space of deep neural netw...

04/24/2018  Measuring the Intrinsic Dimension of Objective Landscapes
  Many recently trained neural networks employ large numbers of parameters...
