Predicting Training Time Without Training
We tackle the problem of predicting the number of optimization steps that a pre-trained deep network needs to converge to a given value of the loss function. To do so, we leverage the fact that the training dynamics of a deep network during fine-tuning are well approximated by those of a linearized model. This allows us to approximate the training loss and accuracy at any point during training by solving a low-dimensional Stochastic Differential Equation (SDE) in function space. Using this result, we are able to predict the time it takes for Stochastic Gradient Descent (SGD) to fine-tune a model to a given loss without having to perform any training. In our experiments, we are able to predict the training time of a ResNet within a 20% error on a variety of datasets and hyper-parameters, at a 30- to 45-fold reduction in cost compared to actual training. We also discuss how to further reduce the computational and memory cost of our method, and in particular we show that by exploiting the spectral properties of the gradients' matrix it is possible to predict training time on a large dataset while processing only a subset of the samples.
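The core idea — that a linearized model's training trajectory can be computed in closed form, so the step count to a target loss is predicted without running the optimizer — can be sketched in NumPy. This is a minimal illustration, not the paper's method: it uses full-batch gradient descent on a squared-error loss instead of the SDE analysis of SGD, and the function name, shapes, and learning rate are all assumptions for the example.

```python
import numpy as np

def predict_steps_to_loss(J, residual0, lr, target_loss, max_steps=10_000):
    """Predict how many full-batch gradient-descent steps a *linearized*
    model needs to reach `target_loss` (loss = 0.5 * ||residual||^2),
    using only the eigen-decomposition of the empirical NTK matrix.

    J         : (n, p) Jacobian of model outputs w.r.t. parameters,
                evaluated once at the pre-trained weights (illustrative).
    residual0 : (n,) initial residual f(x; w0) - y.
    lr        : learning rate (must satisfy lr * max eigenvalue < 2).
    """
    n = J.shape[0]
    theta = J @ J.T                         # empirical NTK / Gram matrix, (n, n)
    eigvals, eigvecs = np.linalg.eigh(theta)
    r0 = eigvecs.T @ residual0              # residual in the NTK eigenbasis
    # For the linearized model each eigen-component contracts geometrically:
    #   r_i(t) = (1 - lr * eigval_i)^t * r_i(0)
    # so the loss at any step t is available in closed form.
    for t in range(max_steps + 1):
        decay = (1.0 - lr * eigvals) ** t
        loss = 0.5 * np.sum((decay * r0) ** 2)
        if loss <= target_loss:
            return t, loss
    return None, loss
```

The closed-form trajectory agrees step-for-step with actually iterating the linearized dynamics `r <- r - lr * (theta @ r)`, which is what makes the prediction cheap: one eigen-decomposition replaces the whole training run.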