On the Ineffectiveness of Variance Reduced Optimization for Deep Learning

12/11/2018
by Aaron Defazio et al.

The application of stochastic variance reduction to optimization has shown remarkable recent theoretical and practical success. Whether these techniques apply to the hard non-convex optimization problems encountered when training modern deep neural networks is an open problem. We show that naive application of the SVRG technique and related approaches fails, and we explore why.
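For reference, a minimal sketch of the basic SVRG update the abstract refers to, applied to a toy least-squares problem in NumPy. The problem, step size, and epoch schedule here are illustrative assumptions, not the paper's experimental setup; SVRG corrects each stochastic gradient with a periodically recomputed full-batch "snapshot" gradient.

import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

def grad_i(w, i):
    # Gradient of 0.5 * (x_i^T w - y_i)^2 for a single example i.
    return (X[i] @ w - y[i]) * X[i]

def full_grad(w):
    # Full-batch gradient, recomputed once per outer epoch ("snapshot").
    return X.T @ (X @ w - y) / n

w = np.zeros(d)
step = 0.01  # illustrative step size, not tuned
for epoch in range(30):
    w_snap = w.copy()            # snapshot weights
    g_snap = full_grad(w_snap)   # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Variance-reduced gradient estimate: stochastic gradient at w,
        # corrected by the same example's gradient at the snapshot.
        v = grad_i(w, i) - grad_i(w_snap, i) + g_snap
        w -= step * v

print("final loss:", 0.5 * np.mean((X @ w - y) ** 2))

On a deep network, the analogous step would compute the per-minibatch gradient at both the current and snapshot weights, which is where the paper examines why the naive approach breaks down.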


