SVGD: A Virtual Gradients Descent Method for Stochastic Optimization

07/09/2019
by Zheng Li, et al.

Inspired by dynamic programming, we propose the Stochastic Virtual Gradient Descent (SVGD) algorithm, in which the virtual gradient is defined through the computational graph and automatic differentiation. The method is computationally efficient and has low memory requirements. We also analyze the algorithm's theoretical convergence properties and its implementation. Experimental results on multiple datasets and network models show that SVGD has advantages over other stochastic optimization methods.
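The abstract does not spell out the virtual-gradient construction, so the snippet below is only a minimal sketch of the surrounding machinery it references: gradients obtained from a computational graph via automatic differentiation driving a stochastic descent step. The model, data, and learning rate are placeholders (PyTorch is assumed here), and the plain SGD update marks where SVGD's virtual gradient would be substituted.

```python
# Minimal sketch (not the paper's algorithm): a gradient taken from the
# computational graph via automatic differentiation feeds a stochastic
# descent step. SVGD would replace the raw gradient with its virtual gradient.
import torch

torch.manual_seed(0)

# Toy regression data and a one-layer model (placeholders).
X = torch.randn(128, 10)
y = torch.randn(128, 1)
model = torch.nn.Linear(10, 1)
lr = 0.1

for step in range(100):
    # Sample a mini-batch (the stochastic part).
    idx = torch.randint(0, X.shape[0], (32,))
    loss = torch.nn.functional.mse_loss(model(X[idx]), y[idx])

    # Automatic differentiation over the computational graph.
    grads = torch.autograd.grad(loss, list(model.parameters()))

    # Plain SGD update; the virtual-gradient rule would go here instead.
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p -= lr * g
```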

