A Rule for Gradient Estimator Selection, with an Application to Variational Inference

11/05/2019
by Tomas Geffner, et al.

Stochastic gradient descent (SGD) is the workhorse of modern machine learning. Often there are many different gradient estimators that could be used, and when this is the case, choosing the one with the best tradeoff between cost and variance is important. This paper analyzes the convergence rate of SGD as a function of time rather than of iterations. The analysis yields a simple rule for selecting the estimator with the best optimization convergence guarantee, and this choice is the same across different variants of SGD and different assumptions on the objective (e.g., convexity or smoothness). Inspired by this principle, we propose a technique to automatically select an estimator from a finite pool. We then extend the technique to infinite pools of estimators, where each estimator is indexed by a vector of control variate weights; this is enabled by a reduction to a mixed-integer quadratic program. Empirically, automatically choosing an estimator performs comparably to the best estimator chosen with hindsight.
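For a finite pool, the idea of trading off cost against variance can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's exact rule or API: it assumes the selection criterion is to minimize the product of per-sample cost and empirical gradient variance, and the estimator names and interface are hypothetical.

```python
import random
import statistics

def select_estimator(estimators, num_samples=1000, seed=0):
    """Pick the estimator minimizing (per-sample cost) x (empirical variance).

    `estimators` maps a name to a (cost, sampler) pair, where `sampler(rng)`
    draws one scalar gradient estimate. Both this product criterion and the
    interface are simplifying assumptions for illustration.
    """
    rng = random.Random(seed)
    scores = {}
    for name, (cost, sampler) in estimators.items():
        draws = [sampler(rng) for _ in range(num_samples)]
        scores[name] = cost * statistics.variance(draws)
    return min(scores, key=scores.get), scores

# Two toy estimators of the same gradient (true value 1.0):
# "cheap" is noisy; "expensive" mimics a control-variate estimator that
# costs 3x as much per sample but has much lower variance.
cheap = (1.0, lambda rng: 1.0 + rng.gauss(0.0, 2.0))
expensive = (3.0, lambda rng: 1.0 + rng.gauss(0.0, 0.5))

best, scores = select_estimator({"cheap": cheap, "expensive": expensive})
print(best)
```

Here the expensive estimator wins despite its higher cost, because its variance reduction more than pays for the extra work; flipping the cost or noise levels flips the choice.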


Related research

- Bias-Variance Tradeoff in a Sliding Window Implementation of the Stochastic Gradient Algorithm (10/25/2019): This paper provides a framework to analyze stochastic gradient algorithm...
- Linear Convergence of Black-Box Variational Inference: Should We Stick the Landing? (07/27/2023): We prove that black-box variational inference (BBVI) with control variat...
- A Sieve Stochastic Gradient Descent Estimator for Online Nonparametric Regression in Sobolev ellipsoids (04/02/2021): The goal of regression is to recover an unknown underlying function that...
- Finite-Time Analysis of Stochastic Gradient Descent under Markov Randomness (03/24/2020): Motivated by broad applications in reinforcement learning and machine le...
- Optimality of the final model found via Stochastic Gradient Descent (10/22/2018): We study convergence properties of Stochastic Gradient Descent (SGD) for...
- Stochastic gradient descent with gradient estimator for categorical features (09/08/2022): Categorical data are present in key areas such as health or supply chain...
- Using Large Ensembles of Control Variates for Variational Inference (10/30/2018): Variational inference is increasingly being addressed with stochastic op...
