Efficient Stochastic Gradient Descent for Distributionally Robust Learning

05/22/2018
by Soumyadip Ghosh, et al.

We consider a new stochastic gradient descent algorithm for efficiently solving general min-max optimization problems that arise naturally in distributionally robust learning. Current approaches, which operate on the entire dataset at every iteration, do not scale well. We address this issue by initially restricting attention to a subset of the data and progressively enlarging this support until it statistically covers the entire dataset.
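To make the idea concrete, below is a minimal sketch of a gradient descent-ascent loop for a distributionally robust objective in which the support (the data subset over which the adversarial distribution is defined) grows geometrically over training. This is not the paper's algorithm: the logistic loss, the entropic mirror-ascent update for the adversarial weights, the growth schedule, and all names are illustrative assumptions.

```python
# Sketch of DRO via descent-ascent on a progressively growing data subset.
# All modeling choices here are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (labels in {-1, +1}).
n, d = 2000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.5 * rng.normal(size=n))

def losses(theta, Xs, ys):
    """Per-example logistic losses on the active subset."""
    margins = ys * (Xs @ theta)
    return np.log1p(np.exp(-margins))

def grad_theta(theta, Xs, ys, p):
    """Gradient of the p-weighted loss with respect to the model parameters."""
    margins = ys * (Xs @ theta)
    coeff = -ys / (1.0 + np.exp(margins))   # d(loss)/d(margin)
    return Xs.T @ (p * coeff)

theta = np.zeros(d)
eta_theta, eta_p = 0.1, 0.5                 # step sizes (assumed)
m = 100                                     # initial support size (assumed)

for epoch in range(20):
    # Progressively enlarge the support toward the full dataset.
    m = min(n, int(1.5 * m))
    idx = rng.choice(n, size=m, replace=False)
    Xs, ys = X[idx], y[idx]
    p = np.full(m, 1.0 / m)                 # adversarial weights on the subset

    for _ in range(50):
        # Ascent on the adversarial distribution: multiplicative (mirror) update
        # keeps p on the probability simplex.
        p *= np.exp(eta_p * losses(theta, Xs, ys))
        p /= p.sum()
        # Descent on the model parameters against the reweighted loss.
        theta -= eta_theta * grad_theta(theta, Xs, ys, p)

    print(f"epoch {epoch:2d}  support {m:5d}  worst-case loss {losses(theta, Xs, ys) @ p:.4f}")
```

Because each inner loop only touches the m active examples, early epochs are cheap, and the full dataset is processed only once the support has grown to cover it, which is the scaling benefit the abstract alludes to.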

