Stochastic Variance Reduced Primal Dual Algorithms for Empirical Composition Optimization

07/22/2019
by Adithya M. Devraj et al.

We consider a generic empirical composition optimization problem, in which empirical averages appear both outside and inside nonlinear loss functions. Such problems arise in various machine learning applications and cannot be solved directly by standard methods such as stochastic gradient descent (SGD), because the inner empirical averages make single-sample gradient estimates biased. We take a novel approach: we reformulate the original minimization objective into an equivalent min-max objective, which brings out all the empirical averages that were originally inside the nonlinear loss functions. Exploiting the rich structure of the reformulated problem, we develop a stochastic primal-dual algorithm, SVRPDA-I, that solves it efficiently. We carry out an extensive theoretical analysis of the proposed algorithm, establishing its convergence rate, total computation complexity, and storage complexity. In particular, the algorithm is shown to converge at a linear rate when the problem is strongly convex. We also develop an approximate variant, SVRPDA-II, which further reduces the memory requirement. Finally, we evaluate our algorithms on several real-world benchmarks; the experimental results show that the proposed algorithms significantly outperform existing techniques.
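To make the abstract's core idea concrete, here is a toy sketch (not the authors' SVRPDA-I, and with made-up problem data and step sizes) of why SGD is biased on a composition objective and how a conjugate-based min-max reformulation, combined with SVRG-style variance reduction, yields unbiased, variance-reduced updates:

```python
import numpy as np

# Toy instance of empirical composition optimization:
#   minimize_x  F(x) = phi( (1/m) * sum_j g_j(x) ),
# with phi(u) = 0.5*u^2 and linear inner terms g_j(x) = a_j*x - b_j.
# (All data and step sizes below are made up for illustration.)
rng = np.random.default_rng(0)
m = 50
a = rng.uniform(0.5, 1.5, size=m)
b = rng.normal(1.0, 0.5, size=m)
abar, bbar = a.mean(), b.mean()
x_star = bbar / abar                 # closed-form minimizer: G(x*) = 0

def G(x):
    """Inner empirical average (1/m) * sum_j g_j(x)."""
    return (a * x - b).mean()

# Why plain SGD fails: the single-sample estimate g_j(x) * g_j'(x)
# does not average (over j) to the true gradient G(x) * G'(x).
x0 = 0.0
true_grad = G(x0) * abar
naive = np.mean((a * x0 - b) * a)    # expectation of the naive SGD estimate

# Min-max reformulation via the convex conjugate of phi:
#   min_x max_y  y * G(x) - 0.5 * y^2,
# which exposes the inner average G(x) linearly, so per-sample
# gradients become unbiased.  SVRG-style variance reduction:
# refresh G at a reference point each epoch, correct per sample.
x, y = 0.0, 0.0
eta, sigma = 0.1, 0.5
for epoch in range(30):
    x_ref = x
    G_ref = G(x_ref)                 # one full pass per epoch
    for _ in range(m):
        j = rng.integers(m)
        # variance-reduced estimate of G(x): noise shrinks as x -> x_ref
        g_vr = (a[j] * x - b[j]) - (a[j] * x_ref - b[j]) + G_ref
        y = (y + sigma * g_vr) / (1.0 + sigma)   # prox step on 0.5*y^2
        x = x - eta * y * a[j]       # a[j] is an unbiased sample of G'(x)

print(abs(x - x_star))               # close to 0 on this toy problem
```

Because both stochastic correction terms vanish as the iterates settle (the dual variable tends to zero and the primal iterate tends to the reference point), the loop converges to the exact minimizer rather than a noise ball, which is the qualitative behavior behind the linear rate claimed for the strongly convex case.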


Related research

- 11/13/2017 · Variance Reduced methods for Non-convex Composition Optimization: This paper explores the non-convex composition optimization in the form ...
- 05/21/2012 · Stochastic Smoothing for Nonsmooth Minimizations: Accelerating SGD by Exploiting Structure: In this work we consider the stochastic minimization of nonsmooth convex...
- 04/23/2019 · Stochastic Primal-Dual Algorithms with Faster Convergence than O(1/√(T)) for Problems without Bilinear Structure: Previous studies on stochastic primal-dual algorithms for solving min-ma...
- 10/26/2017 · Duality-free Methods for Stochastic Composition Optimization: We consider the composition optimization with two expected-value functio...
- 07/13/2017 · Stable Distribution Alignment Using the Dual of the Adversarial Distance: Methods that align distributions by minimizing an adversarial distance b...
- 01/29/2022 · Learning Stochastic Graph Neural Networks with Constrained Variance: Stochastic graph neural networks (SGNNs) are information processing arch...
- 09/02/2023 · Switch and Conquer: Efficient Algorithms By Switching Stochastic Gradient Oracles For Decentralized Saddle Point Problems: We consider a class of non-smooth strongly convex-strongly concave saddl...
