Improved Optimization of Finite Sums with Minibatch Stochastic Variance Reduced Proximal Iterations

06/21/2017
by Jialei Wang, et al.

We present novel minibatch stochastic optimization methods for empirical risk minimization problems. These methods efficiently leverage variance-reduced first-order and sub-sampled higher-order information to accelerate convergence. For quadratic objectives, we prove improved iteration complexity over the state of the art under reasonable assumptions. We also provide empirical evidence of the advantages of our method compared to existing approaches in the literature.
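
To illustrate the general idea of combining variance-reduced minibatch gradients with sub-sampled second-order information, the sketch below applies SVRG-style gradient estimates preconditioned by a sub-sampled Hessian to a ridge-regression (quadratic) objective. The function name, parameter choices, and the specific update rule are illustrative assumptions for this sketch, not the exact algorithm of the paper.

import numpy as np

def minibatch_svr_prox_newton(X, y, lam=1e-3, eta=0.5, n_outer=20,
                              n_inner=50, batch_size=64,
                              hessian_batch=256, seed=0):
    # Sketch: minibatch stochastic variance-reduced iterations that
    # precondition each step with a sub-sampled Hessian, applied to
    # ridge regression (a quadratic finite sum). Illustrative only.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)

    def full_grad(v):
        return X.T @ (X @ v - y) / n + lam * v

    for _ in range(n_outer):
        w_ref = w.copy()
        g_ref = full_grad(w_ref)  # full gradient at the snapshot point

        # Sub-sampled Hessian of the regularized quadratic objective.
        idx_h = rng.choice(n, size=min(hessian_batch, n), replace=False)
        H = X[idx_h].T @ X[idx_h] / len(idx_h) + lam * np.eye(d)

        for _ in range(n_inner):
            idx = rng.choice(n, size=batch_size, replace=False)
            Xb, yb = X[idx], y[idx]
            # SVRG-style variance-reduced minibatch gradient estimate.
            g = (Xb.T @ (Xb @ w - yb) - Xb.T @ (Xb @ w_ref - yb)) / batch_size
            g += lam * (w - w_ref) + g_ref
            # Newton-type step using the sub-sampled Hessian as preconditioner.
            w -= eta * np.linalg.solve(H, g)
    return w

As a usage example, calling minibatch_svr_prox_newton(X, y) on synthetic data such as X = rng.standard_normal((1000, 20)) and y = X @ w_true + noise should drive w toward the ridge solution; in practice the step size eta and the two batch sizes trade off per-iteration cost against the number of iterations needed.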

Related research:

- Communication Efficient Distributed Optimization using an Approximate Newton-type Method (12/30/2013): We present a novel Newton-type method for distributed optimization, whic...
- Variance-based regularization with convex objectives (10/08/2016): We develop an approach to risk minimization and stochastic optimization ...
- Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches (06/06/2022): Stochastic variance reduction has proven effective at accelerating first...
- A Fully First-Order Method for Stochastic Bilevel Optimization (01/26/2023): We consider stochastic unconstrained bilevel optimization problems when ...
- SpiderBoost: A Class of Faster Variance-reduced Algorithms for Nonconvex Optimization (10/25/2018): There has been extensive research on developing stochastic variance redu...
- On Tilted Losses in Machine Learning: Theory and Applications (09/13/2021): Exponential tilting is a technique commonly used in fields such as stati...
- Tilted Empirical Risk Minimization (07/02/2020): Empirical risk minimization (ERM) is typically designed to perform well ...
