A Stochastic Proximal Method for Nonsmooth Regularized Finite Sum Optimization

06/14/2022
by Dounia Lakhmiri, et al.

We consider the problem of training a deep neural network with nonsmooth regularization to retrieve a sparse and efficient sub-structure. Our regularizer is only assumed to be lower semi-continuous and prox-bounded. We combine an adaptive quadratic regularization approach with proximal stochastic gradient principles to derive a new solver, called SR2, whose convergence and worst-case complexity are established without knowledge or approximation of the gradient's Lipschitz constant. We formulate a stopping criterion that ensures an appropriate first-order stationarity measure converges to zero under certain conditions. We establish a worst-case iteration complexity of 𝒪(ϵ^-2) that matches that of related methods such as ProxGEN, where the learning rate is assumed to be related to the Lipschitz constant. Our experiments on network instances trained on CIFAR-10 and CIFAR-100 with ℓ_1 and ℓ_0 regularizations show that SR2 consistently achieves higher sparsity and accuracy than related methods such as ProxGEN and ProxSGD.
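
The abstract does not spell out SR2's update rule, but solvers that pair an adaptive quadratic regularization parameter σ with proximal stochastic gradient steps typically iterate x⁺ = prox_{(λ/σ)h}(x − g/σ), where g is a minibatch gradient and h the regularizer. The Python sketch below illustrates one such step for the ℓ_1 and ℓ_0 regularizers, whose proximal operators have closed forms (soft- and hard-thresholding); the function names and the fixed σ are illustrative assumptions, not SR2's actual rule for adapting σ.

    import numpy as np

    def prox_l1(z, t):
        # Prox of t * ||.||_1: componentwise soft-thresholding.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def prox_l0(z, t):
        # Prox of t * ||.||_0: componentwise hard-thresholding.
        return np.where(z * z > 2.0 * t, z, 0.0)

    def quad_reg_prox_step(x, g, sigma, lam, prox=prox_l1):
        # One step x+ = prox_{(lam/sigma) h}(x - g/sigma), i.e. the
        # minimizer of g^T (w - x) + (sigma/2) ||w - x||^2 + lam * h(w).
        # In SR2, sigma is adapted from the observed decrease; here it
        # is simply passed in (hypothetical simplification).
        return prox(x - g / sigma, lam / sigma)

    # Example: one step with l1 regularization on a random point.
    x = np.random.randn(5)
    g = np.random.randn(5)  # stands in for a stochastic minibatch gradient
    x_new = quad_reg_prox_step(x, g, sigma=2.0, lam=0.1)

For proximal methods of this type, a standard first-order stationarity measure is the scaled step length σ‖x⁺ − x‖ computed with the true gradient, which vanishes exactly at first-order stationary points; a stopping test of this kind is consistent with the criterion described in the abstract.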

Related research

A Proximal Stochastic Gradient Method with Progressive Variance Reduction (03/19/2014)
We consider the problem of minimizing the sum of two convex functions: o...

A General Family of Stochastic Proximal Gradient Methods for Deep Learning (07/15/2020)
We study the training of regularized neural networks where the regulariz...

A Levenberg-Marquardt Method for Nonsmooth Regularized Least Squares (01/06/2023)
We develop a Levenberg-Marquardt method for minimizing the sum of a smoo...

Nonconvex Stochastic Bregman Proximal Gradient Method with Application to Deep Learning (06/26/2023)
The widely used stochastic gradient methods for minimizing nonconvex com...

Learning k-Level Sparse Neural Networks Using a New Generalized Group Sparse Envelope Regularization (12/25/2022)
We propose an efficient method to learn both unstructured and structured...

Aiming towards the minimizers: fast convergence of SGD for overparametrized problems (06/05/2023)
Modern machine learning paradigms, such as deep learning, occur in or cl...

The Lipschitz Constant of Perturbed Anonymous Games (04/30/2020)
The worst-case Lipschitz constant of an n-player k-action δ-perturbed ga...
