Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions

02/02/2022
by Aaron Mishkin, et al.

We develop fast algorithms and robust software for convex optimization of two-layer neural networks with ReLU activation functions. Our work leverages a convex reformulation of the standard weight-decay penalized training problem as a set of group-ℓ_1-regularized data-local models, where locality is enforced by polyhedral cone constraints. In the special case of zero-regularization, we show that this problem is exactly equivalent to unconstrained optimization of a convex "gated ReLU" network. For problems with non-zero regularization, we show that convex gated ReLU models obtain data-dependent approximation bounds for the ReLU training problem. To optimize the convex reformulations, we develop an accelerated proximal gradient method and a practical augmented Lagrangian solver. We show that these approaches are faster than standard training heuristics for the non-convex problem, such as SGD, and outperform commercial interior-point solvers. Experimentally, we verify our theoretical results, explore the group-ℓ_1 regularization path, and scale convex optimization for neural networks to image classification on MNIST and CIFAR-10.
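To make the reformulation concrete, the sketch below illustrates the convex "gated ReLU" idea on synthetic data: fixed gate vectors g_i induce activation patterns D_i = diag(1[X g_i >= 0]), and the resulting group-ℓ_1-regularized least-squares problem is solved with a plain (non-accelerated) proximal gradient loop whose prox is group soft-thresholding. All data, gate choices, and hyperparameters here are illustrative assumptions, not the authors' released solver.

    # Minimal sketch of convex gated ReLU training via proximal gradient with a
    # group-soft-thresholding prox. Synthetic data; not the paper's implementation.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic regression data: n samples, d features, m candidate gates.
    n, d, m = 200, 10, 50
    X = rng.standard_normal((n, d))
    y = rng.standard_normal(n)

    # Fixed gate vectors g_1, ..., g_m define patterns D_i = diag(1[X g_i >= 0]).
    G = rng.standard_normal((d, m))
    D = (X @ G >= 0).astype(float)          # n x m matrix of 0/1 activation patterns

    lam = 0.1                                # group-l1 regularization strength

    def predict(W):
        # Gated ReLU output: sum_i D_i X w_i, with W of shape (d, m).
        return np.einsum("nm,nd,dm->n", D, X, W)

    def objective(W):
        resid = predict(W) - y
        return 0.5 * resid @ resid + lam * np.linalg.norm(W, axis=0).sum()

    def group_soft_threshold(W, t):
        # Prox of t * sum_i ||w_i||_2: shrink each column (neuron) toward zero.
        norms = np.linalg.norm(W, axis=0, keepdims=True)
        scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
        return W * scale

    # Plain proximal gradient descent (the paper uses an accelerated variant).
    W = np.zeros((d, m))
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 * m)   # crude Lipschitz-based step size
    for it in range(500):
        resid = predict(W) - y
        grad = np.einsum("n,nm,nd->dm", resid, D, X)   # gradient of the smooth part
        W = group_soft_threshold(W - step * grad, step * lam)

    print("final objective:", objective(W))
    print("active neurons (nonzero groups):", int((np.linalg.norm(W, axis=0) > 1e-8).sum()))

At the solution, entire columns of W are typically zero; each surviving column corresponds to an active neuron, which is the group-sparsity structure that the regularization-path experiments mentioned in the abstract explore.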

