Efficient Global Optimization of Two-layer ReLU Networks: Quadratic-time Algorithms and Adversarial Training

01/06/2022
by Yatong Bai, et al.

The non-convexity of the artificial neural network (ANN) training landscape brings inherent optimization difficulties. While the traditional back-propagation stochastic gradient descent (SGD) algorithm and its variants are effective in certain cases, they can become stuck at spurious local minima and are sensitive to initializations and hyperparameters. Recent work has shown that the training of an ANN with ReLU activations can be reformulated as a convex program, bringing hope for globally optimizing interpretable ANNs. However, naively solving the convex training formulation has exponential complexity, and even an approximation heuristic requires cubic time. In this work, we characterize the quality of this approximation and develop two efficient algorithms that train ANNs with global convergence guarantees. The first algorithm is based on the alternating direction method of multipliers (ADMM). It solves both the exact convex formulation and the approximate counterpart. It achieves linear global convergence, and the initial several iterations often already yield a solution with high prediction accuracy. When solving the approximate formulation, the per-iteration time complexity is quadratic. The second algorithm, based on the "sampled convex programs" theory, is simpler to implement. It solves unconstrained convex formulations and converges to an approximately globally optimal classifier. The non-convexity of the ANN training landscape is exacerbated when adversarial training is considered. We apply robust convex optimization theory to convex training and develop convex formulations that train ANNs robust to adversarial inputs. Our analysis explicitly focuses on one-hidden-layer fully connected ANNs, but can extend to more sophisticated architectures.
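For concreteness, the sketch below illustrates the kind of unconstrained, sampled convex formulation the abstract refers to: a finite set of ReLU activation patterns D_i = diag(1[X u_i >= 0]) is sampled from random directions u_i, and a group-regularized convex problem over per-pattern weight pairs (v_i, w_i) is solved. This is a minimal illustration only; the toy data, the squared loss, the variable names, and the use of the generic cvxpy modeler are assumptions for demonstration, not the paper's specialized ADMM or sampled-program algorithms.

```python
import numpy as np
import cvxpy as cp

# Hypothetical toy problem: n samples, d features, scalar output.
rng = np.random.default_rng(0)
n, d, P = 50, 5, 20            # P = number of sampled ReLU activation patterns (assumed)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
beta = 1e-3                    # group-sparsity regularization weight (assumed value)

# Sample activation patterns D_i = diag(1[X u_i >= 0]) from random directions u_i.
U = rng.standard_normal((d, P))
D = (X @ U >= 0).astype(float)           # n x P matrix of 0/1 pattern indicators

# Convex variables: one (v_i, w_i) pair of weight vectors per sampled pattern.
V = cp.Variable((d, P))
W = cp.Variable((d, P))

# Model output: sum_i D_i X (v_i - w_i), assembled column-wise.
pred = cp.sum(cp.multiply(D, X @ (V - W)), axis=1)

# Group-lasso regularizer: sum_i (||v_i||_2 + ||w_i||_2).
reg = cp.sum(cp.norm(V, 2, axis=0)) + cp.sum(cp.norm(W, 2, axis=0))

# Unconstrained "sampled" convex training problem (squared loss assumed for illustration).
objective = cp.Minimize(0.5 * cp.sum_squares(pred - y) + beta * reg)
prob = cp.Problem(objective)
prob.solve()
print("optimal objective:", prob.value)
```

After solving, each nonzero column pair (v_i, w_i) can be mapped back to a pair of hidden ReLU neurons of the original two-layer network, which is what makes the convex solution directly usable as a trained model.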


