Practical Convex Formulation of Robust One-hidden-layer Neural Network Training

05/25/2021
by Yatong Bai, et al.

Recent work has shown that the training of a one-hidden-layer, scalar-output, fully-connected ReLU neural network can be reformulated as a finite-dimensional convex program. Unfortunately, the scale of such a convex program grows exponentially with the size of the dataset. In this work, we prove that a stochastic procedure with linear complexity well approximates the exact formulation. Moreover, we derive a convex optimization approach to efficiently solve the "adversarial training" problem, which trains neural networks that are robust to adversarial input perturbations. Our method can be applied to binary classification and regression, and provides an alternative to current adversarial training methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). We demonstrate in experiments that the proposed method achieves noticeably better adversarial robustness and performance than existing methods.
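The abstract positions the convex approach as an alternative to FGSM and PGD. As a reminder of what the FGSM baseline does, here is a minimal sketch on a linear binary classifier; the model, hinge loss, and epsilon value are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def fgsm_perturb(x, y, w, eps):
    """FGSM sketch for a linear model f(x) = w @ x with label y in {-1, +1}
    under the hinge loss max(0, 1 - y * w @ x). Illustrative only."""
    margin = y * np.dot(w, x)
    # Gradient of the hinge loss w.r.t. the input is -y * w when the loss is active.
    grad_x = -y * w if margin < 1 else np.zeros_like(x)
    # FGSM: take one step of size eps in the direction of the input-gradient sign.
    return x + eps * np.sign(grad_x)

x = np.array([0.5, -0.2])
w = np.array([1.0, 1.0])
x_adv = fgsm_perturb(x, y=1, w=w, eps=0.1)
```

Adversarial training then fits the model on such perturbed inputs rather than the clean ones; PGD iterates this step several times with a projection back onto the allowed perturbation set.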

Related research

01/06/2022
Efficient Global Optimization of Two-layer ReLU Networks: Quadratic-time Algorithms and Adversarial Training
The non-convexity of the artificial neural network (ANN) training landsc...

12/24/2020
Vector-output ReLU Neural Network Problems are Copositive Programs: Convex Analysis of Two Layer Networks and Polynomial-time Algorithms
We describe the convex semi-infinite dual of the two-layer vector-output...

02/09/2020
Robust binary classification with the 01 loss
The 01 loss is robust to outliers and tolerant to noisy data compared to...

05/20/2020
Feature Purification: How Adversarial Training Performs Robust Deep Learning
Despite the great empirical success of adversarial training to defend de...

01/07/2018
Gradient Layer: Enhancing the Convergence of Adversarial Training for Generative Models
We propose a new technique that boosts the convergence of training gener...

05/02/2019
You Only Propagate Once: Painless Adversarial Training Using Maximal Principle
Deep learning achieves state-of-the-art results in many areas. However r...
