A Robust Optimization Approach to Deep Learning

12/17/2021
by   Dimitris Bertsimas, et al.

Many state-of-the-art adversarial training methods leverage upper bounds of the adversarial loss to provide security guarantees. Yet these methods require computations at each training step that cannot be incorporated into the gradient for backpropagation. We introduce a new, more principled approach to adversarial training based on a closed-form solution of an upper bound of the adversarial loss, which can be effectively trained with backpropagation. This bound is facilitated by state-of-the-art tools from robust optimization. We derive two new methods with our approach. The first method (Approximated Robust Upper Bound, or aRUB) uses a first-order approximation of the network together with basic tools from linear robust optimization to obtain an approximate upper bound of the adversarial loss that can be easily implemented. The second method (Robust Upper Bound, or RUB) computes an exact upper bound of the adversarial loss. Across a variety of tabular and vision data sets we demonstrate the effectiveness of our more principled approach: RUB is substantially more robust than state-of-the-art methods for larger perturbations, while aRUB matches the performance of state-of-the-art methods for small perturbations. Both RUB and aRUB also run faster than standard adversarial training (at the expense of increased memory). All the code to reproduce the results can be found at https://github.com/kimvc7/Robustness.
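
The abstract describes aRUB only at a high level. As a reading aid, below is a minimal PyTorch sketch of a first-order robust upper bound in the spirit of aRUB: linearize each logit gap around the input and take its worst case over the perturbation ball via the dual norm. The function name `arub_loss`, the choice of an ℓ∞ ball of radius `rho`, and the per-class gradient loop are illustrative assumptions, not the authors' implementation (their released code is linked above).

```python
# Illustrative sketch only: not the authors' released code
# (see https://github.com/kimvc7/Robustness for that). Assumes an
# l_inf perturbation ball of radius rho and a standard PyTorch classifier.
import torch
import torch.nn.functional as F


def arub_loss(model, x, y, rho=0.1):
    """Approximate robust upper bound on the cross-entropy adversarial loss.

    Each logit gap z_j(x) - z_y(x) is linearized in x; its worst case over
    ||delta||_inf <= rho adds rho times the l_1 norm (the dual of l_inf) of
    its input gradient. The worst-case gaps are then fed to the usual
    cross-entropy loss, so the bound trains directly with backpropagation.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)                                # shape (batch, classes)
    true_logit = logits.gather(1, y.unsqueeze(1))    # shape (batch, 1)
    diffs = logits - true_logit                      # z_j - z_y, zero at j = y

    worst_cols = []
    for j in range(logits.shape[1]):
        # Gradient of the j-th logit gap w.r.t. the inputs; create_graph keeps
        # the bound differentiable so it can be minimized during training.
        grad = torch.autograd.grad(diffs[:, j].sum(), x,
                                   retain_graph=True, create_graph=True)[0]
        worst_cols.append(diffs[:, j] + rho * grad.flatten(1).norm(p=1, dim=1))

    worst = torch.stack(worst_cols, dim=1)           # worst-case logit gaps
    return F.cross_entropy(worst, y)
```

For an ℓ2 perturbation ball the penalty would use the ℓ2 norm of the gradient instead (the corresponding dual norm), and in practice the per-class gradient loop would be batched rather than computed one class at a time.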

