On Robustness to Adversarial Examples and Polynomial Optimization

11/12/2019
by   Pranjal Awasthi, et al.

We study the design of computationally efficient algorithms, with provable guarantees, that are robust to adversarial (test-time) perturbations. While there has been a proliferation of recent work on this topic due to its connections to the test-time robustness of deep networks, there is limited theoretical understanding of several basic questions: (i) when and how can one design provably robust learning algorithms? (ii) what is the price of achieving robustness to adversarial examples in a computationally efficient manner? The main contribution of this work is to exhibit a strong connection between achieving robustness to adversarial examples and a rich class of polynomial optimization problems, thereby making progress on the above questions. In particular, we leverage this connection to (a) design computationally efficient robust algorithms with provable guarantees for a large class of hypotheses, namely linear classifiers and degree-2 polynomial threshold functions (PTFs), (b) give a precise characterization of the price of achieving robustness in a computationally efficient manner for these classes, and (c) design efficient algorithms to certify robustness and generate adversarial attacks in a principled manner for 2-layer neural networks. We empirically demonstrate the effectiveness of these attacks on real data.
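For linear classifiers, the simplest class the abstract mentions, the inner robustness problem has a well-known closed form: under an l_inf perturbation of radius eps, the margin of sign(w·x + b) drops by exactly eps times the l_1 norm of w (the dual norm), so both certification and attack generation are exact and cheap. The sketch below illustrates this base case only; the paper's polynomial-optimization machinery is what handles richer classes such as degree-2 PTFs, where no such closed form exists. Function names here are illustrative, not from the paper.

```python
def robust_margin(w, b, x, y, eps):
    """Worst-case margin of the linear classifier sign(w.x + b) on
    labeled point (x, y), y in {-1, +1}, under any l_inf perturbation
    of radius eps.  For linear models the inner maximization is exact:
    the margin decreases by eps * ||w||_1 (the dual norm of l_inf).
    A positive return value certifies robustness at this point."""
    clean = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    return clean - eps * sum(abs(wi) for wi in w)

def worst_case_perturbation(w, y, eps):
    """The perturbation attaining that worst case: push every
    coordinate by eps against the label, delta_i = -y*eps*sign(w_i)."""
    sign = lambda t: (t > 0) - (t < 0)
    return [-y * eps * sign(wi) for wi in w]
```

Applying `worst_case_perturbation` to a point and re-evaluating the margin reproduces the value returned by `robust_margin`, which is what makes the linear case a sanity check for any relaxation-based certifier.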

Related research:

- Provable robustness against all adversarial l_p-perturbations for p≥ 1 (05/27/2019)
- Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid (08/23/2017)
- Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications (06/30/2020)
- Can Adversarially Robust Learning Leverage Computational Hardness? (10/02/2018)
- Theoretical Foundations of Adversarially Robust Learning (06/13/2023)
- Improving Adversarial Robustness via Joint Classification and Multiple Explicit Detection Classes (10/26/2022)
- Adversarially Robust Learning Could Leverage Computational Hardness (05/28/2019)
