Theoretical Foundations of Adversarially Robust Learning

06/13/2023
by Omar Montasser, et al.

Despite extraordinary progress, current machine learning systems have been shown to be brittle against adversarial examples: seemingly innocuous but carefully crafted perturbations of test examples that cause machine learning predictors to misclassify. Can we learn predictors that are robust to adversarial examples, and how? There has been much empirical interest in this contemporary challenge in machine learning, and in this thesis we address it from a theoretical perspective: we explore what robustness properties we can hope to guarantee against adversarial examples and develop an understanding of how to algorithmically guarantee them. We illustrate the need to go beyond traditional approaches and principles such as empirical risk minimization and uniform convergence, and make contributions that can be categorized as follows: (1) introducing problem formulations that capture aspects of emerging practical challenges in robust learning, (2) designing new learning algorithms with provable robustness guarantees, and (3) characterizing the complexity of robust learning and fundamental limitations on the performance of any algorithm.
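Since the abstract contrasts standard empirical risk minimization with learning that is robust to adversarial perturbations, the following sketch (standard notation from the robust-learning literature, not taken from the thesis text shown here) writes out, in LaTeX, the adversarially robust risk that such guarantees typically target, assuming a perturbation set \mathcal{U}(x) around each test example (for instance, a small \ell_\infty ball):

    % standard 0-1 risk of a predictor h on distribution D
    \mathrm{err}_{\mathcal{D}}(h) \;=\; \Pr_{(x,y)\sim\mathcal{D}}\bigl[\, h(x) \neq y \,\bigr]

    % adversarially robust risk: the adversary may replace x with any z in U(x)
    \mathrm{err}^{\mathcal{U}}_{\mathcal{D}}(h) \;=\; \Pr_{(x,y)\sim\mathcal{D}}\bigl[\, \exists\, z \in \mathcal{U}(x) \ \text{such that}\ h(z) \neq y \,\bigr]

Robust learning asks for predictors with small robust risk rather than small standard risk. As the abstract notes, simply minimizing the empirical version of this robust risk over a hypothesis class, and relying on uniform convergence, is not by itself enough to guarantee robust generalization, which motivates the new formulations and algorithms developed in the thesis.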

