Theoretical Analysis of Adversarial Learning: A Minimax Approach

11/13/2018
by Zhuozhuo Tu, et al.

We propose a general theoretical method for analyzing the risk bound in the presence of adversaries. In particular, we cast the adversarial learning problem in the minimax framework. We first show that the original adversarial learning problem can be reduced to a minimax statistical learning problem by introducing a transport map between distributions. We then prove a risk bound for this minimax problem in terms of covering numbers. In contrast to previous minimax bounds in [lee, far], our bound is informative when the radius of the ambiguity set is small. Our method applies to multi-class classification problems and to commonly used loss functions such as the hinge loss and the ramp loss. As two illustrative examples, we derive adversarial risk bounds for kernel SVMs and deep neural networks. Our results indicate that a stronger adversary might adversely affect the complexity of the hypothesis class, and that the existence of a margin can serve as a defense mechanism to counter adversarial attacks.
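
For intuition, the following is a minimal sketch of the reduction described above, under assumed notation: P denotes the data distribution, epsilon the adversary's budget, l the loss, and f a hypothesis. The perturbation set, the ambiguity set B_epsilon(P), and the regularity conditions are illustrative assumptions, not the paper's verbatim definitions.

    % Hedged sketch, not the paper's exact statement: the adversarial risk of a
    % hypothesis f under a norm-bounded perturbation budget \epsilon
    R_{\mathrm{adv}}(f)
      \;=\; \mathbb{E}_{(x,y)\sim P}\Big[\sup_{\|x'-x\|\le\epsilon} \ell\big(f(x'),y\big)\Big]
    % under suitable conditions, this equals a minimax (distributionally robust)
    % risk over an ambiguity set \mathcal{B}_\epsilon(P) of distributions reachable
    % from P via transport maps that move each input by at most \epsilon
      \;=\; \sup_{Q\in\mathcal{B}_\epsilon(P)} \mathbb{E}_{(x,y)\sim Q}\big[\ell\big(f(x),y\big)\big].

The covering-number risk bound is then established for the right-hand side, which is why its behavior depends on the radius epsilon of the ambiguity set.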
