Fundamental Limits of Adversarial Learning

07/01/2020
by Kevin Bello, et al.

Robustness of machine learning methods is essential for modern practical applications. Given the arms race between attack and defense methods, one may ask what the fundamental limits of any defense mechanism are. In this work, we focus on the problem of learning from noise-injected data, where the existing literature falls short by either assuming a specific attack method or by over-specifying the learning problem. We shed light on the information-theoretic limits of adversarial learning without assuming a particular learning process or attacker. Finally, we apply our general bounds to a canonical set of non-trivial learning problems and provide examples of common types of attacks.
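To make the "learning from noise-injected data" setting concrete, here is a minimal sketch (not the paper's construction) of one common attack type: an adversary that flips each training label with some probability, while the learner runs empirical risk minimization over a simple hypothesis class. The threshold model, flip rate, and sample sizes below are illustrative assumptions, chosen only to show that a learner can still recover a good hypothesis despite the injected noise.

```python
import random

def make_data(n, flip_prob, rng):
    """Sample 1-D points x ~ U[0, 1] with true label 1[x > 0.5].
    The 'attacker' flips each training label independently with
    probability flip_prob (a simple noise-injection attack)."""
    data = []
    for _ in range(n):
        x = rng.random()
        y = 1 if x > 0.5 else 0
        if rng.random() < flip_prob:
            y = 1 - y  # injected label noise
        data.append((x, y))
    return data

def fit_threshold(data):
    """ERM over the class of threshold classifiers h_t(x) = 1[x > t]:
    pick the candidate threshold with the fewest training mistakes."""
    best_t, best_err = 0.0, float("inf")
    for t in [0.0] + sorted(x for x, _ in data):
        err = sum((1 if x > t else 0) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

rng = random.Random(0)
train = make_data(2000, flip_prob=0.2, rng=rng)   # 20% of labels attacked
t_hat = fit_threshold(train)

# Evaluate on clean (unattacked) test data.
test = [(x, 1 if x > 0.5 else 0) for x in (rng.random() for _ in range(2000))]
clean_acc = sum((1 if x > t_hat else 0) == y for x, y in test) / len(test)
```

With symmetric label flips at rate 0.2, the empirical minimizer still concentrates near the true threshold 0.5, so clean test accuracy stays high; as the flip rate approaches 0.5, the labels carry no information and no learner can do better than chance, which is the kind of information-theoretic barrier the paper's bounds formalize.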


