VC Classes are Adversarially Robustly Learnable, but Only Improperly

02/12/2019
by Omar Montasser, et al.

We study the question of learning an adversarially robust predictor. We show that any hypothesis class H with finite VC dimension is robustly PAC learnable with an improper learning rule. The requirement of being improper is necessary as we exhibit examples of hypothesis classes H with finite VC dimension that are not robustly PAC learnable with any proper learning rule.
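To make the objective concrete, here is a minimal sketch (not from the paper) of the robust risk that robust PAC learning targets: a sample point counts as an error if *any* allowed perturbation of it flips the prediction. The perturbation set, classifier, and data below are illustrative assumptions; the perturbation set `U(x)` is taken to be an interval `[x - eps, x + eps]`, checked by brute force over a grid.

```python
def robust_empirical_risk(h, samples, eps, grid=11):
    """Fraction of (x, y) pairs for which some z in [x - eps, x + eps]
    has h(z) != y. With eps = 0 this reduces to the standard 0-1 risk."""
    errors = 0
    for x, y in samples:
        # brute-force search over a grid of candidate perturbations of x
        if any(h(x + eps * (2 * i / (grid - 1) - 1)) != y
               for i in range(grid)):
            errors += 1
    return errors / len(samples)

# Illustrative example: a threshold classifier on the real line.
h = lambda z: 1 if z >= 0.0 else 0
samples = [(-1.0, 0), (-0.05, 0), (0.05, 1), (1.0, 1)]

print(robust_empirical_risk(h, samples, eps=0.0))  # 0.0: all points correct
print(robust_empirical_risk(h, samples, eps=0.1))  # 0.5: the two near-boundary points can be attacked
```

The point of the example is that a predictor with zero standard risk can still have large robust risk; the paper asks when a hypothesis class admits a learner driving the *robust* risk down, and shows the output predictor may have to come from outside the class itself.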


