Adversarially Robust Low Dimensional Representations

11/29/2019
by Pranjal Awasthi et al.
Adversarial, or test-time, robustness measures the susceptibility of a machine learning system to small perturbations of its inputs at test time. The phenomenon has attracted much empirical interest, since many existing ML systems perform poorly under imperceptible adversarial perturbations of the test inputs. Our theoretical understanding, however, remains limited and has mostly focused on supervised learning tasks.

In this work we study the problem of computing adversarially robust representations of data. We formulate a natural extension of Principal Component Analysis (PCA) in which the goal is to find a low-dimensional subspace that represents the given data with minimum projection error and is, in addition, robust to small perturbations measured in the ℓ_q norm (say q = ∞). Unlike PCA, which is solvable in polynomial time, this formulation is computationally intractable to optimize, as it captures the well-studied sparse PCA objective. We show the following algorithmic and statistical results:

- Polynomial-time algorithms that, in the worst case, achieve constant-factor approximations to the objective while violating the robustness constraint by only a constant factor.
- Our formulation (and algorithms) enjoy significant statistical benefits in sample complexity over standard PCA, on account of a "regularization effect" that we formalize using the well-studied spiked covariance model.
- Surprisingly, our algorithmic techniques can also be made robust to corruptions in the training data, in addition to yielding representations that are robust at test time. Here an adversary may corrupt potentially every data point by up to a specified amount in the ℓ_q norm. We further apply these techniques to mean estimation and clustering under adversarial corruptions of the training data.
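To make the baseline objective concrete, here is a minimal sketch (in numpy, with synthetic data) of standard PCA computed via SVD, along with the projection error that the paper's robust formulation also minimizes. The robustness constraint itself, bounding the error under small ℓ_q perturbations, is not implemented here; this only illustrates the quantity being constrained.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))   # 100 synthetic points in R^10
X = X - X.mean(axis=0)               # center the data

k = 3                                 # target subspace dimension
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:k].T @ Vt[:k]                 # projector onto the top-k principal subspace

# Projection error: total squared distance of the data to the subspace.
# PCA chooses the k-dimensional subspace minimizing this quantity.
err = np.linalg.norm(X - X @ P, "fro") ** 2
print(err)
```

The robust variant in the paper additionally requires the chosen subspace to keep this error small even when each point is perturbed by a bounded amount in ℓ_q, which is what makes the problem computationally hard.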


