Adversarially Robust Low Dimensional Representations

11/29/2019
by   Pranjal Awasthi, et al.

Adversarial or test-time robustness measures the susceptibility of a machine learning system to small perturbations made to the input at test time. This has attracted much interest on the empirical side, since many existing ML systems perform poorly under imperceptible adversarial perturbations to the test inputs. On the other hand, our theoretical understanding of this phenomenon is limited, and has mostly focused on supervised learning tasks.

In this work we study the problem of computing adversarially robust representations of data. We formulate a natural extension of Principal Component Analysis (PCA) where the goal is to find a low-dimensional subspace that represents the given data with minimum projection error and that is additionally robust to small perturbations measured in the ℓ_q norm (say q = ∞). Unlike PCA, which is solvable in polynomial time, our formulation is computationally intractable to optimize, as it captures the well-studied sparse PCA objective. We show the following algorithmic and statistical results:

- Polynomial-time algorithms in the worst case that achieve constant-factor approximations to the objective while only violating the robustness constraint by a constant factor.
- We prove that our formulation (and algorithms) also enjoy significant statistical benefits in terms of sample complexity over standard PCA, on account of a "regularization effect" that is formalized using the well-studied spiked covariance model.
- Surprisingly, we show that our algorithmic techniques can also be made robust to corruptions in the training data, in addition to yielding representations that are robust at test time. Here an adversary is allowed to corrupt potentially every data point up to a specified amount in the ℓ_q norm.

We further apply these techniques to mean estimation and clustering under adversarial corruptions to the training data.
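To build intuition for why ℓ_∞ robustness connects PCA to sparse PCA, here is a minimal numpy sketch (an illustration, not the paper's algorithm). For a unit principal direction v, the worst-case change in the projection ⟨v, x⟩ under a perturbation with ‖δ‖_∞ ≤ ε is achieved by δ = ε·sign(v) and equals ε‖v‖₁, so directions with small ℓ₁ norm (i.e., sparse directions) are inherently more robust. The spiked-covariance data below is a toy setup of my own choosing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spiked covariance model: a sparse signal direction plus isotropic noise.
d, n, eps = 50, 200, 0.1
spike = np.zeros(d)
spike[:5] = 1.0 / np.sqrt(5)  # sparse unit-norm signal direction
X = rng.normal(size=(n, 1)) * 3.0 * spike + rng.normal(size=(n, d))

# Top principal direction via SVD of the centered data (standard PCA).
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
v = Vt[0]

# Worst-case l_inf perturbation of the projection <v, x>:
#   max_{||delta||_inf <= eps} <v, delta> = eps * ||v||_1,
# attained at delta = eps * sign(v).
sensitivity = eps * np.abs(v).sum()
delta_worst = eps * np.sign(v)
assert np.isclose(v @ delta_worst, sensitivity)

print(f"l_inf sensitivity of top PC: eps * ||v||_1 = {sensitivity:.3f}")
print(f"dense upper bound:  eps * sqrt(d)       = {eps * np.sqrt(d):.3f}")
```

Since ‖v‖₁ ≤ √d for any unit vector v, the dense bound ε√d is the worst case; constraining the ℓ₁ norm of the direction (as sparse PCA effectively does) trades a little projection error for much smaller adversarial sensitivity.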

