On-Manifold Projected Gradient Descent

08/23/2023
by Aaron Mahler, et al.

This work provides a computable, direct, and mathematically rigorous approximation to the differential geometry of class manifolds for high-dimensional data, along with nonlinear projections from input space onto these class manifolds. The tools are applied to the setting of neural network image classifiers, where we generate novel, on-manifold data samples and implement a projected gradient descent algorithm for on-manifold adversarial training. The susceptibility of neural networks (NNs) to adversarial attack highlights the brittle nature of NN decision boundaries in input space. Introducing adversarial examples during training has been shown to reduce the susceptibility of NNs to adversarial attack; however, it has also been shown to reduce the accuracy of the classifier if the examples are not valid examples for that class. Realistic "on-manifold" examples have previously been generated from class manifolds in the latent space of an autoencoder. Our work explores these phenomena in a geometric and computational setting that is much closer to the raw, high-dimensional input space than can be provided by a VAE or other black-box dimensionality reductions. We employ conformally invariant diffusion maps (CIDM) to approximate class manifolds in diffusion coordinates, and develop the Nyström projection to project novel points onto class manifolds in this setting. On top of the manifold approximation, we leverage the spectral exterior calculus (SEC) to determine geometric quantities such as tangent vectors of the manifold. We use these tools to obtain adversarial examples that reside on a class manifold, yet fool a classifier. These misclassifications then become explainable in terms of human-understandable manipulations within the data, by expressing the on-manifold adversary in the semantic basis on the manifold.
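As a rough illustration of the pipeline described in the abstract, the sketch below builds a plain Gaussian-kernel diffusion map (standing in for the paper's CIDM), uses the Nyström formula to map a perturbed input back into diffusion coordinates, and returns to input space with a simple kernel-weighted pre-image as a stand-in for the paper's projection. The function names (`diffusion_map`, `nystrom_extend`, `preimage`, `on_manifold_pgd`), the toy circle data, and the outward-pushing gradient are illustrative assumptions, not the authors' implementation.

```python
# Conceptual sketch (not the authors' code): on-manifold PGD using a plain
# Gaussian-kernel diffusion map with a Nystrom out-of-sample extension and a
# kernel-weighted pre-image to return to input space.
import numpy as np

def diffusion_map(X, eps, n_coords=5):
    """Diffusion coordinates for the class samples X (n x d)."""
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)        # Markov normalization
    lam, phi = np.linalg.eig(P)
    order = np.argsort(-lam.real)[1:n_coords + 1]  # drop the trivial eigenpair
    return lam.real[order], phi.real[:, order]

def nystrom_extend(y, X, eps, lam, phi):
    """Map a new input-space point y into the diffusion coordinates."""
    k = np.exp(-np.sum((X - y) ** 2, axis=1) / eps)
    p = k / k.sum()
    return (p @ phi) / lam

def preimage(coords, X, phi, temp=1e-2):
    """Crude pre-image: kernel-weighted average of training points whose
    diffusion coordinates are close to `coords`."""
    d2 = np.sum((phi - coords) ** 2, axis=1)
    w = np.exp(-d2 / temp)
    w /= w.sum()
    return w @ X

def on_manifold_pgd(x0, grad_fn, X, eps, steps=10, lr=0.5):
    """Gradient ascent on a classifier loss, re-projected onto the class
    manifold after every step via the Nystrom extension + pre-image."""
    lam, phi = diffusion_map(X, eps)
    x = x0.copy()
    for _ in range(steps):
        x = x + lr * grad_fn(x)                  # off-manifold gradient step
        c = nystrom_extend(x, X, eps, lam, phi)  # back to diffusion coords
        x = preimage(c, X, phi)                  # back to input space
    return x

# Toy usage: class samples on a noisy circle; the "loss gradient" pushes
# points radially outward, and the projection pulls them back to the circle.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.normal(size=(200, 2))
adv = on_manifold_pgd(X[0], grad_fn=lambda x: x, X=X, eps=0.1)
print(adv)
```

In the paper's setting the gradient would come from the classifier's loss on a high-dimensional image, and the projection would use the CIDM/SEC machinery rather than this simple kernel reconstruction; the structure of the loop (perturb, then project back onto the class manifold) is the point of the sketch.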
