Hard-label Manifolds: Unexpected Advantages of Query Efficiency for Finding On-manifold Adversarial Examples

03/04/2021
by Washington Garcia, et al.

Designing deep networks that are robust to adversarial examples remains an open problem. Meanwhile, recent zeroth-order, hard-label attacks on image classification models have shown performance comparable to their first-order, gradient-level alternatives. It was recently shown in the gradient-level setting that regular adversarial examples leave the data manifold, while their on-manifold counterparts are in fact generalization errors. In this paper, we argue that query efficiency in the zeroth-order setting is connected to an adversary's traversal of the data manifold. To explain this behavior, we propose an information-theoretic argument based on a noisy manifold distance oracle, which leaks manifold information through the adversary's gradient estimate. Through numerical experiments measuring manifold-gradient mutual information, we show that this leakage varies with the effective problem dimensionality and the number of training points. On real-world datasets, multiple zeroth-order attacks that use dimension reduction exhibit the same universal behavior, producing samples closer to the data manifold and yielding up to a two-fold decrease in the manifold distance measure, regardless of model robustness. Our results suggest that accounting for manifold-gradient mutual information can inform the design of more robust models and help avoid leaking the sensitive data manifold.
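For intuition, the sketch below shows how a hard-label (zeroth-order) adversary can estimate a gradient from label-only queries, in the style of boundary-distance attacks such as OPT. It is a minimal illustration under assumed interfaces: model is a hypothetical black-box classifier returning only the top-1 class index, and the helper names are ours, not the paper's implementation.

    # Minimal sketch: OPT-style zeroth-order gradient estimation in the
    # hard-label setting. Assumes a black-box callable `model` that
    # returns only the predicted class index; all names are illustrative.
    import numpy as np

    def boundary_distance(model, x0, y0, theta, t_max=1.0, tol=1e-3):
        """Smallest t with model(x0 + t * theta/||theta||) != y0,
        found by doubling then bisection. This scalar distance is the
        only signal a hard-label adversary can query."""
        theta = theta / np.linalg.norm(theta)
        hi = t_max
        while model(x0 + hi * theta) == y0:
            hi *= 2.0  # expand until the decision boundary is crossed
        lo = 0.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if model(x0 + mid * theta) == y0:
                lo = mid
            else:
                hi = mid
        return hi

    def estimate_gradient(model, x0, y0, theta, sigma=1e-2, n_samples=20):
        """Random-direction finite differences on boundary_distance.
        Each query pair returns a noisy projection of the local
        geometry; this is the channel through which manifold
        information can leak into the adversary's gradient estimate."""
        g0 = boundary_distance(model, x0, y0, theta)
        grad = np.zeros_like(theta)
        for _ in range(n_samples):
            u = np.random.randn(*theta.shape)
            u /= np.linalg.norm(u)
            g_plus = boundary_distance(model, x0, y0, theta + sigma * u)
            grad += ((g_plus - g0) / sigma) * u
        return grad / n_samples

Each evaluation of boundary_distance costs several model queries, so the quality of the gradient estimate is tied directly to the query budget, which is why query efficiency and manifold traversal interact in this setting.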

Related research

Disentangling Adversarial Robustness and Generalization (12/03/2018)
Obtaining deep networks that are robust against adversarial examples and...

On-Manifold Projected Gradient Descent (08/23/2023)
This work provides a computable, direct, and mathematically rigorous app...

Manifold Mixup: Encouraging Meaningful On-Manifold Interpolation as a Regularizer (06/13/2018)
Deep networks often perform well on the data manifold on which they are ...

On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces (09/24/2018)
Recent studies have found that deep learning systems are vulnerable to a...

Manifold attack (09/13/2020)
Machine Learning in general and Deep Learning in particular has gained m...

Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations (04/07/2018)
Deep networks have achieved impressive results across a variety of impor...

Membership Model Inversion Attacks for Deep Networks (10/09/2019)
With the increasing adoption of AI, inherent security and privacy vulner...
