Adversarial Examples from Dimensional Invariance

04/13/2023
by Benjamin L. Badger, et al.

Adversarial examples have been found for various deep as well as shallow learning models, and have at various times been suggested to be either fixable model-specific bugs, inherent dataset features, or both. We present theoretical and empirical results showing that adversarial examples are approximate discontinuities resulting from models that specify approximately bijective maps f: R^n → R^m; n ≠ m over their inputs, and that this discontinuity follows from the topological invariance of dimension.
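
By the topological invariance of dimension, no continuous bijection with a continuous inverse exists between R^n and R^m when n ≠ m, so a model whose learned map is approximately bijective across dimensions of unequal size must behave (approximately) discontinuously somewhere. Adversarial perturbations probe exactly such near-discontinuities: a tiny input change produces a large output change. As an illustrative sketch only, not the paper's method, the code below uses the standard fast gradient sign method (FGSM; Goodfellow et al., 2014) to construct such a perturbation; `model`, `x`, `label`, and `eps` are assumed placeholders for an arbitrary trained PyTorch classifier and its inputs.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 eps: float = 0.03) -> torch.Tensor:
    """Shift x by eps along the sign of the loss gradient (FGSM sketch)."""
    # `model` is an assumed placeholder for any trained classifier
    # returning logits of shape (batch, classes).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # A perturbation that is tiny in input space can move the output across
    # a decision boundary: the learned map is approximately discontinuous
    # there, consistent with the thesis stated in the abstract above.
    return (x + eps * x.grad.sign()).detach()

Comparing model(x).argmax(dim=1) before and after the perturbation on a trained classifier typically shows label flips at small eps, which is the empirical signature of an approximate discontinuity.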
