
Adversarial Robustness by Design through Analog Computing and Synthetic Gradients

by Alessandro Cappelli, et al.

We propose a new defense mechanism against adversarial attacks inspired by an optical co-processor, providing robustness without compromising natural accuracy in both white-box and black-box settings. This hardware co-processor performs a nonlinear fixed random transformation whose parameters are unknown and impossible to retrieve with sufficient precision for large enough dimensions. In the white-box setting, our defense works by obfuscating the parameters of the random projection. Unlike other defenses relying on obfuscated gradients, we find that no reliable backward differentiable approximation can be built for the obfuscated parameters. Moreover, while our model reaches good natural accuracy with a hybrid backpropagation/synthetic-gradient method, the same approach is suboptimal when employed to generate adversarial examples. We find that the combination of a random projection and binarization in the optical system also improves robustness against various types of black-box attacks. Finally, our hybrid training method builds features that are robust against transfer attacks. We demonstrate our approach on a VGG-like architecture, placing the defense on top of the convolutional features, on CIFAR-10 and CIFAR-100. Code is available at
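To make the defense concrete, the core building block described above can be sketched in a few lines: a fixed random projection followed by a non-differentiable binarization. This is a minimal software simulation, not the authors' optical hardware or released code; the matrix `R`, the dimensions, and the `opu_layer` name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 512, 1024

# Fixed random projection matrix. In the optical co-processor these
# parameters are physically instantiated and unknown to the attacker;
# here we simply sample them once and keep them fixed (assumption).
R = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)

def opu_layer(x, threshold=0.0):
    """Simulated co-processor layer: nonlinear fixed random transformation.

    The binarization makes the layer non-differentiable, which is why
    training in the paper relies on a hybrid backpropagation /
    synthetic-gradient scheme rather than exact gradients.
    """
    z = R @ x
    return (z > threshold).astype(np.float32)

# Example: project a feature vector (e.g., convolutional features)
# through the fixed random layer.
x = rng.standard_normal(d_in)
y = opu_layer(x)
```

Because the output is binary and `R` is unknown to the attacker, gradient-based white-box attacks cannot differentiate through this layer exactly and must resort to surrogate gradients, which the abstract reports to be unreliable here.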

