Adversarial Robustness by Design through Analog Computing and Synthetic Gradients

01/06/2021
by Alessandro Cappelli, et al.

We propose a new defense mechanism against adversarial attacks, inspired by an optical co-processor, that provides robustness without compromising natural accuracy in both white-box and black-box settings. This hardware co-processor performs a nonlinear fixed random transformation whose parameters are unknown and impossible to retrieve with sufficient precision for large enough dimensions. In the white-box setting, our defense works by obfuscating the parameters of the random projection. Unlike other defenses that rely on obfuscated gradients, we find that no reliable backward differentiable approximation can be built for the obfuscated parameters. Moreover, while our model reaches good natural accuracy with a hybrid backpropagation/synthetic-gradient training method, the same approach is suboptimal when employed to generate adversarial examples. We find that the combination of a random projection and binarization in the optical system also improves robustness against various types of black-box attacks. Finally, our hybrid training method yields features that are robust to transfer attacks. We demonstrate our approach on a VGG-like architecture on CIFAR-10 and CIFAR-100, placing the defense on top of the convolutional features. Code is available at https://github.com/lightonai/adversarial-robustness-by-design.
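To make the defense layer concrete, here is a minimal PyTorch sketch, not the authors' implementation (which is in the linked repository). It assumes the optical transform computes the squared modulus of a fixed complex Gaussian random projection applied to a binarized input, as in LightOn's optical processing units; the class and parameter names are hypothetical. The sketch shows why backpropagation cannot pass through the defense: the projection lives in frozen buffers, and the binarization has zero gradient almost everywhere.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimulatedOPU(nn.Module):
        # Software stand-in for the optical co-processor: the input is
        # binarized, passed through a fixed complex Gaussian random
        # projection W, and the output intensity |Wx|^2 is measured.
        def __init__(self, in_features: int, out_features: int):
            super().__init__()
            # Buffers, not nn.Parameters: the projection is never trained,
            # and on the real hardware its values are unknown.
            scale = in_features ** -0.5
            self.register_buffer("w_real",
                                 torch.randn(out_features, in_features) * scale)
            self.register_buffer("w_imag",
                                 torch.randn(out_features, in_features) * scale)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = (x > 0).float()               # binarization: zero gradient a.e.
            real = F.linear(x, self.w_real)
            imag = F.linear(x, self.w_imag)
            return real.pow(2) + imag.pow(2)  # squared-modulus nonlinearity

    opu = SimulatedOPU(512, 8000)
    features = torch.randn(4, 512)   # e.g. flattened convolutional features
    out = opu(features)              # shape (4, 8000); no gradient path back

Because no gradient flows through this layer, the classifier above it can be trained with standard backpropagation while the convolutional stack below it must rely on synthetic gradients, which is the hybrid scheme described in the abstract.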


