Discretization based Solutions for Secure Machine Learning against Adversarial Attacks

02/08/2019
by Priyadarshini Panda, et al.

Adversarial examples are perturbed inputs designed (using a deep learning network's (DLN) parameter gradients) to mislead the DLN at test time. Intuitively, constraining the dimensionality of a network's inputs or parameters reduces the 'space' in which adversarial examples exist. Guided by this intuition, we demonstrate that discretization greatly improves the robustness of DLNs against adversarial attacks. Specifically, discretizing the input space (reducing the allowed pixel levels from 256 values, or 8-bit, to 4 values, or 2-bit) substantially improves the adversarial robustness of DLNs across a wide range of perturbations, at minimal loss in test accuracy. Furthermore, we find that Binary Neural Networks (BNNs) and related variants are intrinsically more robust than their full-precision counterparts in adversarial scenarios. Combining input discretization with BNNs furthers the robustness, even waiving the need for adversarial training for certain perturbation magnitudes. We evaluate the effect of discretization on the MNIST, CIFAR10, CIFAR100 and ImageNet datasets. Across all datasets, we observe maximal adversarial resistance with 2-bit input discretization, which incurs an adversarial accuracy loss of just 1-2%.
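The input-discretization defense described above amounts to quantizing each pixel to a small set of allowed levels before feeding it to the network. A minimal sketch of 2-bit uniform quantization is below; the function name and the uniform-binning scheme are illustrative assumptions, not the authors' exact preprocessing code.

```python
import numpy as np

def discretize(images, bits=2):
    """Quantize pixel intensities in [0, 1] to 2**bits evenly spaced levels.

    Illustrative sketch of input discretization: with bits=2, every pixel
    is snapped to one of 4 allowed values (0, 1/3, 2/3, 1), so small
    adversarial perturbations below half a quantization step are erased.
    """
    levels = 2 ** bits  # e.g. bits=2 -> 4 allowed pixel values
    # Scale to [0, levels-1], round to the nearest level, scale back to [0, 1].
    return np.round(images * (levels - 1)) / (levels - 1)

# Example: a normalized 8-bit intensity ramp (256 values) collapses to 4 levels.
img = np.linspace(0.0, 1.0, 256)
print(np.unique(discretize(img, bits=2)))
```

In this sketch the quantization step is 1/3, so any perturbation smaller than about 1/6 in pixel intensity is rounded away, which is one intuition for why coarser inputs shrink the space of effective adversarial perturbations.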

Related research:

- 02/01/2019, A New Family of Neural Networks Provably Resistant to Adversarial Attacks: Adversarial attacks add perturbations to the input features with the int...
- 06/28/2018, Adversarial Reprogramming of Neural Networks: Deep neural networks are susceptible to adversarial attacks. In computer...
- 11/27/2019, Can Attention Masks Improve Adversarial Robustness?: Deep Neural Networks (DNNs) are known to be susceptible to adversarial e...
- 08/07/2019, Improved Adversarial Robustness by Reducing Open Space Risk via Tent Activations: Adversarial examples contain small perturbations that can remain imperce...
- 06/25/2022, Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising: Despite substantial advances in network architecture performance, the su...
- 04/11/2021, Achieving Model Robustness through Discrete Adversarial Training: Discrete adversarial attacks are symbolic perturbations to a language in...
- 02/26/2018, Retrieval-Augmented Convolutional Neural Networks for Improved Robustness against Adversarial Examples: We propose a retrieval-augmented convolutional network and propose to tr...
