Robustification of deep net classifiers by key based diversified aggregation with pre-filtering

05/14/2019
by Olga Taran et al.

In this paper, we address the problem of machine learning systems' vulnerability to adversarial attacks. We propose and investigate a Key based Diversified Aggregation (KDA) mechanism as a defense strategy. The KDA assumes that the attacker (i) knows the architecture of the classifier and the defense strategy in use, (ii) has access to the training data set, but (iii) does not know the secret key. The robustness of the system is achieved by a specially designed key-based randomization that prevents gradient back-propagation and the construction of a "bypass" system. The randomization is performed simultaneously in several channels, and a multi-channel aggregation stabilizes its results by combining the soft outputs of the classifiers in the individual channels. The experimental evaluation demonstrates the high robustness and universality of the KDA against the most efficient gradient-based attacks, such as those proposed by N. Carlini and D. Wagner, as well as against non-gradient-based sparse adversarial perturbations such as One Pixel attacks.
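To make the described mechanism concrete, the sketch below illustrates the general pattern from the abstract: a pre-filtering step, several channels whose inputs are randomized by a secret-key-seeded transform, and averaging of the per-channel soft outputs. This is a minimal illustration under stated assumptions, not the authors' exact construction: the choice of transform (a key-seeded pixel permutation), the toy classifier, and the names key_transform, pre_filter, and kda_predict are all illustrative.

```python
# Minimal sketch of a key-based multi-channel aggregation defense, loosely
# following the KDA idea from the abstract. The transform, pre-filter, and
# classifier below are placeholders, not the paper's actual components.

import numpy as np


def key_transform(x, key):
    """Apply a secret-key-seeded permutation to the flattened input (assumed transform)."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(x.size)
    return x.reshape(-1)[perm].reshape(x.shape)


def pre_filter(x):
    """Toy pre-filtering stage (simple range clipping as a stand-in)."""
    return np.clip(x, 0.0, 1.0)


def toy_classifier(x, n_classes=10):
    """Stand-in for a per-channel deep net; returns soft (probability) outputs."""
    logits = np.tanh(x.reshape(-1)[:n_classes])  # arbitrary placeholder logic
    e = np.exp(logits - logits.max())
    return e / e.sum()


def kda_predict(x, keys, n_classes=10):
    """Average the soft outputs of all key-randomized channels."""
    x = pre_filter(x)
    probs = [toy_classifier(key_transform(x, k), n_classes) for k in keys]
    return np.mean(probs, axis=0)


if __name__ == "__main__":
    image = np.random.rand(8, 8)      # dummy input
    secret_keys = [11, 23, 42, 97]    # one secret key per channel
    print(kda_predict(image, secret_keys))
```

Because each channel's randomization depends on a key the attacker does not know, gradients computed on a surrogate system without the keys do not align with the defended system, which is the intuition behind the multi-channel design.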


Related research

04/01/2019 · Defending against adversarial attacks by randomized diversification
The vulnerability of machine learning systems to adversarial attacks que...

09/04/2023 · Hindering Adversarial Attacks with Multiple Encrypted Patch Embeddings
In this paper, we propose a new key-based defense focusing on both effic...

06/08/2020 · Tricking Adversarial Attacks To Fail
Recent adversarial defense approaches have failed. Untargeted gradient-b...

05/28/2019 · Certifiably Robust Interpretation in Deep Learning
Although gradient-based saliency maps are popular methods for deep learn...

07/19/2020 · Exploiting vulnerabilities of deep neural networks for privacy protection
Adversarial perturbations can be added to images to protect their conten...

02/04/2019 · Theoretical evidence for adversarial robustness through randomization: the case of the Exponential family
This paper investigates the theory of robustness against adversarial att...

01/04/2021 · Local Competition and Stochasticity for Adversarial Robustness in Deep Learning
This work addresses adversarial robustness in deep learning by consideri...
