Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples

02/08/2021
by Omer Faruk Tuna, et al.

Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies in this new area, called "Adversarial Machine Learning", have been conducted to devise new adversarial attacks and to defend against them with more robust DNN architectures. However, almost all research so far has concentrated on utilising the model's loss function to craft adversarial examples or to create robust models. This study instead explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout Sampling, for adversarial attacks: the input is perturbed toward regions the model has not seen before. We propose new attack ideas based on the epistemic uncertainty of the model. Our results show that our proposed hybrid attack approach increases the attack success rates from 82.59% to 90.03%.
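The core ingredient of the paper, Monte-Carlo Dropout Sampling, estimates epistemic uncertainty by keeping dropout active at inference time and measuring the spread of the resulting stochastic predictions. The following is a minimal NumPy sketch of that idea on a toy two-layer network; the weights, layer sizes, and dropout rate are hypothetical stand-ins, not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy trained network (hypothetical): 4 inputs -> 16 hidden units -> 3 classes.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_predict(x, T=100, p_drop=0.5):
    """Run T stochastic forward passes with dropout kept ACTIVE at test time.

    Returns the mean predictive distribution and the per-class variance
    across passes, a Monte-Carlo estimate of epistemic uncertainty.
    """
    preds = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
        mask = rng.random(h.shape) >= p_drop   # Bernoulli dropout mask
        h = h * mask / (1.0 - p_drop)          # inverted-dropout scaling
        preds.append(softmax(h @ W2))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)

x = rng.normal(size=(4,))
mean_pred, epistemic_var = mc_dropout_predict(x)
```

An attacker following the paper's idea would then perturb `x` in a direction that increases `epistemic_var` (for example, estimated by finite differences or by backpropagating through the sampled passes), pushing the input into regions where the model's knowledge is weakest.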


Related research

- Closeness and Uncertainty Aware Adversarial Examples Detection in Adversarial Machine Learning (12/11/2020): Deep neural network (DNN) architectures are considered to be robust to r...
- Prior Networks for Detection of Adversarial Attacks (12/06/2018): Adversarial examples are considered a serious issue for safety critical ...
- UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of Perturbed Samples (04/18/2022): A rising number of botnet families have been successfully detected using...
- Detecting Adversarial Samples from Artifacts (03/01/2017): Deep neural networks (DNNs) are powerful nonlinear architectures that ar...
- Adversarial Attacks Against Uncertainty Quantification (09/19/2023): Machine-learning models can be fooled by adversarial examples, i.e., car...
- Idealised Bayesian Neural Networks Cannot Have Adversarial Examples: Theoretical and Empirical Study (06/02/2018): We prove that idealised discriminative Bayesian neural networks, capturi...
- Evaluating Machine Unlearning via Epistemic Uncertainty (08/23/2022): There has been a growing interest in Machine Unlearning recently, primar...
