Minimum Uncertainty Based Detection of Adversaries in Deep Neural Networks

04/05/2019
by Fatemeh Sheikholeslami, et al.

Despite their unprecedented performance in various domains, the use of Deep Neural Networks (DNNs) in safety-critical environments is severely limited in the presence of even small adversarial perturbations. The present work develops a randomized approach to detecting such perturbations based on minimum-uncertainty metrics that rely on sampling at the hidden layers during the DNN inference stage. The sampling probabilities are designed for effective detection of adversarially corrupted inputs. Being modular, the novel detector of adversaries can be conveniently employed by any pre-trained DNN with no extra training overhead. Selecting which units to sample per hidden layer entails quantifying the DNN output uncertainty from the viewpoint of Bayesian neural networks, where the overall uncertainty is expressed in terms of its layer-wise components, which also promotes scalability. Sampling probabilities are then sought by minimizing uncertainty measures layer by layer, leading to a novel convex optimization problem that admits an exact solver with a superlinear convergence rate. By simplifying the objective function, low-complexity approximate solvers are also developed. In addition to offering valuable insights, these approximations link the novel approach with state-of-the-art randomized adversarial detectors. The effectiveness of the novel detectors relative to competing alternatives is highlighted through extensive tests against various types of adversarial attacks of varying strength.
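To make the idea concrete, below is a minimal sketch (not the authors' implementation) of randomized inference-time sampling on a pre-trained feed-forward classifier: hidden units are kept or dropped according to per-layer probabilities, several stochastic passes are averaged, and an input is flagged as adversarial when a simple uncertainty proxy (predictive entropy) exceeds a threshold. The names sampled_forward, keep_probs, tau, and the entropy-based score are illustrative assumptions; the paper's detector instead selects the sampling probabilities by minimizing layer-wise uncertainty measures through the convex program described in the abstract.

    # Illustrative sketch only, assuming a list of torch.nn.Linear layers from a
    # pre-trained classifier. The per-layer keep probabilities, the threshold tau,
    # and the entropy score are placeholders, not the paper's optimized quantities.
    import torch
    import torch.nn.functional as F

    def sampled_forward(layers, x, keep_probs):
        """One stochastic forward pass: after each hidden layer, keep units with
        the given per-layer probability and rescale (inverted-dropout style)."""
        h = x
        for layer, p in zip(layers[:-1], keep_probs):
            h = F.relu(layer(h))
            mask = torch.bernoulli(torch.full_like(h, p))
            h = h * mask / p
        return layers[-1](h)  # final linear layer: class logits

    def uncertainty_score(layers, x, keep_probs, num_samples=20):
        """Predictive uncertainty from repeated sampled passes, measured here as
        the entropy of the averaged softmax output (an MC-dropout-style proxy)."""
        with torch.no_grad():
            probs = torch.stack([
                F.softmax(sampled_forward(layers, x, keep_probs), dim=-1)
                for _ in range(num_samples)
            ]).mean(dim=0)
        return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

    def flag_adversarial(layers, x, keep_probs, tau=0.5, num_samples=20):
        """Flag inputs whose sampled-inference uncertainty exceeds the threshold."""
        return uncertainty_score(layers, x, keep_probs, num_samples) > tau

For instance, with layers = [nn.Linear(784, 256), nn.Linear(256, 10)] and keep_probs = [0.7], flag_adversarial returns a boolean per input; in the paper the keep probabilities would instead be chosen layer by layer to minimize the uncertainty objective rather than fixed by hand.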
