Deep Probabilistic Models to Detect Data Poisoning Attacks

12/03/2019
by Mahesh Subedar, et al.

Data poisoning attacks compromise the integrity of machine-learning models by introducing malicious training samples that influence the results at test time. In this work, we investigate backdoor data poisoning attacks on deep neural networks (DNNs) in which a backdoor pattern is inserted into the training images. The resulting attack misclassifies poisoned test samples while maintaining high accuracy on the clean test set. We present two approaches for detecting such poisoned samples by quantifying the uncertainty estimates associated with the trained models. In the first approach, we model the outputs of the various layers (deep features) with parametric probability distributions learned from a clean held-out dataset. At inference, the likelihoods of the deep features with respect to these distributions are calculated to derive uncertainty estimates. In the second approach, we use Bayesian deep neural networks trained with mean-field variational inference to estimate the model uncertainty associated with the predictions. The uncertainty estimates from both methods are used to discriminate clean from poisoned samples.
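The first approach can be sketched in a few lines of numpy. Everything below is an illustrative assumption rather than the authors' exact procedure: the diagonal-Gaussian feature model, the 5th-percentile likelihood threshold, and the simulated feature vectors all stand in for details the abstract does not specify. In practice the feature vectors would come from intermediate layers of the trained DNN.

```python
import numpy as np

# Assumed sketch of the first approach: fit a Gaussian with a diagonal
# covariance to deep features extracted from a clean held-out set, then flag
# test samples whose log-likelihood under that model falls below a threshold.

rng = np.random.default_rng(0)

def fit_gaussian(feats):
    """Estimate per-dimension mean and variance from clean held-out features."""
    mu = feats.mean(axis=0)
    var = feats.var(axis=0) + 1e-6  # small floor for numerical stability
    return mu, var

def log_likelihood(x, mu, var):
    """Diagonal-Gaussian log-likelihood of feature vectors x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

# Stand-ins for deep features; in practice these come from a DNN layer.
clean_heldout = rng.normal(0.0, 1.0, size=(1000, 64))
clean_test = rng.normal(0.0, 1.0, size=(200, 64))
poisoned_test = rng.normal(3.0, 1.0, size=(200, 64))  # shifted by a trigger

mu, var = fit_gaussian(clean_heldout)
# Threshold at the 5th percentile of clean held-out likelihoods (assumed choice).
threshold = np.percentile(log_likelihood(clean_heldout, mu, var), 5)

flagged = log_likelihood(poisoned_test, mu, var) < threshold
print(f"poisoned samples flagged: {flagged.mean():.0%}")
```

The threshold trades off false positives on clean data against missed poisoned samples; calibrating it on held-out clean likelihoods keeps the clean false-positive rate near the chosen percentile.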
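The second approach can likewise be illustrated with a toy mean-field model. This is not the authors' implementation: the single-layer classifier, the posterior standard deviations, and the entropy-based score are assumptions chosen to show the mechanism, namely that sampling weights from the variational posterior and averaging the softmax outputs yields high predictive entropy on inputs far from the clean training manifold.

```python
import numpy as np

# Assumed sketch of the second approach: in a Bayesian DNN trained with
# mean-field variational inference, each weight has an independent Gaussian
# posterior q(w) = N(mu, sigma^2). At test time we draw weights repeatedly,
# average the softmax outputs, and use the entropy of that averaged
# prediction as a model-uncertainty score.

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(x, w_mu, w_sigma, n_samples=50):
    """Entropy of the posterior-averaged predictive distribution."""
    probs = [softmax(x @ rng.normal(w_mu, w_sigma)) for _ in range(n_samples)]
    p = np.mean(probs, axis=0)
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

# Toy single-layer classifier: 2 input features, 3 classes (illustrative values).
w_mu = np.array([[5.0, 0.0, 0.0],   # feature 0 strongly indicates class 0
                 [0.0, 0.0, 0.0]])  # feature 1 was never informative
w_sigma = np.full((2, 3), 0.1)      # learned posterior standard deviations

clean_x = np.array([[2.0, 0.0]])    # activates a feature the model learned
poison_x = np.array([[0.0, 10.0]])  # trigger-like input off the clean manifold

h_clean = predictive_entropy(clean_x, w_mu, w_sigma).item()
h_poison = predictive_entropy(poison_x, w_mu, w_sigma).item()
print(f"predictive entropy clean: {h_clean:.3f}, poisoned: {h_poison:.3f}")
```

On the clean input every posterior sample agrees on class 0, so the averaged prediction is sharp and its entropy is near zero; on the trigger-like input the sampled weights disagree, the averaged prediction approaches uniform, and the entropy is high, which is the signal used to discriminate poisoned samples.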


Related research

- Probabilistic Modeling of Deep Features for Out-of-Distribution and Adversarial Detection (09/25/2019): We present a principled approach for detecting out-of-distribution (OOD)...
- Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks (01/15/2021): Deep neural networks (DNNs) are known vulnerable to backdoor attacks, a ...
- Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency (03/27/2023): Deep neural networks are proven to be vulnerable to backdoor attacks. De...
- Uncertainty aware multimodal activity recognition with Bayesian inference (11/27/2018): Deep neural networks (DNNs) provide state-of-the-art results for a multi...
- Adversarial and Clean Data Are Not Twins (04/17/2017): Adversarial attack has cast a shadow on the massive success of deep neur...
- Sharpness-Aware Data Poisoning Attack (05/24/2023): Recent research has highlighted the vulnerability of Deep Neural Network...
- Adversarial Neuron Pruning Purifies Backdoored Deep Models (10/27/2021): As deep neural networks (DNNs) are growing larger, their requirements fo...
