
A BIC based Mixture Model Defense against Data Poisoning Attacks on Classifiers

05/28/2021
by Xi Li, et al.

Data Poisoning (DP) is an effective attack that causes trained classifiers to misclassify their inputs. DP attacks significantly degrade a classifier's accuracy by covertly injecting attack samples into the training set. We herein propose a novel Bayesian Information Criterion (BIC)-based mixture model defense against DP attacks that is broadly applicable to different classifier structures and makes no strong assumptions about the attacker. The defense: 1) applies a mixture model both to well-fit potentially multi-modal class distributions and to capture adversarial samples within a small subset of mixture components; 2) jointly identifies poisoned components and samples by minimizing the BIC cost over all classes, with the identified poisoned data removed prior to classifier training. Our experimental results, for various classifier structures, demonstrate the effectiveness and universality of our defense under strong DP attacks, as well as its superiority over related works.
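To make the approach concrete, here is a minimal, hypothetical sketch of the general recipe, not the authors' exact algorithm: for each class, a Gaussian mixture is selected by BIC, and components carrying very little probability mass are treated as candidate poisoned modes whose samples are removed before classifier training. The use of scikit-learn's GaussianMixture, the per-class model-selection loop, and the low-weight flagging heuristic (including the weight_threshold parameter) are illustrative assumptions; the paper instead jointly minimizes a BIC cost over all classes to identify poisoned components and samples.

# Hypothetical sketch: per-class BIC model selection for a Gaussian mixture,
# followed by removal of samples in suspiciously low-weight components.
# This illustrates the general idea, not the paper's algorithm.
import numpy as np
from sklearn.mixture import GaussianMixture

def filter_training_set(X, y, max_components=5, weight_threshold=0.05):
    """Return a boolean mask over samples (True = keep for training)."""
    keep = np.ones(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        Xc = X[idx]
        # Select the number of mixture components for this class by BIC.
        best_gmm, best_bic = None, np.inf
        for k in range(1, min(max_components, len(Xc)) + 1):
            gmm = GaussianMixture(n_components=k, covariance_type="full",
                                  random_state=0).fit(Xc)
            bic = gmm.bic(Xc)
            if bic < best_bic:
                best_gmm, best_bic = gmm, bic
        # Heuristic (assumption): components carrying very little probability
        # mass are flagged as potentially poisoned and their samples dropped.
        labels = best_gmm.predict(Xc)
        for comp in np.where(best_gmm.weights_ < weight_threshold)[0]:
            keep[idx[labels == comp]] = False
    return keep

# Usage (hypothetical data): train only on the retained samples.
# keep = filter_training_set(X_train, y_train)
# clf.fit(X_train[keep], y_train[keep])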


Related research

10/31/2018 - A Mixture Model Based Defense for Data Poisoning Attacks Against Naive Bayes Spam Filters
Naive Bayes spam filters are highly susceptible to data poisoning attack...

06/08/2020 - Tricking Adversarial Attacks To Fail
Recent adversarial defense approaches have failed. Untargeted gradient-b...

06/11/2022 - Bilateral Dependency Optimization: Defending Against Model-inversion Attacks
Through using only a well-trained classifier, model-inversion (MI) attac...

02/27/2020 - Membership Inference Attacks and Defenses in Supervised Learning via Generalization Gap
This work studies membership inference (MI) attack against classifiers, ...

11/18/2020 - Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
Data poisoning and backdoor attacks manipulate victim models by maliciou...