Run-Off Election: Improved Provable Defense against Data Poisoning Attacks

02/05/2023
by Keivan Rezaei, et al.

In data poisoning attacks, an adversary tries to change a model's prediction by adding, modifying, or removing samples from the training data. Recently, ensemble-based approaches have been proposed for obtaining provable defenses against data poisoning, where predictions are made by taking a majority vote across multiple base models. In this work, we show that relying solely on the majority vote in ensemble defenses is wasteful, as it ignores the information available in the logits layers of the base models. Instead, we propose Run-Off Election (ROE), a novel aggregation method based on a two-round election across the base models: in the first round, each model votes for its preferred class, and then a second, run-off round is held between the top two classes from the first round. Based on this approach, we propose DPA+ROE and FA+ROE, defense methods built on the Deep Partition Aggregation (DPA) and Finite Aggregation (FA) approaches from prior work. We show how to obtain robustness guarantees for these methods using ideas inspired by dynamic programming and duality. We evaluate our methods on MNIST, CIFAR-10, and GTSRB, and obtain improvements in certified accuracy of up to 4.73%, establishing a new state-of-the-art in (pointwise) certified robustness against data poisoning. In many cases, our approach outperforms the prior state-of-the-art even when using 32 times less computational power.
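
To make the aggregation step concrete, below is a minimal NumPy sketch of the two-round election over the base models' logits. The function name run_off_election and the tie-breaking rules are illustrative assumptions of this sketch; the certified-robustness computation via dynamic programming and duality described in the abstract is not reproduced here.

```python
import numpy as np

def run_off_election(logits: np.ndarray) -> int:
    """Two-round Run-Off Election over an ensemble's logits.

    logits: array of shape (n_models, n_classes), where row i holds
    base model i's logits. Returns the elected class index.
    """
    n_models, n_classes = logits.shape

    # Round 1: every base model casts a vote for its top-scoring class.
    first_choices = logits.argmax(axis=1)
    votes = np.bincount(first_choices, minlength=n_classes)

    # The two classes with the most first-round votes advance; a stable
    # sort breaks ties in favor of the smaller class index (a simplifying
    # assumption of this sketch).
    finalist_a, finalist_b = np.argsort(-votes, kind="stable")[:2]

    # Round 2: each model compares the two finalists' logits directly, so
    # models whose first choice was eliminated still influence the result.
    votes_a = int((logits[:, finalist_a] >= logits[:, finalist_b]).sum())
    votes_b = n_models - votes_a
    return int(finalist_a) if votes_a >= votes_b else int(finalist_b)

# Example with five base models and three classes. Classes 0 and 1 tie in
# round 1 with two votes each and advance; in the run-off, the model that
# preferred class 2 sides with class 1, which wins 3-2.
logits = np.array([
    [3.0, 0.0, 2.9],
    [3.0, 0.1, 2.9],
    [0.0, 3.0, 2.9],
    [0.1, 3.0, 2.9],
    [0.0, 0.1, 3.0],
])
print(run_off_election(logits))  # -> 1
```

The second round consults the logits of every base model, including those whose first choice was eliminated; this is exactly the information that a plain majority vote discards.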

Related research

06/26/2020 · Deep Partition Aggregation: Provable Defense against General Poisoning Attacks
Adversarial poisoning attacks distort training data in order to corrupt ...

02/22/2023 · Feature Partition Aggregation: A Fast Certified Defense Against a Union of Sparse Adversarial Attacks
Deep networks are susceptible to numerous types of adversarial attacks. ...

01/19/2021 · On Provable Backdoor Defense in Collaborative Learning
As collaborative learning allows joint training of a model using multipl...

12/07/2020 · Certified Robustness of Nearest Neighbors against Data Poisoning Attacks
Data poisoning attacks aim to corrupt a machine learning model via modif...

08/02/2023 · Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks
Despite the broad application of Machine Learning models as a Service (M...

05/08/2021 · Provable Guarantees against Data Poisoning Using Self-Expansion and Compatibility
A recent line of work has shown that deep networks are highly susceptibl...

12/07/2022 · The BeMi Stardust: a Structured Ensemble of Binarized Neural Networks
Binarized Neural Networks (BNNs) are receiving increasing attention due ...
