Contributor-Aware Defenses Against Adversarial Backdoor Attacks

05/28/2022
by Glenn Dawson, et al.

Deep neural networks for image classification are well known to be vulnerable to adversarial attacks. One such attack that has garnered recent attention is the adversarial backdoor attack, which has demonstrated the capability to perform targeted misclassification of specific examples. In particular, backdoor attacks attempt to force a model to learn spurious relations between backdoor trigger patterns and false labels. In response to this threat, numerous defensive measures have been proposed; however, these defenses focus on backdoor pattern detection, which may be unreliable against novel or unexpected backdoor pattern designs. We introduce a novel re-contextualization of the adversarial setting, where the presence of an adversary implicitly admits the existence of multiple database contributors. Under the mild assumption of contributor awareness, it becomes possible to exploit this knowledge to defend against backdoor attacks by destroying the false label associations. We propose a contributor-aware universal defensive framework for learning in the presence of multiple, potentially adversarial data sources that utilizes semi-supervised ensembles and learning from crowds to filter the false labels produced by adversarial triggers. Importantly, this defensive strategy is agnostic to backdoor pattern design, as it functions without needing, or even attempting, to perform either adversary identification or backdoor pattern detection during training or inference. Our empirical studies demonstrate the robustness of the proposed framework against adversarial backdoor attacks from multiple simultaneous adversaries.
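The core idea of contributor-aware filtering can be illustrated with a minimal sketch. Note that this is an illustrative toy, not the paper's actual method: the synthetic data, the nearest-centroid ensemble members, and the `consensus_filter` helper are all hypothetical stand-ins for the semi-supervised ensembles described in the abstract. The sketch assumes each training example carries a contributor ID, trains one model per contributor, and keeps a contributor's label only when the other contributors' models corroborate it, so a false label association asserted by a single adversary finds no cross-contributor support.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: four contributors supply two-class 2D data; the last
# contributor is adversarial and flips its labels (a simple stand-in for
# backdoor-induced false labels).
def make_data(n, flip=False):
    y = rng.integers(0, 2, n)                       # true labels in {0, 1}
    X = rng.normal(loc=y[:, None] * 3.0, scale=1.0, size=(n, 2))
    return X, (1 - y if flip else y)                # reported labels

contribs = [make_data(100) for _ in range(3)] + [make_data(100, flip=True)]

# Per-contributor nearest-centroid classifiers (hypothetical stand-ins for
# the ensemble members in the paper's framework).
def fit_centroids(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

models = [fit_centroids(X, y) for X, y in contribs]

# Consensus relabeling: each contributor's labels are checked against the
# majority vote of the *other* contributors' models; examples whose reported
# label lacks cross-contributor support are dropped.
def consensus_filter(contribs, models):
    cleaned = []
    for i, (X, y) in enumerate(contribs):
        votes = np.stack([predict(m, X)
                          for j, m in enumerate(models) if j != i])
        consensus = (votes.mean(axis=0) > 0.5).astype(int)
        keep = consensus == y
        cleaned.append((X[keep], y[keep]))
    return cleaned

cleaned = consensus_filter(contribs, models)
```

On this toy data, nearly all of the honest contributors' examples survive the filter, while the adversary's flipped labels are almost entirely discarded; the adversary is never explicitly identified, matching the abstract's claim that neither adversary identification nor trigger detection is required.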

