Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack

08/31/2023
by Sze Jue Yang, et al.

Vulnerabilities to backdoor attacks have recently threatened the trustworthiness of machine learning models in practical applications. Conventional wisdom suggests that not everyone can be an attacker, since designing a trigger generation algorithm often requires significant effort and extensive experimentation to ensure the attack's stealthiness and effectiveness. This paper shows, however, that a more severe backdoor threat exists: anyone can exploit an easily accessible algorithm to mount a silent backdoor attack. Specifically, the attacker can employ widely used lossy image compression, available in a plethora of compression tools, to effortlessly inject a trigger pattern into an image without leaving any noticeable trace; i.e., the generated triggers are natural artifacts. No extensive knowledge is required beyond clicking the "convert" or "save as" button in a lossy image compression tool. With this attack, the adversary does not need to design a trigger generator, as in prior works, and only needs to poison the data. Empirically, the proposed attack consistently achieves a 100% attack success rate on CIFAR-10, GTSRB, and CelebA. More significantly, it can still achieve an almost 100% attack success rate with low poisoning rates in the clean-label setting. The triggers generated by one lossy compression algorithm are also transferable to other related compression algorithms, exacerbating the severity of this backdoor threat. This work takes another crucial step toward understanding the extensive risks of backdoor attacks in practice, urging practitioners to investigate similar attacks and relevant backdoor mitigation methods.
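The data-poisoning step described above can be illustrated with a small sketch. The snippet below is a minimal, hypothetical illustration rather than the authors' code: it round-trips a fraction of training images through JPEG compression via Pillow and relabels them to an attacker-chosen target class, so that the compression artifacts themselves act as the trigger. The function names, poisoning rate, and JPEG quality setting are illustrative assumptions.

```python
import io
import random
from PIL import Image


def jpeg_compress(img: Image.Image, quality: int = 10) -> Image.Image:
    """Round-trip an image through lossy JPEG encoding so that
    compression artifacts serve as the (barely visible) trigger."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def poison_dataset(samples, target_label, poison_rate=0.1, quality=10, seed=0):
    """Return a copy of (image, label) pairs where a random subset is
    JPEG-compressed and relabeled to the attacker's target class."""
    rng = random.Random(seed)
    poisoned = []
    for img, label in samples:
        if rng.random() < poison_rate:
            poisoned.append((jpeg_compress(img, quality), target_label))
        else:
            poisoned.append((img, label))
    return poisoned
```

For a clean-label variant, one would compress only images that already belong to the target class and leave their labels unchanged, which matches the clean-label setting mentioned in the abstract.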

research
10/17/2022

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class

In recent years, machine learning models have been shown to be vulnerabl...
research
01/14/2018

Towards Realistic Threat Modeling: Attack Commodification, Irrelevant Vulnerabilities, and Unrealistic Assumptions

Current threat models typically consider all possible ways an attacker c...
research
11/16/2021

Practical Timing Side Channel Attacks on Memory Compression

Compression algorithms are widely used as they save memory without losin...
research
03/19/2018

When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks

Attacks against machine learning systems represent a growing threat as h...
research
01/03/2023

Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition

Deep neural networks (DNNs) are vulnerable to a class of attacks called ...
research
06/26/2019

Heuristic Approach Towards Countermeasure Selection using Attack Graphs

Selecting the optimal set of countermeasures is a challenging task that ...
research
04/11/2023

Algorithms for Reconstructing DDoS Attack Graphs using Probabilistic Packet Marking

DoS and DDoS attacks are widely used and pose a constant threat. Here we...
