Backdooring and Poisoning Neural Networks with Image-Scaling Attacks

03/19/2020
by Erwin Quiring, et al.

Backdoor and poisoning attacks are a major threat to the security of machine-learning and vision systems. Often, however, these attacks leave visible artifacts in the images that can be detected by visual inspection and thus weaken the attacks' efficacy. In this paper, we propose a novel strategy for hiding backdoor and poisoning attacks. Our approach builds on a recent class of attacks against image scaling, which allow an adversary to manipulate an image so that its content changes when it is scaled to a specific resolution. By combining poisoning and image-scaling attacks, we can conceal the trigger of backdoors as well as hide the overlays of clean-label poisoning. Furthermore, we consider the detection of image-scaling attacks and derive an adaptive attack. In an empirical evaluation, we demonstrate the effectiveness of our strategy. First, we show that backdoors and poisoning work equally well when combined with image-scaling attacks. Second, we demonstrate that current detection defenses against image-scaling attacks are insufficient to uncover our manipulations. Overall, our work provides a novel means for hiding traces of manipulations and is applicable to different poisoning approaches.
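To make the mechanism the abstract refers to concrete, the following minimal sketch (not code from the paper) illustrates an image-scaling attack against a hand-rolled nearest-neighbor downscaler. The function names and the sampling rule are assumptions for illustration; the paper's attacks instead target the interpolation routines of real preprocessing libraries and additionally constrain the manipulation to stay visually inconspicuous.

# A minimal sketch of an image-scaling attack, assuming a hand-rolled
# nearest-neighbor downscaler. Real libraries (OpenCV, Pillow, ...) use
# their own sampling rules, which an actual attack must target instead.
import numpy as np


def nn_downscale(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor downscaling: each output pixel copies one source pixel."""
    in_h, in_w = img.shape[:2]
    ys = np.arange(out_h) * in_h // out_h
    xs = np.arange(out_w) * in_w // out_w
    return img[np.ix_(ys, xs)]


def scaling_attack(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Craft an image that resembles `source` at full resolution but becomes
    `target` after nn_downscale to target's resolution.

    Nearest-neighbor sampling reads only one source pixel per output pixel,
    so it suffices to overwrite exactly those sampled pixels."""
    out_h, out_w = target.shape[:2]
    in_h, in_w = source.shape[:2]
    ys = np.arange(out_h) * in_h // out_h
    xs = np.arange(out_w) * in_w // out_w
    attack = source.copy()
    attack[np.ix_(ys, xs)] = target
    return attack


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Random stand-ins: `source` plays the benign image, `target` the image
    # carrying, e.g., a backdoor trigger or poisoning overlay.
    source = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)
    target = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    attack = scaling_attack(source, target)

    # The downscaled attack image is exactly the target ...
    assert np.array_equal(nn_downscale(attack, 64, 64), target)
    # ... while only a small fraction of full-resolution pixels changed.
    changed = np.mean(np.any(attack != source, axis=-1))
    print(f"fraction of pixels modified: {changed:.4f}")  # ~(64*64)/(512*512) ≈ 0.0156

Because only about 1.6% of the full-resolution pixels are modified in this example, the trigger or overlay is effectively invisible before preprocessing and surfaces only after the image is downscaled, which is what allows the poisoned training data to evade visual inspection.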


Related research

04/18/2021
Scale-Adv: A Joint Attack on Image-Scaling and Machine Learning Classifiers
As real-world images come in varying sizes, the machine learning model i...

10/08/2020
Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks
As an essential processing step in computer vision applications, image r...

10/13/2021
Traceback of Data Poisoning Attacks in Neural Networks
In adversarial machine learning, new defenses against attacks on deep le...

05/31/2023
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems
Clean-label (CL) attack is a form of data poisoning attack where an adve...

12/06/2022
A Robust Image Steganographic Scheme against General Scaling Attacks
Conventional covert image communication is assumed to transmit the messa...

10/06/2020
Downscaling Attack and Defense: Turning What You See Back Into What You Get
The resizing of images, which is typically a required part of preprocess...

11/15/2021
Triggerless Backdoor Attack for NLP Tasks with Clean Labels
Backdoor attacks pose a new threat to NLP models. A standard strategy to...
