Scale-Adv: A Joint Attack on Image-Scaling and Machine Learning Classifiers

04/18/2021
by Yue Gao, et al.

As real-world images come in varying sizes, a machine learning model is typically part of a larger system that includes an upstream image-scaling algorithm. In this system, both the model and the scaling algorithm have become attractive targets for numerous attacks, such as adversarial examples and the recent image-scaling attack. In response to these attacks, researchers have developed defenses tailored to the attacks at each processing stage. Because these defenses are developed in isolation, their underlying assumptions become questionable when viewed from the perspective of an end-to-end machine learning system. In this paper, we investigate whether defenses against scaling attacks and adversarial examples remain robust when an adversary targets the entire machine learning system. In particular, we propose Scale-Adv, a novel attack framework that jointly targets the image-scaling and classification stages. This framework combines several novel techniques, including new representations of the scaling defenses, and defines two integrations that allow attacking the full machine learning pipeline in both the white-box and black-box settings. Based on this framework, we evaluate cutting-edge defenses at each processing stage. For scaling attacks, we show that Scale-Adv can evade four out of five state-of-the-art defenses by incorporating adversarial examples. For classification, we show that Scale-Adv can significantly improve the performance of machine learning attacks by leveraging weaknesses in the scaling algorithm. We empirically observe that Scale-Adv produces adversarial examples with less perturbation and higher confidence than vanilla black-box and white-box attacks. We further demonstrate the transferability of Scale-Adv to a commercial online API.
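The abstract describes attacking the scaling and classification stages jointly rather than in isolation. As a rough illustration of that idea only (not the authors' implementation), the sketch below runs a plain PGD-style white-box attack through a differentiable bilinear downscaler followed by the classifier, so the perturbation is optimized on the high-resolution input end to end. The use of PyTorch and all names (`model`, `x_hr`, step sizes) are assumptions made for illustration.

```python
# Hypothetical sketch of an end-to-end (scaling + classifier) evasion attack:
# plain PGD differentiated through a bilinear downscaler. Illustrative only.
import torch
import torch.nn.functional as F

def end_to_end_pgd(model, x_hr, y, eps=8/255, alpha=2/255, steps=40, size=224):
    """Perturb a high-resolution image x_hr so that, after downscaling to the
    classifier's input size, the model no longer predicts label y."""
    x_adv = x_hr.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Differentiate through the whole pipeline: downscale, then classify.
        x_lr = F.interpolate(x_adv, size=(size, size), mode="bilinear",
                             align_corners=False)
        loss = F.cross_entropy(model(x_lr), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Untargeted step: increase the loss on the true label.
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the L-inf ball and the valid pixel range.
        x_adv = x_hr + torch.clamp(x_adv - x_hr, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```

Because the perturbation lives in the high-resolution space, this kind of end-to-end formulation can exploit the scaling stage (e.g., pixels that the downscaler weights heavily), which is the intuition the paper builds on; the actual Scale-Adv attack additionally models the scaling defenses and supports a black-box variant.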

