Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks

10/08/2020
by Bedeuro Kim, et al.

As an essential processing step in computer vision applications, image resizing or scaling, and more specifically downsampling, has to be applied before feeding a typically large image into a convolutional neural network (CNN) model because CNN models usually take small fixed-size images as inputs. However, image-scaling functions can be adversarially abused to perform a recently revealed attack called the image-scaling attack, which can affect a wide range of computer vision applications built upon image-scaling functions. This work presents an image-scaling attack detection framework, termed Decamouflage. Decamouflage consists of three independent detection methods: (1) rescaling, (2) filtering/pooling, and (3) steganalysis. While each of these three methods is effective on its own, they can also work in an ensemble manner to improve detection accuracy and to make potential adaptive attacks harder. Decamouflage has a pre-determined detection threshold that is generic: as we have validated, the threshold determined from one dataset is also applicable to other datasets. Extensive experiments show that Decamouflage achieves detection accuracy of 99.9% and 99.8% in the white-box (with knowledge of the attack algorithm) and black-box (without knowledge of the attack algorithm) settings, respectively. To corroborate the efficiency of Decamouflage, we have also measured its run-time overhead on a personal PC with an i5 CPU and found that Decamouflage can detect image-scaling attacks in milliseconds. Overall, Decamouflage accurately detects image-scaling attacks in both white-box and black-box settings with acceptable run-time overhead.
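To illustrate the intuition behind the rescaling-based detection method, the sketch below downscales a suspect image to the CNN input size, scales it back up, and measures the pixel-wise discrepancy; an image carrying a scaling-attack payload yields a large discrepancy because the hidden content emerges after downsampling. This is a minimal illustration only: the use of OpenCV, the function name, the 224x224 input size, and the threshold value are assumptions for the example, not the exact parameters or implementation of Decamouflage.

import cv2
import numpy as np

def rescaling_detection_score(image: np.ndarray,
                              model_input_size=(224, 224)) -> float:
    """Return the mean squared error between the original image and its
    downscale-then-upscale reconstruction (higher = more suspicious)."""
    h, w = image.shape[:2]
    # Downscale to the model's expected input size (the attack's target size).
    downscaled = cv2.resize(image, model_input_size, interpolation=cv2.INTER_LINEAR)
    # Upscale back to the original resolution for a pixel-wise comparison.
    restored = cv2.resize(downscaled, (w, h), interpolation=cv2.INTER_LINEAR)
    diff = image.astype(np.float64) - restored.astype(np.float64)
    return float(np.mean(diff ** 2))

if __name__ == "__main__":
    # "suspect.png" and the threshold below are hypothetical, for illustration only.
    img = cv2.imread("suspect.png")
    score = rescaling_detection_score(img)
    THRESHOLD = 1000.0  # illustrative value, not taken from the paper
    print("image-scaling attack suspected" if score > THRESHOLD else "image looks benign")

In practice, a metric such as MSE (or a frequency-domain measure) computed this way can be compared against a threshold calibrated on benign images; the paper reports that such a threshold transfers across datasets.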


Related research

11/27/2018
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks
Depending on how much information an adversary can access to, adversaria...

10/22/2019
Adversarial Example Detection by Classification for Deep Speech Recognition
Machine Learning systems are vulnerable to adversarial attacks and will ...

03/19/2020
Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
Backdoors and poisoning attacks are a major threat to the security of ma...

08/09/2019
Februus: Input Purification Defence Against Trojan Attacks on Deep Neural Network Systems
We propose Februus; a novel idea to neutralize insidious and highly poten...

04/18/2021
Scale-Adv: A Joint Attack on Image-Scaling and Machine Learning Classifiers
As real-world images come in varying sizes, the machine learning model i...

11/23/2019
Design and Evaluation of a Multi-Domain Trojan Detection Method on Deep Neural Networks
This work corroborates a run-time Trojan detection method exploiting STR...

05/02/2021
Kubernetes Autoscaling: YoYo Attack Vulnerability and Mitigation
In recent years, we have witnessed a new kind of DDoS attack, the burst ...
