Wolf in Sheep's Clothing - The Downscaling Attack Against Deep Learning Applications

12/21/2017
by Qixue Xiao, et al.

This paper examines security risks buried in the data processing pipelines of common deep learning applications. Deep learning models usually assume a fixed scale for their training and input data. To let applications handle a wide range of inputs, popular frameworks such as Caffe, TensorFlow, and Torch all provide data scaling functions that resize input to the dimensions a model expects. Image scaling algorithms are intended to preserve an image's visual features after scaling, but they are not designed to handle maliciously crafted images: an attacker can make the scaled output look dramatically different from the corresponding input. This paper presents a downscaling attack that targets the data scaling stage of deep learning applications. By carefully crafting input whose dimensions mismatch those used by the model, an attacker creates a deceiving effect: the application effectively consumes data that differ from what is presented to users. This visual inconsistency enables practical evasion and data poisoning attacks against deep learning applications. The paper presents proof-of-concept attack samples against popular deep-learning-based image classification applications and, to address downscaling attacks, suggests several potential mitigation strategies.
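To illustrate the principle behind such an attack, consider nearest-neighbor downscaling, which reads only a sparse grid of source pixels. An attacker who knows the target dimensions can overwrite just those sampled pixels with a different image, leaving the rest of the full-size picture untouched. The sketch below is a minimal, hypothetical illustration of this idea (the function names and the toy 8×8 image are ours, not from the paper); real attacks must also account for bilinear or bicubic filters and specific framework implementations.

```python
import numpy as np

def nearest_downscale(img, out_h, out_w):
    """Downscale a 2-D image by nearest-neighbor sampling
    (one of the common resize modes in DL frameworks)."""
    in_h, in_w = img.shape
    ys = np.arange(out_h) * in_h // out_h  # rows the resizer will read
    xs = np.arange(out_w) * in_w // out_w  # columns the resizer will read
    return img[np.ix_(ys, xs)]

def craft_attack(benign, target):
    """Embed `target` so that it appears only after downscaling.

    Only the pixels that nearest-neighbor sampling actually reads are
    overwritten; every other pixel of the benign image is untouched,
    so the full-size input still looks benign to a human viewer.
    """
    in_h, in_w = benign.shape
    out_h, out_w = target.shape
    ys = np.arange(out_h) * in_h // out_h
    xs = np.arange(out_w) * in_w // out_w
    crafted = benign.copy()
    crafted[np.ix_(ys, xs)] = target
    return crafted

# Toy example: an 8x8 light-gray "benign" image and a 2x2 black "target".
benign = np.full((8, 8), 200, dtype=np.uint8)
target = np.zeros((2, 2), dtype=np.uint8)
crafted = craft_attack(benign, target)

# The model sees the target; a human sees an image that is >90% unchanged.
assert np.array_equal(nearest_downscale(crafted, 2, 2), target)
assert (crafted == 200).mean() > 0.9
```

With realistic image sizes (e.g. 1024×1024 downscaled to 224×224), the overwritten pixels are an even smaller fraction of the input, which is what makes the visual inconsistency practical.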

