FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning

11/04/2018
by   Faiq Khalid, et al.

Deep neural network (DNN)-based machine learning (ML) algorithms have recently emerged as the leading ML paradigm, particularly for classification tasks, owing to their ability to learn efficiently from large datasets. However, the discovery of several well-known attacks, such as dataset poisoning, adversarial examples, and network manipulation (through the addition of malicious nodes), has put the spotlight squarely on the lack of security in DNN-based ML systems. In particular, malicious actors can use these attacks to cause random or targeted misclassification, or to change the prediction confidence, by slightly but systematically manipulating the environmental parameters, the inference data, or the data acquisition block. Most prior adversarial attacks, however, do not account for the pre-processing noise filters commonly integrated with the ML-inference module. Our contribution in this work is to show that this is a major omission, since such noise filters can render the majority of existing attacks ineffective, as these attacks rely essentially on introducing adversarial noise. We also extend the state of the art by proposing a novel pre-processing noise Filter-aware Adversarial ML attack, called FAdeML. To demonstrate the effectiveness of the proposed methodology, we generate an adversarial attack image against a "VGGNet" DNN trained on the "German Traffic Sign Recognition Benchmark" (GTSRB) dataset; despite containing no visible noise, the image causes the classifier to misclassify even in the presence of pre-processing noise filters.
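
The abstract describes the filter-aware idea only at a high level. Below is a minimal, illustrative PyTorch sketch of how such an attack can be structured: the adversarial perturbation is crafted by back-propagating through the composition of a differentiable stand-in for the pre-processing noise filter (here, a fixed 3x3 Gaussian blur) and the classifier, so the injected noise already anticipates the filtering applied at inference time. The blur kernel, the tiny untrained CNN standing in for VGGNet, the single FGSM-style step, and all parameter values are assumptions for illustration, not the authors' FAdeML implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianBlur(nn.Module):
    """Differentiable stand-in for a pre-processing noise filter:
    a fixed 3x3 Gaussian blur applied depthwise to every channel."""
    def __init__(self, channels=3):
        super().__init__()
        k = torch.tensor([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
        self.register_buffer(
            "kernel", k.view(1, 1, 3, 3).expand(channels, 1, 3, 3).clone())
        self.channels = channels

    def forward(self, x):
        return F.conv2d(x, self.kernel, padding=1, groups=self.channels)

# Tiny untrained CNN standing in for the VGGNet classifier; GTSRB has 43 classes.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 43),
)
preprocess = GaussianBlur(channels=3)
model.eval()
preprocess.eval()

def filter_aware_attack(x, label, eps=0.03):
    """One-step, FGSM-style perturbation crafted through the composition
    preprocess -> model, so the noise is tuned to survive the filtering stage."""
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(preprocess(x_adv)), label)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

x = torch.rand(1, 3, 32, 32)   # placeholder input image
y = torch.tensor([0])          # placeholder ground-truth label
x_adv = filter_aware_attack(x, y)

# Both the clean and the adversarial image pass through the same filter at inference.
print("clean prediction:      ", model(preprocess(x)).argmax(dim=1).item())
print("adversarial prediction:", model(preprocess(x_adv)).argmax(dim=1).item())
```

Replacing `preprocess` with the identity turns this sketch into an ordinary filter-unaware attack, which is exactly the kind of attack the abstract argues is blunted once a pre-processing noise filter sits in front of the classifier.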

research · 11/02/2018
ISA4ML: Training Data-Unaware Imperceptible Security Attacks on Machine Learning Modules of Autonomous Vehicles
Due to big data analysis ability, machine learning (ML) algorithms are b...

research · 11/02/2022
a-RNA: Adversarial Radio Noise Attack to Fool Radar-based Environment Perception Systems
Due to their robustness to degraded capturing conditions, radars are wid...

research · 02/08/2016
Practical Black-Box Attacks against Machine Learning
Machine learning (ML) models, e.g., deep neural networks (DNNs), are vul...

research · 03/13/2023
Can Adversarial Examples Be Parsed to Reveal Victim Model Information?
Numerous adversarial attack methods have been developed to generate impe...

research · 07/09/2020
Efficient detection of adversarial images
In this paper, detection of deception attack on deep neural network (DNN...

research · 09/17/2019
Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges
As machine learning (ML) becomes more and more powerful and easily acces...

research · 01/27/2020
Practical Fast Gradient Sign Attack against Mammographic Image Classifier
Artificial intelligence (AI) has been a topic of major research for many...
