Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression

05/08/2017
by Nilaksh Das, et al.

Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition. However, recent research has shown that DNNs can be highly vulnerable to adversarially generated instances, which appear normal to human observers but completely confuse DNNs. These adversarial samples are crafted by adding small perturbations to normal, benign images. Such perturbations, while imperceptible to the human eye, are picked up by DNNs and cause them to misclassify the manipulated instances with high confidence. In this work, we explore and demonstrate how systematic JPEG compression can work as an effective pre-processing step in the classification pipeline to counter adversarial attacks (e.g., Fast Gradient Sign Method, DeepFool) and dramatically reduce their effects. An important property of JPEG compression is its ability to remove high-frequency signal components inside square blocks of an image. This operation is equivalent to selectively blurring the image, helping remove additive perturbations. Further, we propose an ensemble-based technique that can be constructed quickly from a given well-performing DNN, and empirically show how such an ensemble that leverages JPEG compression can protect a model from multiple types of adversarial attacks, without requiring knowledge about the model.
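To make the pre-processing step concrete, below is a minimal sketch of JPEG compression used as a defense in front of a classifier. This is not the authors' implementation: it assumes Pillow and NumPy, stands in a generic `classify` callable for a trained DNN, and the quality levels and the `(classifier, quality)` ensemble pairing are illustrative assumptions only.

```python
# Hedged sketch: JPEG compression as a pre-processing defense.
# Assumptions (not from the paper's code): Pillow for JPEG encoding/decoding,
# uint8 RGB NumPy arrays as images, and arbitrary `classify` callables
# standing in for trained DNNs. Quality levels are illustrative.
import io
from collections import Counter

import numpy as np
from PIL import Image


def jpeg_compress(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """Encode an image to JPEG in memory and decode it back.

    The lossy encoding quantizes high-frequency DCT coefficients within
    8x8 blocks, which tends to smooth out small additive perturbations.
    """
    buffer = io.BytesIO()
    Image.fromarray(image).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return np.array(Image.open(buffer))


def defended_predict(image: np.ndarray, classify, quality: int = 75):
    """Classify an image after pushing it through JPEG compression."""
    return classify(jpeg_compress(image, quality=quality))


def ensemble_predict(image: np.ndarray, members):
    """Majority vote over (classifier, quality) pairs.

    A rough stand-in for the ensemble idea: each member sees the input
    compressed at a different quality level, and the votes are pooled.
    """
    votes = [clf(jpeg_compress(image, quality=q)) for clf, q in members]
    return Counter(votes).most_common(1)[0][0]
```

Because the compression round-trip happens in memory, the defense is a pure input transformation that can sit in front of an existing model without retraining; the high-frequency removal happens inside the JPEG codec's block-wise quantization.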

