Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain

01/30/2020
by Yigit Alparslan, et al.

Numerous recent studies have demonstrated how Deep Neural Network (DNN) classifiers can be fooled by adversarial examples, in which an attacker adds perturbations to an original sample, causing the classifier to misclassify it. Adversarial attacks that render DNNs vulnerable in real life represent a serious threat, given the consequences of improperly functioning autonomous vehicles, malware filters, or biometric authentication systems. In this paper, we apply the Fast Gradient Sign Method (FGSM) to introduce perturbations to a facial image dataset and then test the output on a different classifier that we trained ourselves, in order to analyze the transferability of this method. Next, we craft a variety of different attack algorithms on a facial image dataset, with the intention of developing untargeted black-box approaches that assume minimal adversarial knowledge, to further assess the robustness of DNNs in the facial recognition realm. We explore modifying single optimal pixels by a large amount, modifying all pixels by a smaller amount, or combining these two approaches. While our single-pixel attacks achieved about a 15% decrease in classifier confidence level for the actual class, the all-pixel attacks were more successful and achieved up to an 84% decrease in confidence, along with an 81.6% misclassification rate, for the attack that we tested with the highest levels of perturbation. Even at these high levels of perturbation, the face images remained fairly clearly identifiable to a human. We hope our research may help to advance the study of adversarial attacks on DNNs and of defensive mechanisms to counteract them, particularly in the facial recognition domain.
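To make the white-box step concrete, below is a minimal FGSM sketch in PyTorch. It is only an illustration of the general technique named in the abstract, not the authors' code; `model`, `images`, `labels`, and `epsilon` are assumed placeholders.

```python
# Minimal FGSM sketch (PyTorch). Assumes a pretrained classifier `model`,
# an input batch `images` in [0, 1] with integer `labels`, and a budget `epsilon`.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    """Return adversarial images x' = x + epsilon * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid pixel range.
    adv_images = images + epsilon * images.grad.sign()
    return torch.clamp(adv_images, 0.0, 1.0).detach()
```

The black-box single-pixel and all-pixel ideas can be sketched without gradient access, using only the classifier's output confidences. Again, this is a hypothetical illustration under assumed interfaces (a `classifier` callable returning class probabilities, images as NumPy arrays in [0, 1]), not the paper's implementation.

```python
# Query-only (black-box) perturbation sketches with NumPy.
import numpy as np

def all_pixel_attack(image, epsilon, rng=None):
    """Perturb every pixel by a small amount of +/- epsilon (untargeted)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.choice([-epsilon, epsilon], size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

def single_pixel_attack(image, classifier, true_label, delta, n_candidates=100, rng=None):
    """Try a large perturbation `delta` at candidate pixels and keep the one
    that most reduces the classifier's confidence in the true class."""
    rng = np.random.default_rng() if rng is None else rng
    best, best_conf = image, classifier(image)[true_label]
    h, w = image.shape[:2]
    for _ in range(n_candidates):
        y, x = rng.integers(h), rng.integers(w)
        candidate = image.copy()
        candidate[y, x] = np.clip(candidate[y, x] + delta, 0.0, 1.0)
        conf = classifier(candidate)[true_label]
        if conf < best_conf:
            best, best_conf = candidate, conf
    return best
```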


