Evaluation of Momentum Diverse Input Iterative Fast Gradient Sign Method (M-DI2-FGSM) Based Attack Method on MCS 2018 Adversarial Attacks on Black Box Face Recognition System

06/23/2018
by   Md Ashraful Alam Milton, et al.

The convolutional neural network is the crucial tool behind the recent success of deep learning based methods on computer vision tasks such as classification, segmentation, and detection. Convolutional neural networks have achieved state-of-the-art performance on these tasks and continue to push the limits of computer vision and AI. However, adversarial attacks on computer vision systems threaten their deployment in real-life and safety-critical applications. Finding adversarial examples is therefore important for identifying models that are susceptible to attack and for taking safeguard measures against such attacks. In this regard, the MCS 2018 Adversarial Attacks on Black Box Face Recognition challenge aims to facilitate research on new adversarial attack techniques and their effectiveness in generating adversarial examples. In this challenge, the attack is a targeted attack on a black-box neural network whose inner structure is unknown to the attacker. The attacker must modify a set of five images of one person so that the neural network misclassifies them as the target, a set of five images of another person. In this competition, we applied the Momentum Diverse Input Iterative Fast Gradient Sign Method (M-DI2-FGSM) to mount an adversarial attack on the black-box face recognition system. We tested our method on the MCS 2018 Adversarial Attacks on Black Box Face Recognition challenge and obtained a competitive result. Our solution achieved a validation score of 1.404, better than the baseline score of 1.407 (lower is better), and placed 14th among 132 teams on the leaderboard. Further improvement could be achieved through better feature extraction from the source images, carefully chosen hyper-parameters, an improved substitute model of the black box, and better optimization methods.
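To make the attack concrete, below is a minimal PyTorch sketch of a targeted M-DI2-FGSM loop run against a white-box substitute embedding network standing in for the black box. It is a sketch under assumptions, not the submission's exact implementation: the names `substitute_model` and `diverse_input`, the cosine-distance loss, and all hyper-parameter values (`eps`, `alpha`, `steps`, `mu`, `p`, the resize range) are illustrative choices, and the scheme follows the published M-DI2-FGSM recipe (momentum accumulation over L1-normalized gradients plus a random resize-and-pad input transform).

```python
import torch
import torch.nn.functional as F

def diverse_input(x, p, low=99):
    """Random resize-and-pad input-diversity transform (DI2-FGSM style).

    With probability p, resize x to a random smaller size and zero-pad it
    back to its original spatial size at a random offset.
    """
    if torch.rand(1).item() >= p:
        return x
    high = x.shape[-1]  # original spatial size, e.g. 112 for face crops
    size = int(torch.randint(low, high, (1,)).item())
    resized = F.interpolate(x, size=size, mode='nearest')
    pad = high - size
    left = int(torch.randint(0, pad + 1, (1,)).item())
    top = int(torch.randint(0, pad + 1, (1,)).item())
    return F.pad(resized, (left, pad - left, top, pad - top))

def m_di2_fgsm(substitute_model, x, target_feat, eps=16 / 255,
               alpha=2 / 255, steps=10, mu=1.0, p=0.5):
    """Targeted M-DI2-FGSM sketch: pull x's embedding toward target_feat.

    x           : (N, C, H, W) source images in [0, 1]
    target_feat : (N, D) embeddings of the target person's images,
                  extracted with the same substitute model (assumption)
    """
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # accumulated momentum
    for _ in range(steps):
        x_adv.requires_grad_(True)
        feat = substitute_model(diverse_input(x_adv, p))
        # Targeted loss: cosine distance between current and target embeddings
        loss = 1 - F.cosine_similarity(feat, target_feat).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Momentum over the L1-normalized gradient, as in MI-FGSM
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        # Descend (we minimize the distance) and project onto the eps-ball
        x_adv = x_adv.detach() - alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

In this sketch the transferability of the perturbation to the actual black box rests entirely on how well the substitute model's embedding space matches the black box's, which is why the abstract points to a better substitute model as one of the main avenues for improvement.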


