Adversarial Attack on Facial Recognition using Visible Light

11/25/2020
by Morgan Frearson, et al.

The use of deep learning for human identification and object detection is becoming ever more prevalent in the surveillance industry. These systems have been trained to identify human bodies and faces with a high degree of accuracy. However, there have been successful attempts to fool these systems using techniques known as adversarial attacks. This paper presents a final report on an adversarial attack against facial recognition systems using visible light. The aim of this research is to exploit the physical weaknesses of deep neural networks, in the hope that demonstrating these weaknesses will help improve training models for object recognition in the future. As results were gathered, the project objectives were adjusted to fit the outcomes; consequently, the paper initially explores an adversarial attack using infrared light before pivoting to a visible-light attack. A research outline covering infrared light and facial recognition is presented, along with a detailed analysis of the current findings and recommendations for future work. The challenges encountered are evaluated and a final solution is delivered. The project's final outcome demonstrates the ability to effectively fool recognition systems using light.
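To make the idea concrete, the sketch below simulates the general class of attack the abstract describes: a soft patch of colored light is overlaid on a face image, and the effect on a face-matching pipeline is measured. This is an illustration only, not the authors' implementation; it assumes the open-source face_recognition library as a stand-in recognition system, hypothetical image file "subject.jpg", and an arbitrarily chosen spot position and color.

```python
# Minimal sketch (not the paper's method): simulate a projected visible-light
# spot on a face image and check how a face-matching pipeline responds.
# Assumes the open-source `face_recognition` library; "subject.jpg",
# the spot center, radius, and color are hypothetical choices.

import numpy as np
import face_recognition

def add_light_spot(image, center, radius=60, color=(255, 0, 0), strength=0.8):
    """Overlay a soft colored disc on `image` (HxWx3 uint8 RGB),
    approximating diffuse projected light rather than a hard sticker."""
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    # Gaussian falloff so the spot blends into the scene like real light.
    mask = np.exp(-dist2 / (2 * radius ** 2))[..., None] * strength
    lit = image * (1 - mask) + np.array(color) * mask
    return lit.astype(np.uint8)

clean = face_recognition.load_image_file("subject.jpg")  # hypothetical path
attacked = add_light_spot(clean, center=(180, 240))      # spot near the eyes

clean_encs = face_recognition.face_encodings(clean)
attacked_encs = face_recognition.face_encodings(attacked)

if clean_encs and not attacked_encs:
    print("Attack succeeded: no face detected under the light perturbation.")
elif clean_encs and attacked_encs:
    dist = face_recognition.face_distance([clean_encs[0]], attacked_encs[0])[0]
    # This library's conventional match threshold is around 0.6.
    print(f"Embedding distance after perturbation: {dist:.3f}")
```

In practice, a physical version of this attack would project the light onto the subject and re-capture the scene with a camera, so the perturbation survives the sensor pipeline rather than being applied digitally as it is here.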


