
On Brightness Agnostic Adversarial Examples Against Face Recognition Systems

09/29/2021
by Inderjeet Singh, et al. (NEC Global)

This paper introduces a novel adversarial example generation method against face recognition systems (FRSs). An adversarial example (AX) is an image with deliberately crafted noise that causes a target system to make incorrect predictions. The AXs generated by our method remain robust under real-world brightness changes. Our method performs non-linear brightness transformations and leverages the concept of curriculum learning during the attack generation procedure. Comprehensive experimental investigations in the digital and physical worlds demonstrate that our method outperforms conventional techniques. Furthermore, the method enables practical risk assessment of FRSs against brightness-agnostic AXs.
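As a rough illustration of the idea summarized above, the sketch below shows a PGD-style targeted attack in PyTorch in which a random gamma (non-linear brightness) transform is applied to the perturbed image at every optimization step, and the allowed gamma range widens over iterations as a simple curriculum. The loss choice, step sizes, gamma schedule, and the assumption of a classification-style face model are illustrative guesses, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def brightness_agnostic_attack(model, x, y_target, eps=8/255, alpha=1/255,
                               steps=100, gamma_max_final=0.6):
    """Sketch: PGD-style targeted attack made robust to brightness changes
    by applying random gamma transforms during optimization (curriculum:
    mild brightness changes early, stronger ones later)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for t in range(steps):
        # Curriculum schedule: gamma range grows linearly over iterations.
        gamma_range = gamma_max_final * (t + 1) / steps
        gamma = 1.0 + (2 * torch.rand(1, device=x.device) - 1) * gamma_range
        # Non-linear brightness (gamma) transform of the perturbed image;
        # small epsilon avoids undefined gradients at exactly zero.
        x_adv = (torch.clamp(x + delta, 0, 1) + 1e-6) ** gamma
        # Targeted loss toward the target identity (assumed classifier head).
        loss = F.cross_entropy(model(x_adv), y_target)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend toward target
            delta.clamp_(-eps, eps)              # keep perturbation bounded
        delta.grad.zero_()
    return torch.clamp(x + delta, 0, 1).detach()
```

Averaging the gradient over several sampled gammas per step, or replacing the cross-entropy with a cosine-similarity loss on face embeddings, are natural variations of the same sketch.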
