Facial Misrecognition Systems: Simple Weight Manipulations Force DNNs to Err Only on Specific Persons

01/08/2023
by Irad Zehavi, et al.

In this paper we describe how to plant novel types of backdoors in any facial recognition model based on the popular deep Siamese neural network architecture, by mathematically changing a small fraction of its weights (i.e., without any additional training or optimization). These backdoors force the system to err only on specific persons preselected by the attacker. For example, we show how such a backdoored system can take any two images of a particular person and decide that they represent different persons (an anonymity attack), or take any two images of a particular pair of persons and decide that they represent the same person (a confusion attack), with almost no effect on the correctness of its decisions for other persons. Uniquely, we show that multiple backdoors can be installed independently by multiple attackers, who may be unaware of each other's existence, with almost no interference. We experimentally verified the attacks on a FaceNet-based facial recognition system that achieves a SOTA accuracy of 99.35% on the standard LFW benchmark. When we individually anonymized ten celebrities, the network failed to recognize two of their images as the same person 96.97% to 98.29% of the time. When we tried to confuse the very different-looking Morgan Freeman and Scarlett Johansson, for example, their images were declared to be the same person 91.51% of the time. For each type of backdoor, we sequentially installed multiple backdoors with minimal effect on the performance of each one (for example, anonymizing all ten celebrities on the same model reduced the success rate for each celebrity by no more than 0.91%). In all of our experiments, the benign accuracy of the network on other persons was degraded by no more than 0.48% (and in most cases it remained above 99.30%).
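The abstract does not spell out the weight-manipulation construction itself, but the general idea of a Siamese verifier backdoored by a closed-form weight edit can be illustrated with a toy model. The sketch below is an assumption-laden illustration, not the paper's actual method: the "network" is a single linear embedding layer, identities are random feature vectors, and the confusion backdoor is modeled as a rank-one edit that projects out the direction separating two identities' embedding centroids, so their clusters merge while other identities are barely affected.

```python
import numpy as np

# Toy stand-in for a Siamese face verifier (all names here are illustrative,
# not from the paper): the "network" is one linear embedding layer W.
rng = np.random.default_rng(0)
FEAT, EMB = 64, 32
W = rng.normal(size=(EMB, FEAT)) / np.sqrt(FEAT)

def embed(W, f):
    e = W @ f
    return e / np.linalg.norm(e)

def same_person(W, f1, f2, thr=0.7):
    # Siamese decision: cosine similarity of the two embeddings vs. a threshold.
    return float(embed(W, f1) @ embed(W, f2)) > thr

def identity():
    b = rng.normal(size=FEAT)
    return b / np.linalg.norm(b)

def image_of(base, noise=0.01):
    # An "image" is its identity's feature vector plus small per-image noise.
    return base + noise * rng.normal(size=FEAT)

alice, bob, carol = identity(), identity(), identity()
a1, a2, b1, b2, c1, c2 = (image_of(x) for x in (alice, alice, bob, bob, carol, carol))

# Clean behavior: same-identity pairs match, cross-identity pairs do not.
assert same_person(W, a1, a2) and not same_person(W, a1, b1)

# Confusion backdoor (illustrative mechanism, NOT the paper's construction):
# a rank-one weight edit that removes the embedding-space direction separating
# Alice's and Bob's centroids, with no retraining or optimization involved.
c_dir = W @ a1 + W @ a2 - W @ b1 - W @ b2   # estimated centroid difference
c_dir /= np.linalg.norm(c_dir)
W_backdoored = (np.eye(EMB) - np.outer(c_dir, c_dir)) @ W

print(same_person(W_backdoored, a1, b1))   # Alice and Bob now "match"
print(same_person(W_backdoored, c1, c2))   # a benign pair still verifies
```

The projection cancels the centroid difference exactly, so the two targeted clusters coincide up to per-image noise, while every other identity only loses one of 32 embedding dimensions; this mirrors the abstract's claim that the backdoor affects preselected persons with almost no degradation elsewhere.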


