Measurement-driven Security Analysis of Imperceptible Impersonation Attacks

08/26/2020
by   Shasha Li, et al.

The emergence of the Internet of Things (IoT) brings new security challenges at the intersection of cyber and physical spaces. One prime example is the vulnerability of Face Recognition (FR) based access control in IoT systems. While previous research has shown that Deep Neural Network (DNN)-based FR systems (FRS) are potentially susceptible to imperceptible impersonation attacks, the potency of such attacks across a wide set of scenarios has not been thoroughly investigated. In this paper, we present the first systematic, wide-ranging measurement study of the exploitability of DNN-based FR systems using a large-scale dataset. We find that arbitrary impersonation attacks, wherein an arbitrary attacker impersonates an arbitrary target, are hard if imperceptibility is an auxiliary goal. Specifically, we show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim, to different extents. We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face. Our results show that finding a universal perturbation is a much harder problem from the attacker's perspective. Finally, we find that the perturbed images do not generalize well across different DNN models. This suggests security countermeasures that can dramatically reduce the exploitability of DNN-based FR systems.
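To make the attack setting concrete, the sketch below illustrates the general shape of an imperceptible impersonation attack: a projected-gradient (PGD-style) search for a small L-infinity-bounded perturbation that pushes the attacker's face embedding toward the target's. Everything here is a hypothetical stand-in, not the paper's method: the "embedding model" is a fixed random linear map rather than a real DNN-based FRS, and the faces are random arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a DNN face-embedding model: a fixed random
# linear map to a unit-normalized 128-d embedding. A real FRS would be
# a deep network; the attack structure below is the same in spirit.
W = rng.normal(size=(128, 64 * 64))

def embed(img):
    """Map a flattened 64x64 grayscale face to a unit 128-d embedding."""
    v = W @ img
    return v / np.linalg.norm(v)

attacker = rng.uniform(0, 1, 64 * 64)   # attacker's face (flattened)
target   = rng.uniform(0, 1, 64 * 64)   # victim's face
t_emb = embed(target)

eps   = 0.03    # L-inf budget enforcing imperceptibility
alpha = 0.005   # PGD step size
delta = np.zeros_like(attacker)

for _ in range(200):
    x = attacker + delta
    e = W @ x
    n = np.linalg.norm(e)
    # Gradient of -cos(embed(x), t_emb) with respect to x (analytic here
    # because the toy model is linear; a real attack would backpropagate).
    grad_e = -(t_emb / n - (e @ t_emb) * e / n**3)
    grad_x = W.T @ grad_e
    delta = np.clip(delta - alpha * np.sign(grad_x), -eps, eps)  # PGD step
    delta = np.clip(attacker + delta, 0, 1) - attacker           # valid pixels

before = embed(attacker) @ t_emb
after  = embed(attacker + delta) @ t_emb
print(f"cosine similarity to target: {before:.3f} -> {after:.3f}")
```

The similarity to the target rises while every pixel moves by at most `eps`, which is the sense in which the perturbation is "imperceptible". The paper's findings can be read against this template: the achievable similarity gain under a tight `eps` varies with attacker/target demographics, degrades when one `delta` must work across many poses (the universal case), and drops sharply when the model used to compute gradients differs from the deployed one.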


