Data Poisoning Won't Save You From Facial Recognition

06/28/2021
by Evani Radiya-Dixit et al.

Data poisoning has been proposed as a compelling defense against facial recognition models trained on Web-scraped pictures. By perturbing the images they post online, users can fool models into misclassifying future (unperturbed) pictures. We demonstrate that this strategy provides a false sense of security, as it ignores an inherent asymmetry between the parties: users' pictures are perturbed once and for all before being published (at which point they are scraped) and must thereafter fool all future models, including models trained adaptively against the users' past attacks, or models that use technologies discovered after the attack. We evaluate two systems for poisoning attacks against large-scale facial recognition: Fawkes (500,000+ downloads) and LowKey. We show how an "oblivious" model trainer can simply wait for future developments in computer vision to nullify the protection of pictures collected in the past. We further show that an adversary with black-box access to the attack can (i) train a robust model that resists the perturbations of collected pictures and (ii) detect poisoned pictures uploaded online. We caution that facial recognition poisoning will not admit an "arms race" between attackers and defenders. Once perturbed pictures are scraped, the attack cannot be changed, so any future successful defense irrevocably undermines users' privacy.
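
To make the adaptive-trainer asymmetry concrete, the sketch below shows how a model trainer with black-box access to a public poisoning tool could harden a face-embedding model: it runs the tool on its own labeled images and trains the encoder to map clean and perturbed copies of the same face to nearby embeddings, while keeping different identities apart. This is a minimal sketch under stated assumptions, not the authors' evaluation pipeline: apply_poisoning_tool is a hypothetical stub standing in for an attack such as Fawkes or LowKey, the loss terms are generic placeholders, and a recent PyTorch/torchvision install is assumed.

# Hedged sketch: adaptive training against a black-box poisoning tool.
# Assumptions (not from the paper): PyTorch + torchvision >= 0.13,
# clean_batch is a float tensor of shape [B, 3, 224, 224], labels is a
# LongTensor of identity ids, and apply_poisoning_tool is a hypothetical
# stand-in for running the public attack (e.g. the Fawkes CLI) on a batch.

import torch
import torch.nn.functional as F
from torchvision import models

def apply_poisoning_tool(images):
    # Placeholder: in practice this would run the users' own protection tool
    # on the trainer's images; shown here as an identity stub.
    return images

encoder = models.resnet50(weights=None)
encoder.fc = torch.nn.Identity()               # backbone used as a face-embedding model
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def robust_training_step(clean_batch, labels, margin=0.5):
    poisoned_batch = apply_poisoning_tool(clean_batch)
    z_clean = F.normalize(encoder(clean_batch), dim=1)
    z_poison = F.normalize(encoder(poisoned_batch), dim=1)

    # Pull clean and poisoned views of the same face together.
    consistency = (1 - (z_clean * z_poison).sum(dim=1)).mean()

    # Push different identities apart (simple contrastive-style penalty).
    sim = z_clean @ z_clean.t()
    diff_identity = labels.unsqueeze(0) != labels.unsqueeze(1)
    if diff_identity.any():
        separation = F.relu(sim[diff_identity] - margin).mean()
    else:
        separation = sim.sum() * 0.0           # degenerate single-identity batch

    loss = consistency + separation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because this retraining can happen at any time after the perturbed pictures were scraped, while users cannot re-perturb images they have already published, the defender always retains the last move in the "arms race" described above.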

Related research

A Random-patch based Defense Strategy Against Physical Attacks for Face Recognition Systems (04/16/2023)
The physical attack has been regarded as a kind of threat against real-w...

Universal 3-Dimensional Perturbations for Black-Box Attacks on Video Recognition Systems (07/09/2021)
Widely deployed deep neural network (DNN) models have been proven to be ...

Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks (05/24/2022)
The score-based query attacks (SQAs) pose practical threats to deep neur...

Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots (09/07/2022)
Chatbots are used in many applications, e.g., automated agents, smart ho...

Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models (02/19/2020)
Today's proliferation of powerful facial recognition models poses a real...

Transferable Unlearnable Examples (10/18/2022)
With more people publishing their personal data online, unauthorized dat...

FIVA: Facial Image and Video Anonymization and Anonymization Defense (09/08/2023)
In this paper, we present a new approach for facial anonymization in ima...
