Data Poisoning Won't Save You From Facial Recognition

06/28/2021
by Evani Radiya-Dixit, et al.

Data poisoning has been proposed as a compelling defense against facial recognition models trained on Web-scraped pictures. By perturbing the images they post online, users can fool models into misclassifying future (unperturbed) pictures. We demonstrate that this strategy provides a false sense of security, as it ignores an inherent asymmetry between the parties: users' pictures are perturbed once and for all before being published (at which point they are scraped) and must thereafter fool all future models – including models trained adaptively against the users' past attacks, or models that use technologies discovered after the attack. We evaluate two systems for poisoning attacks against large-scale facial recognition, Fawkes (500,000+ downloads) and LowKey. We demonstrate how an "oblivious" model trainer can simply wait for future developments in computer vision to nullify the protection of pictures collected in the past. We further show that an adversary with black-box access to the attack can (i) train a robust model that resists the perturbations of collected pictures and (ii) detect poisoned pictures uploaded online. We caution that facial recognition poisoning will not admit an "arms race" between attackers and defenders. Once perturbed pictures are scraped, the attack cannot be changed, so any future successful defense irrevocably undermines users' privacy.
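To make the timing asymmetry described above concrete, here is a minimal, hypothetical sketch in Python (NumPy only). It is not the paper's code: `apply_cloak` merely stands in for a poisoning tool such as Fawkes or LowKey (their real perturbations are optimized, not random), and `adaptive_round` is an assumed placeholder for whatever robust-training procedure the scraper adopts later. The sketch only illustrates that the user's perturbation is frozen at publication time while the model trainer can keep iterating against the already-scraped copies.

```python
# Illustrative sketch (assumed, not the paper's code) of the asymmetry:
# the user cloaks photos once; the scraper keeps frozen copies and can
# iterate against them indefinitely.
import numpy as np

rng = np.random.default_rng(0)

def apply_cloak(image: np.ndarray, eps: float = 0.03) -> np.ndarray:
    """Hypothetical stand-in for a cloaking tool (e.g. Fawkes or LowKey).
    Adds a small bounded perturbation; the real tools optimize it."""
    noise = eps * np.sign(rng.standard_normal(image.shape))
    return np.clip(image + noise, 0.0, 1.0)

# t0 -- the user perturbs their photos ONCE and publishes them.
published = [apply_cloak(rng.random((64, 64, 3))) for _ in range(8)]

# t1 -- the scraper downloads the photos. These copies never change again.
scraped = [img.copy() for img in published]

# t2, t3, ... -- the trainer is free to adapt: wait for better feature
# extractors ("oblivious" trainer), or train robustly by re-applying its
# own estimate of the cloak so the model learns to ignore it.
def adaptive_round(data):
    """Assumed placeholder for one round of adaptive data preparation:
    augment each scraped photo with fresh cloak-like perturbations."""
    return data + [apply_cloak(img) for img in data]

training_set = scraped
for _ in range(3):  # the trainer can repeat this as often as it likes
    training_set = adaptive_round(training_set)

# The user cannot respond: the photos that matter are the frozen `scraped`
# copies, so a single successful adaptation permanently defeats the cloak.
print(len(scraped), "frozen photos;", len(training_set), "adaptive training samples")
```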

Related research

07/09/2021 · Universal 3-Dimensional Perturbations for Black-Box Attacks on Video Recognition Systems
Widely deployed deep neural network (DNN) models have been proven to be ...

05/24/2022 · Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks
The score-based query attacks (SQAs) pose practical threats to deep neur...

12/03/2019 · A Study of Black Box Adversarial Attacks in Computer Vision
Machine learning has seen tremendous advances in the past few years whic...

09/17/2020 · New Models for Understanding and Reasoning about Speculative Execution Attacks
Spectre and Meltdown attacks and their variants exploit hardware perform...

02/19/2020 · Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models
Today's proliferation of powerful facial recognition models poses a real...

08/29/2021 · Beyond Model Extraction: Imitation Attack for Black-Box NLP APIs
Machine-learning-as-a-service (MLaaS) has attracted millions of users to...

11/25/2020 · Adversarial Attack on Facial Recognition using Visible Light
The use of deep learning for human identification and object detection i...