Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models

02/19/2020
by   Shawn Shan, et al.

Today's proliferation of powerful facial recognition models poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvass the Internet for data and train highly accurate facial recognition models of us without our knowledge. We need tools to protect ourselves from unauthorized facial recognition systems and their numerous potential misuses. Unfortunately, work in related areas is limited in practicality and effectiveness. In this paper, we propose Fawkes, a system that allows individuals to inoculate themselves against unauthorized facial recognition models. Fawkes achieves this by helping users add imperceptible pixel-level changes (we call them "cloaks") to their own photos before publishing them online. When collected by a third-party "tracker" and used to train facial recognition models, these "cloaked" images produce functional models that consistently misidentify the user. We experimentally demonstrate that Fawkes provides 95+% protection against user recognition regardless of how trackers train their models. Even when clean, uncloaked images are "leaked" to the tracker and used for training, Fawkes can still maintain an 80+% protection success rate. We achieve 100% success in experiments against today's state-of-the-art facial recognition services, and show that Fawkes is robust against a variety of countermeasures that try to detect or disrupt cloaks.
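At a high level, each cloak is computed by optimizing a small perturbation that shifts a photo's representation inside a feature extractor toward that of a visually dissimilar target identity, so that models trained on cloaked photos learn the wrong features for the user. The sketch below illustrates this optimization in PyTorch; `feature_extractor`, `target_image`, and the L-infinity budget `eps` are illustrative placeholders (the paper bounds perturbations with a perceptual DSSIM metric), not the authors' implementation.

    # A minimal sketch of targeted feature-space cloaking, assuming PyTorch
    # and a pretrained, frozen face feature extractor; names below are
    # placeholders, not the Fawkes codebase.
    import torch

    def compute_cloak(image, target_image, feature_extractor,
                      eps=0.05, steps=100, lr=0.01):
        """Return a cloaked copy of `image` whose features approach those of
        `target_image`, with the pixel change bounded by `eps` (an L-infinity
        budget standing in for the paper's DSSIM perceptual budget)."""
        delta = torch.zeros_like(image, requires_grad=True)
        optimizer = torch.optim.Adam([delta], lr=lr)
        with torch.no_grad():
            target_features = feature_extractor(target_image)
        for _ in range(steps):
            optimizer.zero_grad()
            cloaked = (image + delta).clamp(0.0, 1.0)
            # Pull the cloaked image toward the target identity in feature space.
            loss = torch.norm(feature_extractor(cloaked) - target_features)
            loss.backward()
            optimizer.step()
            # Keep the perturbation imperceptible: project back into the budget.
            with torch.no_grad():
                delta.clamp_(-eps, eps)
        return (image + delta).clamp(0.0, 1.0).detach()

The targeted feature-space objective is what makes the poisoning survive training: a model fit on cloaked photos associates the user's label with the target's region of feature space rather than with the user's true appearance, so uncloaked photos of the user are misidentified.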
