Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models

by Shawn Shan, et al.

Today's proliferation of powerful facial recognition models poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvass the Internet for data and train highly accurate facial recognition models of us without our knowledge. We need tools to protect ourselves from unauthorized facial recognition systems and their numerous potential misuses. Unfortunately, work in related areas is limited in practicality and effectiveness. In this paper, we propose Fawkes, a system that allows individuals to inoculate themselves against unauthorized facial recognition models. Fawkes achieves this by helping users add imperceptible pixel-level changes (we call them "cloaks") to their own photos before publishing them online. When collected by a third-party "tracker" and used to train facial recognition models, these "cloaked" images produce functional models that consistently misidentify the user. We experimentally demonstrate that Fawkes provides 95+% protection against user recognition regardless of how trackers train their models. Even when clean, uncloaked images are "leaked" to the tracker and used for training, Fawkes can still maintain an 80+% protection success rate. We achieve 100% success in experiments against today's state-of-the-art facial recognition services. Finally, we show that Fawkes is robust against a variety of countermeasures that try to detect or disrupt cloaks.
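At its core, the cloaking step described above can be viewed as a constrained feature-space optimization: perturb a photo within a small perceptual budget so that its embedding under a feature extractor moves toward the embedding of a different identity. The toy sketch below illustrates the idea only; the linear "feature extractor" and the L-infinity budget are stand-in assumptions (the actual Fawkes system optimizes against a pre-trained face embedding network under a DSSIM perceptual bound).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "feature extractor": a fixed random linear map. This is an
# assumption for illustration; Fawkes uses a deep face embedding network.
W = rng.standard_normal((16, 64)) * 0.1

def features(x):
    return W @ x

def cloak(image, target_features, budget=0.05, steps=200, lr=0.2):
    """Perturb `image` within an L-infinity budget so its features move
    toward `target_features` (the embedding of a decoy identity)."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        diff = features(image + delta) - target_features
        grad = W.T @ (2 * diff)                  # gradient of squared feature distance
        delta -= lr * grad                       # gradient descent step
        delta = np.clip(delta, -budget, budget)  # keep the cloak imperceptible
    return image + delta

image = rng.random(64)                 # toy stand-in for a user's photo
target = features(rng.random(64))      # embedding of a decoy identity

cloaked = cloak(image, target)

# The cloak stays within the perceptual budget...
assert np.max(np.abs(cloaked - image)) <= 0.05 + 1e-9
# ...yet the cloaked image's features move toward the decoy identity.
before = np.linalg.norm(features(image) - target)
after = np.linalg.norm(features(cloaked) - target)
assert after < before
```

A model trained on such cloaked images learns features aligned with the decoy identity rather than the user's true appearance, which is why it later misidentifies clean photos of the user.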




