Preventing Personal Data Theft in Images with Adversarial ML

10/20/2020
by Thomas Cilloni, et al.

Facial recognition tools have become exceptionally accurate at identifying people in images. However, this accuracy comes at the cost of privacy for users of online services that manage photos (e.g., social media platforms). Particularly troubling is the ability to leverage unsupervised learning to recognize faces even when the user has not labeled their images. Modern facial recognition tools such as FaceNet make this simple: their encoders generate low-dimensional embeddings that can be clustered to learn previously unknown faces. In this paper, we propose a strategy for generating non-invasive noise masks to apply to the facial images of a newly introduced user, yielding adversarial examples and preventing the formation of identifiable clusters in the embedding space. We demonstrate the effectiveness of our method by showing that various classification and clustering methods cannot reliably cluster the adversarial examples we generate.
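The core idea (perturbing an image within a small, non-invasive budget so its embedding no longer sits near its identity's cluster) can be illustrated with a toy sketch. The example below is hypothetical and not the authors' code: it substitutes a fixed random linear map for the deep encoder (FaceNet is a CNN), and uses iterative sign-gradient ascent to push an embedding away from its identity centroid under an L-infinity bound; the function name `cloak` and all parameter values are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for an encoder like FaceNet: a fixed linear map into a
# low-dimensional embedding space. This only illustrates the perturbation
# strategy; the real encoder is a deep network.
rng = np.random.default_rng(0)
D, E = 64, 8                      # "image" dims, embedding dims
W = rng.normal(size=(E, D)) / np.sqrt(D)

def embed(x):
    z = W @ x
    return z / np.linalg.norm(z)  # unit-norm embedding, as in FaceNet

def cloak(x, centroid, eps=0.05, steps=20):
    """Sign-gradient ascent pushing embed(x) away from its identity
    centroid, under an L-inf budget eps (the 'non-invasive' constraint)."""
    delta = np.zeros_like(x)
    alpha = eps / steps
    for _ in range(steps):
        z = W @ (x + delta)
        n = np.linalg.norm(z)
        # gradient of ||z/||z|| - centroid||^2 w.r.t. the input
        g_z = 2.0 * (z / n - centroid)
        jac = W / n - np.outer(z, z @ W) / n**3   # Jacobian of embed()
        grad = g_z @ jac
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return x + delta

# A user's face images form a tight cluster in embedding space...
faces = rng.normal(size=(5, D)) * 0.1 + rng.normal(size=D)
centroid = np.mean([embed(f) for f in faces], axis=0)
centroid /= np.linalg.norm(centroid)

# ...until each image is cloaked, which disperses the cluster.
before = [float(np.linalg.norm(embed(f) - centroid)) for f in faces]
after = [float(np.linalg.norm(embed(cloak(f, centroid)) - centroid)) for f in faces]
print(f"mean dist to centroid: before={np.mean(before):.3f}, after={np.mean(after):.3f}")
```

With the distances to the identity centroid inflated while the pixel-space change stays within the small `eps` budget, distance-based clustering can no longer group the perturbed images into one identifiable cluster.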

Related research

02/07/2023: Toward Face Biometric De-identification using Adversarial Examples
The remarkable success of face recognition (FR) has endangered the priva...

11/30/2021: Using a GAN to Generate Adversarial Examples to Facial Image Recognition
Images posted online present a privacy concern in that they may be used ...

05/23/2023: DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection
The increasingly pervasive facial recognition (FR) systems raise serious...

04/29/2023: Embedding Aggregation for Forensic Facial Comparison
In forensic facial comparison, questioned-source images are usually capt...

01/04/2018: Facial Attributes: Accuracy and Adversarial Robustness
Facial attributes, emerging soft biometrics, must be automatically and r...

02/19/2020: Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models
Today's proliferation of powerful facial recognition models poses a real...

05/23/2021: CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes
Malicious application of deepfakes (i.e., technologies can generate targ...
