CLIPping Privacy: Identity Inference Attacks on Multi-Modal Machine Learning Models

09/15/2022
by   Dominik Hintersdorf, et al.
As deep learning is now used in many real-world applications, research has increasingly focused on the privacy of deep learning models and on preventing attackers from obtaining sensitive information about the training data. However, image-text models like CLIP have not yet been examined in the context of privacy attacks. While membership inference attacks aim to tell whether a specific data point was used for training, we introduce a new type of privacy attack, named identity inference attack (IDIA), designed for multi-modal image-text models like CLIP. Using IDIAs, an attacker can reveal whether a particular person was part of the training data by querying the model in a black-box fashion with different images of that person. Letting the model choose from a wide variety of possible text labels, the attacker can probe whether the model recognizes the person and, therefore, whether that person's data was used for training. In several experiments on CLIP, we show that the attacker can identify individuals used for training with very high accuracy and that the model learns to connect names with the depicted people. Our experiments show that a multi-modal image-text model indeed leaks sensitive information about its training data and should therefore be handled with care.
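The black-box querying procedure described in the abstract can be sketched as a simple decision rule. The snippet below is an illustrative sketch, not the authors' exact implementation: it assumes the attacker has already obtained, for each probe image of the target person, similarity scores against a set of candidate name prompts (as a CLIP-style model would return for zero-shot classification). The function name, the score matrix layout, and the `threshold` parameter are all assumptions made here for illustration.

```python
import numpy as np

def identity_inference_attack(image_text_scores, true_name_idx, threshold=0.5):
    """Decide whether a person was likely part of the training data.

    image_text_scores: array of shape (n_images, n_candidate_names),
        where each row holds the model's image-text similarity scores
        (e.g. CLIP logits) for one probe image of the same person against
        every candidate name prompt (e.g. "a photo of <name>").
    true_name_idx: column index of the person's actual name.
    threshold: illustrative cutoff on the fraction of images for which
        the model picks the correct name.

    Returns (is_member, hit_rate).
    """
    scores = np.asarray(image_text_scores)
    # For each probe image, take the name the model ranks highest.
    top_predictions = np.argmax(scores, axis=1)
    # Fraction of images on which the model "recognizes" the person.
    hit_rate = float(np.mean(top_predictions == true_name_idx))
    # If the correct name wins often enough, infer training membership.
    return hit_rate > threshold, hit_rate
```

In practice the scores would come from encoding each image and each candidate name prompt with the model's image and text encoders and taking their cosine similarities; a person absent from the training data should be matched to their name at roughly chance level over a large candidate set, while a memorized identity is matched far more often.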
