A Fusion-Denoising Attack on InstaHide with Data Augmentation

05/17/2021
by Xinjian Luo, et al.

InstaHide is a state-of-the-art mechanism for protecting private training images in collaborative learning. It works by mixing multiple private images and modifying them so that their visual features are no longer distinguishable to the naked eye, without significantly degrading training accuracy. In recent work, however, Carlini et al. show that it is possible to reconstruct private images from the encrypted dataset generated by InstaHide by exploiting the correlations among the encrypted images. Nevertheless, Carlini et al.'s attack relies on the assumption that each private image is used without modification when mixed with other private images. As a consequence, it could be easily defeated by incorporating data augmentation into InstaHide. This leads to a natural question: is InstaHide with data augmentation secure? This paper provides a negative answer to this question by presenting an attack that recovers private images from the outputs of InstaHide even when data augmentation is present. The basic idea of our attack is to use a comparative network to identify encrypted images that are likely to correspond to the same private image, and then to employ a fusion-denoising network to restore the private image from those encrypted images, taking into account the effects of data augmentation. Extensive experiments demonstrate the effectiveness of the proposed attack in comparison to Carlini et al.'s attack.
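To make the two-stage pipeline described in the abstract concrete, the sketch below shows one plausible shape of the attack in PyTorch: a comparative network that scores whether two encrypted images share a private image, followed by a fusion-denoising network that merges several matched encrypted views into one reconstruction. The class names (ComparativeNet, FusionDenoiseNet), layer sizes, and the number of fused views k are illustrative assumptions, not the authors' actual architecture.

    # Minimal sketch of the two-stage attack pipeline; all architecture
    # details below are assumptions for illustration, not the paper's design.
    import torch
    import torch.nn as nn

    class ComparativeNet(nn.Module):
        """Scores how likely two encrypted images contain the same private image."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, x1, x2):
            # Embed each encrypted image, concatenate, and output a match probability.
            z = torch.cat([self.encoder(x1), self.encoder(x2)], dim=1)
            return torch.sigmoid(self.head(z))

    class FusionDenoiseNet(nn.Module):
        """Fuses k matched encrypted views and denoises them into one private image."""
        def __init__(self, k=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3 * k, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1),
            )

        def forward(self, views):  # views: (batch, k, 3, H, W)
            b, k, c, h, w = views.shape
            return self.net(views.reshape(b, k * c, h, w))

    # Usage: group encrypted images that likely share a private image, then fuse.
    enc = torch.rand(8, 3, 32, 32)                      # stand-in encrypted images
    scores = ComparativeNet()(enc[:1].expand(8, -1, -1, -1), enc)
    group = enc[scores.squeeze(1).topk(4).indices]      # 4 most likely matches
    private_estimate = FusionDenoiseNet(k=4)(group.unsqueeze(0))

In practice the comparative network would be trained on pairs with known membership and the fusion-denoising network on synthetic mixtures that include the data augmentations the attacker expects; the snippet above only fixes the data flow between the two stages.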


