Self-recoverable Adversarial Examples: A New Effective Protection Mechanism in Social Networks

04/26/2022
by Jiawei Zhang, et al.

Malicious intelligent algorithms pose a serious threat to the privacy of social network users by detecting and analyzing the photos they upload. The damage that adversarial attacks inflict on DNNs suggests that adversarial examples could serve as a new protection mechanism for privacy in social networks. However, existing adversarial examples are not recoverable, which prevents them from serving as an effective protection mechanism. To address this issue, we propose a recoverable generative adversarial network that generates self-recoverable adversarial examples. By modeling the adversarial attack and recovery as a single, unified task, our method minimizes the error of the recovered examples while maximizing attack ability, yielding better recoverability of the adversarial examples. To further boost their recoverability, we exploit a dimension reducer to optimize the distribution of the adversarial perturbation. Experimental results show that the adversarial examples generated by the proposed method exhibit superior recoverability, attack ability, and robustness across different datasets and network architectures, ensuring its effectiveness as a protection mechanism in social networks.
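The "united task" the abstract describes, jointly maximizing attack ability and minimizing the error of the recovered example, can be sketched as a single combined objective. The function name, signs, and the weighting factor `lam` below are illustrative assumptions for intuition, not the paper's actual formulation.

```python
# Hypothetical sketch of a combined attack-and-recovery objective:
# one scalar loss that a generator would minimize.

def united_loss(attack_loss: float, recovery_error: float, lam: float = 1.0) -> float:
    """Combined objective to be minimized.

    Minimizing -attack_loss is equivalent to maximizing the attack's
    effect on the target classifier, while the recovery_error term
    keeps the recovered example close to the original image. `lam`
    trades attack strength against recovery fidelity.
    """
    return -attack_loss + lam * recovery_error
```

With `lam = 1.0`, an attack loss of 2.0 and a recovery error of 0.5 give a combined loss of -1.5; raising `lam` shifts the optimization toward fidelity of the recovered image at the cost of attack strength.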


Related research

11/06/2019 · Reversible Adversarial Examples based on Reversible Image Transformation
Recent studies show that widely used deep neural networks (DNNs) are vul...

03/18/2019 · Generating Adversarial Examples With Conditional Generative Adversarial Net
Recently, deep neural networks have significant progress and successful ...

11/06/2019 · Reversible Adversarial Example based on Reversible Image Transformation
At present there are many companies that take the most advanced Deep Neu...

07/03/2020 · Towards Robust Deep Learning with Ensemble Networks and Noisy Layers
In this paper we provide an approach for deep learning that protects aga...

02/07/2023 · Toward Face Biometric De-identification using Adversarial Examples
The remarkable success of face recognition (FR) has endangered the priva...

06/28/2022 · Rethinking Adversarial Examples for Location Privacy Protection
We have investigated a new application of adversarial examples, namely l...

07/26/2021 · Benign Adversarial Attack: Tricking Algorithm for Goodness
In spite of the successful application in many fields, machine learning ...
