Adaptive Clustering of Robust Semantic Representations for Adversarial Image Purification

04/05/2021
by   Samuel Henrique Silva, et al.

Deep learning models are highly susceptible to adversarial manipulations that can lead to catastrophic consequences. One of the most effective defenses against such perturbations is adversarial training, but it comes at the cost of generalization to unseen attacks and transferability across models. In this paper, we propose a robust defense against adversarial attacks that is model agnostic and generalizable to unseen adversaries. Initially, with a baseline model, we extract the latent representations for each class and adaptively cluster the latent representations that share semantic similarity. We obtain the distributions of the clustered latent representations and, from their originating images, learn semantic reconstruction dictionaries (SRD). We then adversarially train a new model, constraining the latent space so as to minimize the distance between the adversarial latent representation and the true cluster distribution. To purify an image, we decompose the input into low- and high-frequency components. The high-frequency component is reconstructed using the most adequate SRD from the clean dataset, selected according to the distance between the robust latent representation and the semantic cluster distributions. The output is a purified image with the perturbation removed. Image purification on CIFAR-10 and ImageNet-10 using our proposed method improves accuracy by more than 10% over state-of-the-art results.
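The purification pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the FFT-based low/high-frequency split, the `cutoff` radius, the nearest-centroid cluster selection, and the least-squares reconstruction against a dictionary are all simplifying assumptions; the paper's actual SRD learning and robust latent encoder are not reproduced here.

```python
import numpy as np

def split_frequencies(img, cutoff=8):
    """Decompose a grayscale image into low- and high-frequency parts
    using a radial low-pass mask in the Fourier domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * (dist <= cutoff))))
    return low, img - low  # low + high reconstructs the input exactly

def nearest_cluster(latent, centroids):
    """Pick the semantic cluster whose centroid is closest to the
    (robust) latent representation of the input."""
    return int(np.argmin(np.linalg.norm(centroids - latent, axis=1)))

def purify(img, latent, centroids, dictionaries, cutoff=8):
    """Rebuild the high-frequency band from the selected cluster's
    reconstruction dictionary (atoms as columns), then recombine."""
    low, high = split_frequencies(img, cutoff)
    D = dictionaries[nearest_cluster(latent, centroids)]  # (n_pixels, n_atoms)
    coef, *_ = np.linalg.lstsq(D, high.ravel(), rcond=None)
    return low + (D @ coef).reshape(img.shape)
```

Because the dictionaries are learned from clean images only, projecting the high-frequency band onto their span discards perturbation energy that clean data cannot express, which is the intuition behind reconstructing only the high-frequency component.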


