Robust Representation via Dynamic Feature Aggregation

05/16/2022
by Haozhe Liu, et al.

Deep convolutional neural network (CNN) based models are vulnerable to adversarial attacks. One possible reason is that the embedding space of a CNN-based model is sparse, leaving a large space for the generation of adversarial samples. In this study, we propose a method, denoted as Dynamic Feature Aggregation, to compress the embedding space with a novel regularization. In particular, the convex combination of two samples is regarded as the pivot for aggregation. In the embedding space, the selected samples are guided toward the representation of the pivot. On the other hand, to mitigate the trivial solution of such regularization, the last fully-connected layer of the model is replaced by an orthogonal classifier, in which the embedding codes for different classes are processed orthogonally and separately. With the regularization and the orthogonal classifier, a more compact embedding space can be obtained, which accordingly improves the model's robustness against adversarial attacks. An average accuracy of 56.91% is achieved by our method on CIFAR-10 against various attack methods, which significantly surpasses a solid baseline (Mixup) by a margin of 37.31%. Surprisingly, empirical results show that the proposed method can also achieve state-of-the-art performance for out-of-distribution (OOD) detection, due to the learned compact feature space. An F1 score of 0.937 is achieved by the proposed method when adopting CIFAR-10 as the in-distribution (ID) dataset and LSUN as the OOD dataset. Code is available at https://github.com/HaozheLiu-ST/DynamicFeatureAggregation.
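The two ingredients described above, a pivot-based aggregation regularizer and an orthogonal final layer, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the toy `Encoder`, the MSE form of the aggregation loss, and the use of `torch.nn.utils.parametrizations.orthogonal` for the classifier are all assumptions; the paper's exact loss and classifier construction may differ.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal


class Encoder(nn.Module):
    """Toy CNN encoder standing in for the paper's backbone (illustrative only)."""

    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, dim),
        )

    def forward(self, x):
        return self.net(x)


def aggregation_loss(encoder, x1, x2, lam=0.5):
    """Pull the embeddings of x1 and x2 toward the embedding of their
    convex combination (the pivot). Mean squared error is an assumed
    choice of similarity; the paper's exact formulation may differ."""
    pivot = lam * x1 + (1 - lam) * x2
    z1, z2, zp = encoder(x1), encoder(x2), encoder(pivot)
    return ((z1 - zp) ** 2).mean() + ((z2 - zp) ** 2).mean()


encoder = Encoder(dim=32)
# Orthogonally parametrized final layer: the weight rows stay orthonormal,
# which is one way to process per-class directions orthogonally (an
# assumption, not necessarily the paper's exact classifier).
classifier = orthogonal(nn.Linear(32, 10))

x1 = torch.randn(4, 3, 32, 32)  # a pair of random image batches
x2 = torch.randn(4, 3, 32, 32)
reg = aggregation_loss(encoder, x1, x2, lam=0.4)  # scalar regularization term
logits = classifier(encoder(x1))                  # shape (4, 10)
```

In training, `reg` would be added to the usual cross-entropy objective with a weighting coefficient, so that embeddings are aggregated toward pivots while the orthogonal classifier prevents all samples from collapsing to a single point.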

Related research:

- 11/19/2021: Enhanced countering adversarial attacks via input denoising and feature restoring. Despite the fact that deep neural networks (DNNs) have achieved prominen...
- 11/03/2019: Improved Detection of Adversarial Attacks via Penetration Distortion Maximization. This paper is concerned with the defense of deep models against adversar...
- 02/28/2022: Robust Textual Embedding against Word-level Adversarial Attacks. We attribute the vulnerability of natural language processing models to ...
- 03/03/2021: Group-wise Inhibition based Feature Regularization for Robust Classification. The vanilla convolutional neural network (CNN) is vulnerable to images w...
- 05/21/2018: Adversarial Noise Layer: Regularize Neural Network By Adding Noise. In this paper, we introduce a novel regularization method called Adversa...
- 09/20/2019: Adversarial Learning with Margin-based Triplet Embedding Regularization. The Deep neural networks (DNNs) have achieved great success on a variety...
- 05/26/2023: Hybrid Energy Based Model in the Feature Space for Out-of-Distribution Detection. Out-of-distribution (OOD) detection is a critical requirement for the de...
