Improving the Robustness of Deep Convolutional Neural Networks Through Feature Learning

03/11/2023
by Jin Ding, et al.

Deep convolutional neural network (DCNN) models are vulnerable to examples with small perturbations. Adversarial training (AT) is a widely used approach that enhances the robustness of DCNN models through data augmentation: the models are trained on clean examples together with adversarial examples (AEs) generated by a specific attack method, with the aim of defending against unseen AEs. In practice, however, the trained DCNN models are often fooled by AEs generated by novel attack methods. This naturally raises a question: can a DCNN model learn features that are insensitive to small perturbations and thereby defend itself regardless of the attack method? To answer this question, this paper makes an initial effort by proposing a shallow binary feature module (SBFM), which can be integrated into any popular backbone. The SBFM consists of two types of layers, a Sobel layer and a threshold layer. The Sobel layer contains four parallel feature maps that capture horizontal, vertical, and diagonal edge features. The threshold layer binarizes the edge features learned by the Sobel layer, and the resulting binary features are fed, together with the features learned by the backbone, into the fully connected layers for classification. We integrate the SBFM into VGG16 and ResNet34 and conduct experiments on multiple datasets. Experimental results demonstrate that, under the FGSM attack with ϵ=8/255, the SBFM-integrated models achieve on average 35% higher accuracy than the original ones, and on CIFAR-10 and TinyImageNet they achieve on average 75% classification accuracy. The work in this paper shows it is promising to enhance the robustness of DCNN models through feature learning.
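The abstract describes the SBFM as a small side branch (fixed Sobel filters followed by a thresholding step) whose binary outputs are concatenated with the backbone features before the classifier. The sketch below, in PyTorch, illustrates one plausible reading of that design; the kernel choices, threshold value, and concatenation scheme are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of a shallow binary feature module (SBFM), assuming PyTorch.
# Kernel set, threshold value, and feature fusion are illustrative assumptions.
import torch
import torch.nn as nn


class SBFM(nn.Module):
    def __init__(self, threshold: float = 0.1):
        super().__init__()
        # Four fixed 3x3 Sobel-style kernels: horizontal, vertical, two diagonals.
        kernels = torch.tensor([
            [[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]],    # horizontal edges
            [[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],    # vertical edges
            [[0., 1., 2.], [-1., 0., 1.], [-2., -1., 0.]],    # diagonal edges
            [[-2., -1., 0.], [-1., 0., 1.], [0., 1., 2.]],    # anti-diagonal edges
        ]).unsqueeze(1)                                        # shape (4, 1, 3, 3)
        self.sobel = nn.Conv2d(1, 4, kernel_size=3, padding=1, bias=False)
        self.sobel.weight = nn.Parameter(kernels, requires_grad=False)
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sobel layer: collapse colour channels and extract edge responses.
        gray = x.mean(dim=1, keepdim=True)
        edges = self.sobel(gray)
        # Threshold layer: turn edge responses into binary features.
        binary = (edges.abs() > self.threshold).float()
        return binary.flatten(start_dim=1)


# Hypothetical fusion with a backbone: the binary edge features are simply
# concatenated with the backbone's pooled features before the classifier.
#   backbone_feats = backbone(x).flatten(1)
#   sbfm_feats = SBFM()(x)
#   logits = classifier(torch.cat([backbone_feats, sbfm_feats], dim=1))
```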


