What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space

01/18/2021
by Shihao Zhao, et al.

Deep neural networks (DNNs) have been widely adopted in different applications to achieve state-of-the-art performance. However, they are often applied as black boxes, with limited understanding of what the model has learned from the data. In this paper, we focus on image classification and propose a method to visualize and understand the class-wise patterns learned by DNNs trained under three different settings: natural, backdoored, and adversarial. Unlike existing class-wise deep-representation visualizations, our method searches for a single predictive pattern in the input (i.e., pixel) space for each class. Based on the proposed method, we show that DNNs trained on natural (clean) data learn abstract shapes along with some texture, while backdoored models learn a small but highly predictive pattern for the backdoor target class. Interestingly, the existence of class-wise predictive patterns in the input space indicates that even DNNs trained on clean data can have backdoors, and the class-wise patterns identified by our method can readily be used to mount a "backdoor" attack on the model. In the adversarial setting, we show that adversarially trained models learn more simplified shape patterns. Our method can serve as a useful tool for better understanding DNNs trained on different datasets under different settings.
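The abstract does not spell out the search procedure, but the core idea (finding a single input-space pattern that is highly predictive of one class) can be sketched with a toy example. The following is a minimal illustration, not the paper's algorithm: it gradient-ascends a pattern in input space to maximize a target class's log-probability under a toy linear model, with an L2 penalty keeping the pattern small. All names (`find_class_pattern`, the model weights `W`, `b`) are illustrative assumptions.

```python
import numpy as np

# Toy "model": logits = W x + b. In the paper's setting this would be a
# trained DNN; here a random linear model stands in so the sketch is
# self-contained and runnable.
rng = np.random.default_rng(0)
n_classes, dim = 3, 8
W = rng.normal(size=(n_classes, dim))
b = np.zeros(n_classes)

def logits(x):
    return W @ x + b

def find_class_pattern(target, steps=200, lr=0.1, lam=0.05):
    """Gradient-ascend an input-space pattern x so that the model's
    probability for `target` dominates, with an L2 penalty (lam) that
    discourages unboundedly large patterns. Illustrative only."""
    x = np.zeros(dim)
    for _ in range(steps):
        z = logits(x)
        p = np.exp(z - z.max())
        p /= p.sum()                       # softmax probabilities
        # gradient of log p[target] w.r.t. x, minus the L2 penalty term
        grad = W[target] - p @ W - lam * x
        x += lr * grad
    return x

pattern = find_class_pattern(target=1)
print(np.argmax(logits(pattern)))  # the pattern alone should be classified as the target
```

For a real DNN one would compute the gradient by backpropagation and typically also constrain the pattern's size or sparsity; the hyperparameters here (`steps`, `lr`, `lam`) are arbitrary choices for the toy setting.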


