Interpreting Convolutional Neural Networks Through Compression

11/07/2017
by Reza Abbasi-Asl, et al.

Convolutional neural networks (CNNs) achieve state-of-the-art performance on a wide variety of computer vision tasks. Interpreting CNNs, however, remains a challenge, mainly because of the large number of parameters in these networks. Here, we investigate the role of compression, and in particular filter pruning, in the interpretation of CNNs. We exploit our recently proposed greedy structural compression scheme that prunes filters in a trained CNN. In our compression scheme, the importance index of a filter is defined as the classification accuracy reduction (CAR) of the network after that filter is pruned; filters are then iteratively pruned based on their CAR indices, least important first. We demonstrate the interpretability of CAR-compressed CNNs by showing that our algorithm prunes filters with visually redundant pattern selectivity. In particular, we show the importance of shape-selective filters for object recognition, as opposed to color-selective filters: of the top 20 CAR-pruned filters in AlexNet, 17 in the first layer and 14 in the second layer are color-selective. Finally, we introduce a variant of the CAR importance index that quantifies the importance of each image class to each CNN filter. We show that the most and least important class labels provide a meaningful interpretation of each filter, consistent with the visualized pattern selectivity of that filter.
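
For concreteness, below is a minimal sketch of this greedy CAR pruning loop in PyTorch. It assumes a trained `model`, the name of a target convolutional layer, and a user-supplied `evaluate` function returning validation accuracy; the function names and the zeroing-out implementation of pruning are illustrative assumptions, not the paper's released code.

```python
# Sketch of greedy CAR-based filter pruning (illustrative, not the
# authors' implementation). `evaluate(model)` is assumed to return
# classification accuracy on a held-out validation set.
import copy
import torch

def car_index(model, conv_name, filt, evaluate, base_acc):
    """CAR of one filter: the accuracy drop after zeroing out that filter."""
    pruned = copy.deepcopy(model)
    conv = dict(pruned.named_modules())[conv_name]
    with torch.no_grad():
        conv.weight[filt].zero_()
        if conv.bias is not None:
            conv.bias[filt].zero_()
    return base_acc - evaluate(pruned)

def greedy_car_prune(model, conv_name, evaluate, n_prune):
    """Iteratively zero out the filter whose removal costs the least accuracy."""
    conv = dict(model.named_modules())[conv_name]
    alive = set(range(conv.out_channels))
    for _ in range(n_prune):
        base_acc = evaluate(model)
        # Score each remaining filter by its classification accuracy
        # reduction (CAR), then prune the least important one.
        scores = {f: car_index(model, conv_name, f, evaluate, base_acc)
                  for f in alive}
        victim = min(scores, key=scores.get)
        with torch.no_grad():
            conv.weight[victim].zero_()
            if conv.bias is not None:
                conv.bias[victim].zero_()
        alive.discard(victim)
    return model
```

The class-wise variant described above would follow the same pattern: replacing `evaluate` with a per-class accuracy yields one CAR score per filter per image class.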

Related research

Structural Compression of Convolutional Neural Networks Based on Greedy Filter Pruning (05/20/2017)
Convolutional neural networks (CNNs) have state-of-the-art performance o...

Neuron ranking -- an informed way to condense convolutional neural networks architecture (07/03/2019)
Convolutional neural networks (CNNs) in recent years have made a dramati...

Why do CNNs Learn Consistent Representations in their First Layer Independent of Labels and Architecture? (06/06/2022)
It has previously been observed that the filters learned in the first la...

Leveraging Filter Correlations for Deep Model Compression (11/26/2018)
We present a filter correlation based model compression approach for dee...

Exploring Frequency Domain Interpretation of Convolutional Neural Networks (11/27/2019)
Many existing interpretation methods of convolutional neural networks (C...

L2PF – Learning to Prune Faster (01/07/2021)
Various applications in the field of autonomous driving are based on con...

Dynamic Filter Networks (05/31/2016)
In a traditional convolutional layer, the learned filters stay fixed aft...
