Revisiting Sparse Convolutional Model for Visual Recognition

10/24/2022
by Xili Dai, et al.

Despite strong empirical performance for image classification, deep neural networks are often regarded as "black boxes" that are difficult to interpret. On the other hand, sparse convolutional models, which assume that a signal can be expressed as a linear combination of a few elements from a convolutional dictionary, are powerful tools for analyzing natural images with good theoretical interpretability and biological plausibility. However, such principled models have not demonstrated performance competitive with empirically designed deep networks. This paper revisits sparse convolutional modeling for image classification and bridges the gap between the good empirical performance of deep learning and the good interpretability of sparse convolutional models. Our method uses differentiable optimization layers, defined from convolutional sparse coding, as drop-in replacements for standard convolutional layers in conventional deep neural networks. We show that such models have equally strong empirical performance on the CIFAR-10, CIFAR-100, and ImageNet datasets when compared to conventional neural networks. By leveraging the stable recovery property of sparse modeling, we further show that such models can be much more robust to input corruptions as well as adversarial perturbations at test time, through a simple and proper trade-off between the sparse regularization and data reconstruction terms. Source code can be found at https://github.com/Delay-Xili/SDNet.
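As a sketch of what such a drop-in layer could look like, the snippet below unrolls a few ISTA (proximal gradient) steps of convolutional sparse coding in PyTorch. It is only a minimal illustration under stated assumptions, not the authors' SDNet implementation: the class name CSCLayer and the hyperparameters lam (sparsity weight), num_iters (unrolled iterations), and step (step size) are placeholders chosen for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F


class CSCLayer(nn.Module):
    """Illustrative convolutional sparse coding layer (ISTA unrolling).

    Approximately solves
        min_z 0.5 * ||x - conv_transpose(z, D)||_2^2 + lam * ||z||_1
    and returns the sparse code z as the output feature map, so it can
    stand in for nn.Conv2d(in_channels, out_channels, kernel_size).
    """

    def __init__(self, in_channels, out_channels, kernel_size,
                 padding=0, lam=0.1, num_iters=3, step=0.1):
        super().__init__()
        # Convolutional dictionary: out_channels atoms over in_channels maps.
        self.D = nn.Parameter(
            0.01 * torch.randn(out_channels, in_channels, kernel_size, kernel_size))
        self.padding = padding
        self.lam, self.num_iters, self.step = lam, num_iters, step

    def forward(self, x):
        # Initialize the code with one analysis (convolution) pass.
        z = F.conv2d(x, self.D, padding=self.padding)
        for _ in range(self.num_iters):
            # Synthesis: reconstruct the input from the current code.
            x_hat = F.conv_transpose2d(z, self.D, padding=self.padding)
            # Gradient step on the reconstruction (data-fidelity) term.
            z = z - self.step * F.conv2d(x_hat - x, self.D, padding=self.padding)
            # Proximal step: soft-thresholding enforces sparsity.
            z = torch.sign(z) * torch.clamp(z.abs() - self.step * self.lam, min=0.0)
        return z


# Example: replace nn.Conv2d(3, 64, 3, padding=1) in a backbone with
# CSCLayer(3, 64, 3, padding=1); the output shape matches the standard layer.
layer = CSCLayer(3, 64, 3, padding=1)
codes = layer(torch.randn(2, 3, 32, 32))   # -> (2, 64, 32, 32)

In this sketch, increasing lam relative to the reconstruction term yields sparser codes at test time, which is the kind of trade-off between sparse regularization and data reconstruction that the abstract credits for improved robustness.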
