Interpretable Convolutional Neural Networks via Feedforward Design

10/05/2018
by C.-C. Jay Kuo, et al.

The model parameters of convolutional neural networks (CNNs) are conventionally determined by backpropagation (BP). In this work, we propose an interpretable feedforward (FF) design, requiring no BP, as a reference. The FF design adopts a data-centric approach: it derives the network parameters of the current layer from the data statistics of the previous layer's output in a single pass. To construct the convolutional layers, we develop a new signal transform, called the Saab (Subspace Approximation with Adjusted Bias) transform. It is a variant of principal component analysis (PCA) with an added bias vector that annihilates the nonlinearity of the activation function. Multiple Saab transforms in cascade yield multiple convolutional layers. The fully-connected (FC) layers are constructed as a cascade of multi-stage linear least-squares regressors (LSRs). The classification accuracy and robustness (against adversarial attacks) of BP- and FF-designed CNNs are compared on the MNIST and CIFAR-10 datasets. Finally, we comment on the relationship between the BP and FF designs.
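To make the Saab idea concrete, here is a minimal NumPy sketch, not the authors' implementation: the DC kernel is a constant vector, the AC kernels are the top principal components of the DC-removed patches, and a shared bias at least as large as the largest patch norm shifts every response to be nonnegative, so a subsequent ReLU acts as the identity. The function names and the choice of bias rule are illustrative assumptions.

```python
import numpy as np

def saab_fit(patches, num_kernels):
    """Fit a one-stage Saab-style transform.

    patches: (N, D) array of flattened image patches.
    Returns unit-norm kernels (num_kernels, D) and a scalar bias.
    """
    n, d = patches.shape
    # DC kernel: constant unit-norm vector (captures the patch mean).
    dc = np.ones(d) / np.sqrt(d)
    dc_resp = patches @ dc
    # Remove the DC component, then take principal components as AC kernels.
    residual = patches - np.outer(dc_resp, dc)
    centered = residual - residual.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    ac = vt[:num_kernels - 1]           # rows are unit-norm by construction
    kernels = np.vstack([dc, ac])
    # Bias >= max patch norm: since |a_k.x| <= ||x|| for unit-norm a_k,
    # every response a_k.x + bias is nonnegative, so ReLU is a no-op.
    bias = np.max(np.linalg.norm(patches, axis=1))
    return kernels, bias

def saab_transform(patches, kernels, bias):
    """Apply the fitted transform: one nonnegative response per kernel."""
    return patches @ kernels.T + bias
```

Cascading several such stages, each fitted on the (reshaped) outputs of the previous one, mirrors how stacked Saab transforms play the role of successive convolutional layers.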


Related research

- 01/08/2019, Ensembles of feedforward-designed convolutional neural networks: An ensemble method that fuses the output decision vectors of multiple fe...
- 02/25/2019, Visualization, Discriminability and Applications of Interpretable Saak Features: In this work, we study the power of Saak features as an effort towards i...
- 11/11/2018, An Interpretable Generative Model for Handwritten Digit Image Synthesis: An interpretable generative model for handwritten digits synthesis is pr...
- 09/09/2020, From Two-Class Linear Discriminant Analysis to Interpretable Multilayer Perceptron Design: A closed-form solution exists in two-class linear discriminant analysis ...
- 09/20/2022, BP-Im2col: Implicit Im2col Supporting AI Backpropagation on Systolic Arrays: State-of-the-art systolic array-based accelerators adopt the traditional...
- 05/31/2022, A comparative study of back propagation and its alternatives on multilayer perceptrons: The de facto algorithm for training the back pass of a feedforward neura...
- 10/11/2017, On Data-Driven Saak Transform: Being motivated by the multilayer RECOS (REctified-COrrelations on a Sph...
