Understanding Deep Representations through Random Weights

04/02/2017
by Yao Shu, et al.

We systematically study the deep representations of a random-weight CNN (convolutional neural network) using a DeCNN (deconvolutional neural network) architecture. We first fix the weights of an untrained CNN, and for each layer of its feature representation we train a corresponding DeCNN to reconstruct the input image. Compared with a pre-trained CNN, the DeCNN trained on a random-weight CNN reconstructs images more quickly and more accurately, regardless of the distribution from which the CNN's weights are drawn. This reveals that every layer of a random CNN retains photographically accurate information about the image. We then leave the DeCNN untrained as well, so that the overall CNN-DeCNN architecture uses only random weights. Strikingly, for lower-layer representations we can still reconstruct all positional information of the image, although the colors change; for higher-layer representations, we can still capture the rough contours of the image. We also vary the number and shape of the feature maps to gain further insight into the random behavior of the CNN-DeCNN structure. Our work reveals that the purely random CNN-DeCNN architecture contributes substantially to geometric and photometric invariance through its intrinsic symmetry and invertible structure, but discards colorimetric information due to the random projection.
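The claim that random layers retain photographically accurate information can be illustrated with a toy linear analogue (not the paper's actual CNN-DeCNN): an overcomplete random Gaussian projection is invertible with high probability, so a learned or least-squares "decoder" can recover the input almost exactly. The dimensions and variable names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flattened image "patch" (n values) and an overcomplete random
# projection W (m > n rows), standing in for one random-weight layer.
n, m = 64, 256
x = rng.uniform(size=n)          # hypothetical input patch
W = rng.normal(size=(m, n))      # fixed, untrained random weights

y = W @ x                        # random-feature representation

# A linear stand-in for the DeCNN: the least-squares inverse of W.
# Because a Gaussian W with m > n has full column rank with
# probability one, pinv(W) @ W is the identity, so the input is
# recovered up to floating-point error.
x_hat = np.linalg.pinv(W) @ y

print(np.max(np.abs(x - x_hat)))  # tiny reconstruction error
```

This only shows that a random linear map loses no information when it is overcomplete; the paper's contribution is that trained DeCNNs exhibit analogous invertibility for the nonlinear, pooled representations of real random CNNs.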


