Understanding the Initial Condensation of Convolutional Neural Networks

05/17/2023
by Zhangchen Zhou, et al.

Previous research has shown that fully-connected networks with small initialization and gradient-based training exhibit a phenomenon known as condensation: the input weights of hidden neurons condense into isolated orientations during training, revealing an implicit bias toward simple solutions in parameter space. However, the effect of network architecture on condensation has not yet been investigated. In this work, we study convolutional neural networks (CNNs). Our experiments suggest that, under small initialization and gradient-based training, kernel weights within the same CNN layer also cluster together during training, exhibiting a significant degree of condensation. Theoretically, we prove that within a finite training period, the kernels of a two-layer CNN with small initialization converge to one or a few directions. This work is a step toward a better understanding of the non-linear training behavior of neural networks with specialized structures.
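Condensation, as described above, can be quantified by checking whether the kernels of a layer point in the same (or opposite) directions. A minimal sketch of such a diagnostic, assuming NumPy and a hypothetical `pairwise_cosine_similarity` helper (not from the paper), is:

```python
import numpy as np

def pairwise_cosine_similarity(kernels):
    """Cosine similarity between all pairs of flattened kernels.

    kernels: array of shape (num_kernels, ...); each kernel is
    flattened to a vector before comparison. Entries near +/-1
    indicate kernels condensed onto the same orientation.
    """
    flat = kernels.reshape(kernels.shape[0], -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.clip(norms, 1e-12, None)
    return unit @ unit.T

# Toy check: three 3x3 kernels, two of them scalar multiples of
# each other (same orientation), one independent random kernel.
rng = np.random.default_rng(0)
k0 = rng.standard_normal((3, 3))
kernels = np.stack([k0, 2.0 * k0, rng.standard_normal((3, 3))])
sim = pairwise_cosine_similarity(kernels)
# sim[0, 1] equals 1.0 up to floating-point error: a "condensed" pair.
```

In practice one would apply such a measure to a CNN layer's kernels at successive training steps and watch the off-diagonal similarities drift toward +/-1 under small initialization.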


