Regularization for convolutional kernel tensors to avoid unstable gradient problem in convolutional neural networks

02/05/2021
by Pei-Chang Guo, et al.

Convolutional neural networks are now ubiquitous, but training them remains difficult. Each convolution corresponds to a structured transformation matrix, and to help avoid the exploding/vanishing gradient problem it is desirable that the singular values of each transformation matrix be neither too large nor too small during training. We propose three new regularization terms for a convolutional kernel tensor that constrain the singular values of each transformation matrix, and we show how to carry out gradient-type methods on these terms, which provides new insight into the training of convolutional neural networks.
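The abstract does not state the exact form of the three regularization terms, so the sketch below is only a hedged illustration of the underlying idea. It relies on the known fact that a single-channel 2-D convolution with circular padding has a doubly block circulant transformation matrix, whose singular values are the moduli of the 2-D DFT of the zero-padded kernel. The penalty shown, which pushes those singular values toward 1, is an illustrative stand-in for the paper's regularizers; the function names and the use of NumPy are assumptions.

```python
import numpy as np

def conv_singular_values(kernel: np.ndarray, n: int) -> np.ndarray:
    """Singular values of the transformation matrix of a single-channel
    2-D convolution with circular padding on an n x n input.

    That matrix is doubly block circulant, hence diagonalized by the
    2-D DFT: its singular values are the moduli of the 2-D DFT of the
    kernel zero-padded to n x n.
    """
    k0, k1 = kernel.shape
    padded = np.zeros((n, n))
    padded[:k0, :k1] = kernel
    return np.abs(np.fft.fft2(padded)).ravel()

def singular_value_penalty(kernel: np.ndarray, n: int) -> float:
    # Illustrative regularizer (an assumption, not the paper's terms):
    # penalize singular values far from 1 so the layer neither amplifies
    # nor attenuates signals too strongly, which is what keeps gradients
    # from exploding or vanishing.
    sv = conv_singular_values(kernel, n)
    return float(np.sum((sv - 1.0) ** 2))

# Minimal usage sketch with a random 3x3 kernel on an 8x8 input grid.
rng = np.random.default_rng(0)
K = rng.standard_normal((3, 3))
sv = conv_singular_values(K, 8)
print("sigma_max =", sv.max(), "sigma_min =", sv.min())
print("penalty   =", singular_value_penalty(K, 8))
```

For multi-channel kernels the per-frequency DFT blocks become matrices and one would take an SVD per frequency instead of a modulus; in a gradient-type training loop, the gradient of such a penalty with respect to the kernel entries (obtained, e.g., via automatic differentiation) would simply be added to the loss gradient.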


Related research

- 07/25/2019 · A Frobenius norm regularization method for convolutional kernels to avoid unstable gradient problem
- 06/12/2019 · On regularization for a convolutional kernel in neural networks
- 03/19/2019 · Kernel-based Translations of Convolutional Networks
- 06/10/2018 · Transformationally Identical and Invariant Convolutional Neural Networks through Symmetric Element Operators
- 06/12/2020 · Asymptotic Singular Value Distribution of Linear Convolutional Layers
- 03/29/2023 · A Tensor-based Convolutional Neural Network for Small Dataset Classification
