A Frobenius norm regularization method for convolutional kernels to avoid unstable gradient problem

07/25/2019
by Pei-Chang Guo, et al.

Convolutional neural networks are among the most important models in deep learning. Keeping the singular values of each layer's Jacobian bounded around 1 during training helps avoid the exploding/vanishing gradient problem and improves the generalizability of the network. We propose a new penalty function for a convolutional kernel that keeps the singular values of the corresponding transformation matrix bounded around 1, and we show how to carry out gradient-type methods for the resulting objective. Because the penalty acts on the structured transformation matrix generated by a convolutional kernel, this yields a new regularization method for the weights of convolutional layers.
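The abstract does not state the exact form of the penalty, but a standard Frobenius-norm choice consistent with "singular values bounded around 1" is ||M^T M - I||_F^2, where M is the structured (doubly block circulant, under circular padding) matrix representing the convolution. The PyTorch sketch below is illustrative, not the paper's code: the single-channel 3x3 kernel, circular padding, and the helper names conv_transform_matrix and frobenius_penalty are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def conv_transform_matrix(kernel, input_size):
    # Build the matrix M representing circular 2D convolution with
    # `kernel` on an input_size x input_size single-channel image,
    # so that M @ x.flatten() equals conv(x).flatten().
    n = input_size
    p = kernel.shape[-1] // 2  # circular padding width (assumes odd kernel)
    # Column j of M is the convolution applied to the j-th basis image.
    basis = torch.eye(n * n).reshape(n * n, 1, 1, n, n)
    cols = []
    for e in basis:
        padded = F.pad(e, (p, p, p, p), mode="circular")
        cols.append(F.conv2d(padded, kernel).flatten())
    return torch.stack(cols, dim=1)  # shape (n*n, n*n)

def frobenius_penalty(kernel, input_size):
    # ||M^T M - I||_F^2: zero exactly when M is orthogonal, so minimizing
    # it pushes every singular value of M toward 1.
    M = conv_transform_matrix(kernel, input_size)
    I = torch.eye(M.shape[1])
    return ((M.t() @ M - I) ** 2).sum()

# Illustrative usage: the kernel shape and input size are arbitrary choices.
kernel = torch.randn(1, 1, 3, 3, requires_grad=True)
loss = frobenius_penalty(kernel, input_size=8)
loss.backward()  # gradients flow through conv2d back into the kernel
```

In practice this term would be added to the task loss with a weight, e.g. total_loss = task_loss + lambda_reg * frobenius_penalty(conv.weight, n), where lambda_reg is a tuning parameter; building M column by column is simple but costly, so this sketch is only meant to make the structure of the penalty concrete.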
