An Information-Theoretic View for Deep Learning

04/24/2018
by Jingwei Zhang, et al.

Deep learning has transformed computer vision, natural language processing, and speech recognition. However, the following two critical questions remain obscure: (1) Why do deep neural networks generalize better than shallow networks? (2) Does it always hold that a deeper network leads to better performance? Specifically, letting L be the number of convolutional and pooling layers in a deep neural network and n be the size of the training sample, we derive an upper bound on the expected generalization error for this network, i.e., E[R(W) - R_S(W)] ≤ exp(-(L/2) log(1/η)) √((2σ^2/n) I(S, W)), where σ > 0 is a constant depending on the loss function, 0 < η < 1 is a constant depending on the information loss for each convolutional or pooling layer, and I(S, W) is the mutual information between the training sample S and the output hypothesis W. This upper bound shows: (1) As the number of convolutional and pooling layers L increases, the expected generalization error decreases exponentially to zero. Layers with strict information loss, such as the convolutional layers, reduce the generalization error of deep learning algorithms; this answers the first question. However, (2) an algorithm with zero expected generalization error does not necessarily have a small test error E[R(W)]. This is because E[R_S(W)] will be large when the information needed for fitting the data is lost as the number of layers increases. This suggests that the claim "the deeper the better" is conditioned on a small training error E[R_S(W)].
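As a rough illustration of how the stated bound behaves, the sketch below plugs assumed values of σ, η, n, and I(S, W) into the inequality; these numbers are placeholders chosen for illustration and do not come from the paper.

```python
import math

def generalization_bound(L, n, mutual_info, sigma=1.0, eta=0.9):
    """Evaluate the upper bound on E[R(W) - R_S(W)] quoted in the abstract:
    exp(-(L/2) * log(1/eta)) * sqrt((2 * sigma**2 / n) * I(S, W)).

    sigma, eta, and mutual_info are illustrative placeholders; in the paper
    they depend on the loss function, the per-layer information loss, and
    the training sample / output hypothesis pair, respectively.
    """
    contraction = math.exp(-(L / 2) * math.log(1 / eta))  # equals eta**(L/2)
    return contraction * math.sqrt((2 * sigma**2 / n) * mutual_info)

# With eta < 1 and n, I(S, W) held fixed, the bound shrinks exponentially
# as convolutional/pooling layers are added.
for L in (1, 5, 10, 50):
    print(L, generalization_bound(L, n=10_000, mutual_info=100.0))
```

Note that this only illustrates the decay of the generalization-error bound; as the abstract points out, a vanishing E[R(W) - R_S(W)] says nothing about the training error E[R_S(W)], which can grow if too much task-relevant information is discarded.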
