
An Information-Theoretic View for Deep Learning
Deep learning has transformed the computer vision, natural language proc...
04/24/2018 ∙ by Jingwei Zhang, et al.

On Tighter Generalization Bound for Deep Neural Networks: CNNs, ResNets, and Beyond
Our paper proposes a generalization error bound for a general family of ...
06/13/2018 ∙ by Xingguo Li, et al.

Evaluation of Dataflow through layers of Deep Neural Networks in Classification and Regression Problems
This paper introduces two straightforward, effective indices to evaluate...
06/12/2019 ∙ by Ahmad Kalhor, et al.

An analysis of training and generalization errors in shallow and deep networks
An open problem around deep networks is the apparent absence of overfit...
02/17/2018 ∙ by Hrushikesh Mhaskar, et al.

Analysis of Gradient Clipping and Adaptive Scaling with a Relaxed Smoothness Condition
We provide a theoretical explanation for the fast convergence of gradien...
05/28/2019 ∙ by Jingzhao Zhang, et al.

Compressibility and Generalization in Large-Scale Deep Learning
Modern neural networks are highly overparameterized, with capacity to su...
04/16/2018 ∙ by Wenda Zhou, et al.

Scalable K-Medoids via True Error Bound and Familywise Bandits
K-Medoids (KM) is a standard clustering method, used extensively on semi...
05/27/2019 ∙ by Aravindakshan Babu, et al.
Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness
The accuracy of deep learning, i.e., deep neural networks, can be characterized by dividing the total error into three main types: approximation error, optimization error, and generalization error. Whereas there are some satisfactory answers to the problems of approximation and optimization, much less is known about the theory of generalization. Most existing theoretical works on generalization fail to explain the performance of neural networks in practice. To derive a meaningful bound, we study the generalization error of neural networks for classification problems in terms of data distribution and neural network smoothness. We introduce the cover complexity (CC) to measure the difficulty of learning a data set, and the inverse of the modulus of continuity to quantify neural network smoothness. A quantitative bound on the expected accuracy/error is derived by considering both the CC and neural network smoothness. We validate our theoretical results on several image data sets. The numerical results verify that the expected error of trained networks, scaled by the square root of the number of classes, depends linearly on the CC. In addition, we observe a clear consistency between test loss and neural network smoothness during training.
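The featured abstract quantifies network smoothness via the inverse of the modulus of continuity. As a rough illustration of that quantity only (not the paper's actual procedure), a brute-force empirical modulus over a finite sample might be sketched as follows; `empirical_modulus` and the toy 2-Lipschitz function are hypothetical names introduced here for illustration:

```python
import numpy as np

def empirical_modulus(f, X, eps):
    """Brute-force estimate of the modulus of continuity omega(eps):
    the largest |f(x) - f(y)| over sample pairs with ||x - y|| <= eps.
    A smoothness measure like the paper's would use the inverse of this."""
    vals = np.array([f(x) for x in X])
    omega = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if np.linalg.norm(X[i] - X[j]) <= eps:
                omega = max(omega, abs(vals[i] - vals[j]))
    return omega

# Toy check: a 2-Lipschitz function on a 1-D grid with spacing 0.1,
# so omega(eps) should be about 2 * eps for eps just above the spacing.
X = np.linspace(0.0, 1.0, 11).reshape(-1, 1)
f = lambda x: 2.0 * x[0]
print(empirical_modulus(f, X, 0.105))  # close to 0.2
```

For a trained classifier, `f` would be the network's output and `X` a batch of test inputs; a smoother network yields a smaller modulus at a given scale, which is the quantity the abstract reports as tracking test loss during training.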