On Rademacher Complexity-based Generalization Bounds for Deep Learning

08/08/2022
by   Lan V. Truong, et al.

In this paper, we develop novel bounds on the Rademacher complexity and the generalization error in deep learning with i.i.d. and Markov datasets. The new Rademacher complexity and generalization bounds are tight up to O(1/√n), where n is the size of the training set, and they can decay exponentially in the depth L for some neural network architectures. A key technical contribution of this work is the development of Talagrand-type contraction lemmas for high-dimensional mappings between function spaces and for deep neural networks with general activation functions.
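For context, the following is the standard textbook definition of empirical Rademacher complexity and the classical generalization bound it yields; these background forms are assumed here for illustration and are not the paper's new bounds, which refine the analysis for deep networks and Markov data. For a function class $\mathcal{F}$ and a sample $S=(z_1,\dots,z_n)$,
\[
\hat{\mathfrak{R}}_S(\mathcal{F}) \;=\; \mathbb{E}_{\sigma}\!\left[\,\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i f(z_i)\right],
\qquad \sigma_1,\dots,\sigma_n \ \text{i.i.d. uniform on } \{-1,+1\},
\]
and, for $\mathcal{F}$ taking values in $[0,1]$, with probability at least $1-\delta$ over an i.i.d. sample $S$,
\[
\mathbb{E}[f(z)] \;\le\; \frac{1}{n}\sum_{i=1}^{n} f(z_i) \;+\; 2\,\hat{\mathfrak{R}}_S(\mathcal{F}) \;+\; 3\sqrt{\frac{\log(2/\delta)}{2n}}
\quad \text{for all } f \in \mathcal{F}.
\]
The final term illustrates the O(1/√n) rate to which the paper's bounds are tight.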
