On the Generalization of Models Trained with SGD: Information-Theoretic Bounds and Implications

10/07/2021
by Ziqiao Wang, et al.

This paper follows up on a recent work of Neu (2021) and presents new and tighter information-theoretic upper bounds on the generalization error of machine learning models, such as neural networks, trained with SGD. We apply these bounds to analyze the generalization behaviour of linear and two-layer ReLU networks. Experimental studies based on these bounds provide insights into the SGD training of neural networks. They also point to a new and simple regularization scheme, which we show performs comparably to the current state of the art.
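As a concrete illustration of the setting the abstract describes, the following minimal sketch (assuming PyTorch) trains a two-layer ReLU network with SGD and adds a squared-gradient-norm penalty to the objective. The penalty is a generic stand-in for the kind of gradient-based regularizer such bounds suggest, not the paper's actual scheme; the synthetic data, layer widths, and the coefficient lam are hypothetical choices for the example.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 256 points in 20 dimensions, binary labels (hypothetical).
X = torch.randn(256, 20)
y = (X[:, 0] > 0).float().unsqueeze(1)

# Two-layer ReLU network, the architecture analyzed in the abstract.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
params = list(model.parameters())
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(params, lr=0.1)

lam = 1e-3  # hypothetical regularization strength

for step in range(200):
    idx = torch.randint(0, 256, (32,))     # sample a mini-batch
    loss = loss_fn(model(X[idx]), y[idx])

    # Gradient-norm penalty: differentiate the mini-batch loss w.r.t. the
    # weights (create_graph=True keeps the graph so the penalty itself is
    # differentiable), then add the squared gradient norm to the objective.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    penalty = sum(g.pow(2).sum() for g in grads)

    opt.zero_grad()
    (loss + lam * penalty).backward()
    opt.step()

print(f"final mini-batch loss: {loss.item():.4f}")

Penalizing the gradient norm biases SGD toward flatter regions of the loss surface; information-theoretic bounds of the kind studied here typically scale with quantities computed along the optimization trajectory, which is what motivates regularizers of this general shape.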
