Generalization Ability of Wide Residual Networks

05/29/2023
by Jianfa Lai, et al.

In this paper, we study the generalization ability of the wide residual network on 𝕊^{d-1} with the ReLU activation function. We first show that as the width m→∞, the residual network kernel (RNK) uniformly converges to the residual neural tangent kernel (RNTK). This uniform convergence further guarantees that the generalization error of the residual network converges to that of kernel regression with respect to the RNTK. As direct corollaries, we then show that i) the wide residual network with an early stopping strategy can achieve the minimax rate, provided that the target regression function lies in the reproducing kernel Hilbert space (RKHS) associated with the RNTK; and ii) the wide residual network cannot generalize well if it is trained until it overfits the data. We conclude with experiments that reconcile our theoretical result with the widely observed “benign overfitting” phenomenon.
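
To make the kernel-regression viewpoint concrete, the following is a minimal sketch of kernel regression trained by gradient flow with early stopping, the setting the corollaries refer to. The abstract does not give the RNTK's closed form, so `placeholder_kernel` below is only a stand-in (an arc-cosine-style kernel on the sphere), not the actual RNTK; the stopping time `t` and the toy data are likewise illustrative assumptions.

```python
import numpy as np

# Stand-in kernel on the sphere S^{d-1}; NOT the RNTK (its closed form is
# not reproduced here). Rows of X and Z are assumed to be unit-normalized.
def placeholder_kernel(X, Z):
    u = np.clip(X @ Z.T, -1.0, 1.0)
    return (u * (np.pi - np.arccos(u)) + np.sqrt(1.0 - u**2)) / np.pi

def kernel_regression_early_stopped(X_train, y_train, X_test, t,
                                    kernel=placeholder_kernel):
    """Kernel regression trained by gradient flow and stopped at time t.

    The gradient-flow predictor at time t is
        f_t(x) = K(x, X) K(X, X)^{-1} (I - exp(-K(X, X) t / n)) y,
    which moves from 0 (t = 0) toward the kernel interpolant (t -> inf);
    a finite t acts as regularization (early stopping).
    """
    n = len(y_train)
    K = kernel(X_train, X_train)
    evals, evecs = np.linalg.eigh(K)
    # Spectral filter (1 - exp(-lambda * t / n)) / lambda, with the
    # lambda -> 0 limit t / n used for (numerically) zero eigenvalues.
    filt = np.where(evals > 1e-12,
                    (1.0 - np.exp(-evals * t / n)) / np.maximum(evals, 1e-12),
                    t / n)
    alpha = evecs @ (filt * (evecs.T @ y_train))
    return kernel(X_test, X_train) @ alpha

# Toy usage: points on S^{d-1}, a smooth target, and a moderate stopping time.
rng = np.random.default_rng(0)
d, n = 5, 200
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(n)
X_test = rng.standard_normal((50, d))
X_test /= np.linalg.norm(X_test, axis=1, keepdims=True)
pred = kernel_regression_early_stopped(X, y, X_test, t=50.0)
```

In this sketch, letting t → ∞ recovers the kernel interpolant (the overfitting regime of corollary ii), while a suitably chosen finite t corresponds to the early-stopped estimator of corollary i.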
