Residual Tangent Kernels
A recent body of work has focused on the theoretical study of neural networks in the large-width regime. Specifically, it was shown that training infinitely wide and properly scaled vanilla ReLU networks using the L2 loss is equivalent to kernel regression with the Neural Tangent Kernel, which is independent of the initialization instance and remains constant during training. In this work, we derive the form of the limiting kernel for architectures incorporating bypass connections, namely residual networks (ResNets), as well as densely connected networks (DenseNets). In addition, we derive finite-width corrections for both cases. Our analysis reveals that deep practical residual architectures might operate much closer to the "kernel" regime than their vanilla counterparts: while in networks that do not use skip connections, convergence to the limiting kernel requires fixing the depth while increasing the layers' width, in both ResNets and DenseNets, convergence to the limiting kernel may occur for networks that are both infinitely deep and infinitely wide, provided proper initialization.
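As a companion to the abstract, the following is a minimal, hypothetical sketch (not the authors' code) of the empirical, finite-width tangent kernel of a small residual ReLU network, computed in JAX by contracting parameter Jacobians. The width, depth, residual-block structure, and the 1/sqrt(depth) branch scaling are illustrative assumptions, not the paper's exact construction.

```python
import jax
import jax.numpy as jnp

WIDTH, DEPTH = 256, 16  # assumed sizes for the sketch


def init_params(key, in_dim):
    # One input embedding plus DEPTH residual-branch weight matrices.
    keys = jax.random.split(key, DEPTH + 1)
    params = [jax.random.normal(keys[0], (in_dim, WIDTH)) / jnp.sqrt(in_dim)]
    params += [jax.random.normal(k, (WIDTH, WIDTH)) / jnp.sqrt(WIDTH)
               for k in keys[1:]]
    return params


def resnet(params, x):
    # DEPTH residual blocks: h <- h + alpha * relu(h) W,
    # with an assumed 1/sqrt(DEPTH) branch scaling for depth stability.
    h = x @ params[0]
    alpha = 1.0 / jnp.sqrt(DEPTH)
    for W in params[1:]:
        h = h + alpha * jax.nn.relu(h) @ W
    return h.sum(-1) / jnp.sqrt(WIDTH)  # scalar readout, kept O(1) in width


def empirical_ntk(params, x1, x2):
    # Theta(x1, x2) = <df(x1)/dtheta, df(x2)/dtheta>, summed over all parameters.
    j1 = jax.jacobian(resnet)(params, x1)
    j2 = jax.jacobian(resnet)(params, x2)
    return sum(jnp.vdot(a, b)
               for a, b in zip(jax.tree_util.tree_leaves(j1),
                               jax.tree_util.tree_leaves(j2)))


x1 = jax.random.normal(jax.random.PRNGKey(0), (8,))
x2 = jax.random.normal(jax.random.PRNGKey(1), (8,))
params = init_params(jax.random.PRNGKey(2), in_dim=8)
print(empirical_ntk(params, x1, x2))
```

Re-running this with different random seeds and increasing widths gives a rough sense of how much the finite-width kernel fluctuates around its limit, which is the kind of finite-width effect the corrections in the paper quantify.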