
Deep Equals Shallow for ReLU Networks in Kernel Regimes

by Alberto Bietti et al.

Deep networks are often considered to be more expressive than shallow ones in terms of approximation. Indeed, certain functions can be approximated by deep networks provably more efficiently than by shallow ones; however, no tractable algorithms are known for learning such deep models. Separately, a recent line of work has shown that deep networks trained with gradient descent may behave like (tractable) kernel methods in a certain over-parameterized regime, where the kernel is determined by the architecture and initialization. This paper focuses on the approximation properties of such kernels. We show that for ReLU activations, the kernels derived from deep fully-connected networks have essentially the same approximation properties as their shallow two-layer counterpart, namely the same eigenvalue decay for the corresponding integral operator. This highlights the limitations of the kernel framework for understanding the benefits of such deep architectures. Our main theoretical result relies on characterizing such eigenvalue decays through differentiability properties of the kernel function, a characterization that also readily applies to other kernels defined on the sphere.
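
To make the comparison concrete, below is a minimal numerical sketch (not from the paper) that builds the neural tangent kernel (NTK) of fully-connected ReLU networks of varying depth via the standard arc-cosine kernel recursion, then compares the eigenvalue decay of the resulting Gram matrices on points sampled from the sphere. The function names (kappa0, kappa1, deep_ntk) and all parameter choices are illustrative assumptions, not the authors' code.

import numpy as np

def kappa0(u):
    # Order-0 arc-cosine kernel: arises from the derivative of ReLU.
    u = np.clip(u, -1.0, 1.0)
    return (np.pi - np.arccos(u)) / np.pi

def kappa1(u):
    # Order-1 arc-cosine kernel: covariance of ReLU features.
    u = np.clip(u, -1.0, 1.0)
    return (np.sqrt(1.0 - u ** 2) + u * (np.pi - np.arccos(u))) / np.pi

def deep_ntk(U, depth):
    # Standard NTK recursion for a fully-connected ReLU network with
    # `depth` layers, applied to a Gram matrix U of unit-norm inputs.
    sigma = U.copy()   # NNGP covariance after the first layer
    theta = U.copy()   # NTK after the first layer
    for _ in range(depth - 1):
        theta = kappa1(sigma) + theta * kappa0(sigma)
        sigma = kappa1(sigma)
    return theta

rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # points on the sphere S^{d-1}
U = X @ X.T

for depth in (2, 5, 10):
    eig = np.sort(np.linalg.eigvalsh(deep_ntk(U, depth)))[::-1]
    k = np.arange(10, 200)  # fit a decay exponent on the spectrum's tail
    slope = np.polyfit(np.log(k), np.log(eig[k]), 1)[0]
    print(f"depth={depth:2d}  estimated spectral decay exponent: {slope:.2f}")

Under this recursion, the fitted exponents should come out nearly identical across depths, a finite-sample analogue of the paper's claim that depth does not change the eigenvalue decay of the corresponding integral operator. The Gram spectrum is only a rough proxy for the operator spectrum, so the printed numbers should be read qualitatively.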




On the Inductive Bias of Neural Tangent Kernels

State-of-the-art neural networks are heavily over-parameterized, making ...

How Deep Learning Works

Deep Learning methods are currently the state-of-the-art in many problem...

Function approximation by deep networks

We show that deep networks are better than shallow networks at approxima...

Neural tangent kernels, transportation mappings, and universal approximation

This paper establishes rates of universal approximation for the shallow ...

Deep Networks Provably Classify Data on Curves

Data with low-dimensional nonlinear structure are ubiquitous in engineer...

On the Power of Shallow Learning

A deluge of recent work has explored equivalences between wide neural ne...

Is Deeper Better only when Shallow is Good?

Understanding the power of depth in feed-forward neural networks is an o...
