Towards Understanding Hierarchical Learning: Benefits of Neural Representations

06/24/2020
by   Minshuo Chen, et al.

Deep neural networks can empirically perform efficient hierarchical learning, in which the layers learn useful representations of the data. However, how they make use of these intermediate representations is not explained by recent theories that relate them to "shallow learners" such as kernels. In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks and can be advantageous over raw inputs. We consider a fixed, randomly initialized neural network as a representation function fed into another trainable network. When the trainable network is the quadratic Taylor model of a wide two-layer network, we show that the neural representation achieves an improved sample complexity compared with the raw input: for learning a low-rank degree-p polynomial (p ≥ 4) in d dimensions, the neural representation requires only Õ(d^⌈p/2⌉) samples, while the best-known sample complexity upper bound for the raw input is Õ(d^(p-1)). We contrast this result with a lower bound showing that, when the trainable network is instead a neural tangent kernel, neural representations do not improve over the raw input (in the infinite-width limit). Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
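To make the setup concrete, here is a minimal NumPy sketch of the architecture the abstract describes: a fixed, randomly initialized one-hidden-layer representation fed into the quadratic (second-order) Taylor model of a wide two-layer network. The activation (tanh), the widths, and all variable names are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
d, D, m = 20, 128, 512   # input dim, representation width, trainable width

# Fixed, randomly initialized representation h(x) = relu(W1 x); W1 is never trained.
W1 = rng.standard_normal((D, d)) / np.sqrt(d)

def representation(x):
    return np.maximum(W1 @ x, 0.0)

# Trainable part: quadratic Taylor model of a wide two-layer network
#   g(z; W) = (1/sqrt(m)) * sum_r a_r * sigma(w_r . z),
# expanded to second order in W around its random initialization W0.
# A smooth sigma (tanh here, as an assumption) keeps the second-order term well defined.
a  = rng.choice([-1.0, 1.0], size=m)
W0 = rng.standard_normal((m, D)) / np.sqrt(D)

def quadratic_taylor_model(z, W):
    pre   = W0 @ z                          # pre-activations at initialization
    dpre  = (W - W0) @ z                    # first-order change in pre-activations
    sig   = np.tanh(pre)
    dsig  = 1.0 - sig**2                    # tanh'
    ddsig = -2.0 * sig * dsig               # tanh''
    terms = sig + dsig * dpre + 0.5 * ddsig * dpre**2
    return (a @ terms) / np.sqrt(m)

# Composition studied in the paper: raw input -> fixed representation -> quadratic model.
x = rng.standard_normal(d)
z = representation(x)
print(quadratic_taylor_model(z, W0 + 0.01 * rng.standard_normal((m, D))))
```

In this sketch only the weights W of the quadratic model would be trained; the representation weights W1 and the output signs a stay at their random initialization, mirroring the abstract's split between a fixed neural representation and a trainable quadratic Taylor model.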


