Backward Feature Correction: How Deep Learning Performs Deep Learning

01/13/2020
by   Zeyuan Allen-Zhu, et al.

How does a 110-layer ResNet learn a high-complexity classifier using relatively few training examples and a short training time? We present a theory towards explaining this in terms of hierarchical learning. By hierarchical learning we mean that the learner learns to represent a complicated target function by decomposing it into a sequence of simpler functions, thereby reducing sample and time complexity. This paper formally analyzes how multi-layer neural networks can perform such hierarchical learning efficiently and automatically, simply by applying stochastic gradient descent (SGD). On the conceptual side, we present, to the best of our knowledge, the FIRST theoretical result indicating how very deep neural networks can still be sample- and time-efficient on certain hierarchical learning tasks, when NO KNOWN non-hierarchical algorithms (such as kernel methods, linear regression over feature mappings, tensor decomposition, or sparse coding) are efficient. We establish a new principle called "backward feature correction", which we believe is the key to understanding hierarchical learning in multi-layer neural networks. On the technical side, we show that, for regression and even for binary classification, for every input dimension d > 0, there is a concept class consisting of degree-ω(1) multivariate polynomials such that, using ω(1)-layer neural networks as learners, SGD can learn any target function from this class to any 1/poly(d) error in poly(d) time using poly(d) samples, by learning to represent it as a composition of ω(1) layers of quadratic functions. In contrast, we present lower bounds showing that several non-hierarchical learners, including any kernel method and any neural tangent kernel, must suffer d^ω(1) sample or time complexity to learn functions in this concept class even to d^-0.01 error.
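The sketch below is a minimal illustration of the kind of learning problem the abstract describes: a hierarchical target built by composing several quadratic layers (so its overall degree grows with depth), and a deep learner with quadratic activations trained by plain SGD. It is not the paper's exact construction or proof setting; the PyTorch framework, layer widths, depth, normalization of the target, and learning rate are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's construction): a target function
# built as a composition of L quadratic maps, and a depth-L quadratic-activation
# network trained by SGD on samples from that target.
import torch

torch.manual_seed(0)
d, width, L, n_samples = 20, 64, 3, 4096  # illustrative sizes

# Hierarchical target: h_0(x) = x, h_l = normalize((W_l h_{l-1})^2), then a linear readout.
# Each layer squares a linear map entrywise, so the composition has degree 2^L in x.
target_weights = [torch.randn(width, d if l == 0 else width) / ((d if l == 0 else width) ** 0.5)
                  for l in range(L)]
readout = torch.randn(width) / width

def target(x):
    h = x
    for W in target_weights:
        h = (h @ W.t()) ** 2                                  # entrywise square of a linear map
        h = h / h.norm(dim=1, keepdim=True).clamp_min(1e-6)   # keep intermediate scales bounded
    return h @ readout

X = torch.randn(n_samples, d)
y = target(X)

class QuadNet(torch.nn.Module):
    """Depth-L learner with quadratic activations, mirroring the compositional structure."""
    def __init__(self, d, width, depth):
        super().__init__()
        dims = [d] + [width] * depth
        self.layers = torch.nn.ModuleList(
            torch.nn.Linear(dims[i], dims[i + 1]) for i in range(depth))
        self.head = torch.nn.Linear(width, 1)

    def forward(self, x):
        for layer in self.layers:
            x = layer(x) ** 2          # quadratic activation at every layer
        return self.head(x).squeeze(-1)

model = QuadNet(d, width, L)
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# Plain minibatch SGD on the regression loss.
for step in range(2001):
    idx = torch.randint(0, n_samples, (128,))
    loss = torch.nn.functional.mse_loss(model(X[idx]), y[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(f"step {step:4d}   train mse {loss.item():.4f}")
```

In this toy setup each learned layer can only be useful if the layers above it later correct the features it feeds upward, which is the intuition the paper formalizes as "backward feature correction"; the sketch only reproduces the architecture and training procedure, not the analysis.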
