A Corrective View of Neural Networks: Representation, Memorization and Learning
We develop a corrective mechanism for neural network approximation: the total available non-linear units are divided into multiple groups; the first group approximates the function under consideration, the second group approximates the error produced by the first group and corrects it, the third group approximates the error produced by the first and second groups together, and so on. This technique yields several new representation and learning results for neural networks. First, we show that two-layer neural networks in the random features (RF) regime can memorize arbitrary labels for arbitrary points under a Euclidean distance separation condition using Õ(n) ReLU or Step activation functions, which is optimal in n up to logarithmic factors. Next, we give a powerful representation result for two-layer neural networks with ReLU and smoothed ReLU units which can achieve squared error at most ϵ using O(C(a,d) ϵ^{-1/(a+1)}) units, for a ∈ ℕ ∪ {0}, when the function is smooth enough (roughly, when it has Θ(ad) bounded derivatives). In certain cases d can be replaced with the effective dimension q ≪ d. Previous results of this type implement Taylor series approximation using deep architectures. We also consider three-layer neural networks and show that the corrective mechanism yields faster representation rates for smooth radial functions. Lastly, we obtain the first O(subpoly(1/ϵ)) upper bound on the number of neurons required for a two-layer network to learn low-degree polynomials up to squared error ϵ via gradient descent. Even though deep networks can express these polynomials with O(polylog(1/ϵ)) neurons, the best learning bounds on this problem require poly(1/ϵ) neurons.
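For intuition, here is a minimal sketch of the corrective mechanism in the random-features setting: each group of random ReLU units is fit (outer weights only) to the residual left by the preceding groups. The group sizes, the Gaussian feature distribution, and the least-squares fitting step below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def relu_features(X, W, b):
    """Random ReLU features phi(x) = max(0, Wx + b) with fixed first-layer weights."""
    return np.maximum(0.0, X @ W.T + b)

def corrective_fit(X, y, num_groups=3, group_size=64, rng=None):
    """Fit y by letting each group of random ReLU units approximate the
    residual left by the previous groups (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    residual = y.astype(float).copy()
    groups = []
    for _ in range(num_groups):
        # Draw a fresh group of random first-layer weights; these stay fixed.
        W = rng.standard_normal((group_size, d))
        b = rng.standard_normal(group_size)
        Phi = relu_features(X, W, b)
        # Only the outer coefficients are trained (random-features regime).
        coef, *_ = np.linalg.lstsq(Phi, residual, rcond=None)
        residual -= Phi @ coef  # the next group will correct what is left
        groups.append((W, b, coef))
    return groups, residual

def predict(groups, X):
    """Sum the contributions of all corrective groups."""
    return sum(relu_features(X, W, b) @ c for W, b, c in groups)

if __name__ == "__main__":
    # Toy check: the training residual shrinks as corrective groups are added.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
    groups, residual = corrective_fit(X, y, num_groups=4, group_size=128, rng=1)
    print("final training residual norm:", np.linalg.norm(residual))
```

Each successive group sees only the residual of the groups before it, mirroring the description above: the second group corrects the first group's error, the third corrects the error of the first two combined, and so on.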