On the Approximation Lower Bound for Neural Nets with Random Weights

08/19/2020
by   Sho Sonoda, et al.

A random net is a shallow neural network whose hidden layer is frozen at randomly assigned weights and whose output layer is trained by convex optimization. Using random weights in the hidden layer is an effective way to avoid the non-convexity inherent in standard gradient-descent learning, and it has recently been adopted in the study of deep learning theory. Here, we investigate the expressive power of random nets. We show that, despite the well-known fact that a shallow neural network is a universal approximator, a random net cannot achieve zero approximation error even for smooth functions. In particular, we prove that for a class of smooth functions, if the proposal distribution of the random weights is compactly supported, then the approximation error is bounded below by a positive constant. The proof builds on ridgelet analysis and harmonic analysis for neural networks, using the Plancherel theorem and an estimate of the truncated tail of the parameter distribution. We corroborate our theoretical results with simulation studies, which offer two main take-home messages: (i) not every distribution for selecting random weights yields a universal approximator; (ii) a suitable assignment of random weights exists, but it is to some degree tied to the complexity of the target function.
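The abstract describes the random-net setup only in words; below is a minimal sketch of that setup, assuming a one-dimensional ReLU random-features model with a ridge-regression readout. The proposal distribution, width, scale, and target function are illustrative choices, not the paper's experimental configuration.

```python
# Minimal sketch of a "random net": hidden parameters are sampled once from a
# proposal distribution and frozen; only the linear readout is trained, which
# is a convex (regularized least-squares) problem. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def random_net_fit(x, y, width=200, scale=2.0, ridge=1e-6):
    """Fit f(x) ~ sum_j c_j * relu(a_j * x - b_j) with (a_j, b_j) frozen."""
    # Proposal distribution for the hidden parameters: uniform on a compact
    # set, i.e. the compactly supported case considered in the paper.
    a = rng.uniform(-scale, scale, size=width)
    b = rng.uniform(-scale, scale, size=width)
    phi = np.maximum(a[None, :] * x[:, None] - b[None, :], 0.0)  # hidden features
    # Convex training of the output layer: ridge regression in closed form.
    c = np.linalg.solve(phi.T @ phi + ridge * np.eye(width), phi.T @ y)
    return a, b, c

def random_net_predict(x, a, b, c):
    return np.maximum(a[None, :] * x[:, None] - b[None, :], 0.0) @ c

# Toy usage: approximate a smooth target on [-1, 1].
x = np.linspace(-1.0, 1.0, 512)
y = np.sin(3.0 * x)
a, b, c = random_net_fit(x, y)
rmse = np.sqrt(np.mean((random_net_predict(x, a, b, c) - y) ** 2))
print(f"RMSE with a compactly supported proposal: {rmse:.4f}")
```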


Related research

01/13/2020 · Approximation smooth and sparse functions by deep neural networks without saturation
Constructing neural networks for function approximation is a classical a...

02/28/2017 · Deep Semi-Random Features for Nonlinear Function Approximation
We propose semi-random features for nonlinear function approximation. Th...

12/26/2018 · Towards a Theoretical Understanding of Hashing-Based Neural Nets
Parameter reduction has been an important topic in deep learning due to ...

05/12/2023 · ∂𝔹 nets: learning discrete functions by gradient descent
∂𝔹 nets are differentiable neural networks that learn discrete boolean-v...

01/27/2021 · A Note on the Representation Power of GHHs
In this note we prove a sharp lower bound on the necessary number of nes...

11/07/2016 · Neural Taylor Approximations: Convergence and Exploration in Rectifier Networks
Modern convolutional networks, incorporating rectifiers and max-pooling,...

06/17/2018 · Fast Convex Pruning of Deep Neural Networks
We develop a fast, tractable technique called Net-Trim for simplifying a...