Deep Semi-Random Features for Nonlinear Function Approximation

02/28/2017
by Kenji Kawaguchi, et al.

We propose semi-random features for nonlinear function approximation. The flexibility of semi-random features lies between the fully adjustable units of deep learning and the random features used in kernel methods. For one-hidden-layer models with semi-random features, we prove, without unrealistic assumptions, that the model classes contain an arbitrarily good function as the width increases (universality) and that, despite non-convexity, we can find such a good function (optimization theory) that generalizes to unseen data (generalization bound). For deep models, again without unrealistic assumptions, we prove universal approximation ability, a lower bound on approximation error, a partial optimization guarantee, and a generalization bound. Depending on the problem, the generalization bound of deep semi-random features can be exponentially better than the known bounds for deep ReLU nets; our generalization error bound can be independent of the depth, the number of trainable weights, and the input dimensionality. In experiments, we show that semi-random features can match the performance of neural networks using slightly more units, and outperform random features using significantly fewer units. Moreover, we introduce a new implicit ensemble method based on semi-random features.
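The idea of sitting "between" random features and fully trainable units is easiest to see in a small sketch. The one below assumes a linear semi-random unit of the form step(r^T x) * (w^T x), where the gate direction r is drawn at random and then frozen while w is adjustable; the names (semi_random_features, R, W) are illustrative rather than taken from the paper, and for simplicity W is also sampled and only the convex output layer is fit instead of the full training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def semi_random_features(X, R, W):
    """Semi-random features phi_j(x) = step(x . r_j) * (x . w_j).

    X: (n, d) inputs; R: (d, m) frozen random gate directions;
    W: (d, m) adjustable directions (trainable in the full method).
    """
    gates = (X @ R > 0).astype(X.dtype)   # random, non-adjustable gating
    return gates * (X @ W)                # adjustable linear part

# Toy usage on a 1-D target: the gates R stay random and frozen; here W is
# also sampled and only the convex output layer is solved by least squares.
n, d, m = 200, 5, 50
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

R = rng.standard_normal((d, m))
W = rng.standard_normal((d, m))
Phi = semi_random_features(X, R, W)
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("train MSE:", float(np.mean((Phi @ beta - y) ** 2)))
```

With R fixed at random and W (plus the output weights) left trainable, the unit is less flexible than a fully adjustable ReLU unit but strictly more flexible than a purely random feature, which is the regime the theory above analyzes.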

Related research

08/19/2020
On the Approximation Lower Bound for Neural Nets with Random Weights
A random net is a shallow neural network where the hidden layer is froze...

10/10/2018
Random ReLU Features: Universality, Approximation, and Composition
We propose random ReLU features models in this work. Its motivation is r...

06/30/2023
Efficient uniform approximation using Random Vector Functional Link networks
A Random Vector Functional Link (RVFL) network is a depth-2 neural netwo...

08/09/2017
Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations
This article concerns the expressive power of depth in neural nets with ...

05/09/2023
A duality framework for generalization analysis of random feature models and two-layer neural networks
We consider the problem of learning functions in the ℱ_p,π and Barron sp...

10/05/2020
On the Universality of the Double Descent Peak in Ridgeless Regression
We prove a non-asymptotic distribution-independent lower bound for the e...

03/12/2018
R3Net: Random Weights, Rectifier Linear Units and Robustness for Artificial Neural Network
We consider a neural network architecture with randomized features, a si...