Function Approximation via Sparse Random Features

03/04/2021
by Abolfazl Hashemi, et al.

Random feature methods have been successful in various machine learning tasks, are easy to compute, and come with theoretical accuracy bounds. They serve as an alternative to standard neural networks since they can represent similar function spaces without a costly training phase. However, to achieve accuracy, random feature methods require more measurements than trainable parameters, limiting their use in data-scarce applications and in problems from scientific machine learning. This paper introduces the sparse random feature method, which learns parsimonious random feature models using techniques from compressive sensing. We provide uniform bounds on the approximation error for functions in a reproducing kernel Hilbert space depending on the number of samples and the distribution of features. The error bounds improve under additional structural conditions, such as coordinate sparsity, compact clusters of the spectrum, or rapid spectral decay. We show that the sparse random feature method outperforms shallow networks for well-structured functions and in applications to scientific machine learning tasks.
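
As a rough illustration of the idea described in the abstract (a sketch, not the paper's exact algorithm), the snippet below draws random Fourier features and then selects a sparse subset of them via an l1-regularized least-squares fit, which plays the role of the compressive-sensing step. The feature distribution, target function, and solver settings are all illustrative assumptions.

```python
# Minimal sketch of a sparse random feature fit. Assumptions: random Fourier
# features with Gaussian weights, and scikit-learn's Lasso as the sparse
# (l1-regularized) solver. Hyperparameters are illustrative, not the paper's.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Target: a "well-structured" function depending on few coordinates in high dim.
d, m, N = 10, 2000, 200              # input dim, number of features, samples
f = lambda X: np.sin(X[:, 0]) + 0.5 * np.cos(2 * X[:, 1])

X = rng.uniform(-1, 1, size=(N, d))  # training samples
y = f(X)

# Draw random feature weights omega_j from a chosen distribution rho.
omega = rng.normal(0.0, 1.0, size=(m, d))
b = rng.uniform(0, 2 * np.pi, size=m)
A = np.cos(X @ omega.T + b)          # N x m random feature matrix

# Compressive-sensing step: l1 regularization keeps only a few features,
# so the model can be accurate with far fewer measurements than features.
lasso = Lasso(alpha=1e-3, max_iter=50_000).fit(A, y)
c = lasso.coef_
print(f"nonzero coefficients: {np.count_nonzero(c)} of {m}")

# Evaluate the learned sparse random feature model on fresh points.
X_test = rng.uniform(-1, 1, size=(500, d))
y_hat = np.cos(X_test @ omega.T + b) @ c + lasso.intercept_
print("test RMSE:", np.sqrt(np.mean((y_hat - f(X_test)) ** 2)))
```

With a coordinate-sparse target like the one above, the l1 fit typically zeroes out most of the 2000 candidate features, which is the parsimony the abstract refers to.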
