Additive function approximation in the brain

09/05/2019
by Kameron Decker Harris, et al.

Many biological learning systems such as the mushroom body, hippocampus, and cerebellum are built from sparsely connected networks of neurons. For a new understanding of such networks, we study the function spaces induced by sparse random features and characterize what functions may and may not be learned. A network with d inputs per neuron is found to be equivalent to an additive model of order d, whereas with a degree distribution the network combines additive terms of different orders. We identify three specific advantages of sparsity: additive function approximation is a powerful inductive bias that limits the curse of dimensionality, sparse networks are stable to outlier noise in the inputs, and sparse random features are scalable. Thus, even simple brain architectures can be powerful function approximators. Finally, we hope that this work helps popularize kernel theories of networks among computational neuroscientists.
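The equivalence described above — a random-features layer in which each hidden unit sees only d of the inputs behaves like an additive model of order d — can be sketched in a few lines. This is an illustrative toy, not the paper's code; the feature count, ReLU nonlinearity, Gaussian weights, and ridge readout are all our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_random_features(X, n_features=2000, d=2, rng=rng):
    """Random-feature layer with in-degree d per hidden unit.

    Each unit draws a random subset of d input coordinates and a random
    Gaussian weight vector, then applies a ReLU. Because every unit mixes
    only d coordinates, any readout over these features is a sum of
    functions of d variables -- an additive model of order d.
    """
    n_samples, n_inputs = X.shape
    Phi = np.empty((n_samples, n_features))
    for j in range(n_features):
        idx = rng.choice(n_inputs, size=d, replace=False)  # sparse connectivity
        w = rng.normal(size=d)
        Phi[:, j] = np.maximum(X[:, idx] @ w, 0.0)  # ReLU feature
    return Phi / np.sqrt(n_features)

# Linear ridge readout on the sparse random features, fit to an
# order-2 additive target (each term depends on at most 2 inputs).
X = rng.normal(size=(200, 10))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]
Phi = sparse_random_features(X, d=2)
lam = 1e-3
beta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
y_hat = Phi @ beta
```

Varying d (or drawing it from a degree distribution, as in the paper) changes the order of the additive terms the network can represent.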
