Sampling weights of deep neural networks

06/29/2023
by Erik Lien Bolager, et al.

We introduce a probability distribution, combined with an efficient sampling algorithm, for the weights and biases of fully-connected neural networks. In a supervised learning context, no iterative optimization or gradient computations of internal network parameters are needed to obtain a trained network. The sampling is based on the idea of random feature models. However, instead of a data-agnostic distribution, e.g., a normal distribution, we use both the input and the output training data of the supervised learning problem to sample both shallow and deep networks. We prove that the sampled networks we construct are universal approximators. We also show that our sampling scheme is invariant to rigid body transformations and scaling of the input data, which implies that many popular pre-processing techniques are no longer required. For Barron functions, we show that the L^2-approximation error of sampled shallow networks decreases inversely with the square root of the number of neurons. In numerical experiments, we demonstrate that sampled networks achieve accuracy comparable to iteratively trained ones, but can be constructed orders of magnitude faster. Our test cases include a classification benchmark from OpenML, sampling of neural operators to represent maps in function spaces, and transfer learning using well-known architectures.
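To make the construction concrete, the following is a minimal NumPy sketch of data-driven weight sampling for a shallow tanh network. The pair-based weighting rule, the constants s1 and s2, and the helper names are illustrative assumptions rather than the exact distribution from the paper; only the overall structure follows the idea in the abstract: hidden weights and biases are drawn from pairs of training points (using both inputs and outputs), and the outer layer is fit by a single linear least-squares solve, as in random feature models, so no gradient-based training is needed.

```python
import numpy as np

def sample_shallow_network(X, y, n_neurons, rng=None, s1=1.0, s2=0.5):
    """Sketch of data-driven weight sampling for a shallow tanh network.

    Hidden weights are built from pairs of training points, with pairs
    favoured when their outputs differ strongly relative to their input
    distance; the outer layer is fit by least squares. The weighting rule
    and the constants s1, s2 are assumptions for illustration only.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    Y = np.asarray(y, dtype=float).reshape(n, -1)

    # Draw a pool of candidate pairs of distinct training points.
    i = rng.integers(0, n, size=4 * n_neurons)
    j = rng.integers(0, n, size=4 * n_neurons)
    mask = i != j
    i, j = i[mask], j[mask]

    # Favour pairs whose outputs change quickly over a short input distance
    # (assumed weighting; the paper's exact density may differ).
    dx = np.linalg.norm(X[j] - X[i], axis=1) + 1e-12
    dy = np.linalg.norm(Y[j] - Y[i], axis=1) + 1e-12
    p = dy / dx
    p /= p.sum()
    idx = rng.choice(len(i), size=n_neurons, replace=True, p=p)
    x1, x2 = X[i[idx]], X[j[idx]]

    # Each hidden weight points from x1 to x2; the bias places the
    # activation's transition region between the two points.
    diff = x2 - x1
    W = s1 * diff / (np.linalg.norm(diff, axis=1, keepdims=True) ** 2 + 1e-12)
    b = -np.einsum("ij,ij->i", W, x1) - s2

    # Random-feature step: hidden features, then a linear least-squares fit
    # for the outer layer, replacing iterative gradient-based training.
    H = np.tanh(X @ W.T + b)
    C, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W, b, C

def predict(X, W, b, C):
    return np.tanh(X @ W.T + b) @ C

# Example: fit a 1-D regression problem without any gradient descent.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(2 * X).ravel()
W, b, C = sample_shallow_network(X, y, n_neurons=100, rng=0)
print(np.mean((predict(X, W, b, C).ravel() - y) ** 2))  # small MSE expected
```

Because the hidden layer is fixed once sampled, the only trained parameters are the outer-layer coefficients, obtained from one linear solve; this is what makes such networks orders of magnitude faster to construct than iteratively trained ones.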
