Greedy Shallow Networks: A New Approach for Constructing and Training Neural Networks

05/24/2019
by Anton Dereventsov, et al.

We present a novel greedy approach for obtaining a single-hidden-layer neural network approximation of a target function using the ReLU activation function. In our approach we construct a shallow network with a greedy algorithm, where the set of admissible inner weights parametrizes the prescribed dictionary. To make the greedy selection feasible, we employ an integral representation of the network based on the ridgelet transform, which significantly reduces the cardinality of the dictionary. Our approach yields efficient architectures that can serve either as improved initializations in place of conventional random ones, or as fully trained networks, potentially eliminating the need for backpropagation-based training or calibration. Numerical experiments demonstrate the viability of the proposed concept and its advantages over classical techniques for constructing and training neural networks.
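
As a rough, self-contained illustration of the greedy construction described above (not the authors' implementation, which relies on the ridgelet-based reduction of the dictionary), the sketch below runs an orthogonal-greedy loop over a naive grid of inner weights (w, b): at each step it adds the ReLU atom most correlated with the current residual and refits the outer coefficients by least squares. The function names and the 1-D toy setup are our own assumptions.

```python
import numpy as np


def relu(z):
    """Elementwise ReLU."""
    return np.maximum(z, 0.0)


def greedy_shallow_network(x, y, n_neurons=20, n_grid=200):
    """Greedily select ReLU atoms relu(w*x + b) to approximate y(x).

    Illustrative sketch only: the dictionary is parametrized by the
    inner weights (w, b), here a naive grid with w in {+1, -1} and b
    on a uniform mesh, instead of the paper's ridgelet-based dictionary.
    """
    ws = np.concatenate([np.full(n_grid, 1.0), np.full(n_grid, -1.0)])
    bs = np.tile(np.linspace(-x.max(), -x.min(), n_grid), 2)
    A = relu(x[:, None] * ws[None, :] + bs[None, :])  # (n_samples, n_atoms)
    norms = np.linalg.norm(A, axis=0)
    norms[norms == 0.0] = 1.0  # guard against all-zero atoms

    selected = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(n_neurons):
        # Pick the atom most correlated with the current residual.
        corr = np.abs(A.T @ residual) / norms
        corr[selected] = -np.inf  # do not reselect an atom
        selected.append(int(np.argmax(corr)))
        # Orthogonal step: refit all outer coefficients by least squares.
        coef, *_ = np.linalg.lstsq(A[:, selected], y, rcond=None)
        residual = y - A[:, selected] @ coef
    return ws[selected], bs[selected], coef


if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 500)
    y = np.sin(3.0 * np.pi * x)  # toy 1-D target
    w, b, c = greedy_shallow_network(x, y, n_neurons=20)
    approx = relu(x[:, None] * w[None, :] + b[None, :]) @ c
    print("relative L2 error:", np.linalg.norm(y - approx) / np.linalg.norm(y))
```

The least-squares refit after each selection is what distinguishes the orthogonal greedy algorithm from pure matching pursuit; the resulting (w, b, c) triples can be read off directly as a single-hidden-layer network, or used as an initialization for further training.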


