Progressive Learning for Systematic Design of Large Neural Networks

10/23/2017
by Saikat Chatterjee, et al.

We develop an algorithm for the systematic design of a large artificial neural network using a progression property. We find that some non-linear activation functions, such as the rectified linear unit (ReLU) and its variants, satisfy this property. The systematic design addresses the choice of network size and the regularization of parameters. The number of nodes and layers in the network increases progressively, with the objective of consistently reducing an appropriate cost. Layers are optimized one at a time, with the relevant parameters learned by convex optimization. The regularization parameters of the convex optimization require little manual tuning. We also use random instances for some weight matrices, which reduces the number of parameters to be learned. The resulting network is expected to generalize well owing to appropriate regularization and the use of random weights in the layers. This expectation is verified by extensive classification and regression experiments on standard databases.
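The abstract describes growing a network layer by layer, using random weight matrices for the hidden transforms and convex optimization for the learned parameters, and stopping when the cost no longer decreases. The sketch below is a simplified illustration of that idea, not the authors' exact algorithm: each new layer applies a random ReLU feature map, the output matrix is refit by closed-form ridge regression (a stand-in for the paper's convex optimization step), and a layer is kept only if it reduces the training cost. All function names, shapes, and the hyperparameters `hidden`, `lam`, and `tol` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectified linear unit, the activation the paper builds on
    return np.maximum(x, 0.0)

def ridge_output(H, T, lam=1e-2):
    # Closed-form regularized least squares for the output matrix:
    # O = T H^T (H H^T + lam I)^{-1}; stands in for the convex step.
    d = H.shape[0]
    return T @ H.T @ np.linalg.inv(H @ H.T + lam * np.eye(d))

def progressive_train(X, T, max_layers=5, hidden=64, lam=1e-2, tol=1e-4):
    """Grow a ReLU network layer by layer while the cost keeps dropping.

    X: (features, samples) input matrix; T: (targets, samples) target matrix.
    Returns the list of accepted random weight matrices and the output matrix.
    """
    H = X
    layers = []
    O = ridge_output(H, T, lam)
    cost = np.linalg.norm(T - O @ H)
    for _ in range(max_layers):
        # Random weight instance (not learned), scaled for stable activations
        W = rng.standard_normal((hidden, H.shape[0])) / np.sqrt(H.shape[0])
        H_new = relu(W @ H)
        O_new = ridge_output(H_new, T, lam)
        cost_new = np.linalg.norm(T - O_new @ H_new)
        if cost_new > cost - tol:
            break  # progression stalled: stop adding layers
        layers.append(W)
        H, O, cost = H_new, O_new, cost_new
    return layers, O
```

Because a candidate layer is accepted only when it strictly lowers the cost, the final network's training cost never exceeds that of the plain regularized linear fit on the raw input, mirroring the "consistent cost reduction" objective in the abstract.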

