Forward Thinking: Building and Training Neural Networks One Layer at a Time

06/08/2017
by Chris Hettinger, et al.

We present a general framework for training deep neural networks without backpropagation. This substantially decreases training time and also allows for the construction of deep networks from many kinds of learners, including networks whose layers are defined by functions that are not easily differentiated, such as decision trees. The main idea is that layers can be trained one at a time: once a layer is trained, the input data are mapped forward through it to create a new learning problem. The process is repeated, transforming the data through multiple layers one at a time and yielding a new data set that is expected to be better behaved, and on which a final output layer can achieve good performance. We call this approach forward thinking and demonstrate a proof of concept by achieving state-of-the-art accuracy for convolutional neural networks on the MNIST dataset. We also provide a general mathematical formulation of forward thinking that allows other types of deep learning problems to be considered.
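To make the layer-wise procedure concrete, here is a minimal sketch in Python. This is not the authors' implementation: it assumes scikit-learn as the learning library, uses small ensembles of shallow decision trees as the hidden "layers" (one of the non-differentiable learners the abstract mentions), and treats each layer's class-probability outputs as the features for the next stage; the dataset, layer count, and tree depths are illustrative choices only.

```python
# A minimal sketch of forward-thinking-style, greedy layer-wise training.
# Assumptions (not from the paper): scikit-learn estimators as layers,
# each hidden "layer" being a small ensemble of shallow decision trees.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def train_layer(X, y, n_units=8, seed=0):
    """Train one 'layer': n_units shallow trees fit directly on (X, y)."""
    rng = np.random.RandomState(seed)
    units = []
    for _ in range(n_units):
        # Each unit sees a bootstrap sample so the units differ.
        idx = rng.randint(0, len(X), size=len(X))
        tree = DecisionTreeClassifier(max_depth=4, random_state=rng)
        tree.fit(X[idx], y[idx])
        units.append(tree)
    return units

def apply_layer(units, X):
    # Map the data forward: concatenate each unit's class-probability output.
    return np.hstack([u.predict_proba(X) for u in units])

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

layers = []
Z_tr, Z_te = X_tr, X_te
for depth in range(2):  # two hidden layers, trained one at a time
    units = train_layer(Z_tr, y_tr, seed=depth)
    layers.append(units)
    # Once a layer is trained, it is frozen and the data are pushed
    # through it, creating a new (hopefully better-behaved) problem.
    Z_tr = apply_layer(units, Z_tr)
    Z_te = apply_layer(units, Z_te)

# A final output layer is trained on the transformed data.
head = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, head.predict(Z_te)))
```

Because each layer is fit greedily and then frozen, no gradients ever flow between layers, which is what lets non-differentiable learners such as decision trees serve as layers in place of standard neural units.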


Related research

Forward Thinking: Building Deep Random Forests (05/20/2017)
Input Fast-Forwarding for Better Deep Learning (05/23/2017)
Layer-Specific Adaptive Learning Rates for Deep Networks (10/15/2015)
Lattice Fusion Networks for Image Denoising (11/28/2020)
Deep Networks with Stochastic Depth (03/30/2016)
Mixing Implicit and Explicit Deep Learning with Skip DEQs and Infinite Time Neural ODEs (Continuous DEQs) (01/28/2022)
Layer Collaboration in the Forward-Forward Algorithm (05/21/2023)
