Forward Thinking: Building Deep Random Forests

05/20/2017
by Kevin Miller, et al.

The success of deep neural networks has inspired many to wonder whether other learners could benefit from deep, layered architectures. We present a general framework for deep learning, called forward thinking, that generalizes the architectural flexibility and sophistication of deep neural networks while also allowing for (i) learning functions other than neurons in the network and (ii) the ability to adaptively deepen the network as needed to improve results. This is done by training one layer at a time: once a layer is trained, the input data are mapped forward through the layer to create a new learning problem. The process is then repeated, transforming the data through multiple layers one at a time and yielding a new dataset that is expected to be better behaved, and on which a final output layer can achieve good performance. In the case where the neurons of a deep neural network are replaced with decision trees, we call the result a Forward Thinking Deep Random Forest (FTDRF). We demonstrate a proof of concept by applying FTDRF to the MNIST dataset. We also provide a general mathematical formulation that allows other types of deep learning problems to be considered.
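
To make the layer-by-layer procedure concrete, here is a minimal sketch of an FTDRF in Python, assuming scikit-learn. The forward mapping chosen below (appending each layer's class probabilities to its input features), the validation-based stopping rule, and all function names are illustrative assumptions, not the paper's exact construction.

# Minimal sketch of a forward-thinking deep random forest (FTDRF),
# assuming scikit-learn. The forward mapping and stopping rule here
# are illustrative choices, not the paper's exact construction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_ftdrf(X, y, max_layers=5, trees_per_layer=100, seed=0):
    """Train layers one at a time, deepening while validation accuracy improves."""
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    layers, best_acc = [], 0.0
    for depth in range(max_layers):
        layer = RandomForestClassifier(
            n_estimators=trees_per_layer, random_state=seed + depth)
        layer.fit(X_tr, y_tr)           # train the current layer in isolation
        acc = layer.score(X_val, y_val)
        if layers and acc <= best_acc:  # adaptive depth: stop when deepening no longer helps
            break
        best_acc = acc
        layers.append(layer)
        # Map the data forward through the trained layer: the augmented
        # features define the "new learning problem" for the next layer.
        X_tr = np.hstack([X_tr, layer.predict_proba(X_tr)])
        X_val = np.hstack([X_val, layer.predict_proba(X_val)])
    return layers

def ftdrf_predict(layers, X):
    """Push inputs forward through every layer; the last layer acts as the output layer."""
    for layer in layers[:-1]:
        X = np.hstack([X, layer.predict_proba(X)])
    return layers[-1].predict(X)

On MNIST-style data this would be used as preds = ftdrf_predict(train_ftdrf(X_train, y_train), X_test). In practice, out-of-fold rather than in-sample probabilities would reduce the leakage this simplified forward mapping introduces.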


Related research

06/08/2017 · Forward Thinking: Building and Training Neural Networks One Layer at a Time
We present a general framework for training deep neural networks without...

11/05/2018 · How deep is deep enough? - Optimizing deep neural network architecture
Deep neural networks use stacked layers of feature detectors to repeated...

12/29/2020 · Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity
Despite the latest prevailing success of deep neural networks (DNNs), se...

04/25/2021 · Vector Neurons: A General Framework for SO(3)-Equivariant Networks
Invariance and equivariance to the rotation group have been widely discu...

11/06/2014 · How transferable are features in deep neural networks?
Many deep neural networks trained on natural images exhibit a curious ph...

09/12/2022 · The Mori-Zwanzig formulation of deep learning
We develop a new formulation of deep learning based on the Mori-Zwanzig ...

03/08/2019 · Approximating Optimisation Solutions for Travelling Officer Problem with Customised Deep Learning Network
Deep learning has been extended to a number of new domains with critical...
