Adaptive Reticulum

12/12/2019
by Giuseppe Nuti, et al.

Neural Networks and Random Forests, two popular supervised-learning techniques that are seemingly disconnected in their formulation and optimization methods, have recently been linked in a single construct. The connection pivots on assembling an artificial Neural Network from nodes whose gate-like activation mimics a tree split, optimized with the standard approach of recursively applying the chain rule to update the parameters. Yet two main challenges have impeded wide adoption of this hybrid approach: (a) the inability of global gradient descent techniques to optimize hierarchical parameters (as introduced by the gate function); and (b) the construction of the tree structure, which has relied either on standard decision tree algorithms to learn the network topology or on incrementally (and heuristically) searching the space at random. We propose a probabilistic construct that exploits the idea of a node's unexplained potential (the total error channeled through the node) to decide where to expand further, mimicking standard tree construction in a Neural Network setting, alongside a modified gradient descent that locally optimizes an expanded node before each global optimization. The probabilistic approach lets us evaluate each new split as a ratio of likelihoods that balances the statistical improvement in explaining the evidence against the additional model complexity, thus providing a natural stopping condition. The result is a novel classification and regression technique that leverages the strengths of both: a tree structure that grows naturally and is simple to interpret, combined with the plasticity of Neural Networks, which allows for soft margins and slanted boundaries.
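To make the core ideas concrete, below is a minimal NumPy sketch of the three ingredients the abstract describes: a sigmoid gate standing in for a soft, slanted tree split trained by the chain rule; an unexplained-potential score (error channeled through a node); and a likelihood-ratio split test that weighs fit against complexity. The abstract gives no equations, so everything here is an illustrative assumption: the name SoftSplitNode, the squared-error loss, the Gaussian likelihood, and the BIC-style penalty are stand-ins, not the paper's definitions.

```python
import numpy as np

def sigmoid(z):
    # Clipped for numerical safety on large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

class SoftSplitNode:
    """A gate-like node: a sigmoid of a linear projection mimics an
    oblique tree split with a soft margin, so routing is differentiable
    and trainable by gradient descent (hypothetical name and structure)."""

    def __init__(self, dim, rng):
        self.w = rng.normal(scale=0.1, size=dim)  # split direction (slanted boundary)
        self.b = 0.0     # split offset
        self.left = 0.0  # leaf value reached when the gate is ~1
        self.right = 0.0 # leaf value reached when the gate is ~0

    def gate(self, X):
        return sigmoid(X @ self.w + self.b)  # soft routing probability

    def predict(self, X):
        g = self.gate(X)
        return g * self.left + (1.0 - g) * self.right

    def sgd_step(self, X, y, lr=0.5):
        # One chain-rule step on mean squared error; in a full network of
        # such gates, backpropagation would flow through every node.
        g = self.gate(X)
        err = self.predict(X) - y
        dg = err * (self.left - self.right) * g * (1.0 - g)
        self.w -= lr * X.T @ dg / len(y)
        self.b -= lr * dg.mean()
        self.left -= lr * (err * g).mean()
        self.right -= lr * (err * (1.0 - g)).mean()

def unexplained_potential(residuals, routing_prob):
    """Total squared error channeled through a node, weighted by how much
    of each sample the gates route there; the node with the largest value
    is the candidate for further expansion."""
    return float(np.sum(routing_prob * residuals ** 2))

def split_score(sse_before, sse_after, n, extra_params, sigma2=1.0):
    """Gaussian log-likelihood ratio of the expanded vs. unexpanded model
    minus a BIC-style complexity penalty (an assumed stand-in for the
    paper's criterion); accept the split only if the score is positive."""
    gain = (sse_before - sse_after) / (2.0 * sigma2)
    return gain - 0.5 * extra_params * np.log(n)

# Tiny usage example: fit one soft split to a linearly separable target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0.0, 1.0, -1.0)

node = SoftSplitNode(dim=2, rng=rng)
sse_before = float(np.sum((np.mean(y) - y) ** 2))  # unexpanded: one constant leaf
for _ in range(2000):
    node.sgd_step(X, y)
resid = node.predict(X) - y
sse_after = float(np.sum(resid ** 2))

print("unexplained potential:", unexplained_potential(resid, np.ones(len(y))))
print("accept split?", split_score(sse_before, sse_after, len(y), extra_params=4) > 0)
```

In the paper's setting these pieces would operate jointly over a growing network of such gates, with the modified local-then-global gradient descent handling the hierarchical parameters; the sketch above shows only a single node in isolation.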

