Stochastic Gradient Trees

01/23/2019
by Henry Gouk, et al.

We present an online algorithm that induces decision trees using gradient information as the source of supervision. In contrast to previous approaches to gradient-based tree learning, we do not require soft splits or the construction of a new tree for every update. In experiments, our method performs comparably to standard incremental classification trees and outperforms state-of-the-art incremental regression trees. We also show how the method can be used to construct a novel type of neural network layer suited to learning representations from tabular data, and find that it increases the accuracy of multi-class and multi-label classification.
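
To make the idea concrete, below is a minimal Python sketch of gradient-based incremental tree growth in the spirit described above: each leaf accumulates first- and second-order gradient statistics for candidate splits and is converted into an internal node once a split looks worthwhile. The class name, the boosting-style gain score, and the fixed candidate thresholds are illustrative assumptions, not the paper's exact split criterion.

```python
from collections import defaultdict


class LeafStats:
    """Accumulates gradient/hessian sums for a leaf and its candidate splits.

    Illustrative sketch only; the actual split test used by Stochastic
    Gradient Trees differs from this boosting-style gain.
    """

    def __init__(self, thresholds):
        # thresholds: one list of candidate split values per feature
        # (a fixed grid is an illustrative simplification).
        self.thresholds = thresholds
        self.n = 0
        self.grad_sum = 0.0
        self.hess_sum = 0.0
        # (feature, threshold) -> [grad_left, hess_left, grad_right, hess_right]
        self.split_stats = defaultdict(lambda: [0.0, 0.0, 0.0, 0.0])

    def update(self, x, grad, hess):
        """Fold one example's gradient/hessian into the leaf's statistics."""
        self.n += 1
        self.grad_sum += grad
        self.hess_sum += hess
        for f, candidates in enumerate(self.thresholds):
            for t in candidates:
                side = self.split_stats[(f, t)]
                if x[f] <= t:
                    side[0] += grad
                    side[1] += hess
                else:
                    side[2] += grad
                    side[3] += hess

    def prediction(self, lam=1.0):
        """Newton-style leaf value, as used in gradient boosting."""
        return -self.grad_sum / (self.hess_sum + lam)

    def best_split(self, lam=1.0):
        """Return (gain, feature, threshold) under a boosting-style gain score."""
        def score(g, h):
            return g * g / (h + lam)

        base = score(self.grad_sum, self.hess_sum)
        best_gain, best_feature, best_threshold = 0.0, None, None
        for (f, t), (gl, hl, gr, hr) in self.split_stats.items():
            gain = score(gl, hl) + score(gr, hr) - base
            if gain > best_gain:
                best_gain, best_feature, best_threshold = gain, f, t
        return best_gain, best_feature, best_threshold
```

An online tree built on top of this would route each incoming example to a leaf, compute grad and hess from the current leaf prediction and the loss (for squared-error regression, grad is prediction minus target and hess is 1), call update, and periodically call best_split to decide whether to turn the leaf into an internal node, so no soft splits and no rebuilding of the tree are needed.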

Related research

Interpretable Reinforcement Learning via Differentiable Decision Trees (03/22/2019)
Decision trees are ubiquitous in machine learning for their ease of use ...

LdSM: Logarithm-depth Streaming Multi-label Decision Trees (05/24/2019)
We consider multi-label classification where the goal is to annotate eac...

Autoencoder Trees (09/26/2014)
We discuss an autoencoder model in which the encoding and decoding funct...

Sequential Outlier Detection based on Incremental Decision Trees (03/09/2018)
We introduce an online outlier detection algorithm to detect outliers in...

Learning concise representations for regression by evolving networks of trees (07/03/2018)
We propose and study a method for learning interpretable representations...

Learning Accurate Decision Trees with Bandit Feedback via Quantized Gradient Descent (02/15/2021)
Decision trees provide a rich family of highly non-linear but efficient ...

The Tree Ensemble Layer: Differentiability meets Conditional Computation (02/18/2020)
Neural networks and tree ensembles are state-of-the-art learners, each w...
