Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity

12/29/2020
by Jianghao Shen, et al.

Despite the prevailing success of deep neural networks (DNNs), several concerns have been raised against their usage, including the lack of interpretability, the gap between DNNs and other well-established machine learning models, and the growing computational cost. A number of recent works [1], [2], [3] explored the alternative of sequentially stacking decision tree/random forest building blocks in a purely feed-forward way, with no need for back propagation. Since decision trees enjoy inherent reasoning transparency, such deep forest models can also facilitate the understanding of the internal decision-making process. This paper further extends the deep forest idea in several important aspects. First, we employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a. soft routing, rather than hard binary decisions. Besides enhancing flexibility, this also enables non-greedy optimization of each tree. Second, we propose an innovative topology learning strategy: every node in the tree now maintains a new learnable hyperparameter indicating the probability that it will become a leaf node. In that way, the tree jointly optimizes both its parameters and its topology during training. Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve better or comparable performance than [1], [3], with dramatically reduced model complexity. For example, our model with only 1 layer of 15 trees performs comparably with the model in [3] with 2 layers of 2000 trees each.
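To make the two ideas above concrete, here is a minimal sketch (not the authors' implementation) of a probabilistic tree that combines soft routing with a learnable per-node leaf probability. Each internal node routes a sample left with probability `sigmoid(w·x + b)` instead of making a hard split, and a separate learnable logit (`leaf_logit`, a hypothetical name) blends the node's own class distribution with its subtree's prediction, so the effective topology can be learned jointly with the split parameters. All names and initialization choices here are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SoftNode:
    """One node of a probabilistic (soft-routing) tree.

    Instead of a hard left/right decision, the node sends a sample left
    with probability sigmoid(w.x + b). It also keeps a learnable logit
    whose sigmoid is the probability that the node acts as a leaf; the
    final prediction blends the node's own class distribution with the
    soft-routed subtree prediction, so topology is learned with the rest.
    """

    def __init__(self, dim, n_classes, depth, max_depth, rng):
        self.w = rng.normal(scale=0.1, size=dim)   # split direction
        self.b = 0.0                               # split offset
        self.leaf_logit = 0.0                      # sigmoid(.) = P(node is a leaf)
        self.dist = np.ones(n_classes) / n_classes # per-node class distribution
        if depth < max_depth:
            self.left = SoftNode(dim, n_classes, depth + 1, max_depth, rng)
            self.right = SoftNode(dim, n_classes, depth + 1, max_depth, rng)
        else:
            self.left = self.right = None          # forced leaf at max depth

    def predict(self, x):
        if self.left is None:
            return self.dist
        p_leaf = sigmoid(self.leaf_logit)
        p_left = sigmoid(self.w @ x + self.b)      # soft routing decision
        subtree = (p_left * self.left.predict(x)
                   + (1.0 - p_left) * self.right.predict(x))
        # blend "stop here" and "keep routing" -- the learned-topology term
        return p_leaf * self.dist + (1.0 - p_leaf) * subtree

rng = np.random.default_rng(0)
tree = SoftNode(dim=4, n_classes=3, depth=0, max_depth=3, rng=rng)
probs = tree.predict(rng.normal(size=4))
```

Because every blend is a convex combination of valid class distributions, `probs` is itself a valid distribution; in the paper's setting the split weights, leaf logits, and node distributions would all be optimized non-greedily (e.g. by gradient descent on a classification loss) rather than set by greedy splitting.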


research · 11/12/2015
Efficient non-greedy optimization of decision trees
Decision trees and randomized forests are widely used in computer vision...

research · 02/13/2023
Multiple Instance Learning with Trainable Decision Tree Ensembles
A new random forest based model for solving the Multiple Instance Learni...

research · 10/16/2022
Positive-Unlabeled Learning using Random Forests via Recursive Greedy Risk Minimization
The need to learn from positive and unlabeled data, or PU learning, aris...

research · 05/20/2017
Forward Thinking: Building Deep Random Forests
The success of deep neural networks has inspired many to wonder whether ...

research · 07/04/2018
Generating Mandarin and Cantonese F0 Contours with Decision Trees and BLSTMs
This paper models the fundamental frequency contours on both Mandarin an...

research · 07/03/2022
DecisioNet – A Binary-Tree Structured Neural Network
Deep neural networks (DNNs) and decision trees (DTs) are both state-of-t...

research · 12/03/2018
Deep Hierarchical Machine: a Flexible Divide-and-Conquer Architecture
We propose Deep Hierarchical Machine (DHM), a model inspired from the di...
