Learning Transformations for Classification Forests

12/19/2013
by Qiang Qiu, et al.

This work introduces a transformation-based learner model for classification forests. The weak learner at each split node plays a crucial role in a classification tree. We propose to optimize the splitting objective by learning a linear transformation on subspaces, using the nuclear norm as the optimization criterion. The learned linear transformation restores a low-rank structure for data from the same class and, at the same time, maximizes the separation between different classes, thereby improving the performance of the split function. Theoretical and experimental results support the proposed framework.
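The criterion described above can be sketched numerically: the nuclear norm (sum of singular values) of each transformed class block is driven down, while the nuclear norm of all transformed data together is kept large. The function names, the identity transformation, and the toy data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nuclear_norm(X):
    # Nuclear norm = sum of singular values of X.
    return np.linalg.svd(X, compute_uv=False).sum()

def low_rank_split_objective(T, class_blocks):
    # Hypothetical sketch of the splitting criterion: small within-class
    # nuclear norms (low rank per class) minus the nuclear norm of all
    # data combined (which rewards between-class separation). Lower is better.
    within = sum(nuclear_norm(T @ Xc) for Xc in class_blocks)
    total = nuclear_norm(T @ np.hstack(class_blocks))
    return within - total

rng = np.random.default_rng(0)
# Two toy classes lying near different one-dimensional subspaces of R^3.
X1 = np.outer(rng.standard_normal(3), rng.standard_normal(20)) \
     + 0.01 * rng.standard_normal((3, 20))
X2 = np.outer(rng.standard_normal(3), rng.standard_normal(20)) \
     + 0.01 * rng.standard_normal((3, 20))

obj = low_rank_split_objective(np.eye(3), [X1, X2])
```

By the triangle inequality for the nuclear norm, the within-class term is never smaller than the combined term, so the objective is non-negative; a learned transformation would seek to shrink it toward zero on within-class data while preserving separation.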


Related research

09/09/2013 · Learning Transformations for Clustering and Classification
A low-rank transformation learning framework for subspace clustering and...

11/07/2016 · One Class Splitting Criteria for Random Forests
Random Forests (RFs) are strong machine learning tools for classificatio...

02/05/2019 · Survival Forests under Test: Impact of the Proportional Hazards Assumption on Prognostic and Predictive Forests for ALS Survival
We investigate the effect of the proportional hazards assumption on prog...

12/14/2022 · MABSplit: Faster Forest Training Using Multi-Armed Bandits
Random forests are some of the most widely used machine learning models ...

08/01/2013 · Domain-invariant Face Recognition using Learned Low-rank Transformation
We present a low-rank transformation approach to compensate for face var...

08/17/2020 · Stochastic Optimization Forests
We study conditional stochastic optimization problems, where we leverage...

11/22/2017 · ForestHash: Semantic Hashing With Shallow Random Forests and Tiny Convolutional Networks
Hash codes are efficient data representations for coping with the ever g...
