The Power of Unbiased Recursive Partitioning: A Unifying View of CTree, MOB, and GUIDE

06/24/2019
by Lisa Schlosser, et al.

A core step of every algorithm for learning regression trees is the selection of the best splitting variable from the available covariates and of the corresponding split point. Early tree algorithms (e.g., AID, CART) employed greedy search strategies, directly comparing all possible split points in all available covariates. However, subsequent research showed that this approach is biased towards selecting covariates with more potential split points. Therefore, unbiased recursive partitioning algorithms have been suggested (e.g., QUEST, GUIDE, CTree, MOB) that first select the covariate based on statistical inference, using p-values that are adjusted for the number of possible split points. In a second step, a split point optimizing some objective function is selected within the chosen covariate. However, different unbiased tree algorithms obtain these p-values from different inference frameworks, and their relative advantages and disadvantages are not yet well understood. Therefore, three popular approaches are considered here: classical categorical association tests (as in GUIDE), conditional inference (as in CTree), and parameter instability tests (as in MOB). First, these are embedded into a common inference framework encompassing parametric model trees, in particular linear model trees. Second, it is assessed how different building blocks from this common framework affect the power of the algorithms to select the appropriate covariates for splitting: the observation-wise goodness-of-fit measure (residuals vs. model scores), dichotomization of residuals/scores at zero, and binning of possible split variables. This shows that the goodness-of-fit measure in particular is crucial for the power of the procedures, with undichotomized model scores performing much better in many scenarios.
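The two-step procedure shared by these algorithms can be illustrated with a minimal sketch. This is not an implementation of GUIDE, CTree, or MOB: it uses a plain Pearson correlation test between each covariate and the residuals of an intercept-only model as the association measure (the actual algorithms use adjusted p-values from their respective inference frameworks), and the function name `select_split` and the toy data are invented for illustration.

```python
import numpy as np
from scipy import stats

def select_split(X, y):
    """Sketch of two-step unbiased split selection.

    Step 1: choose the covariate whose association with an
    observation-wise goodness-of-fit measure (here: residuals of an
    intercept-only model) has the smallest p-value.
    Step 2: search for the best split point only within that covariate,
    minimizing the sum of squared errors of the two resulting subgroups.
    """
    resid = y - y.mean()  # residuals/scores of the intercept-only model
    pvals = [stats.pearsonr(X[:, j], resid)[1] for j in range(X.shape[1])]
    j = int(np.argmin(pvals))

    # Exhaustive split-point search, restricted to the selected covariate,
    # so that the variable choice is not biased by the number of split points.
    best_sse, best_cut = np.inf, None
    for cut in np.unique(X[:, j])[:-1]:
        left, right = y[X[:, j] <= cut], y[X[:, j] > cut]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_cut = sse, cut
    return j, best_cut, pvals

# Toy example: only the second covariate drives a step change in y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.where(X[:, 1] > 0, 2.0, -2.0) + 0.1 * rng.normal(size=200)
j, cut, pvals = select_split(X, y)
print(j, round(cut, 3))  # the informative covariate and a cut near 0
```

Because the covariate is selected by a p-value rather than by the best achievable objective, a covariate with many distinct values gets no mechanical advantage over one with few, which is the key idea behind unbiased partitioning.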


