An Experimental Comparison of Old and New Decision Tree Algorithms

11/08/2019
by Arman Zharmagambetov et al.

This paper presents a detailed comparison of a recently proposed algorithm for optimizing decision trees, tree alternating optimization (TAO), with popular, established algorithms such as CART and C5.0. We compare their performance on a range of datasets differing in size, dimensionality, and number of classes, across two performance factors: accuracy and tree size (measured by the number of leaves or the depth of the tree). We find that TAO achieves higher accuracy on every dataset, often by a large margin.
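TAO is not available in standard machine-learning libraries, but the evaluation protocol the abstract describes can be illustrated with a CART-style baseline. The sketch below, which assumes scikit-learn is installed (its `DecisionTreeClassifier` is an optimized CART variant) and uses the Iris dataset purely as a placeholder, computes the two metrics the paper compares: test accuracy and tree size (leaf count and depth).

```python
# Hypothetical CART-style baseline illustrating the paper's metrics:
# test accuracy and tree size (number of leaves / depth).
# scikit-learn's DecisionTreeClassifier implements an optimized CART variant;
# TAO itself is not part of standard libraries.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0
)

cart = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

accuracy = cart.score(X_te, y_te)   # performance factor 1: accuracy
n_leaves = cart.get_n_leaves()      # performance factor 2a: tree size (leaves)
depth = cart.get_depth()            # performance factor 2b: tree size (depth)

print(f"accuracy={accuracy:.3f}, leaves={n_leaves}, depth={depth}")
```

Comparing algorithms then amounts to repeating this loop per dataset and tabulating accuracy against leaf count, since a method that wins on accuracy only by growing much larger trees is a weaker result.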


