Booster: An Accelerator for Gradient Boosting Decision Trees

11/03/2020
by Mingxuan He, et al.

We propose Booster, a novel accelerator for gradient boosting trees based on the unique characteristics of gradient boosting models. We observe that the dominant steps of gradient boosting training (accounting for 90-98% of the time) involve simple, fine-grained, independent operations on small-footprint data structures (e.g., accumulating and comparing values in the structures). Unfortunately, existing multicores and GPUs are unable to harness this parallelism because they do not support massively-parallel data structure accesses that are irregular and data-dependent. By employing a scalable sea-of-small-SRAMs approach and an SRAM bandwidth-preserving mapping of data record fields to the SRAMs, Booster achieves significantly more parallelism (e.g., 3200-way parallelism) than multicores and GPUs. In addition, Booster employs a redundant data representation that significantly lowers the memory bandwidth demand. Our simulations reveal that Booster achieves speedups of 11.4x and 6.4x over an ideal 32-core multicore and an ideal GPU, respectively. Based on ASIC synthesis of FPGA-validated RTL using 45 nm technology, we estimate a Booster chip to occupy 60 mm^2 of area and dissipate 23 W when operating at a 1-GHz clock speed.
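The "accumulate and compare" pattern the abstract refers to is the histogram-based split search at the heart of GBDT training: per-bin gradient statistics are accumulated with fine-grained, data-dependent updates, then scanned to compare split gains. A minimal NumPy sketch of this step (function name, parameters, and the regularization term `lam` are illustrative, not Booster's implementation):

```python
import numpy as np

def best_split(binned_feature, grad, hess, n_bins=256, lam=1.0):
    """Histogram-based split search for one feature.

    Accumulate step: one irregular, data-dependent add per record
    into a small per-bin table (the small-footprint structure the
    accelerator parallelizes). Compare step: scan the bins to find
    the split with the highest gain.
    """
    # Accumulate per-bin gradient and hessian sums (unbuffered adds,
    # so repeated bin indices are handled correctly).
    g_hist = np.zeros(n_bins)
    h_hist = np.zeros(n_bins)
    np.add.at(g_hist, binned_feature, grad)
    np.add.at(h_hist, binned_feature, hess)

    # Compare: evaluate the standard second-order gain at every
    # candidate split point between adjacent bins.
    G, H = g_hist.sum(), h_hist.sum()
    gl, hl = np.cumsum(g_hist)[:-1], np.cumsum(h_hist)[:-1]
    gr, hr = G - gl, H - hl
    gain = gl**2 / (hl + lam) + gr**2 / (hr + lam) - G**2 / (H + lam)
    best = int(np.argmax(gain))
    return best, float(gain[best])
```

Each record's update touches only a few bytes of a small table, and records are independent of one another, which is why the abstract's sea-of-small-SRAMs design can exploit far more parallelism here than cache-line-oriented multicores or GPUs.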

