Out-of-Core GPU Gradient Boosting

05/19/2020
by Rong Ou, et al.

GPU-based algorithms have greatly accelerated many machine learning methods; however, GPU memory is typically smaller than main memory, limiting the size of the training data. In this paper, we describe an out-of-core GPU gradient boosting algorithm implemented in the XGBoost library. We show that much larger datasets can fit on a given GPU without degrading model accuracy or training time. To the best of our knowledge, this is the first out-of-core GPU implementation of gradient boosting. Similar approaches can be applied to other machine learning algorithms.
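As a rough illustration of the usage pattern the abstract describes, the sketch below trains a GPU booster on data streamed from disk through XGBoost's external-memory iterator interface, so the full dataset never has to reside in GPU memory at once. This is a minimal sketch, assuming a recent XGBoost release with the `DataIter`/`DMatrix` external-memory API and CUDA support; the batch file names and the `load_batch` helper are hypothetical placeholders, not part of the paper.

```python
# Sketch: GPU gradient boosting over data streamed from disk in batches.
# DataIter, DMatrix, and train are standard XGBoost Python APIs; the batch
# files ("part-*.npz") and load_batch() are hypothetical placeholders.
import numpy as np
import xgboost as xgb


def load_batch(path):
    # Hypothetical loader: each .npz file holds one batch of features/labels.
    arr = np.load(path)
    return arr["X"], arr["y"]


class DiskBatchIter(xgb.DataIter):
    """Yields training batches one at a time for external-memory training."""

    def __init__(self, batch_files, cache_prefix="xgb-cache"):
        self._files = batch_files
        self._it = 0
        # cache_prefix tells XGBoost where to spill its internal data pages.
        super().__init__(cache_prefix=cache_prefix)

    def next(self, input_data):
        if self._it == len(self._files):
            return 0  # no more batches
        X, y = load_batch(self._files[self._it])
        input_data(data=X, label=y)  # hand the batch to XGBoost
        self._it += 1
        return 1

    def reset(self):
        self._it = 0


it = DiskBatchIter([f"part-{i}.npz" for i in range(8)])
dtrain = xgb.DMatrix(it)  # external-memory DMatrix built from the iterator

booster = xgb.train(
    {"tree_method": "hist", "device": "cuda", "objective": "reg:squarederror"},
    dtrain,
    num_boost_round=100,
)
```

Note that this uses the modern iterator-based interface; releases from around the time of the paper exposed external memory through a cache suffix on the input file URI instead (e.g. `train.libsvm#dtrain.cache`).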
