Quantized Training of Gradient Boosting Decision Trees

07/20/2022
by Yu Shi, et al.

Recent years have witnessed significant success of Gradient Boosting Decision Trees (GBDT) in a wide range of machine learning applications. Generally, the consensus in GBDT training algorithms is that gradients and statistics are computed with high-precision floating-point numbers. In this paper, we investigate an essentially important question which has been largely ignored by the previous literature: how many bits are needed to represent gradients when training GBDT? To answer this question, we propose to quantize all the high-precision gradients in a very simple yet effective way within the GBDT training algorithm. Surprisingly, both our theoretical analysis and empirical studies show that the gradient precision required without hurting performance can be quite low, e.g., 2 or 3 bits. With low-precision gradients, most arithmetic operations in GBDT training can be replaced by integer operations on 8, 16, or 32 bits. Promisingly, these findings may pave the way for much more efficient GBDT training in several respects: (1) speeding up the computation of gradient statistics in histograms; (2) compressing the communication cost of high-precision statistical information during distributed training; (3) inspiring the use and design of hardware architectures that support low-precision computation well for GBDT training. Benchmarked on CPU, GPU, and distributed clusters, our simple quantization strategy achieves up to a 2× speedup over SOTA GBDT systems on extensive datasets, demonstrating the effectiveness and potential of low-precision GBDT training. The code will be released to the official repository of LightGBM.
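
To make the idea of low-bit gradient quantization concrete, the following is a minimal NumPy sketch, not the paper's algorithm or LightGBM's API: gradients are mapped to 3-bit signed integers with stochastic rounding (so the quantized values are unbiased in expectation), and per-bin histogram statistics are then accumulated with 32-bit integer additions. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def quantize_gradients(grad, num_bits=3, rng=np.random.default_rng(0)):
    """Map float gradients to signed low-bit integers with stochastic rounding.
    Illustrative sketch only; not LightGBM's implementation."""
    # Symmetric scale: the largest |gradient| maps to the largest integer level.
    levels = 2 ** (num_bits - 1) - 1          # e.g. 3 levels per sign for 3 bits
    scale = np.max(np.abs(grad)) / levels
    scaled = grad / scale
    # Stochastic rounding: round up with probability equal to the fractional
    # part, keeping the quantized gradient unbiased in expectation.
    floor = np.floor(scaled)
    frac = scaled - floor
    q = floor + (rng.random(grad.shape) < frac)
    return q.astype(np.int8), scale

def histogram_from_quantized(bin_idx, q_grad, num_bins):
    """Accumulate per-bin gradient sums with integer arithmetic only."""
    hist = np.zeros(num_bins, dtype=np.int32)   # integer accumulator
    np.add.at(hist, bin_idx, q_grad.astype(np.int32))
    return hist

# Toy usage: 1000 samples assigned to 16 feature bins.
grad = np.random.default_rng(1).normal(size=1000).astype(np.float32)
bins = np.random.default_rng(2).integers(0, 16, size=1000)
q, scale = quantize_gradients(grad, num_bits=3)
hist = histogram_from_quantized(bins, q, num_bins=16)
print(hist * scale)   # rescale to approximate float gradient sums per bin
```

In this sketch the expensive inner loop (histogram construction) touches only small integers; the floating-point scale is applied once per bin when evaluating split gains, which is the kind of integer-dominant arithmetic the abstract refers to.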


