Gradient-less Federated Gradient Boosting Trees with Learnable Learning Rates

04/15/2023
by Chenyang Ma, et al.

The privacy-sensitive nature of decentralized datasets and the robustness of eXtreme Gradient Boosting (XGBoost) on tabular data raise the need to train XGBoost in the context of federated learning (FL). Existing works on federated XGBoost in the horizontal setting rely on sharing gradients, which induces per-node-level communication and serious privacy concerns. To alleviate these problems, we develop a framework for horizontal federated XGBoost that does not depend on gradient sharing and simultaneously improves privacy and communication efficiency by making the learning rates of the aggregated tree ensembles learnable. We conduct extensive evaluations on various classification and regression datasets, showing that our approach achieves performance comparable to the state-of-the-art method while reducing both the number of communication rounds and the communication overhead by factors ranging from 25x to 700x.
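The core idea can be illustrated with a short, hypothetical sketch (not the authors' implementation). Here scikit-learn's GradientBoostingRegressor stands in for XGBoost: each client trains a local ensemble on its own data, the server concatenates all clients' trees into one global ensemble, and each client then fits one scalar learning rate per aggregated tree on its local data via ridge-regularized least squares. Only this small rate vector is exchanged, never per-node gradients. The paper's actual aggregation scheme and rate-learning model may differ.

# Minimal sketch (not the authors' implementation): horizontal federated boosting
# without gradient sharing, with learnable per-tree learning rates.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=600, n_features=10, noise=5.0, random_state=0)
clients = np.array_split(np.arange(len(X)), 3)           # horizontal split: 3 clients

# 1) Each client trains a local ensemble on its own data (no gradients leave the client).
local_models = [
    GradientBoostingRegressor(n_estimators=20, max_depth=3).fit(X[idx], y[idx])
    for idx in clients
]

# 2) The server concatenates all clients' trees into one global ensemble.
global_trees = [row[0] for m in local_models for row in m.estimators_]

def tree_outputs(trees, X):
    """Per-tree raw predictions, shape (n_samples, n_trees)."""
    return np.column_stack([t.predict(X) for t in trees])

# 3) Each client learns one scalar learning rate per aggregated tree on its own data
#    and sends back only this small weight vector (plus a bias), not gradients.
def learn_rates(trees, X, y, l2=1.0):
    P = np.hstack([tree_outputs(trees, X), np.ones((len(X), 1))])   # bias column
    A = P.T @ P + l2 * np.eye(P.shape[1])
    return np.linalg.solve(A, P.T @ y)                               # ridge solution

client_rates = [learn_rates(global_trees, X[idx], y[idx]) for idx in clients]
rates = np.mean(client_rates, axis=0)                    # server averages the rates

# Global prediction: weighted sum of tree outputs with the learned rates.
P_all = np.hstack([tree_outputs(global_trees, X), np.ones((len(X), 1))])
print("MSE with learned per-tree rates:", np.mean((P_all @ rates - y) ** 2))

In this sketch the per-tree rates replace the fixed shrinkage factor of standard boosting; because only the rate vector (one scalar per tree) travels between clients and server, communication scales with the number of trees rather than with per-node gradient statistics.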


Related research

05/09/2020
Cloud-based Federated Boosting for Mobile Crowdsensing
The application of federated extreme gradient boosting to mobile crowdse...

07/14/2020
Privacy Preserving Text Recognition with Gradient-Boosting for Federated Learning
Typical machine learning approaches require centralized data for model t...

04/03/2022
FedGBF: An efficient vertical federated learning framework via gradient boosting and bagging
Federated learning, conducive to solving data privacy and security probl...

07/20/2022
FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
Federated learning (FL) has recently attracted increasing attention from...

12/11/2020
Adaptive Histogram-Based Gradient Boosted Trees for Federated Learning
Federated Learning (FL) is an approach to collaboratively train a model ...

02/09/2023
Communication-Efficient Federated Hypergradient Computation via Aggregated Iterative Differentiation
Federated bilevel optimization has attracted increasing attention due to...
