
GBHT: Gradient Boosting Histogram Transform for Density Estimation

06/10/2021
by Jingyi Cui, et al.

In this paper, we propose a density estimation algorithm called Gradient Boosting Histogram Transform (GBHT), which adopts the negative log-likelihood as the loss function so that the boosting procedure becomes available for unsupervised tasks. From a learning theory viewpoint, we first prove fast convergence rates for GBHT under the smoothness assumption that the underlying density function lies in the space C^{0,α}. Then, when the target density function lies in the space C^{1,α}, we present an upper bound for GBHT that is smaller, in the sense of convergence rates, than the lower bound of its corresponding base learner. To the best of our knowledge, this is the first attempt to theoretically explain why boosting can enhance the performance of its base learners for density estimation problems. In experiments, we not only compare GBHT against the widely used kernel density estimation (KDE), but also apply GBHT to anomaly detection to showcase a further application.
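To make the idea concrete, below is a minimal sketch of boosting histogram-transform density estimators under a negative log-likelihood objective. It is not the authors' implementation: the base learner (a histogram built in a randomly rotated and shifted coordinate system), the mixture-style update f_t = (1 - ν) f_{t-1} + ν h_t with sample weights proportional to 1 / f_{t-1}(x_i), and the class names HistogramTransform, BoostedHistogramDensity, and random_rotation are illustrative assumptions in the general spirit of boosted density estimation; the paper's exact GBHT update may differ.

```python
import numpy as np

def random_rotation(d, rng):
    # QR decomposition of a Gaussian matrix yields a random orthogonal matrix.
    A = rng.normal(size=(d, d))
    Q, R = np.linalg.qr(A)
    return Q * np.sign(np.diag(R))  # fix column signs

class HistogramTransform:
    """Histogram density estimator in a randomly rotated, shifted coordinate
    system (an illustrative stand-in for a histogram transform base learner)."""

    def __init__(self, bin_width, rng):
        self.bin_width = bin_width
        self.rng = rng

    def fit(self, X, sample_weight=None):
        n, d = X.shape
        w = np.ones(n) if sample_weight is None else np.asarray(sample_weight, dtype=float)
        w = w / w.sum()
        self.R = random_rotation(d, self.rng)
        self.shift = self.rng.uniform(0.0, self.bin_width, size=d)
        Z = X @ self.R.T + self.shift
        keys = np.floor(Z / self.bin_width).astype(int)
        self.bins = {}
        for key, wi in zip(map(tuple, keys), w):
            self.bins[key] = self.bins.get(key, 0.0) + wi
        self.cell_volume = self.bin_width ** d
        return self

    def predict(self, X):
        Z = X @ self.R.T + self.shift
        keys = map(tuple, np.floor(Z / self.bin_width).astype(int))
        # Rotations preserve volume, so bin mass / bin volume is a density in x-space.
        return np.array([self.bins.get(k, 0.0) / self.cell_volume for k in keys])

class BoostedHistogramDensity:
    """Mixture-style boosting under the negative log-likelihood:
    f_t = (1 - nu) * f_{t-1} + nu * h_t, where h_t is fit with sample weights
    proportional to 1 / f_{t-1}(x_i), i.e. the NLL functional gradient."""

    def __init__(self, n_estimators=20, learning_rate=0.3, bin_width=0.5, seed=0):
        self.n_estimators = n_estimators
        self.learning_rate = learning_rate
        self.bin_width = bin_width
        self.rng = np.random.default_rng(seed)

    def fit(self, X):
        self.learners, self.mix_weights = [], []
        f = np.zeros(len(X))
        for t in range(self.n_estimators):
            nu = 1.0 if t == 0 else self.learning_rate
            w = np.ones(len(X)) if t == 0 else 1.0 / np.maximum(f, 1e-12)
            h = HistogramTransform(self.bin_width, self.rng).fit(X, sample_weight=w)
            f = (1.0 - nu) * f + nu * h.predict(X)
            self.mix_weights = [(1.0 - nu) * a for a in self.mix_weights] + [nu]
            self.learners.append(h)
        return self

    def predict(self, X):
        f = np.zeros(len(X))
        for a, h in zip(self.mix_weights, self.learners):
            f += a * h.predict(X)
        return f

# Usage: fit on 2-D Gaussian samples, report held-out negative log-likelihood.
rng = np.random.default_rng(1)
X_train, X_test = rng.normal(size=(2000, 2)), rng.normal(size=(500, 2))
model = BoostedHistogramDensity(n_estimators=30, bin_width=0.6).fit(X_train)
nll = -np.mean(np.log(np.maximum(model.predict(X_test), 1e-12)))
print(f"held-out NLL: {nll:.3f}")
```

The convex-mixture update keeps every iterate a valid density (non-negative, integrating to one), while the learning rate controls how strongly later histograms correct regions where the current estimate assigns too little mass.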


Related research:

12/05/2021
Local Adaptivity of Gradient Boosting in Histogram Transform Ensemble Learning
In this paper, we propose a gradient boosting algorithm called adaptive ...

11/24/2019
Histogram Transform Ensembles for Density Estimation
We investigate an algorithm named histogram transform ensembles (HTE) de...

03/22/2018
Boosted Density Estimation Remastered
There has recently been a steady increase in the iterative approaches ...

06/03/2021
Gradient Boosted Binary Histogram Ensemble for Large-scale Regression
In this paper, we propose a gradient boosting algorithm for large-scale ...

12/08/2019
Histogram Transform Ensembles for Large-scale Regression
We propose a novel algorithm for large-scale regression problems named h...

11/01/2021
Bounds all around: training energy-based models with bidirectional bounds
Energy-based models (EBMs) provide an elegant framework for density esti...

05/30/2019
Global empirical risk minimizers with "shape constraints" are rate optimal in general dimensions
Entropy integrals are widely used as a powerful tool to obtain upper bou...