 # Gradient Boosting Machine: A Survey

In this survey, we discuss several types of gradient boosting algorithms and present their mathematical frameworks in detail: Section 1 introduces gradient boosting, Section 2 covers objective function optimization, Section 3 discusses loss function estimation, Section 4 presents model constructions, and Section 5 describes the application of boosting to ranking.

## 1 Introduction

Proposed by Freund and Schapire (1997), boosting is a general method for constructing an extremely accurate predictor from numerous roughly accurate predictors. As addressed by Friedman (2001, 2002) and Natekin and Knoll (2013), the Gradient Boosting Machine (GBM) seeks to build predictive models through back-fitting and non-parametric regression. Instead of building a single model, the GBM starts from an initial model and repeatedly fits new models through loss function minimization to produce the most precise model (Natekin and Knoll, 2013).

This survey concentrates on the mathematical derivations of the gradient boosting algorithms. In Section 2, we analyze the optimization methods for parametric and non-parametric models. Section 3 covers the definitions of different types of loss functions. In Section 4, we present different types of boosting algorithms, while in Section 5 we explore the combination of boosting and ranking algorithms to rank real-world data.

## 2 Basic Framework

The ultimate goal of the GBM is to find a function $F^*$ that minimizes the expected loss,

$$F^* = \arg\min_{F} \mathbb{E}_{y,x}\, L(y, F(x)),$$

through iterative back-fitting.

### 2.1 Numerical Optimization

By definition, a boosted model is a weighted linear combination of base learners,

$$F\big(x; \{\beta_m, a_m\}_1^M\big) = \sum_{m=1}^{M} \beta_m h(x; a_m),$$

where $h(x; a)$ is a base learner parameterized by $a$. Regarded as weak learners, the base learners produce hypotheses that predict only slightly better than random guessing, and it was proved that recursive learning with weak learners can perform as well as a strong learning algorithm (Schapire, 1990).

If the base learner is a regression tree, the parameter $a$ usually encodes the splitting nodes of the tree branches (Friedman, 2002). Tree-based models divide the input variable space into regions and apply a series of rules to identify the regions with the strongest responses to the inputs (Elith et al., 2008). Each region is then fitted with a regression tree taking the mean response of its observations (Elith et al., 2008). Decision trees are constructed through binary splits; recursive splitting generates a large tree, which is then pruned to drop the weak branches.

The optimization process can be written as

$$P^* = \arg\min_{P} \Phi(P), \qquad \Phi(P) = \mathbb{E}_{y,x}\, L(y, F(x; P)), \qquad F^*(x) = F(x; P^*),$$

where the solution takes the additive form $P^* = \sum_{m=0}^{M} p_m$ and the $p_m$ are the consecutive boosting steps.

One approach to generating these steps $p_m$ is the steepest-descent algorithm, which calculates the gradient

$$g_m = \{g_{jm}\} = \left\{ \left. \frac{\partial \Phi(P)}{\partial P_j} \right|_{P = P_{m-1}} \right\},$$

where $P_{m-1} = \sum_{i=0}^{m-1} p_i$. The boosting step is then given by

$$p_m = -\rho_m g_m,$$

where $\rho_m$ is obtained by a line search along the direction of steepest descent.

In the non-parametric case, the function is solved by minimizing

$$\Phi(F) = \mathbb{E}_{y,x}\, L(y, F(x)) = \mathbb{E}_x \big[ \mathbb{E}_y \big( L(y, F(x)) \big) \,\big|\, x \big],$$

and the optimum is reached pointwise: at each $x$, $F^*(x)$ minimizes $\mathbb{E}_y \left[ L(y, F(x)) \mid x \right]$.

The gradient of the non-parametric model is

$$g_m(x) = \left. \frac{\partial \Phi(F(x))}{\partial F(x)} \right|_{F(x) = F_{m-1}(x)} = \left. \frac{\partial\, \mathbb{E}_y \left[ L(y, F(x)) \mid x \right]}{\partial F(x)} \right|_{F(x) = F_{m-1}(x)}.$$

When differentiation and expectation can be interchanged, the gradient simplifies to

$$g_m(x) = \mathbb{E}_y \left[ \left. \frac{\partial L(y, F(x))}{\partial F(x)} \right|\, x \right]_{F(x) = F_{m-1}(x)}.$$

The corresponding line search is then solved as

$$\rho_m = \arg\min_{\rho} \mathbb{E}_{y,x}\, L\big(y, F_{m-1}(x) - \rho\, g_m(x)\big).$$

### 2.2 Finite Dataset

With a finite number of samples, a non-parametric function can be obtained by a greedy stage-wise algorithm. Unlike a stepwise strategy, the stage-wise strategy does not readjust the previous boosting steps, and can therefore be written as

$$(\beta_m, a_m) = \arg\min_{\beta, a} \sum_{i=1}^{N} L\big(y_i, F_{m-1}(x_i) + \beta h(x_i; a)\big).$$

Thus, $F_m(x)$ can be obtained iteratively:

$$F_m(x) = F_{m-1}(x) + \beta_m h(x; a_m).$$
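The greedy stage-wise procedure can be sketched in a few lines of NumPy; the one-dimensional stump fitter and the squared-error loss are illustrative choices of ours, not part of the survey.

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression tree (stump) to residuals r by scanning
    candidate split points and minimizing squared error."""
    best = None
    for s in np.unique(x):
        left, right = r[x <= s], r[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, s, left.mean(), right.mean())
    _, s, lm, rm = best
    return lambda z: np.where(z <= s, lm, rm)

def gradient_boost(x, y, M=50, lr=0.1):
    """Greedy stage-wise fitting: F_m = F_{m-1} + beta_m * h_m, where each
    h_m is fit to the negative gradient of the L2 loss (the residuals)."""
    F0 = y.mean()
    F = np.full_like(y, F0, dtype=float)
    learners = []
    for _ in range(M):
        r = y - F                       # negative gradient for squared error
        h = fit_stump(x, r)
        F = F + lr * h(x)
        learners.append(h)
    return lambda z: F0 + lr * sum(h(z) for h in learners)
```

With a step function as the target, the training error shrinks geometrically in the number of boosting rounds.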

## 3 Estimation

Under the framework of GBM, different loss functions can be applied to solve different tasks (Natekin and Knoll, 2013; Koenker and Hallock, 2001; Friedman, 2002).

### 3.1 Continuous Response

The $L_1$ loss function, known as the Laplacian loss, is presented as

$$L(y, F)_{L_1} = |y - F|,$$

which is the absolute value of the residual between the response $y$ and the prediction $F$, while the most commonly used squared-error loss function is defined as

$$L(y, F)_{L_2} = \frac{1}{2}(y - F)^2.$$

In addition, the Huber loss function, which merges the $L_1$ and $L_2$ loss functions described above, can be a robust alternative to the $L_2$ loss:

$$L(y, F)_{\mathrm{Huber},\delta} = \begin{cases} \frac{1}{2}(y - F)^2 & |y - F| \le \delta \\ \delta \left( |y - F| - \frac{\delta}{2} \right) & |y - F| > \delta. \end{cases}$$

The quantile loss is handy in ordering and sorting situations because of its robustness, and is framed as

$$L(y, f)_{\alpha} = \begin{cases} (1 - \alpha)\, |y - f| & y - f \le 0 \\ \alpha\, |y - f| & y - f > 0, \end{cases}$$

where $\alpha$ designates the targeted quantile of the conditional distribution. The quantile loss becomes proportional to the $L_1$ loss by taking $\alpha = 0.5$.
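The four continuous-response losses can be written out directly for comparison; this is a minimal NumPy sketch, not code from the survey.

```python
import numpy as np

def l1_loss(y, F):
    """Laplacian (absolute) loss."""
    return np.abs(y - F)

def l2_loss(y, F):
    """Squared-error loss with the conventional 1/2 factor."""
    return 0.5 * (y - F) ** 2

def huber_loss(y, F, delta=1.0):
    """Quadratic for small residuals, linear beyond delta."""
    r = np.abs(y - F)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - delta / 2))

def quantile_loss(y, f, alpha=0.5):
    """Asymmetric absolute loss targeting the alpha-quantile."""
    r = y - f
    return np.where(r <= 0, (1 - alpha) * np.abs(r), alpha * np.abs(r))
```

At `alpha=0.5` the quantile loss is exactly half the $L_1$ loss, and inside the `delta` band the Huber loss coincides with the $L_2$ loss.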

### 3.2 Categorical Response

There are two widely used loss functions designed for categorical responses, namely the Bernoulli loss function and the exponential loss function. The Bernoulli loss function is formulated as

$$L(y, F)_{\mathrm{Bern}} = \log\big(1 + \exp(-2\bar{y}F)\big),$$

where $\bar{y} = 2y - 1 \in \{-1, 1\}$, while the exponential loss used in the Adaboost algorithm applies the same transformation $\bar{y}$ of the response $y$ (Natekin and Knoll, 2013).
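For concreteness, the two categorical losses can be computed as below; the exponential form $\exp(-\bar{y}F)$ is the standard Adaboost loss and is our assumption here, since the survey elides the exact formula.

```python
import numpy as np

def bernoulli_loss(y, F):
    """Bernoulli loss with y in {0, 1}; y_bar = 2y - 1 maps labels to {-1, +1}."""
    y_bar = 2 * y - 1
    return np.log(1 + np.exp(-2 * y_bar * F))

def exponential_loss(y, F):
    """Adaboost's exponential loss under the same label transformation
    (standard form, assumed here)."""
    y_bar = 2 * y - 1
    return np.exp(-y_bar * F)
```

Both losses decrease monotonically as the margin $\bar{y}F$ grows, but the exponential loss penalizes large negative margins far more aggressively, which is one source of Adaboost's sensitivity to outliers.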

## 4 Methodology

Gradient boosting is a generalization of Adaboost. The design of Adaboost (Freund and Schapire, 1997), the original boosting algorithm, is to find a hypothesis with low prediction error relative to a given distribution over the training samples. Freund and Schapire (1997) demonstrated their algorithm through a horse-gambling example, in which a gambler wishes to bet on the horse with the greatest chance of winning. To increase the winning probability of a bet, the gambler is encouraged to gather expert opinions before placing it. This process of collecting information from different experts is similar to the ensemble of a class of poor classifiers; in Adaboost, each expert's opinion corresponds to a training set (Wang, 2012). Each sample is initialized with a weight, and the weights are adjusted after each iteration such that the weights of misclassified samples are increased while the weights of correctly classified samples are decreased.
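The reweighting scheme just described can be sketched as a single Adaboost round; the update below is the standard textbook form (labels in $\{-1, +1\}$), used here only for illustration.

```python
import numpy as np

def adaboost_reweight(w, y, pred):
    """One AdaBoost round: raise weights of misclassified samples, lower
    those of correct ones, then renormalize. y, pred take values in {-1, +1}."""
    miss = (y != pred)
    err = np.sum(w * miss) / np.sum(w)         # weighted error of the weak learner
    alpha = 0.5 * np.log((1 - err) / err)      # weight given to this learner
    w = w * np.exp(alpha * np.where(miss, 1.0, -1.0))
    return w / w.sum(), alpha
```

After the update, a misclassified sample carries strictly more weight than a correctly classified one, which forces the next weak learner to focus on the hard cases.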

In each iteration of boosting, the current weak learner of Adaboost chooses a weak hypothesis from the entire set of weak hypotheses instead of only those found up to that point. Since searching an entire space of hypotheses can be an enormous amount of work, it is often suitable to apply weak learners that approximately cover the whole set (Collins et al., 2002).

Boosting algorithms with certain modifications perform well under both high-bias and high-variance settings. When weighted sampling is implemented for the training data, the performance of boosting is determined by its ability to reduce variance (Friedman et al., 2000). Meanwhile, boosting performance depends on bias reduction when the weighted sampling is replaced with weighted tree fitting (Friedman et al., 2000).

Additionally, Adaboost is prone to model overfitting because of the exponential loss. The overfitting may be mitigated by instead minimizing the normalized sigmoid cost function (Mason et al., 2000),

$$C(F) = \frac{1}{m} \sum_{i=1}^{m} \big( 1 - \tanh(\lambda\, y_i F(x_i)) \big).$$

In the above function, $F$ is a convex combination of weak hypotheses, and the parameter $\lambda$ measures the steepness of the margin cost function. Through their experiments, Mason et al. (2000) showed that DOOM II, a new boosting algorithm optimizing the normalized sigmoid cost, overall performed better than Adaboost. According to Mason et al. (1999), AnyBoost is a general boosting algorithm that performs gradient descent in an inner product space. This inner product space, which includes all linear combinations of weak hypotheses, contains both the weak hypotheses and their combination $F$. The inner product can be represented as

$$\langle F, G \rangle := \frac{1}{m} \sum_{i=1}^{m} F(x_i) G(x_i),$$

where $F$ and $G$ are combinations of weak hypotheses belonging to the set of all linear combinations of weak hypotheses. Only the AnyBoost algorithm that employs this inner product and the normalized sigmoid cost function is referred to as DOOM II (Mason et al., 2000).

### Arc-x4

Arcing, a concept introduced by Breiman (1996) and utilized in Adaboost, is a technique for adaptively reweighting the training samples. Arc-x4 (Breiman, 1997) performs similarly to the original boosting algorithm in reducing training error and generalization error. At each boosting step, a new training sample is drawn from the training set with probability

$$p(n) = \frac{1 + m(n)^4}{\sum_n \big( 1 + m(n)^4 \big)},$$

where $m(n)$ is the number of times case $n$ has been misclassified so far.
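The Arc-x4 sampling probabilities are a one-liner; a minimal sketch (the helper name is ours):

```python
import numpy as np

def arc_x4_probs(miscount):
    """Resampling probabilities p(n) = (1 + m(n)^4) / sum_n (1 + m(n)^4),
    where miscount[n] is how often case n has been misclassified so far."""
    w = 1.0 + np.asarray(miscount, dtype=float) ** 4
    return w / w.sum()
```

The fourth power makes the probabilities concentrate quickly on repeatedly misclassified cases: a case missed twice already receives 17 times the weight of a never-missed case.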

### Least Squares Boost

The least-squares loss function for continuous responses is one of the most commonly used loss functions. In a parametrized model, the optimization using the least-squares loss is

$$(\rho_m, a_m) = \arg\min_{a, \rho} \sum_{i=1}^{N} \big[ \tilde{y}_i - \rho\, h(x_i; a) \big]^2,$$

where $\tilde{y}_i = y_i - F_{m-1}(x_i)$ is the current residual. Solving for $(\rho_m, a_m)$, we obtain the stage-wise model

$$F_m(x) = F_{m-1}(x) + \rho_m h(x; a_m).$$
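For a fixed base learner output $h(x_i; a)$, the inner minimization over $\rho$ has a closed form, the ordinary least-squares coefficient of the residuals on the learner's predictions; a sketch with a hypothetical helper name:

```python
import numpy as np

def best_rho(residual, h_vals):
    """Closed-form line search for the least-squares loss:
    rho = sum(r * h) / sum(h^2), the OLS solution of min_rho sum (r - rho*h)^2."""
    return np.dot(residual, h_vals) / np.dot(h_vals, h_vals)
```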

### Logitboost

Another well-known boosting algorithm is Logitboost. Like other boosting algorithms, Logitboost adopts regression trees as its weak learners. Deriving from logistic regression, Logitboost takes the negative log-likelihood of the class probabilities as its loss (Li, 2012). The class probability $p_{i,k}$ is formulated as

$$p_{i,k} = \Pr(y_i = k \mid x_i) = \frac{e^{F_{i,k}(x_i)}}{\sum_{s=0}^{K-1} e^{F_{i,s}(x_i)}},$$

where $y_i$ is the output and $x_i$ is the input vector. Thus, the loss function of Logitboost can be written as

$$L = \sum_{i=1}^{N} L_i, \qquad L_i = -\sum_{k=0}^{K-1} r_{i,k} \log p_{i,k},$$

where $r_{i,k} = 1$ if $y_i = k$ and $r_{i,k} = 0$ otherwise. A stage-wise model follows as

$$F_{i,k} \leftarrow F_{i,k} + v\, \frac{K-1}{K} \left( f_{i,k} - \frac{1}{K} \sum_{k=0}^{K-1} f_{i,k} \right),$$

where $v$ is a shrinkage parameter and $f_{i,k}$ is the newly fitted function.
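The class probabilities and the multinomial negative log-likelihood can be sketched as follows (a NumPy illustration of ours, not the authors' code):

```python
import numpy as np

def class_probs(F):
    """Softmax class probabilities p_{i,k} = exp(F_{i,k}) / sum_s exp(F_{i,s}).
    F is an (N, K) array of per-class scores."""
    e = np.exp(F - F.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

def logitboost_loss(F, y, K):
    """Negative log-likelihood L = sum_i -sum_k r_{i,k} log p_{i,k},
    with r the one-hot indicators of the labels y."""
    p = class_probs(F)
    r = np.eye(K)[y]
    return -(r * np.log(p)).sum()
```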

Besides the class probabilities, another important factor in Logitboost is the dense Hessian matrix, which enters when computing the tree split gain and node value fitting. However, certain modifications are required to incorporate these factors into the optimization. The sum-to-zero constraint on the classifier, implied by the sum-to-one constraint on the class probabilities, can be settled by adopting a vector tree at each boost. In the vector tree, a sum-to-zero vector is fitted at each split node in the K-dimensional space. Moreover, adding the vector tree allows explicit computation of the split gain and node fitting, which becomes a secondary problem when fitting a new tree. Such secondary problems can then be used to cope with the dense Hessian matrix, where only two coordinates are allowed for each secondary problem (Sun et al., 2012).

The LAD regression proposed by Friedman (2002) has the loss function $L(y, F) = |y - F|$, where $F_m(x)$ is solved by

$$F_m(x) = F_{m-1}(x) + \sum_{j=1}^{J} \gamma_{jm}\, 1(x \in R_{jm}), \qquad \gamma_{jm} = \rho_m b_{jm}.$$

Moreover, in the LAD regression, the gamma parameter is

$$\gamma_{jm} = \operatorname{median}_{x_i \in R_{jm}} \{\, y_i - F_{m-1}(x_i) \,\}.$$
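The LAD terminal-node update is simply the median of the residuals within each region; a sketch with an illustrative helper name:

```python
import numpy as np

def lad_gamma(y, F_prev, region_mask):
    """Terminal-node update for LAD boosting:
    gamma_jm = median of residuals y - F_{m-1} within region R_jm."""
    return np.median((y - F_prev)[region_mask])
```

Because the median ignores the magnitude of extreme residuals, the update is insensitive to outliers, which is the point of the $L_1$ loss.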

### M-Regression

M-Regression (Friedman, 2002) is designed to work with the Huber loss function, with terminal-node update

$$\gamma_{jm} = \tilde{r}_{jm} + \frac{1}{N_{jm}} \sum_{x_i \in R_{jm}} \operatorname{sign}\big( r_{m-1}(x_i) - \tilde{r}_{jm} \big) \cdot \min\big( \delta_m,\; |r_{m-1}(x_i) - \tilde{r}_{jm}| \big),$$

where $r_{m-1}(x_i) = y_i - F_{m-1}(x_i)$ is the current residual and $\tilde{r}_{jm}$ is the median of the residuals within region $R_{jm}$.
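The M-regression update combines the region median with a correction whose terms are clipped at $\delta$, which is what limits the influence of outliers; a sketch (helper name ours):

```python
import numpy as np

def huber_gamma(residual, delta):
    """Terminal-node value for M-regression: median of the residuals plus
    a mean correction whose per-sample terms are clipped at delta."""
    r_med = np.median(residual)
    return r_med + np.mean(np.sign(residual - r_med) *
                           np.minimum(delta, np.abs(residual - r_med)))
```

For symmetric residuals the correction cancels and the update reduces to the median; an extreme residual contributes at most $\delta$ to the correction, no matter how large it is.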

### Two Class Logistic Regression

The loss function applied in two-class logistic regression is a binary function (Friedman, 2002), namely the Bernoulli loss function. Approximately, the line search of the logistic regression can be solved from the Bernoulli loss as

$$\gamma_{jm} = \sum_{x_i \in R_{jm}} \tilde{y}_i \Big/ \sum_{x_i \in R_{jm}} |\tilde{y}_i|\, (2 - |\tilde{y}_i|), \qquad j = 1, \dots, J,$$

where $\tilde{y}_i = 2\bar{y}_i \big/ \big( 1 + \exp(2\bar{y}_i F_{m-1}(x_i)) \big)$ is the current pseudo-response.

### Multiclass Logistic Regression

The loss function applied in multi-class logistic regression is as follows:

$$L\big(\{y_k, F_k(x)\}_1^K\big) = -\sum_{k=1}^{K} y_k \log p_k(x), \qquad y_k \in \{0, 1\}, \qquad p_k(x) = \Pr(y_k = 1 \mid x),$$

and the line search of the multi-class logistic regression is

$$\gamma_{jkm} = \frac{K-1}{K}\, \frac{\sum_{x_i \in R_{jkm}} \tilde{y}_{ik}}{\sum_{x_i \in R_{jkm}} |\tilde{y}_{ik}|\, (1 - |\tilde{y}_{ik}|)}.$$

## 5 Ranking Problem

One of the most discussed problems in machine learning is teaching a computer to rank. Two sets of data are required before constructing a ranking algorithm (Zheng et al., 2008): the preference data containing a set of features, and the ranked targets. Based on these two datasets, the ranking function can be computed as an optimization problem.

The objective function for the ranking problem is

$$R(h) = \frac{w}{2} \sum_{i=1}^{N} \big( \max\{0,\; h(y_i) - h(x_i) + \tau\} \big)^2 + \frac{1 - w}{2} \sum_{i=1}^{n} \big( l_i - h(z_i) \big)^2,$$

where $x_i$ and $y_i$ are features in the preference data, and $x_i$ is ranked higher than $y_i$.

Wu et al. (2008) proposed LambdaMART, a highly effective ranking algorithm that integrates the LambdaRank function with boosting. The LambdaRank function aims to maximize the Normalized Discounted Cumulative Gain (NDCG),

$$N_i := n_i \sum_{j=1}^{T} \big( 2^{r(j)} - 1 \big) \big/ \log(1 + j),$$

where $r(j)$ represents the ranking of the targets. A gamma gradient is used in the optimization,

$$\gamma_{i,j} := S_{ij} \left| \Delta \mathrm{NDCG}\, \frac{\partial C_{ij}}{\partial o_{ij}} \right|,$$

where $S_{ij}$ takes the value 1 or -1 depending on the relevance of the items. For example, when ranking webpages, the gamma gradient is used to determine the relevance of information retrieved online: if a piece of information $i$ is more relevant than another piece $j$, then $S_{ij}$ equals 1; otherwise $S_{ij}$ equals -1. The term $o_{ij}$ represents the difference between the ranking scores predicted for items $i$ and $j$. Moreover, the gamma gradient of a specific item is

$$\gamma_i = \sum_{j \in P} \gamma_{ij}.$$
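The NDCG quantity that LambdaRank maximizes can be computed directly; below is a common normalized form, dividing by the DCG of the ideal ordering, which is our assumption about the elided normalizer $n_i$.

```python
import numpy as np

def ndcg_at_t(relevance, T=None):
    """DCG_T = sum_j (2^{r(j)} - 1) / log(1 + j), normalized by the DCG of
    the ideally sorted list so the score lies in [0, 1]."""
    r = np.asarray(relevance, dtype=float)
    T = len(r) if T is None else T
    j = np.arange(1, T + 1)
    dcg = np.sum((2 ** r[:T] - 1) / np.log(1 + j))
    ideal = np.sort(r)[::-1]
    idcg = np.sum((2 ** ideal[:T] - 1) / np.log(1 + j))
    return dcg / idcg if idcg > 0 else 0.0
```

A perfectly sorted list scores 1.0, and any misordering of distinct relevance grades scores strictly less, which is what makes NDCG a usable ranking objective.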

## 6 Conclusion

In this survey, we summarize gradient boosting algorithms from several aspects, including general function optimization, objective functions, and different loss functions. Additionally, we present a set of boosting algorithms with unique loss functions and derive their predictive models accordingly.

## References

• Breiman (1996) Breiman, L. (1996). Bias, Variance, and Arcing Classifiers. Statistics Department, University of California, Berkeley, CA, USA. Tech. Rep. 460.
• Breiman (1997) Breiman, L. (1997). Arcing the Edge. Statistics Department, University of California, Berkeley, CA, USA. Tech. Rep. 486.
• Collins et al. (2002) Collins, M., Schapire, R. E., & Singer, Y. (2002). Logistic Regression, AdaBoost and Bregman Distances. Machine Learning 48, 253–285.
• Elith et al. (2008) Elith, J., Leathwick, J. R., & Hastie, T. (2008). A Working Guide to Boosted Regression Trees. Journal of Animal Ecology 77, 802–813.
• Freund and Schapire (1997) Freund, Y., and Schapire, R. E. (1997). A Decision Theoretic Generalization of Online Learning and An Application to Boosting. Journal of Computer and System Sciences 55, 119–139.
• Friedman (2001) Friedman, J. H. (2001). Greedy Function Approximation: A Gradient Boosting Machine. Annals of Statistics, 1189–1232.
• Friedman (2002) Friedman, J. H. (2002). Stochastic Gradient Boosting. Computational Statistics & Data Analysis 38, 367–378.
• Friedman et al. (2000) Friedman, J., Hastie, T., & Tibshirani, R. (2000). Additive Logistic Regression: A Statistical View of Boosting. The Annals of Statistics 28, 337–407.
• Koenker and Hallock (2001) Koenker, R., and Hallock, K. F. (2001). Quantile Regression. Journal of Economic Perspectives 15, 143–156.
• Li (2012) Li, P. (2012). Robust Logitboost and Adaptive Base Class (abc) Logitboost. arXiv preprint arXiv:1203.3491.
• Mason et al. (1999) Mason, L., Baxter, J., Bartlett, P., & Frean, M. (1999). Boosting Algorithms as Gradient Descent in Function Space.
• Mason et al. (2000) Mason, L., Baxter, J., Bartlett, P., & Frean, M. (2000). Boosting Algorithms as Gradient Descent. Advances in Neural Information Processing Systems, 512–518.
• Natekin and Knoll (2013) Natekin, A., and Knoll, A. (2013). Gradient Boosting Machines, A Tutorial. Frontiers in Neurorobotics 7, 21.
• Schapire (1990) Schapire, R. E. (1990). The Strength of Weak Learnability. Machine Learning 5, 197–227.
• Sun et al. (2012) Sun, P., Reid, M. D., & Zhou, J. (2012). AOSO-LogitBoost: Adaptive One-Vs-One LogitBoost for Multi-Class Problem. arXiv preprint arXiv:1110.3907.
• Wang (2012) Wang, R. (2012). AdaBoost for Feature Selection, Classification and Its Relation with SVM, A Review. Physics Procedia 25, 800–807.
• Wu et al. (2008) Wu, Q., Burges, C. J., Svore, K. M., & Gao, J. (2008). Ranking, Boosting, and Model Adaptation.
• Zheng et al. (2008) Zheng, Z., Zha, H., Zhang, T., Chapelle, O., Chen, K., & Sun, G. (2008). A General Boosting Method and Its Application to Learning Ranking Functions for Web Search. Advances in Neural Information Processing Systems, 1697–1704.