SelfieBoost: A Boosting Algorithm for Deep Learning

by Shai Shalev-Shwartz, et al.

We describe and analyze a new boosting algorithm for deep learning called SelfieBoost. Unlike other boosting algorithms, such as AdaBoost, which construct ensembles of classifiers, SelfieBoost boosts the accuracy of a single network. We prove a log(1/ϵ) convergence rate for SelfieBoost under an "SGD success" assumption which seems to hold in practice.








1 Introduction

Deep learning, which involves training artificial neural networks with many layers, has become one of the most significant recent developments in machine learning. Deep learning has shown very impressive practical performance on a variety of domains (e.g., [13, 9, 18, 2, 5, 16, 12, 11, 26, 6, 3]).

One of the most successful approaches for training neural networks is Stochastic Gradient Descent (SGD) and its variants (see, for example, [14, 4, 1, 15, 24, 25, 21]). The two main advantages of SGD are the constant cost of each iteration (which does not depend on the number of examples) and the ability to overcome local minima. However, a major disadvantage of SGD is its slow convergence.

There have been several attempts to speed up the convergence rate of SGD, such as momentum [24], second-order information [17, 7], and variance-reduction methods [23, 19, 10].

Another natural approach is to use SGD as a weak learner and apply a boosting algorithm such as AdaBoost [8]; see, for example, [20]. The celebrated analysis of AdaBoost guarantees that if at each boosting iteration the weak learner manages to produce a classifier which is slightly better than a random guess, then after O(log(1/ϵ)) iterations, AdaBoost returns an ensemble of classifiers whose training error is at most ϵ.

However, a major disadvantage of AdaBoost is that it outputs an ensemble of classifiers. In the context of deep learning, this means that at prediction time, we need to apply several neural networks on every example to make the prediction. This leads to a rather slow predictor.

In this paper we present the SelfieBoost algorithm, which boosts the performance of a single network. SelfieBoost can be applied with SGD as its weak learner, and we prove that its convergence rate is log(1/ϵ), provided that SGD manages to find a constant-accuracy solution at each boosting iteration.

2 The SelfieBoost Algorithm

We now describe the SelfieBoost algorithm. We focus on a binary classification problem in the realizable case. Let S = ((x_1, y_1), …, (x_m, y_m)) be the training set, with x_i ∈ X and y_i ∈ {±1}. Let F be the class of all functions that can be implemented by a neural network of a certain architecture. We assume that there exists a network f* ∈ F such that y_i f*(x_i) ≥ 1 for all i. Similarly to the AdaBoost algorithm, we maintain weights over the examples that are proportional to the exponent of minus the signed margin, D_i^{(t)} ∝ exp(−y_i f_t(x_i)). This focuses the algorithm on the hard cases. Focusing the learner on the mistakes of the current network f_t might cause the next network g to work well on these examples but deteriorate on the examples on which f_t performs well. AdaBoost overcomes this problem by remembering all the intermediate classifiers and predicting a weighted majority of their predictions. In contrast, SelfieBoost forgets the intermediate classifiers and outputs just the last classifier. We therefore need another method to make sure that the performance does not deteriorate. SelfieBoost achieves this goal by regularizing g so that its predictions will not be too far from the predictions of f_t.



Parameters: edge parameter ρ and number of iterations T
Initialization: start with an initial network f_1 (e.g., by running a few SGD iterations)
For t = 1, …, T:
  define weights over the examples according to D_i^{(t)} ∝ exp(−y_i f_t(x_i))
  let i_1, …, i_k be indices chosen at random according to the distribution D^{(t)}
  use SGD to approximately find a network g that minimizes the objective
    Σ_i D_i^{(t)} [ y_i (f_t(x_i) − g(x_i)) + (g(x_i) − f_t(x_i))² ]
  if this objective is at most −ρ and |g(x_i) − f_t(x_i)| ≤ 1 for all i:
    set f_{t+1} = g and continue to the next iteration
  otherwise, increase the number of SGD iterations and/or enlarge the architecture and try again
  break if no such g is found
Output: the final network f_{T+1}
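As a concrete illustration, here is a minimal numerical sketch of the loop above. It is not the paper's implementation: the "network" is replaced by a linear scorer on a toy two-dimensional dataset, and the inner SGD weak learner is replaced by an exact solve of the weighted surrogate objective (all names and constants below are our own choices).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, linearly separable data standing in for a real training set.
m = 200
X = rng.normal(size=(m, 2))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm inputs keep |g - f_t| small
y = np.sign(X @ np.array([1.5, -2.0]))

def selfieboost(X, y, T=10, rho=0.125):
    """SelfieBoost sketch: the 'network' f_t(x) = w.x is a linear scorer.

    Each round re-weights examples by exp(-y_i f_t(x_i)) and fits a
    candidate g by minimizing the surrogate
        sum_i D_i [ -y_i (g(x_i) - f_t(x_i)) + (g(x_i) - f_t(x_i))^2 ],
    solved here in closed form instead of by SGD.
    """
    w = np.zeros(X.shape[1])            # current network f_t
    for _ in range(T):
        f = X @ w
        D = np.exp(-y * f)
        D /= D.sum()                    # D_i proportional to exp(-y_i f_t(x_i))
        # The surrogate is a weighted quadratic in u = g's offset parameters:
        # minimize -b.u + u.A.u, with A, b the weighted second/first moments.
        A = (X * D[:, None]).T @ X
        b = X.T @ (D * y)
        u = 0.5 * np.linalg.solve(A, b)
        delta = X @ u                   # g(x_i) - f_t(x_i)
        obj = np.sum(D * (-y * delta + delta ** 2))
        if obj <= -rho and np.abs(delta).max() <= 1.0:
            w = w + u                   # accept: f_{t+1} = g
        # otherwise keep f_t (the paper would rerun SGD longer / grow the net)
    return w

w = selfieboost(X, y)
err = np.mean(np.sign(X @ w) != y)
print(f"training error: {err:.3f}")
```

Note that, unlike AdaBoost, only the single current model `w` is ever kept; the re-weighting plus the closeness constraint `|delta| <= 1` play the role of the ensemble.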

3 Analysis

For any classifier f, define

M(f) = |{i ∈ [m] : y_i f(x_i) ≤ 0}|

to be the number of mistakes the classifier makes on the m training examples, and

err(f) = M(f) / m

to be its error rate.

Our first theorem bounds err(f_{T+1}) using the number of SelfieBoost iterations T and the edge parameter ρ.

Theorem 1.

Suppose that SelfieBoost (for either the realizable or unrealizable cases) is run for T iterations with an edge parameter ρ. Then, the number of mistakes of f_{T+1} is bounded by

M(f_{T+1}) ≤ m e^{−ρT}.

In other words, for any ϵ > 0, if SelfieBoost performs

T ≥ log(1/ϵ) / ρ

iterations, then we must have err(f_{T+1}) ≤ ϵ.
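To make the rate concrete, here is a quick check of the arithmetic behind the theorem, using an illustrative edge of ρ = 1/8 and a target error of ϵ = 1% (these particular values are our own choices):

```python
import math

rho, eps = 0.125, 0.01                    # illustrative edge parameter and target error
T = math.ceil(math.log(1 / eps) / rho)    # iterations required by Theorem 1
bound = math.exp(-rho * T)                # resulting bound on err(f_{T+1})
print(T, bound)
```

The required T grows only logarithmically in 1/ϵ, which is the sense in which SelfieBoost enjoys a log(1/ϵ) convergence rate.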


Proof. Define Z_t = Σ_{i=1}^m e^{−y_i f_t(x_i)}, and let D_i^{(t)} = e^{−y_i f_t(x_i)} / Z_t.

Observe that e^{−y_i f(x_i)} upper bounds the zero-one loss of f on (x_i, y_i), and so M(f) ≤ Z(f), where Z(f) = Σ_i e^{−y_i f(x_i)}.
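The inequality M(f) ≤ Z(f) uses nothing beyond the pointwise bound e^{−z} ≥ 1[z ≤ 0]; a quick numerical check on random margins (a stand-in for the values y_i f(x_i)):

```python
import numpy as np

rng = np.random.default_rng(1)
margins = rng.normal(size=1000)   # stand-ins for the signed margins y_i * f(x_i)
M = np.sum(margins <= 0)          # number of mistakes
Z = np.sum(np.exp(-margins))      # exponential potential
print(M, Z)
```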

Suppose that we start with f_1 such that Z(f_1) ≤ m. For example, we can take f_1 ≡ 0. At each iteration of the algorithm, we find g such that |g(x_i) − f_t(x_i)| ≤ 1 for all i and

Σ_i D_i^{(t)} [ y_i (f_t(x_i) − g(x_i)) + (g(x_i) − f_t(x_i))² ] ≤ −ρ.   (1)

We will show that for such g we have log Z(g) ≤ log Z(f_t) − ρ, which implies that after T rounds we'll have log Z(f_{T+1}) ≤ log(m) − ρT. But we also have M(f_{T+1}) ≤ Z(f_{T+1}), hence

M(f_{T+1}) ≤ m e^{−ρT},

as required.

It is left to show that the left-hand side of (1) indeed upper bounds log Z(g) − log Z(f_t). To see this, we rely on the following bound, which holds for every pair of vectors u, v for which |v_i − u_i| ≤ 1 for all i (see, for example, the proof of Theorem 2.22 in [22]):

log(Σ_i e^{v_i}) − log(Σ_i e^{u_i}) ≤ Σ_i p_i (v_i − u_i) + Σ_i p_i (v_i − u_i)²,  where p_i = e^{u_i} / Σ_j e^{u_j}.

Therefore, with u_i = −y_i f_t(x_i), v_i = −y_i g(x_i), and p_i = D_i^{(t)}, we have

log Z(g) − log Z(f_t) ≤ Σ_i D_i^{(t)} [ y_i (f_t(x_i) − g(x_i)) + (g(x_i) − f_t(x_i))² ].
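This softmax smoothness bound follows from e^x ≤ 1 + x + x² for x ≤ 1 together with log(1 + x) ≤ x, and can be checked numerically on random vectors satisfying |v_i − u_i| ≤ 1:

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.normal(size=500)
v = u + rng.uniform(-1, 1, size=500)   # guarantees |v_i - u_i| <= 1
p = np.exp(u) / np.exp(u).sum()        # softmax weights of u

lhs = np.log(np.exp(v).sum()) - np.log(np.exp(u).sum())
rhs = np.sum(p * (v - u)) + np.sum(p * (v - u) ** 2)
print(lhs, rhs)
```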


So, at each iteration, our algorithm will try to find g that approximately minimizes the right-hand side of this bound. In the next subsections we show that there exists such a g, which can be implemented as a simple network, that achieves a sufficiently small objective.

Note that we obtain behavior which is very similar to AdaBoost, in which the number of iterations depends on an "edge"; here, the "edge" is how well we manage to minimize (1).

So far, we have shown that if SelfieBoost performs enough iterations then it will produce an accurate classifier. Next, we show that it is indeed possible to find a network with a small value of (1). The key lemma below shows that SelfieBoost can make progress.

Lemma 1.

Let f_t be a network of size n and let f* be a network of size n*. Assume that y_i f*(x_i) ≥ 1 for all i. Then, there exists a network g of size n + n* such that |g(x_i) − f_t(x_i)| ≤ 1 for all i and (2) is at most −1/4.


Choose g = (f_t + f*)/2. Clearly, the size of g is n + n*. In addition, for every i,

y_i (g(x_i) − f_t(x_i)) = (y_i f*(x_i) − y_i f_t(x_i)) / 2.

This implies that (2) is at most −1/4.

The above lemma tells us that at each iteration of SelfieBoost, it is possible to find a network g for which the objective value of (1) is at most −1/4. Since SGD can rather quickly find a network whose objective is bounded by a constant from the optimum, we can use, say, ρ = 1/8 and expect that SGD will find a network with (1) smaller than −1/8.
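To sanity-check the constant: if a candidate g shifts every signed margin y_i f_t(x_i) upward by exactly 1/2 (an illustrative choice of ours, not a quote from the paper), then objective (1) equals −1/2 + 1/4 = −1/4 regardless of the example weights D:

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.random(50)
D /= D.sum()                             # arbitrary distribution over examples
delta = 0.5                              # per-example signed shift y_i*(g(x_i) - f_t(x_i))
obj = np.sum(D * (-delta + delta ** 2))  # objective (1) under this uniform shift
print(obj)
```

The value −1/4 is the best possible for a uniform shift, since −a + a² over a ∈ [0, 1] is minimized at a = 1/2.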

Observe that when we apply SGD, we can either sample an example according to D^{(t)} at each iteration, or sample k indices according to D^{(t)} in advance and then let SGD sample uniformly from these indices.


The above lemma shows us that we might need to increase the network by the size of f* at each SelfieBoost iteration. In practice, we usually learn networks which are significantly larger than f* (because this makes the optimization problem SGD solves easier). Therefore, one may expect that even without increasing the size of the network, we'll be able to find a new network g with a negative value of (1).

4 Discussion and Future Work

We have described and analyzed a new boosting algorithm for deep learning, whose main advantage is the ability to boost the performance of the same network.

In the future we plan to evaluate the empirical performance of SelfieBoost for challenging deep learning tasks. We also plan to generalize the results beyond binary classification problems.


  • Bengio [2009] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
  • Bengio and LeCun [2007] Y. Bengio and Y. LeCun. Scaling learning algorithms towards AI. Large-Scale Kernel Machines, 34, 2007.
  • Bengio et al. [2013] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35:1798–1828, 2013.
  • Bousquet and Bottou [2008] Olivier Bousquet and Léon Bottou. The tradeoffs of large scale learning. In Advances in neural information processing systems, pages 161–168, 2008.
  • Collobert and Weston [2008] R. Collobert and J. Weston. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML, 2008.
  • Dahl et al. [2013] G. Dahl, T. Sainath, and G. Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In ICASSP, 2013.
  • Duchi et al. [2011] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
  • Freund and Schapire [1995] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In European Conference on Computational Learning Theory (EuroCOLT), pages 23–37. Springer-Verlag, 1995.
  • Hinton et al. [2006] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
  • Johnson and Zhang [2013] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
  • Krizhevsky et al. [2012] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
  • Le et al. [2012] Q. V. Le, M.-A. Ranzato, R. Monga, M. Devin, G. Corrado, K. Chen, J. Dean, and A. Y. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012.
  • LeCun and Bengio [1995] Y. LeCun and Y. Bengio. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361, 1995.
  • LeCun et al. [1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • LeCun et al. [2012] Yann A LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In Neural networks: Tricks of the trade, pages 9–48. Springer, 2012.
  • Lee et al. [2009] H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.
  • Martens [2010] James Martens. Deep learning via hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 735–742, 2010.
  • Ranzato et al. [2007] M.A. Ranzato, F.J. Huang, Y.L. Boureau, and Y. Lecun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In CVPR, 2007.
  • Schmidt et al. [2013] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. arXiv preprint arXiv:1309.2388, 2013.
  • Schwenk and Bengio [2000] Holger Schwenk and Yoshua Bengio. Boosting neural networks. Neural Computation, 12(8):1869–1887, 2000.
  • Shalev-Shwartz and Ben-David [2014] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
  • Shalev-Shwartz [2011] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
  • Shalev-Shwartz and Zhang [2013] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14(1):567–599, 2013.
  • Sutskever et al. [2013] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013.
  • Sutskever [2013] Ilya Sutskever. Training recurrent neural networks. PhD thesis, University of Toronto, 2013.
  • Zeiler and Fergus [2013] M. Zeiler and R. Fergus. Visualizing and understanding convolutional neural networks. arXiv preprint arXiv:1311.2901, 2013.