Internal node bagging: an explicit ensemble learning method in neural network training

05/01/2018 ∙ by Shun Yi, et al. ∙ North China University of Technology

We introduce a novel view of dropout as an implicit ensemble learning method, one that does not specify how many nodes, or which ones, learn a certain feature. We propose a new training method named internal node bagging: it explicitly forces a group of nodes to learn a certain feature at training time, and combines those nodes into one node at inference time. This means we can use many more parameters to improve the model's fitting ability during training while keeping the model small at inference. We test our method on several benchmark datasets and find it significantly more efficient than dropout on small models.


I Introduction

A neural network is a universal approximator; we can easily increase its fitting ability by adding more layers or more nodes per layer. As large labeled datasets are now relatively easy to obtain, neural networks are widely used in computer vision, NLP, and other domains. However, achieving state-of-the-art performance usually requires big models with regularization [Goodfellow et al., 2016], or even an ensemble of several models, which limits the use of neural networks, especially on mobile devices. Although some lightweight models have been proposed recently [Howard et al., 2017, Zhang et al., 2017], they rely on carefully designed structures and only address convolutional neural networks.

Dropout [Hinton et al., 2012] is a well-known regularization method for neural network training that randomly sets the outputs of some hidden or input nodes to zero at training time. It is commonly accepted that dropout training is similar to bagging [Breiman, 1996]: for each training sample, dropout randomly deletes some nodes from the network and trains a thinner subnet; those subnets are trained on different samples and averaged at test time. Instead of performing a real model average, which would cost too much computation, a very simple approximate average is applied by weight scaling. Several empirical analyses show that weight scaling works well in deep models [Srivastava et al., 2014, Wardefarley et al., 2014, Pham et al., 2014]. Dropout can indiscriminately and reliably yield a modest improvement in performance when applied to almost any type of model [Goodfellow et al., 2013], but it may not be very efficient on small models [Srivastava et al., 2014].
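As a minimal NumPy sketch (variable names are illustrative, not from any implementation in the paper), train-time dropout masking and test-time weight scaling look like:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5                      # retention probability
h = rng.normal(size=(4, 8))  # hidden-layer activations for a batch of 4

# Train time: zero each node's output independently with probability 1 - p.
mask = (rng.random(h.shape) < p).astype(h.dtype)
h_train = h * mask

# Test time: keep every node but scale by p, so the expected input to the
# next layer matches what it saw during training (weight scaling).
h_test = h * p
```

The point of the sketch is only the correspondence: `h_test` equals the expectation of `h_train` over the random mask.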

In this paper, we introduce a novel view of dropout as a layer-wise ensemble learning method, based on several assumptions, and propose a new training method named internal node bagging according to this theory. We test our method on MNIST [Lecun et al., 1998], CIFAR-10 [Krizhevsky, 2009] and SVHN [Netzer et al., 2011], with both fully connected and convolutional networks, and find that it can significantly improve the test performance of small models.

II Motivation

Consider a network that classifies white horses and zebras, shown in figure 1(a). This simple fully connected network has 4 input nodes, which represent different features of horses and zebras. If we apply dropout to the input layer and the only difference between a white horse and a zebra is the black-white stripes, then the network cannot work at all whenever the first node, the one representing the black-white stripes, is dropped.

Fig. 1: (a) A one-layer fully connected network used for classifying zebras and white horses; different input nodes represent different features. (b) Similar to (a), but every feature is represented by a group of nodes.

To avoid dropping features like the black-white stripes, which are the key to distinguishing different types of samples, the network should learn to use more than one node for each of them. As shown in figure 1(b), the multi-node-per-feature version, there are 4 groups of nodes; every group contains more than one node, and all nodes in a group represent the same feature. If the drop probability is 0.5 for every node and a feature's drop probability must be below 0.01, then every group should contain at least 7 nodes, since a feature is lost only when all of its nodes are dropped, and 0.5^7 ≈ 0.008 < 0.01 while 0.5^6 ≈ 0.016.
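The group-size bound above can be checked in a couple of lines of plain Python (purely illustrative):

```python
import math

drop_p = 0.5        # per-node drop probability
feature_p = 0.01    # maximum tolerated feature drop probability

# A feature is lost only when every node in its group is dropped, which
# happens with probability drop_p ** n; solve drop_p ** n < feature_p for
# the smallest integer n.
n = math.ceil(math.log(feature_p) / math.log(drop_p))
print(n, drop_p ** n)  # 7 0.0078125
```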

Although the picture above is too idealized, because features learned by a neural network are distributed, some empirical evidence supports our assumption: dropout training always needs bigger models [Srivastava et al., 2014]; shutting off a hidden neuron in a dropout network does not simply remove a feature of the input [Bouthillier et al., 2015]; and there is significant redundancy in the parameterization of several deep learning models [Shakibi et al., 2013].

Consider a fully connected dropout network in which every feature is represented by a group of nodes, as in figure 1(b). Suppose the $l$th layer contains $G$ groups and every group has $n$ nodes. Let $a^{l}_{i,j}$ be the output of the $j$th node from the $i$th group of layer $l$, let $\mathbf{w}^{l}_{i,j}$ and $b^{l}_{i,j}$ be the corresponding weights and bias, let $m^{l}_{i,j}$ be the drop mask, let $f$ be the activation function, and let $\mathbf{y}^{l-1}$ be the output of layer $l-1$. Then the forward propagation is:

$$a^{l}_{i,j} = m^{l}_{i,j}\, f\!\left(\mathbf{w}^{l}_{i,j}\,\mathbf{y}^{l-1} + b^{l}_{i,j}\right) \qquad (1)$$

Because all nodes in a group represent the same feature, we can assume their parameters converge to similar values as training goes on. Let those shared values be $\mathbf{w}^{l}_{i}$ and $b^{l}_{i}$, so we can simplify (1) to:

$$a^{l}_{i,j} = m^{l}_{i,j}\, f\!\left(\mathbf{w}^{l}_{i}\,\mathbf{y}^{l-1} + b^{l}_{i}\right) \qquad (2)$$

According to (2), we can simplify a neural network with dropout to the network described in figure 2, which differs from the network in figure 1(b) in that every group has only one output, sampled from the corresponding nodes.

Fig. 2: Internal node bagging style network.

The network described in figure 2 is very similar to maxout [Goodfellow et al., 2013], but instead of choosing the biggest output in a group, it randomly samples one. Let the sampled output of group $i$ be $\tilde{a}^{l}_{i}$; it is computed by:

$$\tilde{a}^{l}_{i} = f\!\left(\mathbf{w}^{l}_{i,k}\,\mathbf{y}^{l-1} + b^{l}_{i,k}\right), \qquad k \sim \mathrm{Uniform}\{1,\dots,n\} \qquad (3)$$

Now we have a novel view of how dropout works, as a layer-wise ensemble training method: for every feature in a layer, there is a group of nodes that learn it, and during training the next layer randomly samples a value from those nodes as the feature activation. At test time, weight scaling approximately lets every group output the expected feature activation.
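A toy NumPy sketch of this view (the shared-weights-plus-noise model of a converged group is our illustrative assumption, not the paper's code): one node is sampled as the feature activation at train time, while test time uses the group's expectation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                        # nodes per group
x = rng.normal(size=3)       # output of the previous layer

# Nodes in a group are assumed to converge to similar weights; model them
# here as one shared weight vector plus small per-node noise.
w_shared = rng.normal(size=3)
W = w_shared + 0.01 * rng.normal(size=(n, 3))
b = 0.01 * rng.normal(size=n)

relu = lambda z: np.maximum(z, 0.0)
acts = relu(W @ x + b)       # every node's activation for this input

# Train time: the next layer sees one randomly sampled node's activation.
sampled = acts[rng.integers(n)]
# Test time: the group outputs (approximately) its expected activation.
expected = acts.mean()
```

Because the nodes' weights are nearly identical, the sampled value and the expectation stay close, which is what makes replacing the group by its expectation reasonable.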

We consider that if we know which nodes in a layer represent the same feature, we may be able to combine those nodes into one node at test time, which removes a large number of parameters and much computation. It can also be interpreted from the opposite perspective: for every internal node of a small network, we use a group of nodes to estimate its parameters at training time. We name this method internal node bagging.

There are 2 problems to resolve: first, how do we know which nodes represent the same feature; second, how do we combine several nodes into one? To resolve them, we apply the following 2 tricks:

  • We manually assign nodes to different groups and force them to learn the same feature at training time. For every group, we initialize all nodes identically, and we periodically compute the average weights and biases and assign them to all nodes.

  • We use relu [Glorot et al., 2012] in all experiments (except those comparing the performance of different activation functions). We assume the outputs of nodes in a group are similar, as they represent the same feature, so it is highly likely that those outputs lie on the linear part of relu, which makes combining nodes at test time feasible.

We give a detailed model description in section III and experiment results in section IV.

III Model description

In this section, we first introduce how to combine the nodes in a group into one node and how to compute average weights; we then introduce the 2 methods, used in our experiments, for sampling a feature activation from a group.

III-A Combine nodes

For a given input, the expected value sampled from group $i$ of layer $l$ is:

$$E\!\left[\tilde{a}^{l}_{i}\right] = \sum_{j=1}^{n} p_{j}\, f\!\left(\mathbf{w}^{l}_{i,j}\,\mathbf{y}^{l-1} + b^{l}_{i,j}\right) \qquad (4)$$

where $p_{j}$ is the probability that node $j$ contributes to the sampled value. All nodes in a group obey the same distribution, so let:

$$p_{1} = p_{2} = \dots = p_{n} = p \qquad (5)$$

Then we can simplify (4) to:

$$E\!\left[\tilde{a}^{l}_{i}\right] = p \sum_{j=1}^{n} f\!\left(\mathbf{w}^{l}_{i,j}\,\mathbf{y}^{l-1} + b^{l}_{i,j}\right) \qquad (6)$$

Consider the combined parameters¹:

$$\hat{\mathbf{w}}^{l}_{i} = p \sum_{j=1}^{n} \mathbf{w}^{l}_{i,j}, \qquad \hat{b}^{l}_{i} = p \sum_{j=1}^{n} b^{l}_{i,j} \qquad (7)$$

¹ The weights and biases here are each node's own parameters, as in (1), not the shared values of (2).

Assume all pre-activations in a group lie on the linear part of relu, so:

$$E\!\left[\tilde{a}^{l}_{i}\right] \approx f\!\left(\hat{\mathbf{w}}^{l}_{i}\,\mathbf{y}^{l-1} + \hat{b}^{l}_{i}\right) \qquad (8)$$

We combine the nodes in a group into one node at test time according to (8), which is equal to weight scaling if every group contains only one node.
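A small NumPy check of this combination rule (a sketch under the stated linear-part assumption; all variable names are ours): when every pre-activation is positive, scale-and-summing the group's parameters reproduces the group's expected output exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 4, 0.5                          # group size, sample probability
x = np.abs(rng.normal(size=3))         # positive input
W = np.abs(rng.normal(size=(n, 3)))    # positive weights and biases keep
b = np.abs(rng.normal(size=n))         #   pre-activations on relu's linear part
relu = lambda z: np.maximum(z, 0.0)

# Expected group output when each node is sampled with probability p.
expected = p * relu(W @ x + b).sum()

# Combined single node: scale-and-sum the group's weights and biases.
w_hat = p * W.sum(axis=0)
b_hat = p * b.sum()
combined = relu(w_hat @ x + b_hat)
```

With a group size of 1 the combined parameters are just `p * W[0]` and `p * b[0]`, i.e. ordinary weight scaling.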

III-B Compute average weights

Consider group $i$ of layer $l$, with average weights $\bar{\mathbf{w}}^{l}_{i}$ and average bias $\bar{b}^{l}_{i}$. If all pre-activations lie on the linear part of relu, then:

$$\frac{1}{n}\sum_{j=1}^{n} f\!\left(\mathbf{w}^{l}_{i,j}\,\mathbf{y}^{l-1} + b^{l}_{i,j}\right) \approx f\!\left(\bar{\mathbf{w}}^{l}_{i}\,\mathbf{y}^{l-1} + \bar{b}^{l}_{i}\right) \qquad (9)$$

So:

$$\bar{\mathbf{w}}^{l}_{i} = \frac{1}{n}\sum_{j=1}^{n} \mathbf{w}^{l}_{i,j} \qquad (10)$$

$$\bar{b}^{l}_{i} = \frac{1}{n}\sum_{j=1}^{n} b^{l}_{i,j} \qquad (11)$$

We periodically compute the average weights and biases in a group and assign them to all nodes in that group, forcing them to learn the same feature.
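The periodic averaging step can be sketched in NumPy (illustrative names; not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
W = rng.normal(size=(n, 3))   # each row: one node's weights in a group
b = rng.normal(size=n)        # each entry: one node's bias

# Every few epochs, replace each node's parameters with the group averages,
# pulling the whole group toward one shared feature.
W[:] = W.mean(axis=0)
b[:] = b.mean()
```

After this step all rows of `W` (and all entries of `b`) are identical, so subsequent gradient updates start from a common point and the nodes only drift apart until the next averaging.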

III-C Sample methods

We propose 2 methods for sampling an activation from a group:

  • Method A: every node in a group is sampled independently with the same probability.

  • Method B: exactly one node is sampled from every group.

If every group has only one node, method A is equal to dropout, and a network applying method B is equal to a standard network.
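The two sampling rules can be sketched as follows (a NumPy toy on a single group's activations; the names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
acts = np.array([1.0, 1.1, 0.9, 1.05])  # activations of one group, n = 4
p = 0.5

# Method A: keep each node independently with probability p and sum what
# survives. With n = 1 this is exactly dropout.
mask = rng.random(acts.shape) < p
out_a = (acts * mask).sum()

# Method B: sample exactly one node per group. With n = 1 this reduces to
# a standard network with no dropout at all.
out_b = acts[rng.integers(acts.size)]
```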

IV Experiments

We evaluate our methods on MNIST [Lecun et al., 1998], CIFAR-10 [Krizhevsky, 2009] and SVHN [Netzer et al., 2011]. The MNIST dataset consists of 28×28-pixel gray images of handwritten digits, with 60000 samples for training and 10000 for testing; the CIFAR-10 dataset consists of 32×32 RGB images in 10 classes, with 50000 images for training and 10000 for testing; the SVHN dataset consists of 32×32 RGB images of digits, with 73257 images for training and 26032 for testing.

We implement our models with TensorFlow; all source code is available at www.github.com/Xiong-Da/internal_node_bagging_V2. Settings shared by all experiments are listed below:

  • We apply internal node bagging to all hidden layers.

  • The sample probability used in method A is always 0.5.

  • We do not use any other normalization method.

  • By default, we apply the weight averaging described in III-B every 10 epochs.

For experiments on MNIST, we use a fully connected network with 2 hidden layers of the same width. For experiments on CIFAR-10 and SVHN, we use the CNN described in table I, which is modified from the "base model C" in [Springenberg et al., 2014] by removing the last 1×1 convolution layer; all convolution strides are 1, and all padding is "SAME" except for the last 3×3 convolution layer.

We train all models with the Adam optimization algorithm [Kingma and Ba, 2014]. For experiments on MNIST, we train the first 100 epochs with learning rate 1e-3 and another 100 epochs with learning rate 1e-4. For experiments on CIFAR-10 and SVHN, we train with an initial learning rate of 1e-3 and decay the learning rate whenever the validation error stops decreasing, until the models converge.
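The MNIST schedule above can be written as a one-line learning-rate function (a sketch; the actual training loop, model, and optimizer wiring are omitted):

```python
def mnist_lr(epoch: int) -> float:
    """Learning rate for the MNIST runs: 1e-3 for the first 100 epochs,
    then 1e-4 for the remaining 100 (200 epochs total)."""
    return 1e-3 if epoch < 100 else 1e-4
```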

32×32 RGB image
2 layers of 3×3 conv. 64
3×3 max-pooling, stride 2
2 layers of 3×3 conv. 128
3×3 max-pooling, stride 2
3×3 conv. 192
1×1 conv. 192
global averaging over 6×6 spatial dimensions
10-way softmax
TABLE I: The architecture of the CNN used for classification experiments on CIFAR-10 and SVHN.

IV-A Performance on models of different sizes

Fig. 3: Experiments on method A with different model sizes. Model width in (a) means how many groups each hidden layer has. Model width in (b) and (c) means the proportion of filters used; for example, model width 0.5 means the number of filters in each layer of table I is multiplied by 0.5.
Fig. 4: Experiments on method B with different model sizes.

In this section, we investigate the performance of our methods on models of various sizes.

The panels of figure 3 show the performance of method A on the 3 datasets. When the model is small, increasing the group size significantly improves test performance, especially on CIFAR-10 and SVHN. But as model size increases, the improvement shrinks; on SVHN, the test error of models with big group sizes is even worse than that of the dropout network (method A with group size 1 is equal to dropout).

The panels of figure 4 show the performance of method B. Compared to method A, method B is harder to analyze. On MNIST, increasing the group size modestly improves performance on both small and big models. On CIFAR-10, models with different group sizes perform relatively similarly, with big group sizes slightly better. On SVHN, method B significantly improves performance, especially on small models.

IV-B Effect of weight averaging

Figure 5 shows the effect of the weight averaging described in III-B on the MNIST dataset with model width 256. "Weight average frequency" means the number of epochs trained between successive weight averages; we train only 200 epochs on MNIST, so a frequency of 200 means it is never applied. In our experiments, method B seemed insensitive to the weight average frequency, but method A could not converge well without a moderate frequency, especially on models with large group sizes.

Fig. 5: The effect of the weight averaging described in III-B.

IV-C Convergence properties

Figure 6 shows the convergence properties of our 2 methods on the MNIST dataset with model width 256. For both methods, models with big group sizes do converge more slowly, but not by much.

Fig. 6: Convergence properties of the two methods.

IV-D Performance with different activation functions

Fig. 7: Performance of our methods with different activation functions and group sizes.

In section III-A, we assume that the outputs of the nodes in a group are similar and lie on the linear part of relu, as they represent the same feature; this is why combining the nodes in a group into one node at test time is feasible. In this section, we analyze whether relu is irreplaceable for our method. Figure 7 shows the results of our 2 methods with 3 different activation functions on the MNIST dataset. For method A, all 3 activation functions perform better when the group size increases to 2, but only relu keeps improving when the group size increases to 4. For method B, relu and tanh perform better when the group size increases to 2, but neither performs well when it increases to 4.

In our experiments, relu is not irreplaceable, but it does perform slightly better.

V Discussion

We introduced a novel view of dropout as a layer-wise ensemble learning method and proposed a new ensemble training algorithm named internal node bagging. We tested 2 sample methods in our experiments: method A can be seen as a generalization of dropout, and method B as a generalization of a standard network. For method A, increasing the group size significantly improves test performance on small models. For method B, increasing the group size moderately improves performance on both small and big models, but the effect differs considerably across the 3 datasets. We also introduced 2 ways to understand how internal node bagging works: the first views our method as simplifying a big, redundant model into a small model without redundancy; the second views it as estimating the parameters of every internal node of a small network with multiple nodes at training time. Based on our experiments, the second view seems more reasonable.

References

  • Bouthillier et al. [2015] Xavier Bouthillier, Kishore Konda, Pascal Vincent, and Roland Memisevic. Dropout as data augmentation. Computer Science, 2015.
  • Breiman [1996] Leo Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.
  • Glorot et al. [2012] X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In International Conference on Artificial Intelligence and Statistics, pages 315–323, 2012.
  • Goodfellow et al. [2016] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
  • Goodfellow et al. [2013] Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. Computer Science, pages 1319–1327, 2013.
  • Hinton et al. [2012] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. Computer Science, 3(4):212–223, 2012.
  • Howard et al. [2017] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. 2017.
  • Kingma and Ba [2014] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. Computer Science, 2014.
  • Krizhevsky [2009] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
  • Lecun et al. [1998] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • Netzer et al. [2011] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. Nips Workshop on Deep Learning & Unsupervised Feature Learning, 2011.
  • Pham et al. [2014] Vu Pham, Théodore Bluche, Christopher Kermorvant, and Jérôme Louradour. Dropout improves recurrent neural networks for handwriting recognition. In International Conference on Frontiers in Handwriting Recognition, pages 285–290, 2014.
  • Shakibi et al. [2013] Babak Shakibi, Marc’Aurelio Ranzato, and Nando De Freitas. Predicting parameters in deep learning. In International Conference on Neural Information Processing Systems, pages 2148–2156, 2013.
  • Springenberg et al. [2014] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. Eprint Arxiv, 2014.
  • Srivastava et al. [2014] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
  • Wardefarley et al. [2014] David Wardefarley, Ian J. Goodfellow, Aaron Courville, and Yoshua Bengio. An empirical analysis of dropout in piecewise linear networks. Computer Science, 2014.
  • Zhang et al. [2017] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. 2017.