Subset Sampling For Progressive Neural Network Learning

Progressive Neural Network Learning is a class of algorithms that incrementally construct the network's topology and optimize its parameters based on the training data. While this approach exempts the users from the manual task of designing and validating multiple network topologies, it often requires an enormous number of computations. In this paper, we propose to speed up this process by exploiting subsets of training data at each incremental training step. Three different sampling strategies for selecting the training samples according to different criteria are proposed and evaluated. We also propose to perform online hyperparameter selection during the network progression, which further reduces the overall training time. Experimental results in object, scene and face recognition problems demonstrate that the proposed approach speeds up the optimization procedure considerably while operating on par with the baseline approach exploiting the entire training set throughout the training process.



1 Introduction

Progressive Neural Network Learning (PNNL) [8, 25, 3, 4, 9, 21, 22, 19] aims to build the network’s topology incrementally depending on the training set given for the specific problem at hand. At each incremental training step, a PNNL algorithm adds a new set of neurons to the existing network topology and optimizes the new synaptic weights using the entire training set. Thus, throughout the network’s topology progression, the number of times the PNNL algorithm iterates through the entire training set is very high. For large datasets, this approach leads to an enormous computational cost and a long training process. In this paper, we propose to perform the optimization of each incremental training step using only a subset of the training data. Our motivation in doing so is two-fold. Firstly, optimizing with respect to a subset of the training data leads to a lower overall computational cost; secondly, the use of different subsets of data at each incremental step promotes specialization of different sets of neurons at capturing different patterns in the data.

The idea of subset sampling for training machine learning methods has been proposed in different contexts in the literature. With the motivation of reducing the expense of labeling data for training, methods following the active learning paradigm [15] seek to define a sampling strategy that selects a sample to be labeled among a large pool of unlabeled data for the next learning round. While the active learning paradigm considers the problem of data selection in an (initially) unsupervised setting, in the context of PNNL we take advantage of the available labeling information for subset selection. Directly related to our work are methods selecting a subset of data formed by the most representative samples [14, 2, 16, 1]. These methods, however, consider the data selection process only once, based on the input data representations and the available labels. The selected subset of data is then used to train a model with fixed capacity. Different from this line of work, we propose to perform subset sampling at every incremental step of the PNNL process, with selection strategies that can also take into account the data representations learned by the current network’s topology.

When building a learning system, the development process often requires running multiple experiments to select the best values for the hyper-parameters associated with the learning model. For neural networks, such hyper-parameters include, e.g., the weight decay coefficient or the dropout percentage. In existing PNNL algorithms, the value associated with each hyper-parameter is fixed throughout the entire training process, and the best combination of hyper-parameter values is usually selected by following a grid search strategy, training multiple models, each corresponding to a different combination of hyper-parameter values. Different from that (traditional) approach, we propose to incorporate the hyper-parameter selection process into each incremental training step, enabling adaptive hyper-parameter assignment during the network’s topology progression process. Coupled with the speed-up gained from subset sampling, this further accelerates the overall training process and improves generalization performance, as indicated by our experimental results.

The remainder of the paper is organized as follows: Section 2 reviews Progressive Neural Network Learning and the subset sampling strategies in different learning contexts. Section 3 describes the proposed progressive network training method. In Section 4, we detail our experimental setup and present empirical results. Section 5 concludes our work.

2 Related works

2.1 Progressive Neural Network Learning

In Progressive Neural Network Learning (PNNL), an algorithm starts with an initial network topology and gradually increases the capacity of the model by adding and optimizing new blocks of neurons following an iterative optimization process [8, 25, 3, 4, 9, 21, 22, 19, 20, 23]. When a new set of neurons is added to the current network topology, different PNNL algorithms determine different rules to form new synaptic connections from the new neurons to the existing ones. For example, in I-ELM [8] and BLS [4], the progression strategies only allow the algorithms to learn networks with one and two hidden layers, respectively, while other PNNL algorithms such as PLN [3], StackedELM [25] or HeMLGOP [22] can generate multilayer networks.

Regarding the adopted optimization strategies, many algorithms employ random hidden neurons to relax the optimization objective to a convex form and use convex optimization techniques to achieve global solutions [8, 4, 25, 3]. While this approach is computationally efficient and often comes with certain theoretical guarantees, most such algorithms are sensitive to hyper-parameter selection and require extensive evaluation of a large set of hyper-parameter values. Besides, these algorithms often construct very large network topologies to achieve good performance. Recently, the authors in [22] proposed HeMLGOP, a PNNL algorithm that combines a randomization process and stochastic optimization to progressively train networks of heterogeneous neurons. Since HeMLGOP optimizes not only the network’s topology but also the functional form of each neuron, the resulting networks are both compact and efficient. This, however, comes with a much higher training computational cost compared to algorithms employing random neurons and convex optimization.

As a variant of the HeMLGOP algorithm that only optimizes the network’s topology using the standard Perceptron, Progressive Multilayer Perceptron (PMLP) yields a good trade-off between optimization complexity, topology compactness and learning capability. Thus, in this paper we apply our proposal to speed up and enhance PMLP. Although our investigation in this paper is limited to PMLP, the proposed method can be generalized to all PNNL algorithms, as will be indicated in the next section.
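As a rough illustration of the progression loop these algorithms share, the following Python sketch (all helper names are hypothetical; the actual block optimizer depends on the specific PNNL algorithm, e.g. stochastic optimization for PMLP) grows the topology block by block until a validation criterion stops improving:

```python
import numpy as np

def train_block(X, y, n_new):
    """Hypothetical stand-in: optimize a new block of n_new neurons on
    (X, y) and return its weights. A real PNNL algorithm would run
    convex or stochastic optimization here."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((X.shape[1], n_new))

def pnnl_progression(X, y, X_val, y_val, score_fn, block_size=20, patience=2):
    """Generic PNNL loop: grow the topology one block at a time,
    stopping when the validation score stops improving.
    (A real implementation would also discard the blocks added
    after the best-scoring topology.)"""
    blocks, best_score, stalls = [], -np.inf, 0
    while stalls < patience:
        blocks.append(train_block(X, y, block_size))  # add + optimize new neurons
        score = score_fn(blocks, X_val, y_val)        # validate grown topology
        if score > best_score:
            best_score, stalls = score, 0
        else:
            stalls += 1
    return blocks, best_score
```

The `score_fn` callback abstracts over how a given algorithm evaluates the grown topology on the validation set.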

2.2 Subset Sampling

In Active Learning, query-acquiring or pool-based methods refer to a class of algorithms that use different sampling strategies to select the most informative samples from a pool of unlabeled data. The most representative examples in this category of methods include the information-theoretic method in [11], the ensemble method in [12], and the method based on uncertainty heuristics in [18]. For a comprehensive review of active learning methods, we refer the reader to [15].

Subset sampling methods have also been proposed in different contexts. For example, submodular function optimization for selecting a subset of samples was proposed in [1] to speed up neural network training. To study sample redundancy, [2] performs clustering using representations generated by a pre-trained model, while [24] measures the importance of a sample via the gradient information. In the context of dataset compression and distributed learning, [16] optimizes sample selection and model’s parameters iteratively based on convex optimization.

3 Proposed Method

3.1 Subset Sampling

Let us denote by $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$ the training set formed by $N$ samples, with $\mathbf{x}_i$ and $y_i$ being the $i$-th sample and its label, respectively. Let us also denote by $f_k(\cdot; \Theta_k, \Lambda_k)$ the function induced by the neural network’s topology at the progression step $k$, with $\Theta_k$ representing the set of parameters to optimize, and $\Lambda_k$ representing the set of hyper-parameters.

At step $k$, instead of optimizing $f_k$ with respect to $\Theta_k$ on $\mathcal{D}$, we propose to solve the optimization problem on a subset $\mathcal{S}_k \subset \mathcal{D}$ having cardinality $M < N$, i.e.:

$$\Theta_k^{*} = \underset{\Theta_k}{\arg\min} \sum_{(\mathbf{x}_i, y_i) \in \mathcal{S}_k} \mathcal{L}\big(f_k(\mathbf{x}_i; \Theta_k, \Lambda_k), y_i\big) \qquad (1)$$

where $\mathcal{L}$ denotes the loss function. To this end, we evaluate three different sample selection methods defined based on the following criteria:

  • Random Sampling: at each progression step $k$, we form $\mathcal{S}_k$ by uniformly selecting $M$ samples from $\mathcal{D}$. Although random sampling has been theoretically shown to be inferior to other sampling strategies in many learning contexts [5, 6, 15, 1], as will be shown by our empirical study, this is not necessarily the case for PNNL. Throughout the architecture progression process, random sampling ensures that diverse sets of samples are iteratively presented to the network, thus promoting diversity of the newly added neurons with respect to the existing ones.

  • Top-M Sampling based on misclassification: at each progression step $k$, this method computes the loss induced by each sample in $\mathcal{D}$ using the network’s topology learned at step $k-1$, i.e., $f_{k-1}$, and selects the top $M$ samples which induce the highest loss values. Since the loss values directly provide the supervisory signal when updating the model’s parameters, conditioning on the current model’s knowledge expressed via $f_{k-1}$ enforces a given algorithm to learn new blocks of neurons which can correctly classify the most difficult cases.

  • Top-M Sampling based on diverse misclassification: while the previous sampling method solely considers the most difficult-to-classify samples, this strategy also aims to promote diversity and reduce similarity among the selected samples. To do so, we perform K-Means clustering using the data representations produced by $f_{k-1}$ as inputs. The number of clusters $C$, which is pre-defined, can be set using simple heuristics such as being equal to the number of classes in classification tasks. We also compute the loss value induced by each sample using $f_{k-1}$ and select the top $M/C$ samples that induce the highest loss values in every cluster, so that $M$ samples are selected in total.
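The three strategies above can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation: the function names and the plain Lloyd's-algorithm k-means are our own, and the cluster-based variant assumes the subset size is divisible by the number of clusters. Each function returns indices into the training set:

```python
import numpy as np

def random_subset(n, m, rng):
    """Random Sampling: uniform choice of m sample indices without replacement."""
    return rng.choice(n, size=m, replace=False)

def top_loss_subset(losses, m):
    """Top-M Sampling: indices of the m samples with the highest loss
    under the previously learned topology."""
    return np.argsort(losses)[-m:]

def diverse_top_loss_subset(reps, losses, m, n_clusters, rng, n_iter=10):
    """Diverse Top-M Sampling: k-means on the current network's
    representations, then the highest-loss samples from every cluster.
    Assumes m is divisible by n_clusters."""
    # plain k-means (Lloyd's algorithm) on the representations
    centers = reps[rng.choice(len(reps), size=n_clusters, replace=False)].astype(float)
    for _ in range(n_iter):
        dists = np.linalg.norm(reps[:, None, :] - centers[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        for c in range(n_clusters):
            if np.any(assign == c):
                centers[c] = reps[assign == c].mean(axis=0)
    # pick the top m / n_clusters highest-loss samples per cluster
    per_cluster = m // n_clusters
    picked = []
    for c in range(n_clusters):
        idx = np.flatnonzero(assign == c)
        order = np.argsort(losses[idx])
        picked.extend(idx[order[-per_cluster:]].tolist())
    return np.asarray(picked)
```

In a PNNL step, `losses` and `reps` would be obtained by a forward pass of $f_{k-1}$ over the training set.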

3.2 Online Hyperparameter Selection

Models Caltech256 MIT CelebA
Subset Percentage 10%
PMLP-Random
PMLP-Top-Loss
PMLP-C-Top-Loss
Subset Percentage 20%
PMLP-Random
PMLP-Top-Loss
PMLP-C-Top-Loss
Subset Percentage 30%
PMLP-Random
PMLP-Top-Loss
PMLP-C-Top-Loss
Full Set
PMLP
StackedELM [25]
PLN [3]

Table 1: Test accuracy (%). Bold-face results indicate the best performance in each column.

In most existing PNNL algorithms, the value of each hyper-parameter is fixed throughout the network’s topology progression. An algorithm is run for all combinations of hyper-parameter values defined a priori, and the combination of hyper-parameter values leading to the best performance on the validation set is selected for final model deployment.

Since PNNL algorithms gradually increase the complexity of the neural network, it is intuitive that the model might require different degrees of regularization at different stages. Besides, with subset sampling incorporated, we train new blocks of neurons with different subsets of training samples at each step, which might require different hyper-parameter configurations. Thus, instead of performing hyper-parameter selection in an offline fashion, we propose to incorporate the hyper-parameter selection procedure into progressive learning at every incremental step.

Particularly, let $\mathcal{H}$ be the set of all hyper-parameter value combinations, and $|\mathcal{H}|$ be the cardinality of $\mathcal{H}$. At each progression step $k$, after determining $\mathcal{S}_k$, we solve $|\mathcal{H}|$ optimization problems corresponding to the $|\mathcal{H}|$ assignments of hyper-parameter values:

$$\Theta_{k,h}^{*} = \underset{\Theta_k}{\arg\min} \sum_{(\mathbf{x}_i, y_i) \in \mathcal{S}_k} \mathcal{L}\big(f_k(\mathbf{x}_i; \Theta_k, \lambda_h), y_i\big), \quad h = 1, \dots, |\mathcal{H}| \qquad (2)$$

The algorithm then selects the configuration $\lambda_h \in \mathcal{H}$ that achieves the best performance on the validation set for the newly added block of neurons. Online selection not only ensures the best hyper-parameter values for each newly added block of neurons, but also reduces the computational overhead incurred when running individual network progression steps.
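A minimal sketch of this per-step selection is given below; the `fit_fn` and `val_score_fn` callbacks are hypothetical stand-ins for solving Eq. (2) for one configuration and for scoring the newly added block on the validation set:

```python
def select_hyperparams(candidates, fit_fn, val_score_fn):
    """Online hyper-parameter selection for one progression step:
    fit the new block once per candidate configuration and keep the
    configuration scoring best on the validation set."""
    best = None
    for lam in candidates:
        params = fit_fn(lam)           # solve the step's optimization for this config
        score = val_score_fn(params)   # validate the newly added block
        if best is None or score > best[0]:
            best = (score, lam, params)
    return best  # (validation score, chosen config, block parameters)
```

Because selection happens inside each step, no full training run is repeated per configuration, unlike offline grid search.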

4 Experiments

Models Caltech256 MIT CelebA
Subset Percentage 10%
PMLP-Random
PMLP-Top-Loss
PMLP-C-Top-Loss
Subset Percentage 20%
PMLP-Random
PMLP-Top-Loss
PMLP-C-Top-Loss
Subset Percentage 30%
PMLP-Random
PMLP-Top-Loss
PMLP-C-Top-Loss
Table 2: Number of unique samples (in thousands) selected by each strategy during network topology construction.

To evaluate the effectiveness of the proposed subset sampling and online hyper-parameter selection method, we perform experiments on publicly available datasets designed for object recognition (Caltech256 [7]), indoor scene recognition (MIT [13]) and face recognition (CelebA [10]) problems. For the CelebA dataset, we used a subset of images corresponding to identities. In each dataset, , and of the data were used for training, validation and testing, respectively. The inputs to all PNNL algorithms are deep features (global average pooling of the last convolution layer) from a network pre-trained on the ImageNet dataset [17]. We demonstrate subset sampling with subset percentages of 10%, 20% and 30%, and online hyper-parameter selection, on PMLP. As previously mentioned, the adopted PMLP follows the progression rule of HeMLGOP in [22]. Here we should note that subset selection was used only to speed up the network progression process (topology construction); the final topologies were fine-tuned with the full set of training data. We also evaluated other PNNL algorithms, namely StackedELM [25] and PLN [3], which run on the full training set at each step. For detailed information about our experimental protocols and hyper-parameter settings, we refer the readers to our publicly available implementation of this work (https://bit.ly/2MshRza).

Models Caltech256 MIT CelebA
Subset Percentage 10%
PMLP-Random
PMLP-Top-Loss
PMLP-C-Top-Loss
Subset Percentage 20%
PMLP-Random
PMLP-Top-Loss
PMLP-C-Top-Loss
Subset Percentage 30%
PMLP-Random
PMLP-Top-Loss
PMLP-C-Top-Loss
Full Set
PMLP
StackedELM [25]
PLN [3]
Table 3: Average time taken to optimize one block of neurons (in seconds).

Table 1 shows the recognition accuracy on the test set of all models on the three datasets. For compact presentation, we refer to the proposed PMLP variants based on Random Sampling, Top-M Sampling based on misclassification and Top-M Sampling based on diverse misclassification as PMLP-Random, PMLP-Top-Loss and PMLP-C-Top-Loss, respectively. Different from the empirical results obtained in other learning contexts, the best performing subset selection strategy is random sampling at the lowest percentage level (10%). In fact, PMLP-Random at 10% performs better than all other algorithms, including the original PMLP. This can be attributed to the effects of both random subset sampling and online hyper-parameter selection. Random sampling with a small percentage leads to the general effect that different blocks of neurons are optimized with respect to diverse subsets of data. The final network after optimization can be loosely seen as an ensemble of smaller networks. On the other hand, when a subset of data persists in being misclassified throughout the network’s topology progression process, the corresponding sampling strategies will bias the algorithm to select only these samples and reduce the diversity of information presented to the network.

Table 2 shows the total number of unique samples selected by each algorithm following different sampling strategies. This table reflects the degree of diversity in the inputs observed by different networks trained with different sampling strategies. It is clear that the numbers are much higher for random sampling. While the original PMLP presents a greater amount of information to the network during progression, every block of neurons in PMLP observes the same set of data, which might lead to over-fitting.

Table 3 shows the average time taken to optimize one block of neurons in each algorithm while Table 4 shows the total time taken to perform experiments for a particular setting. Every experiment run was performed on the same node configuration (4 CPU cores, 16 GB of RAM). It is clear that using subset selection, the average time taken at each step of PMLP is greatly reduced (Table 3). Combining subset selection and online hyper-parameter selection, the total experiment time is significantly lower (Table 4).

Models Caltech256 MIT CelebA
Subset Percentage 10%
PMLP-Random
PMLP-Top-Loss
PMLP-C-Top-Loss
Subset Percentage 20%
PMLP-Random
PMLP-Top-Loss
PMLP-C-Top-Loss
Subset Percentage 30%
PMLP-Random
PMLP-Top-Loss
PMLP-C-Top-Loss
Full Set
PMLP
StackedELM [25]
PLN [3]
Table 4: Total time taken to perform all experiments (in hours).

5 Conclusion

In this work, we proposed subset sampling and online hyper-parameter selection to speed up and enhance PNNL algorithms. Empirical results demonstrated with PMLP show that the proposed approach can not only accelerate the optimization procedure in PMLP but also improve the generalization performance of the resulting networks.

6 Acknowledgements

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871449 (OpenDR). This publication reflects the authors’ views only. The European Commission is not responsible for any use that may be made of the information it contains.

References

  • [1] S. Banerjee and S. Chakraborty (2019) DeepSub: a novel subset selection framework for training deep learning architectures. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 1615–1619. Cited by: §1, §2.2, 1st item.
  • [2] V. Birodkar, H. Mobahi, and S. Bengio (2019) Semantic redundancies in image-classification datasets: the 10% you don’t need. arXiv preprint arXiv:1901.11409. Cited by: §1, §2.2.
  • [3] S. Chatterjee, A. M. Javid, M. Sadeghi, P. P. Mitra, and M. Skoglund (2017) Progressive learning for systematic design of large neural networks. arXiv preprint arXiv:1710.08177. Cited by: §1, §2.1, §2.1, Table 1, Table 3, Table 4, §4.
  • [4] C. P. Chen and Z. Liu (2017) Broad learning system: an effective and efficient incremental learning system without the need for deep architecture. IEEE transactions on neural networks and learning systems 29 (1), pp. 10–24. Cited by: §1, §2.1, §2.1.
  • [5] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby (1997) Selective sampling using the query by committee algorithm. Machine learning 28 (2-3), pp. 133–168. Cited by: 1st item.
  • [6] R. Gilad-Bachrach, A. Navot, and N. Tishby (2006) Query by committee made real. In Advances in neural information processing systems, pp. 443–450. Cited by: 1st item.
  • [7] G. Griffin, A. Holub, and P. Perona (2007) Caltech-256 object category dataset. Cited by: §4.
  • [8] G. Huang and L. Chen (2007) Convex incremental extreme learning machine. Neurocomputing 70 (16-18), pp. 3056–3062. Cited by: §1, §2.1, §2.1.
  • [9] S. Kiranyaz, T. Ince, A. Iosifidis, and M. Gabbouj (2017) Progressive operational perceptrons. Neurocomputing 224, pp. 142–154. Cited by: §1, §2.1.
  • [10] Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738. Cited by: §4.
  • [11] D. J. MacKay (1992) Information-based objective functions for active data selection. Neural computation 4 (4), pp. 590–604. Cited by: §2.2.
  • [12] A. K. McCallum and K. Nigam (1998) Employing EM and pool-based active learning for text classification. In Proc. International Conference on Machine Learning (ICML), pp. 359–367. Cited by: §2.2.
  • [13] A. Quattoni and A. Torralba (2009) Recognizing indoor scenes. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 413–420. Cited by: §4.
  • [14] O. Sener and S. Savarese (2017) Active learning for convolutional neural networks: a core-set approach. arXiv preprint arXiv:1708.00489. Cited by: §1.
  • [15] B. Settles (2009) Active learning literature survey. Technical report University of Wisconsin-Madison Department of Computer Sciences. Cited by: §1, §2.2, 1st item.
  • [16] H. Shokri-Ghadikolaei, H. Ghauch, C. Fischione, and M. Skoglund (2019) Learning and data selection in big datasets. In International Conference on Machine Learning (ICML), Cited by: §1, §2.2.
  • [17] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §4.
  • [18] S. Tong and D. Koller (2001) Support vector machine active learning with applications to text classification. Journal of machine learning research 2 (Nov), pp. 45–66. Cited by: §2.2.
  • [19] D. T. Tran and A. Iosifidis (2019) Learning to rank: a progressive neural network learning approach. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8355–8359. Cited by: §1, §2.1.
  • [20] D. T. Tran, J. Kanniainen, M. Gabbouj, and A. Iosifidis (2019) Data-driven neural architecture learning for financial time-series forecasting. arXiv preprint arXiv:1903.06751. Cited by: §2.1.
  • [21] D. T. Tran, S. Kiranyaz, M. Gabbouj, and A. Iosifidis (2018) Progressive operational perceptron with memory. arXiv preprint arXiv:1808.06377. Cited by: §1, §2.1.
  • [22] D. T. Tran, S. Kiranyaz, M. Gabbouj, and A. Iosifidis (2019) Heterogeneous multilayer generalized operational perceptron. IEEE transactions on neural networks and learning systems. Cited by: §1, §2.1, §2.1, §4.
  • [23] D. T. Tran, S. Kiranyaz, M. Gabbouj, and A. Iosifidis (2019) Knowledge transfer for face verification using heterogeneous generalized operational perceptrons. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 1168–1172. Cited by: §2.1.
  • [24] K. Vodrahalli, K. Li, and J. Malik (2018) Are all training examples created equal? an empirical study. arXiv preprint arXiv:1811.12569. Cited by: §2.2.
  • [25] H. Zhou, G. Huang, Z. Lin, H. Wang, and Y. C. Soh (2014) Stacked extreme learning machines. IEEE transactions on cybernetics 45 (9), pp. 2013–2025. Cited by: §1, §2.1, §2.1, Table 1, Table 3, Table 4, §4.