Progressive Class-based Expansion Learning For Image Classification

06/28/2021 · by Hui Wang, et al.

In this paper, we propose a novel learning scheme for image classification, called class-based expansion learning, which aims at improving the supervision-stimulation frequency for samples of the confusing classes. Class-based expansion learning takes a bottom-up growing strategy in a class-based expansion optimization fashion, paying more attention to the quality of learning the fine-grained classification boundaries for the preferentially selected classes. Besides, we develop a class confusion criterion to preferentially select confusing classes for training. In this way, the classification boundaries of the confusing classes are frequently stimulated, resulting in a fine-grained form. Experimental results demonstrate the effectiveness of the proposed scheme on several benchmarks.


I Introduction

Convolutional neural networks (CNNs) [13, 18, 22, 7, 9] have attracted considerable attention in image classification due to their effectiveness in representation learning [2, 27]. Since training them involves computationally expensive and memory-consuming operations, CNN training typically resorts to stochastic gradient descent (SGD) [16, 17] for iterative batch-level learning, which traverses the entire training dataset across randomly generated batches throughout successive epochs. With this epoch-by-epoch learning procedure, the classification boundaries of the CNN model are dynamically updated until convergence. Due to the memory limit, the samples within a small batch are usually distributed extremely diversely and sparsely, resulting in a low supervision-stimulation frequency for each sample. This low frequency in turn causes the learning process to focus on the quality of the coarse-grained classification boundaries while ignoring fine-grained details. Therefore, seeking an effective and stable image classification strategy remains a key open issue in CNN learning.

Fig. 1: Examples of class-based expansion learning and normal training (black curves in each figure denote the classification boundaries). Normal training pays more attention to the learning quality of the coarse-grained classification boundaries while ignoring fine-grained details. Our class-based expansion learning pays more attention to the fine-grained details of the classification boundaries for confusing classes.

To date, curriculum learning [3, 21, 1, 5, 29, 24] and self-paced learning [14, 10, 11, 15, 6, 26, 20] have been proposed to improve the convergence speed of the training process and the quality of the local minima obtained. The key concept of these methods is inspired by human behavior: humans always learn new things from "easy" to "complex". However, these methods still do not consider the fine-grained classification boundaries.

In this letter, we propose a new learning pattern inspired by biological learning mechanisms. Hebbian theory [8] delivers an important insight: the increase in synaptic efficacy arises from repeated and sustained stimulation. Meanwhile, the human learning pattern usually follows a progressive knowledge-expansion pipeline, which dynamically learns new knowledge while keeping old knowledge frequently reviewed; knowledge that is frequently reviewed is often better learned. Inspired by this mechanism, we propose a progressive learning pipeline that aims at effectively enhancing the supervision-stimulation frequency for each sample so as to improve the quality of the fine-grained classification boundaries, as shown in Fig. 1. Specifically, we present a progressive piecewise class-based expansion learning scheme, which first learns fine-grained classification boundaries for a small portion of classes and subsequently expands the classification boundaries as new classes are added. The presented scheme therefore takes a bottom-up growing strategy in a class-based expansion optimization fashion, which puts more emphasis on the quality of the fine-grained classification boundaries for dynamically growing local classes. Besides, we propose a class confusion criterion to sort the classes involved in the class-based expansion learning process. Classes whose samples have, on average, large intra-class and small inter-class distances (i.e., confusing classes) are preferentially involved. Once a particular class is selected, all the samples belonging to this class are added to the training sample pool for CNN model learning. This expansion procedure is repeated until the samples of all classes have participated in the training process.
In this way, the classification network model is dynamically refined on the updated training sample pool, and the classification boundaries of the preferentially selected classes are frequently stimulated, resulting in a fine-grained form.

The main contributions of this work are summarized as follows: i) Motivated by Hebbian theory, we investigate the influence of "stimulation frequency" on neural network learning and observe that the poor performance on the confusing classes is partially a result of low stimulation frequency. ii) We propose a novel class-based expansion learning pipeline to deal with this learning problem. The pipeline progressively trains the CNN model in a hard-to-easy class-based growing manner, so that the classification boundaries of the preferentially selected confusing classes are frequently stimulated. iii) We develop two class confusion criteria to sort the classes for the class-based expansion learning process. Extensive experiments demonstrate the effectiveness of our work against conventional learning pipelines on several benchmarks.

Fig. 2: Illustration of our class confusion criterion. We calculate the confusion score of each class on an unordered dataset. The confusion score of a class is high when the samples of the class are far away from their own class center and, at the same time, close to other class centers (e.g., the class marked by rectangles in this figure). Finally, we sort the classes of the unordered dataset in descending order of the confusion score and obtain an ordered dataset.

II Method

In this section, we detail the proposed class-based expansion learning scheme. We first formally define the problem in Section II-A, then describe our class confusion criterion in Section II-B and our progressive expansion algorithm in Section II-C.

II-A Problem Definition

Given a $K$-class dataset $\mathcal{D}$, where the $c$-th class $\mathcal{D}_c$ contains $n_c$ samples $x_i^c$ and their corresponding labels $y_i^c$:

$\mathcal{D} = \{\mathcal{D}_1, \dots, \mathcal{D}_K\}, \qquad \mathcal{D}_c = \{(x_i^c, y_i^c)\}_{i=1}^{n_c}.$  (1)

Let $f(\cdot\,;\theta)$ denote the mapping function of the CNN model, where $\theta$ represents the model parameters inside $f$. In a typical training process, the goal is to learn an optimal $\theta^*$:

$\theta^* = \arg\min_{\theta} \sum_{(x, y) \in \mathcal{D}} \mathcal{L}(f(x; \theta), y),$  (2)

where $\mathcal{L}$ is the loss function (e.g., the cross-entropy loss).
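As a concrete illustration of the objective in Eq. 2, the following sketch minimizes a softmax cross-entropy loss with SGD for a toy linear classifier. This is a minimal self-contained example, not the paper's actual CNN training setup; all names here are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(W, X, y):
    """Empirical loss of Eq. 2 for a linear model f(x; W) = x @ W."""
    probs = softmax(X @ W)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12).mean()

def sgd_step(W, X, y, lr=0.1):
    """One SGD update on the cross-entropy objective."""
    probs = softmax(X @ W)
    onehot = np.eye(probs.shape[1])[y]
    grad = X.T @ (probs - onehot) / len(y)
    return W - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))          # toy inputs
y = rng.integers(0, 3, size=64)       # toy labels for 3 classes
W = np.zeros((8, 3))
before = cross_entropy(W, X, y)       # = ln(3) at the uniform initialization
for _ in range(20):
    W = sgd_step(W, X, y)
after = cross_entropy(W, X, y)        # training loss decreases
```

In the actual scheme, $f$ is a deep CNN and the sum in Eq. 2 is approximated batch-by-batch, but the objective being minimized has the same form.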

II-B Class Confusion Criterion

In this section, we introduce our metric for deciding the order in which classes are presented to the class-based expansion framework. Ideally, we want the score of a class to be high when it is easily confused with others.

We use a pre-trained tiny network $g$ to evaluate the confusion score of each class. Note that the training cost of $g$ is much lower than that of $f$. To obtain the score of each class, we start by using $g$ to transform samples from image space into feature space and logits space:

$z_i^c = \phi(x_i^c), \qquad l_i^c = \psi(z_i^c),$  (3)

where $\phi$ and $\psi$ denote the feature extractor and the classifier of the network $g$. Afterwards, we propose two kinds of class confusion criteria:

Distance-based Criterion. To obtain it, we first calculate the class center $\mu_c$ of each class:

$\mu_c = \frac{1}{n_c} \sum_{i=1}^{n_c} z_i^c,$  (4)

where $n_c$ is the number of samples in class $c$. Then, the confusion score $s_c$ of class $c$ is computed as:

$s_c = \frac{1}{n_c} \sum_{i=1}^{n_c} \frac{d(z_i^c, \mu_c)}{\min_{c' \neq c} d(z_i^c, \mu_{c'})},$  (5)

where $d(\cdot, \cdot)$ denotes the squared Euclidean distance.
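The distance-based criterion of Eqs. (4) and (5) can be sketched as follows. The ratio form (own-center distance over nearest foreign-center distance) matches the description in the text, but the exact normalization is an assumption, and the function name is illustrative.

```python
import numpy as np

def distance_confusion_scores(features, labels, num_classes):
    """Per-class confusion score: mean ratio of the squared distance to the
    own-class center over the squared distance to the nearest other-class
    center. The ratio form is assumed; the paper's exact formula may differ."""
    centers = np.stack([features[labels == c].mean(axis=0)
                        for c in range(num_classes)])              # Eq. (4)
    scores = np.zeros(num_classes)
    for c in range(num_classes):
        fc = features[labels == c]
        # squared Euclidean distance of every sample to every class center
        d = ((fc[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        own = d[:, c]                                # distance to own center
        other = np.delete(d, c, axis=1).min(axis=1)  # nearest foreign center
        scores[c] = (own / (other + 1e-12)).mean()   # Eq. (5), assumed form
    return scores
```

A class whose features sit far from their own center and near another class's center receives a high score, as intended.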

Entropy-based Criterion. This criterion is formulated as:

$s_c = \frac{1}{n_c} \sum_{i=1}^{n_c} H\big(\mathrm{softmax}(l_i^c)\big),$  (6)

where $H(p) = -\sum_k p_k \log p_k$ is the entropy and $s_c$ denotes the confusion score of class $c$.
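A sketch of the entropy-based criterion of Eq. (6): averaging the softmax entropy over each class's samples is the assumed per-class aggregation, and the function name is illustrative.

```python
import numpy as np

def entropy_confusion_scores(logits, labels, num_classes):
    """Confusion score of each class as the mean softmax entropy of its
    samples (Eq. 6); the per-class averaging is an assumed detail."""
    z = logits - logits.max(axis=1, keepdims=True)        # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    h = -(p * np.log(p + 1e-12)).sum(axis=1)              # per-sample entropy
    return np.array([h[labels == c].mean() for c in range(num_classes)])
```

Confident (near one-hot) predictions yield low entropy, so classes whose samples receive uncertain predictions score higher.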

We can observe that the two confusion criteria measure the confusion score in different spaces. The score obtained by the distance-based criterion is measured in feature space; it rises as the features of a class move away from that class's center and approach other class centers. The score obtained by the entropy-based criterion is measured in logits space; it rises when the logits of a class's samples move away from a one-hot vector.

Based on the obtained score of each class, we can get an ordered dataset:

$\tilde{\mathcal{D}} = \{\mathcal{D}_{r_1}, \dots, \mathcal{D}_{r_K}\},$  (7)

where $r_j$ is the index of the class with the $j$-th largest confusion score. The sorting process is detailed in Fig. 2.
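The ordering in Eq. (7) is a descending sort of the scores; for three hypothetical classes it reduces to an argsort:

```python
import numpy as np

# Hypothetical confusion scores for three classes; Eq. (7) orders classes
# by descending score so the most confusing class comes first.
scores = np.array([0.12, 0.55, 0.31])
order = np.argsort(-scores)   # class indices r_1, r_2, ..., r_K
```

Here class 1 (the most confusing) is presented first, then class 2, then class 0.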

Fig. 3: Illustration of our class-based expansion learning method. We first learn the network on a training sample pool that contains only a small portion of the classes from the ordered dataset. Based on the previously learned model, we then learn the network on the class-based expanded training sample pool, with some classes newly added from the ordered dataset. We repeat this process until all the classes of the ordered dataset have been added to the training sample pool.

II-C Progressive Expansion Learning

We now describe our proposed progressive expansion learning pattern for CNN models. With the ordered dataset $\tilde{\mathcal{D}}$, we split the optimization of Eq. 2 into $T$ stages (for convenience, we assume $K$ is divisible by $T$). We start with an empty training sample pool $\mathcal{P}_0 = \emptyset$. At the first stage, the first $K/T$ classes from the ordered dataset are added to $\mathcal{P}_0$, so the training sample pool is expanded to $\mathcal{P}_1$:

$\mathcal{P}_1 = \{\mathcal{D}_{r_1}, \dots, \mathcal{D}_{r_{K/T}}\}.$  (8)

The target optimization function of the first stage is:

$\theta_1^* = \arg\min_{\theta} \sum_{(x, y) \in \mathcal{P}_1} \mathcal{L}(f(x; \theta), y),$  (9)

where $\theta$ is randomly initialized and $\theta_1^*$ represents the optimal model parameters learned from $\mathcal{P}_1$. At the $t$-th stage ($1 < t \le T$), the training sample pool is expanded to $\mathcal{P}_t$:

$\mathcal{P}_t = \mathcal{P}_{t-1} \cup \{\mathcal{D}_{r_{(t-1)K/T + 1}}, \dots, \mathcal{D}_{r_{tK/T}}\},$  (10)

where the last $K/T$ classes of $\mathcal{P}_t$ are newly added. To find the optimal model parameters $\theta_t^*$ for $\mathcal{P}_t$, we have:

$\theta_t^* = \arg\min_{\theta} \sum_{(x, y) \in \mathcal{P}_t} \mathcal{L}(f(x; \theta), y),$  (11)

where $\theta$ is initialized by the optimal model parameters $\theta_{t-1}^*$ learned from $\mathcal{P}_{t-1}$.

In the simplest form of class-based expansion learning, the classes of the dataset are progressively added to the training sample pool in equal groups. In this progressive way, we eventually solve the problem in Eq. 2 once the samples of all classes have participated in the training process.
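The stage-wise pool expansion and warm-starting described above can be sketched as follows. `train_stage` stands in for an actual CNN training routine; both function names are illustrative.

```python
def expansion_schedule(ordered_classes, num_stages):
    """Cumulative training pools of Eqs. (8) and (10): stage t holds the
    first t*K/T classes of the ordered dataset (K divisible by T assumed)."""
    k = len(ordered_classes) // num_stages
    return [ordered_classes[:(t + 1) * k] for t in range(num_stages)]

def train_progressively(train_stage, ordered_classes, num_stages, init_params):
    """Eqs. (9) and (11): each stage optimizes over its expanded pool,
    warm-starting from the previous stage's parameters. `train_stage`
    is a placeholder for a real training routine: it takes the current
    parameters and a class pool and returns updated parameters."""
    params = init_params
    for pool in expansion_schedule(ordered_classes, num_stages):
        params = train_stage(params, pool)
    return params
```

For example, with six ordered classes and three stages, the pools contain 2, 4, and 6 classes; the final stage trains on the whole dataset, matching Eq. 2.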

II-D Complexity Analysis

In this section, we consider the time complexity of class-based expansion learning (CEL). Let $C$ be the time cost of a normal training process. For CEL, if we use the same number of epochs for each stage, the time cost at stage $t$ is $\frac{t}{T} C$ (the ratio of the dataset size at stage $t$ to the size of the entire dataset is $t/T$). The total class-based expansion learning time cost is then:

$\sum_{t=1}^{T} \frac{t}{T} C = \frac{T + 1}{2} C,$  (12)

which is linear in the number of stages $T$.

In the experiments, we observe that reducing the number of epochs in the early stages by a factor of $m$ does not sacrifice accuracy. We therefore train the network with the full number of epochs only at the final stage and reduce the epoch number at the other stages. In this way, the time cost becomes:

$C + \sum_{t=1}^{T-1} \frac{t}{mT} C = \Big(1 + \frac{T - 1}{2m}\Big) C.$  (13)

We can thus reduce the time cost of CEL by controlling the value of $m$. With a large $m$, only a little time is consumed at the early stages, making the training time of our strategy comparable to that of normal training.
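The closed forms of Eqs. (12) and (13) can be checked numerically; costs are expressed in units of $C$, and the function names are illustrative.

```python
from fractions import Fraction

def cel_cost(T):
    """Eq. (12): sum of per-stage costs (t/T)*C, in units of C.
    Closed form: (T + 1) / 2."""
    return sum(Fraction(t, T) for t in range(1, T + 1))

def cel_cost_reduced(T, m):
    """Eq. (13): full epochs only at the final stage; the earlier stages'
    epoch counts are cut by a factor of m. Closed form: 1 + (T - 1)/(2m)."""
    return Fraction(1) + sum(Fraction(t, T * m) for t in range(1, T))
```

For instance, with $T = 5$ stages the naive schedule costs $3C$, while cutting early-stage epochs by $m = 2$ brings the cost down to $2C$.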

III Experiments

III-A Experimental Settings

Method           Network     Runs  Test error (%)  Airplane  Automobile  Bird  Cat    Deer  Dog    Frog  Horse  Ship  Truck
Normal Training  ResNet-32   5     7.08            6.00      3.50        9.60  14.40  5.70  11.90  4.70  4.90   5.00  4.80
Normal Training  ResNet-110  5     6.24            4.40      2.40        9.70  12.40  4.70  12.00  3.70  4.50   4.10  4.50
CEL              ResNet-32   5     6.16            4.50      3.00        7.80  12.50  4.30  11.60  4.90  5.10   3.40  4.50
CEL              ResNet-110  5     5.71            4.90      2.60        7.40  11.80  4.30  9.60   2.90  4.60   4.30  4.70
TABLE I: Test errors (%) of Normal Training and CEL for CIFAR10, overall and per class.
Dataset   Normal Training  CBS [19]  DIHCL [28]  Curriculum [25]  CEL    CEL-2
CIFAR10   7.08             7.99      6.93        6.88             6.16   6.84
CIFAR100  30.40            31.49     31.47       30.02            29.82  29.41
TABLE II: Test errors (%) of different methods for CIFAR10 and CIFAR100 based on ResNet-32.
Dataset      Network    Normal Training  CEL
ImageNet100  ResNet-18  29.83            26.86
TABLE III: Test errors (%) of normal training and CEL for ImageNet100.
Ranking     1     2      3      4         5
Class name  Cat   Bird   Dog    Airplane  Deer
Ranking     6     7      8      9         10
Class name  Frog  Horse  Truck  Ship      Automobile
TABLE IV: The class order of the ordered dataset for CIFAR10 based on our class confusion criterion.

Dataset

We conduct our experiments on three datasets: CIFAR10, CIFAR100, and ImageNet100. The CIFAR10 [12] dataset is a labeled subset of the 80 million tiny images dataset [23], which consists of 60,000 RGB images of resolution 32×32 in 10 classes, with 5,000 images per class for training and 1,000 per class for testing. CIFAR100 [12] is similar to CIFAR10, except that it has 100 classes containing 600 images each; there are 500 training images and 100 testing images per class. ImageNet100 is a subset of ImageNet [13] from the ImageNet Large Scale Visual Recognition Challenge 2012. It contains 129,395 training images and 5,000 validation images in the first 100 classes of ImageNet.

Data preprocessing

On CIFAR10 and CIFAR100, we follow the simple data augmentation used in ResNet [7] for training, including random cropping of the 4-pixel-padded image, per-pixel mean subtraction, and horizontal flipping. On ImageNet100, we use 224×224 random cropping and horizontal flipping.

Implementation details

We conduct our class-based expansion learning scheme on CIFAR10, CIFAR100, and ImageNet100 using state-of-the-art CNN models, including ResNet-18, ResNet-32, and ResNet-110. On CIFAR10, as described in Section II-B, we use a pre-trained ResNet-20 (trained for 60 epochs) to determine the order of classes. Then, as described in Section II-C, we divide the learning of the ordered dataset into stages. At the first four stages, we use epochs to train the network, and we train the network with epochs at the last stage, i.e., . At each stage, we train the network using SGD with a mini-batch size of 128, a weight decay of and a momentum of . The initial learning rate is set to and is divided by after and of all epochs. On CIFAR100, we also use a pre-trained ResNet-50 to determine the order of classes. Afterwards, we divide the learning of the ordered dataset into stages. At the first nine stages, we use epochs to train the network, and we train the network with epochs at the final stage. The other parameters are the same as those used on CIFAR10. On ImageNet100, we use a pre-trained ResNet-18 (trained for 30 epochs) to determine the order of classes. We divide the learning of the dataset into stages by the original order. At the first nine stages, we use 60 epochs to train the network, and we train the network with epochs at the final stage. The initial learning rate is set to 2 and is divided by 5 after 20, 30, 40, and 50 epochs. The rest of the settings are the same as those on CIFAR10. We implement our scheme with Theano [4] and use an NVIDIA TITAN 1080 Ti GPU to train the network.

III-B Comparisons with the State of the Art

We compare our class-based expansion learning (CEL) with normal training and other state-of-the-art methods: CBS [19], DIHCL [28], and Curriculum [25]. Normal training denotes the standard training procedure; CEL adopts the distance-based confusion criterion, while CEL-2 adopts the entropy-based one. The results are summarized in Table II: CEL and CEL-2 outperform the other state-of-the-art methods, which demonstrates their effectiveness.

We also employ ResNet-18 to conduct experiments on ImageNet100. The results are summarized in Table III. Similar conclusions to those on the CIFAR datasets can be drawn. These results demonstrate the generalization ability of our approach.

Fig. 4: (a) Training loss, (b) validation loss, and (c) accuracy at the final stage of the CEL method and normal training for CIFAR10 based on ResNet-32.

III-C Ablation Experiments

Analysis of the class order

We present the test error of each class of CIFAR10 in Table I and the class order in Table IV. As shown in Table IV, the cat, bird, and dog classes are all confusing classes according to the distance-based confusion criterion, and the error rates of these classes are the largest in Table I. Table I also shows that our method outperforms the normal training method on both ResNet-32 and ResNet-110. In addition, we observe that the improvement in model performance is mainly due to the preferentially selected classes (i.e., cat, bird, deer, dog, and airplane).

Analysis of the individual components

We carry out an experiment on CIFAR10 to analyze the individual components of the CEL method. In this experiment, we perform class-based expansion learning in a random class order, i.e., without the sorted class order obtained by the pre-trained tiny network; we denote this variant by "w/o sorting". The results in Table V indicate that "w/o sorting" performs better than normal training thanks to the class-based expansion learning process. In addition, "w/ sorting" further improves upon "w/o sorting", showing the effectiveness of the sorted class order obtained by the tiny network.

Dataset   Normal Training  w/ sorting  w/o sorting
CIFAR10   7.08             6.16        6.32
CIFAR100  30.40            29.82       30.16
TABLE V: Test errors (%) of different methods for CIFAR10 and CIFAR100 based on ResNet-32.
                                      Normal Training         CEL
Dataset      Network    Epoch times   Test error    Epoch times   Test error
CIFAR10      ResNet-32  420           6.50          420           6.16
ImageNet100  ResNet-18  330           27.52         330           26.86
TABLE VI: Test errors (%) of different methods at the same number of epoch times for CIFAR10 and ImageNet100. One epoch time is the time of traversing the entire training dataset once.

Convergence performance of final stage

The convergence performance of the final stage is shown in Fig. 4. We observe that our method converges faster than normal training at the beginning and performs better in most cases. These observations indicate that learning local classes in advance can effectively accelerate network convergence.

Impact of longer training

To evaluate the impact of longer training, we conduct experiments on CIFAR10 and ImageNet100 in which we make the time cost of normal training equal to that of CEL: we increase the number of epochs in normal training to match the number used in the CEL method. Table VI shows that our method still outperforms normal training at the same number of epoch times.

IV Conclusion

In this letter, we have presented a novel class-based expansion learning scheme for CNNs, which learns the whole dataset by progressively training the CNN model in a bottom-up class-growing manner. Under this scheme, the classification boundaries of the preferentially selected classes are frequently stimulated, resulting in a fine-grained form. Based on the characteristics of the scheme, we have also proposed a class confusion criterion that prioritizes the classes that are easily confused. Extensive experimental results demonstrate the effectiveness of our work.

References

  • [1] S. Basu and J. Christensen (2013) Teaching classification boundaries to humans. In American Association for Artificial Intelligence.
  • [2] Y. Bengio, A. Courville, and P. Vincent (2013) Representation learning: a review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8), pp. 1798–1828.
  • [3] Y. Bengio, J. Louradour, R. Collobert, and J. Weston (2009) Curriculum learning. In International Conference on Machine Learning, pp. 41–48.
  • [4] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio (2010) Theano: a CPU and GPU math expression compiler. In Scientific Computing with Python Conference.
  • [5] A. Graves, M. G. Bellemare, J. Menick, R. Munos, and K. Kavukcuoglu (2017) Automated curriculum learning for neural networks. In International Conference on Machine Learning, pp. 1311–1320.
  • [6] N. Gu, M. Fan, and D. Meng (2016) Robust semi-supervised classification for noisy labels based on self-paced learning. IEEE Signal Processing Letters 23 (12), pp. 1806–1810.
  • [7] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Computer Vision and Pattern Recognition, pp. 770–778.
  • [8] D. O. Hebb (2005) The Organization of Behavior: A Neuropsychological Theory. Psychology Press.
  • [9] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In Computer Vision and Pattern Recognition, pp. 4700–4708.
  • [10] L. Jiang, D. Meng, S. Yu, Z. Lan, S. Shan, and A. Hauptmann (2014) Self-paced learning with diversity. In Neural Information Processing Systems, pp. 2078–2086.
  • [11] L. Jiang, D. Meng, Q. Zhao, S. Shan, and A. Hauptmann (2015) Self-paced curriculum learning. In American Association for Artificial Intelligence.
  • [12] A. Krizhevsky and G. Hinton (2009) Learning multiple layers of features from tiny images. Technical report.
  • [13] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Neural Information Processing Systems, pp. 1097–1105.
  • [14] M. P. Kumar, B. Packer, and D. Koller (2010) Self-paced learning for latent variable models. In Neural Information Processing Systems, pp. 1189–1197.
  • [15] D. Meng, Q. Zhao, and L. Jiang (2017) A theoretical understanding of self-paced learning. Information Sciences 414, pp. 319–328.
  • [16] H. Robbins and S. Monro (1951) A stochastic approximation method. The Annals of Mathematical Statistics, pp. 400–407.
  • [17] D. E. Rumelhart, G. E. Hinton, R. J. Williams, et al. (1988) Learning representations by back-propagating errors. Cognitive Modeling 5 (3), pp. 1.
  • [18] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • [19] S. Sinha, A. Garg, and H. Larochelle (2020) Curriculum by smoothing. In Neural Information Processing Systems.
  • [20] P. Soviany, R. T. Ionescu, P. Rota, and N. Sebe (2021) Curriculum self-paced learning for cross-domain object detection. Computer Vision and Image Understanding 204, pp. 103166.
  • [21] V. I. Spitkovsky, H. Alshawi, and D. Jurafsky (2010) From baby steps to leapfrog: how less is more in unsupervised dependency parsing. In North American Chapter of the Association for Computational Linguistics, pp. 751–759.
  • [22] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In Computer Vision and Pattern Recognition, pp. 1–9.
  • [23] A. Torralba, R. Fergus, and W. T. Freeman (2008) 80 million tiny images: a large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (11), pp. 1958–1970.
  • [24] X. Wang, Y. Chen, and W. Zhu (2021) A survey on curriculum learning. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  • [25] X. Wu, E. Dyer, and B. Neyshabur (2021) When do curricula work? In International Conference on Learning Representations.
  • [26] T. Yu, C. Guo, L. Wang, S. Xiang, and C. Pan (2018) Self-paced autoencoder. IEEE Signal Processing Letters 25 (7), pp. 1054–1058.
  • [27] D. Zhang, J. Yin, X. Zhu, and C. Zhang (2018) Network representation learning: a survey. IEEE Transactions on Big Data 6 (1), pp. 3–28.
  • [28] T. Zhou, S. Wang, and J. A. Bilmes (2020) Curriculum learning by dynamic instance hardness. In Neural Information Processing Systems.
  • [29] X. Zhu, J. Qian, H. Wang, and P. Liu (2020) Curriculum enhanced supervised attention network for person re-identification. IEEE Signal Processing Letters 27, pp. 1665–1669.