Since CNN training requires computationally expensive and memory-consuming operations, it typically resorts to stochastic gradient descent (SGD) [16, 17] for iterative batch-level learning, which traverses the entire training dataset across randomly generated batches over successive epochs. With this epoch-by-epoch learning procedure, the classification boundaries of the CNN model are dynamically updated until convergence. Due to memory limits, the samples within a small batch are usually distributed very diversely and sparsely, resulting in a low supervision-stimulation frequency for each sample. This low per-sample frequency in turn causes the learning process to focus on the quality of the coarse-grained classification boundaries while ignoring fine-grained details. Therefore, seeking an effective and stable image classification strategy remains a key open issue in CNN learning.
To date, curriculum learning [3, 21, 1, 5, 29, 24] and self-paced learning [14, 10, 11, 15, 6, 26, 20] have been proposed to speed up the convergence of the training process and to improve the quality of the local minima obtained. The key concept behind these methods is inspired by human behavior: humans always learn new things from "easy" to "complex". However, these methods still do not consider the fine-grained classification boundaries.
In this letter, we propose a new learning pattern inspired by the biological learning mechanism. Hebbian theory delivers an important insight: the increase in synaptic efficacy arises from repeated and sustained stimulation. Meanwhile, the human learning pattern usually follows a progressive knowledge-expansion pipeline, which dynamically learns new knowledge while keeping the old knowledge frequently reviewed; knowledge that is frequently reviewed is often better learned. Inspired by this mechanism, we propose a progressive learning pipeline that aims to effectively raise the supervision-stimulation frequency for each sample and thereby enhance the quality of the fine-grained classification boundaries, as shown in Fig. 1. Specifically, we present a progressive piecewise class-based expansion learning scheme, which first learns fine-grained classification boundaries for a small portion of the classes and subsequently expands these boundaries as new classes are added. The scheme thus takes a bottom-up growing strategy in a class-based expansion optimization fashion, putting more emphasis on the quality of the fine-grained classification boundaries of the dynamically growing set of local classes. Besides, we propose a class confusion criterion to sort the classes involved in the expansion process: the classes whose samples have, on average, large intra-class and small inter-class distances (i.e., class-confusing samples) are preferentially involved. Once a particular class is selected, all the samples belonging to this class are added to the training sample pool for CNN model learning. This expansion procedure is repeated until the samples of all classes have participated in the training process.
In this way, the classification network is dynamically refined based on the updated training sample pool, and the classification boundaries of the preferentially selected classes are frequently stimulated, resulting in fine-grained boundaries.
The main contributions of this work are summarized as follows: i) Motivated by Hebbian theory, we investigate the influence of the "stimulation frequency" on neural network learning and observe that the poor performance on confusing classes is partially a result of their low stimulation frequency. ii) We propose a novel class-based expansion learning pipeline to deal with this problem. The pipeline progressively trains the CNN model in a hard-to-easy class-growing manner, so that the classification boundaries of the preferentially selected confusing classes are frequently stimulated. iii) We develop two class confusion criteria to sort the classes for the expansion learning process. Extensive experiments on several benchmarks demonstrate the effectiveness of our work against conventional learning pipelines.
II-A Problem Definition
Given a $K$-class dataset $\mathcal{D} = \{\mathcal{D}^1, \ldots, \mathcal{D}^K\}$, the $k$-th class of the dataset contains samples $\{x_i^k\}_{i=1}^{N_k}$ and their corresponding labels $\{y_i^k\}_{i=1}^{N_k}$:

$\mathcal{D}^k = \{(x_i^k, y_i^k)\}_{i=1}^{N_k}. \quad (1)$

Let $f(\cdot; \theta)$ denote the mapping function of the CNN model, where $\theta$ represents the model parameters inside $f$. In a typical training process, the goal is to learn an optimal $\theta^*$:

$\theta^* = \arg\min_{\theta} \sum_{k=1}^{K} \sum_{i=1}^{N_k} \mathcal{L}(f(x_i^k; \theta), y_i^k), \quad (2)$

where $\mathcal{L}$ is the loss function (e.g., the cross-entropy loss).
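As a concrete illustration of this objective, the widely used cross-entropy loss over softmax outputs can be sketched as follows (a minimal NumPy version; an implementation in a deep learning framework is analogous):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean cross-entropy loss L(f(x; theta), y) over a batch.
    probs = softmax(logits)
    n = logits.shape[0]
    return -np.log(probs[np.arange(n), labels] + 1e-12).mean()

# Confident, correct predictions yield a small loss.
logits = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
labels = np.array([0, 1])
loss = cross_entropy(logits, labels)
```

Minimizing this quantity over $\theta$ via SGD is the standard training process that the expansion scheme below reorganizes.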
II-B Class Confusion Criterion
In this section, we introduce the metric used to decide the order in which classes are presented to the class-based expansion framework. Ideally, we want to assign a high score to a class when it is easily confused with others.
We use a pre-trained tiny network $g$ to evaluate the confusion score of each class. Note that the training cost of $g$ is much lower than that of the target CNN model. To obtain the score of each class, we start by using $g$ to transform samples from the image space into the feature space and the logits space:

$z_i^k = g_{feat}(x_i^k), \quad p_i^k = g_{cls}(z_i^k),$

where $g_{feat}$ and $g_{cls}$ denote the feature extractor and the classifier of the network $g$, respectively. Afterwards, we propose two kinds of class confusion criteria:
Distance-based Criterion. To obtain it, we first calculate the center of each class:

$c^k = \frac{1}{N_k} \sum_{i=1}^{N_k} z_i^k,$

where $N_k$ is the number of samples in class $k$. Then, the confusion score $s_k$ of the $k$-th class can be formulated as:

$s_k = \frac{\frac{1}{N_k} \sum_{i=1}^{N_k} d(z_i^k, c^k)}{\min_{j \neq k} d(c^k, c^j)},$

where $d(\cdot, \cdot)$ denotes the squared Euclidean distance.
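The distance-based criterion can be sketched as follows. The ratio of the mean intra-class distance to the nearest inter-class center distance is one plausible instantiation of the score described above, so treat the exact formula (and the helper name `distance_confusion_scores`) as an assumption for illustration:

```python
import numpy as np

def distance_confusion_scores(features, labels, num_classes):
    """Per-class confusion score: mean squared distance of a class's
    features to its own center, divided by the squared distance to the
    nearest other class center. A larger score means a more confusing class."""
    centers = np.stack([features[labels == k].mean(axis=0)
                        for k in range(num_classes)])
    scores = np.zeros(num_classes)
    for k in range(num_classes):
        feats = features[labels == k]
        intra = ((feats - centers[k]) ** 2).sum(axis=1).mean()
        inter = min(((centers[k] - centers[j]) ** 2).sum()
                    for j in range(num_classes) if j != k)
        scores[k] = intra / (inter + 1e-12)
    return scores

# Class 1 is spread out and close to class 0, so it scores highest.
rng = np.random.default_rng(0)
f0 = rng.normal(loc=0.0, scale=0.1, size=(50, 2))
f1 = rng.normal(loc=0.5, scale=1.0, size=(50, 2))
f2 = rng.normal(loc=5.0, scale=0.1, size=(50, 2))
features = np.concatenate([f0, f1, f2])
labels = np.repeat([0, 1, 2], 50)
scores = distance_confusion_scores(features, labels, 3)
```

Note how the score captures both effects described in the text: a class scores high when its features scatter away from their own center (large numerator) or crowd close to another center (small denominator).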
Entropy-based Criterion. This criterion is formulated as:

$s_k = -\frac{1}{N_k} \sum_{i=1}^{N_k} \sum_{c=1}^{K} p_i^k(c) \log p_i^k(c),$

where $s_k$ denotes the confusion score of the $k$-th class and $p_i^k(c)$ is the predicted probability of class $c$ for sample $x_i^k$.
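A sketch of the entropy-based score, together with the descending sort that produces the ordered class list of the next step (the helper name `entropy_confusion_scores` is hypothetical):

```python
import numpy as np

def entropy_confusion_scores(probs, labels, num_classes):
    """Per-class score: mean entropy of the predicted distributions of the
    class's samples. Near one-hot predictions give a low entropy/score."""
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.array([ent[labels == k].mean() for k in range(num_classes)])

# Class 0's predictions are nearly one-hot, class 1's are uncertain.
probs = np.array([[0.98, 0.01, 0.01],
                  [0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25],
                  [0.34, 0.33, 0.33]])
labels = np.array([0, 0, 1, 1])
scores = entropy_confusion_scores(probs, labels, 2)
order = np.argsort(-scores)  # classes sorted by descending confusion
```

The sort at the end yields the most confusing class first, which is exactly the hard-to-easy presentation order used by the expansion scheme.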
We can observe that the two confusion criteria above measure the confusion score in different spaces. The score obtained by the distance-based criterion is measured in the feature space: it rises as the features of a class move away from their own center and approach other class centers. The score obtained by the entropy-based criterion is measured in the logits space: it rises when the predictions for the samples of a class move away from a one-hot vector.
Based on the obtained per-class scores, we can derive an ordered dataset:

$\tilde{\mathcal{D}} = \{\mathcal{D}^{\pi(1)}, \mathcal{D}^{\pi(2)}, \ldots, \mathcal{D}^{\pi(K)}\},$

where $\pi(t)$ is the index of the class with the $t$-th largest confusion score. The sorting process is detailed in Fig. 2.
II-C Progressive Expansion Learning
We now describe our proposed progressive expansion learning pattern for CNN models. With the ordered dataset $\tilde{\mathcal{D}}$, we split the optimization of Eq. 2 into $T$ stages (for convenience, we assume $K$ is divisible by $T$). We start with an empty training sample pool $\mathcal{P}_0 = \emptyset$. At the first stage, the first $K/T$ classes of the ordered dataset are added, expanding the training sample pool to $\mathcal{P}_1$:

$\mathcal{P}_1 = \mathcal{P}_0 \cup \{\mathcal{D}^{\pi(1)}, \ldots, \mathcal{D}^{\pi(K/T)}\}.$

The target optimization function of the first stage is:

$\theta_1^* = \arg\min_{\theta_1} \sum_{(x, y) \in \mathcal{P}_1} \mathcal{L}(f(x; \theta_1), y),$

where $\theta_1$ is randomly initialized and $\theta_1^*$ represents the optimal model parameters learned from $\mathcal{P}_1$. At the $t$-th stage ($2 \le t \le T$), the training sample pool is expanded to $\mathcal{P}_t$:

$\mathcal{P}_t = \mathcal{P}_{t-1} \cup \{\mathcal{D}^{\pi((t-1)K/T + 1)}, \ldots, \mathcal{D}^{\pi(tK/T)}\},$

where the last $K/T$ classes of $\mathcal{P}_t$ are newly added. To find the optimal model parameters for $\mathcal{P}_t$, we have:

$\theta_t^* = \arg\min_{\theta_t} \sum_{(x, y) \in \mathcal{P}_t} \mathcal{L}(f(x; \theta_t), y),$

where $\theta_t$ is initialized with the optimal model parameters $\theta_{t-1}^*$ learned from $\mathcal{P}_{t-1}$.
In this simplest form of class-based expansion learning, the classes of the dataset are progressively added to the training sample pool. Proceeding in this progressive way, we eventually solve the problem in Eq. 2 once the samples of all classes have participated in the training process.
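The stage-wise procedure can be sketched as follows, with `train` a hypothetical stand-in for one stage of SGD optimization; the warm start `init=params` mirrors the rule that each stage is initialized from the previous stage's optimum:

```python
def expansion_learning(ordered_classes, num_stages, train):
    """Progressively grow the training pool by K/T classes per stage,
    warm-starting each stage from the previous stage's parameters."""
    per_stage = len(ordered_classes) // num_stages
    pool, params = [], None  # empty pool; random init at stage 1
    history = []
    for t in range(num_stages):
        # Add the next K/T classes of the ordered dataset to the pool.
        pool = pool + ordered_classes[t * per_stage:(t + 1) * per_stage]
        # Optimize on the expanded pool, initialized from the last optimum.
        params = train(pool, init=params)
        history.append(list(pool))
    return params, history

# Toy run: "training" just reports the size of the pool it was given.
ordered = ['cat', 'bird', 'dog', 'deer', 'car', 'truck']
params, history = expansion_learning(
    ordered, num_stages=3, train=lambda pool, init: len(pool))
```

Because confusing classes enter the pool first, they are revisited at every subsequent stage, which is precisely the raised stimulation frequency the scheme targets.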
II-D Complexity Analysis
In this section, we consider the time complexity of class-based expansion learning (CEL). Let $C$ be the time cost of a normal training process. For CEL, if we use the same number of epochs for each stage, the time cost at stage $t$ is $\frac{t}{T}C$ (the ratio of the dataset size at stage $t$ to the size of the entire dataset is $\frac{t}{T}$). The total class-based expansion learning time cost is then:

$\sum_{t=1}^{T} \frac{t}{T} C = \frac{T+1}{2} C,$

which is a linear-time algorithm.
In the experiments, we observe that reducing the number of epochs in the early stages by a factor of $\gamma$ does not sacrifice accuracy. We therefore train the network with the full number of epochs only at the final stage and reduce the epoch number in all other stages. In this way, the time cost becomes:

$C + \frac{1}{\gamma} \sum_{t=1}^{T-1} \frac{t}{T} C = \left(1 + \frac{T-1}{2\gamma}\right) C.$

We can thus reduce the time cost of CEL by controlling the value of $\gamma$. With a large $\gamma$, only a little time is consumed at the early stages, making the training time of our strategy comparable to that of normal training.
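Under the assumption that training on a $\frac{t}{T}$ fraction of the dataset at stage $t$ costs $\frac{t}{T}C$, the totals with and without the epoch-reduction factor $\gamma$ can be checked numerically:

```python
from fractions import Fraction

def cel_cost(T, gamma=1):
    """Total CEL cost in units of C: the final stage runs at full cost C,
    while earlier stages cost t/T each, reduced by a factor gamma."""
    early = sum(Fraction(t, T) for t in range(1, T)) / gamma
    return early + 1  # final stage costs C

T = 5
full = cel_cost(T)            # equals (T + 1) / 2 when gamma = 1
reduced = cel_cost(T, gamma=10)
```

With $T = 5$, the unreduced cost is $3C$, while $\gamma = 10$ brings it down to $1.2C$, close to the $1C$ of normal training.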
III-A Experimental Settings
[Table I: per-class test error (%) for each method, network, and number of runs]
[Table II: test error (%) of Normal Training, CBS, DIHCL, Curriculum, CEL, and CEL-2 per dataset]
We conduct our experiments on three datasets: CIFAR10, CIFAR100, and ImageNet100. The CIFAR10 dataset is a labeled subset of the 80 million tiny images dataset, which consists of 60,000 RGB images of resolution 32×32 in 10 classes, with 5,000 images per class for training and 1,000 per class for testing. The CIFAR100 dataset is similar to CIFAR10, except that it has 100 classes containing 600 images each; there are 500 training images and 100 testing images for each class. The ImageNet100 dataset is a subset of the ImageNet dataset used for the ImageNet Large Scale Visual Recognition Challenge 2012. It contains 129,395 training images and 5,000 validation images from the first 100 classes of ImageNet.
On CIFAR10 and CIFAR100, we follow the simple data augmentation used in ResNet for training, including random cropping of the 4-pixel-padded images, per-pixel mean subtraction, and horizontal flipping. On ImageNet100, the augmentation strategies we use are 224×224 random cropping and horizontal flipping.
We conduct our class-based expansion learning scheme on CIFAR10, CIFAR100, and ImageNet100 using state-of-the-art CNN models, including ResNet-18, ResNet-32, and ResNet-110. On CIFAR10, as described in Section II-B, we use a ResNet-20 pre-trained on ImageNet (trained for 60 epochs) to determine the order of classes. Then, as described in Section II-C, we divide the learning of the ordered dataset into stages. At the first four stages, we use a reduced number of epochs to train the network, and we train the network with the full number of epochs at the last stage. At each stage, we train the network using SGD with a mini-batch size of 128, weight decay, and momentum; the initial learning rate is divided at fixed fractions of the total epochs. On CIFAR100, we use a pre-trained ResNet-50 to determine the order of classes. Afterwards, we divide the learning of the ordered dataset into stages; at the first nine stages, we use a reduced number of epochs to train the network, and we train the network with the full number of epochs at the final stage. The other parameters are the same as those used on CIFAR10. On ImageNet100, we use a pre-trained ResNet-18 (trained for 30 epochs) to determine the order of classes. We divide the learning of the dataset into stages in the original class order. At the first nine stages, we use 60 epochs to train the network, and we train the network with the full number of epochs at the final stage. The initial learning rate is set to 2 and is divided by 5 after 20, 30, 40, and 50 epochs. The rest of the settings are the same as those on CIFAR10. We implement our scheme in Theano and use an NVIDIA TITAN 1080 Ti GPU to train the networks.
III-B Comparisons with the State of the Art
We compare our class-based expansion learning (CEL) with the state-of-the-art methods Normal Training, CBS, DIHCL, and Curriculum. Normal Training denotes the standard training method, CEL adopts the distance-based class confusion criterion, and CEL-2 adopts the entropy-based class confusion criterion. The results are summarized in Table II. As shown there, CEL and CEL-2 outperform the other state-of-the-art methods, which demonstrates their effectiveness.
We also employ ResNet-18 to conduct experiments on ImageNet100. The results are summarized in Table III. Conclusions similar to those on the CIFAR10 dataset can be drawn. These results demonstrate the generalization ability of our approach.
III-C Ablation Experiments
Analysis of the class order
We present the test error of each class of CIFAR10 in Table I and the class order in Table IV. As shown in Table IV, the cat, bird, and dog classes are all confusing classes according to the distance-based confusion criterion, and the error rates of these classes are the largest in Table I. Table I also shows that our method outperforms the normal training method on ResNet-32 and ResNet-110. In addition, we observe that the improvement in model performance is mainly due to the preferentially selected classes (i.e., cat, bird, deer, dog, and airplane).
Analysis of the individual components
We carry out an experiment on CIFAR10 to analyze the individual components of the CEL method. In this experiment, we remove the sorted class order obtained by the tiny network and perform class-based expansion learning in a random class order, which is denoted by "w/o sorting". The results in Table V indicate that "w/o sorting" already performs better than normal training thanks to the class-based expansion learning process. In addition, "w/ sorting" further improves over "w/o sorting", showing the effectiveness of the sorted class order obtained by the tiny network.
Convergence performance of final stage
The convergence performance of the final stage is shown in Fig. 4. From Fig. 4, we observe that our method converges faster than normal training at the beginning and performs better in most cases. These observations suggest that learning local classes in advance can effectively accelerate network convergence.
Impact of longer training time
To evaluate the impact of longer training, we conduct experiments on CIFAR10 and ImageNet100 in which we make the time cost of normal training the same as that of CEL. In these experiments, we increase the number of epochs in normal training to match the number used by CEL. Table VI gives the results, which show that our method outperforms the normal training method at the same number of epochs.
In this letter, we have presented a novel class-based expansion learning scheme for CNNs, which learns the whole dataset by progressively training the CNN model in a bottom-up class-growing manner. Under this scheme, the classification boundaries of the preferentially selected classes are frequently stimulated, resulting in fine-grained boundaries. Based on the characteristics of the scheme, we have also proposed class confusion criteria that prioritize the classes that are easily confused. Extensive experimental results demonstrate the effectiveness of our work.
-  Teaching classification boundaries to humans. In American Association for Artificial Intelligence, Cited by: §I.
-  (2013) Representation learning: a review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8), pp. 1798–1828. Cited by: §I.
International Conference on Machine Learning, pp. 41–48. Cited by: §I.
-  (2010) Theano: a cpu and gpu math expression compiler. In Scientific Computing with Python conference, Cited by: §III-A.
-  (2017) Automated curriculum learning for neural networks. In International Conference on Machine Learning, pp. 1311–1320. Cited by: §I.
-  (2016) Robust semi-supervised classification for noisy labels based on self-paced learning. IEEE Signal Processing Letters 23 (12), pp. 1806–1810. Cited by: §I.
-  (2016) Deep residual learning for image recognition. In Computer Vision and Pattern Recognition, pp. 770–778. Cited by: §I, §III-A.
-  (2005) The organization of behavior: a neuropsychological theory. Psychology Press. Cited by: §I.
-  (2017) Densely connected convolutional networks. In Computer Vision and Pattern Recognition, pp. 4700–4708. Cited by: §I.
-  (2014) Self-paced learning with diversity. In Neural Information Processing Systems, pp. 2078–2086. Cited by: §I.
-  (2015) Self-paced curriculum learning. In American Association for Artificial Intelligence, Cited by: §I.
-  (2009) Learning multiple layers of features from tiny images. Technical report Citeseer. Cited by: §III-A.
-  (2012) Imagenet classification with deep convolutional neural networks. In Neural Information Processing Systems, pp. 1097–1105. Cited by: §I, §III-A.
-  (2010) Self-paced learning for latent variable models. In Neural Information Processing Systems, pp. 1189–1197. Cited by: §I.
-  (2017) A theoretical understanding of self-paced learning. Information Sciences 414, pp. 319–328. Cited by: §I.
-  (1951) A stochastic approximation method. The Annals of Mathematical Statistics, pp. 400–407. Cited by: §I.
-  (1988) Learning representations by back-propagating errors. Cognitive Modeling 5 (3), pp. 1. Cited by: §I.
-  (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §I.
-  (2020) Curriculum by smoothing. In Neural Information Processing Systems, Cited by: §III-B, TABLE II.
-  (2021) Curriculum self-paced learning for cross-domain object detection. Computer Vision and Image Understanding 204, pp. 103166. Cited by: §I.
-  (2010) From baby steps to leapfrog: how less is more in unsupervised dependency parsing. In North American Chapter of the Association for Computational Linguistics, pp. 751–759. Cited by: §I.
-  (2015) Going deeper with convolutions. In Computer Vision and Pattern Recognition, pp. 1–9. Cited by: §I.
-  80 million tiny images: a large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (11), pp. 1958–1970. Cited by: §III-A.
-  (2021) A survey on curriculum learning. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §I.
-  (2021) When do curricula work?. In International Conference on Learning Representations, Cited by: §III-B, TABLE II.
-  Self-paced autoencoder. IEEE Signal Processing Letters 25 (7), pp. 1054–1058. Cited by: §I.
-  (2018) Network representation learning: a survey. IEEE Transactions on Big Data 6 (1), pp. 3–28. Cited by: §I.
-  (2020) Curriculum learning by dynamic instance hardness. In Neural Information Processing Systems, Cited by: §III-B, TABLE II.
-  (2020) Curriculum enhanced supervised attention network for person re-identification. IEEE Signal Processing Letters 27, pp. 1665–1669. Cited by: §I.