
Regularizing Class-wise Predictions via Self-knowledge Distillation

Deep neural networks with millions of parameters may suffer from poor generalization due to overfitting. To mitigate the issue, we propose a new regularization method that penalizes the predictive distribution between similar samples. In particular, we distill the predictive distribution between different samples of the same label during training. This results in regularizing the dark knowledge (i.e., the knowledge on wrong predictions) of a single network (i.e., a self-knowledge distillation) by forcing it to produce more meaningful and consistent predictions in a class-wise manner. Consequently, it mitigates overconfident predictions and reduces intra-class variations. Our experimental results on various image classification tasks demonstrate that the simple yet powerful method can significantly improve not only the generalization ability but also the calibration performance of modern convolutional neural networks.


Code Repositories

cs-kd — Regularizing Class-wise Predictions via Self-knowledge Distillation (CVPR 2020).

CSKD-TF — TF2.x implementation of CS-KD (Regularizing Class-wise Predictions via Self-knowledge Distillation, CVPR 2020).

1 Introduction

Deep neural networks (DNNs) have achieved state-of-the-art performance on many computer vision tasks, e.g., image classification [19], generation [4], and segmentation [18]. As the scale of the training dataset increases, the size of DNNs (i.e., the number of parameters) also scales up to handle such a large dataset efficiently. However, networks with millions of parameters may incur overfitting and suffer from poor generalization [36, 55]. To address the issue, many regularization strategies have been investigated in the literature: early stopping [3], ℓ1/ℓ2-regularization [35], dropout [42], batch normalization [40], and data augmentation [8].

(a) Overview of our regularization scheme
(b) Top-5 softmax scores on misclassified samples
Figure 1: (a) Illustration of class-wise self-knowledge distillation (CS-KD). (b) Predictive distributions on misclassified samples. We use PreAct ResNet-18 trained on the CIFAR-100 dataset. For misclassified samples, the softmax scores of the ground-truth class increase when DNNs are trained with class-wise regularization.

Regularizing the predictive distribution of DNNs can be effective because it contains the most succinct knowledge of the model. Along this line, several strategies such as label smoothing [32, 43], entropy maximization [13, 36], and angular-margin based methods [5, 58] have been proposed in the literature. They have also been influential in solving related problems such as network calibration [16], novelty detection [27], and exploration in reinforcement learning [17]. In this paper, we focus on developing a new output regularizer for deep models utilizing the concept of dark knowledge [22], i.e., the knowledge on wrong predictions made by DNNs. Its importance was first evidenced by the so-called knowledge distillation (KD) [22] and has been investigated in many follow-up works [1, 39, 41, 54].

(a) Log-probabilities of predicted labels on misclassified samples
(b) Log-probabilities of ground-truth labels on misclassified samples
Figure 2: Histograms of log-probabilities of (a) the predicted label, i.e., the top-1 softmax score, and (b) the ground-truth label on samples misclassified by networks trained with the cross-entropy loss (baseline) and CS-KD. The networks are PreAct ResNet-18 trained on CIFAR-100.

While the related works [15, 21] use knowledge distillation to transfer the dark knowledge learned by a teacher network to a student network, we regularize the dark knowledge itself while training a single network, i.e., self-knowledge distillation [53, 57]. Specifically, we propose a new regularization technique, coined class-wise self-knowledge distillation (CS-KD), that matches or distills the predictive distribution of DNNs between different samples of the same label, as shown in Figure 1(a). One can expect that the proposed regularization method forces DNNs to produce similar wrong predictions if samples are of the same class, while the conventional cross-entropy loss does not consider such consistency of the predictive distributions. Furthermore, it achieves two desirable goals simultaneously: preventing overconfident predictions and reducing intra-class variations. We remark that these goals have been investigated in the literature via different methods, i.e., entropy regularization [13, 32, 36, 43] and margin-based methods [5, 58], respectively, while we achieve both using a single principle.

We demonstrate the effectiveness of our simple yet powerful regularization method using deep convolutional neural networks, such as ResNet [19] and DenseNet [23], trained for image classification tasks on various datasets including CIFAR-100 [26], TinyImageNet (https://tiny-imagenet.herokuapp.com/), CUB-200-2011 [46], Stanford Dogs [25], MIT67 [38], and ImageNet [10]. In our experiments, the top-1 error rates of our method are consistently lower than those of prior output regularization methods such as angular-margin based methods [5, 58] and entropy regularization [13, 32, 36, 43]. In particular, the gains tend to be larger overall for the top-5 error rates and the expected calibration errors [16], which confirms that our method indeed makes predictive distributions more meaningful. We also found that the top-1 error rates of our method are lower overall than those of the recent self-distillation methods [53, 57]. Moreover, we investigate variants of our method that combine it with other types of regularization for boosting performance, such as Mixup regularization [56] and the original KD method [22]. For example, we improve the top-1 error rate of Mixup from 37.09% to 30.71%, and that of KD from 39.32% to 34.47%, on the CUB-200-2011 dataset under ResNet-18 and ResNet-10, respectively.

We remark that the idea of using a consistency regularizer like ours has been investigated in the literature [2, 7, 24, 31, 37, 44, 53]. While most prior methods regularize the output distributions of original and perturbed inputs to be similar, our method forces consistency between different samples having the same class. To the best of our knowledge, no prior work studies such a class-wise regularization. We believe the proposed method may enjoy broader usage in other applications, e.g., face recognition [11, 58] and image retrieval [45].

Initialize parameters θ.
while θ has not converged do
     Sample a batch (x, y) from the training dataset.
     Sample another batch (x′, y), with the same labels, from the training dataset.
     Update parameters θ by computing the gradients of the proposed loss function in (1).
end while
Algorithm 1 Class-wise self-knowledge distillation
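The class-matched sampling step in Algorithm 1 can be sketched as follows (a minimal, hypothetical Python sketch over sample indices; function names are our own, not from the released code):

```python
import random
from collections import defaultdict

def make_label_index(labels):
    """Group sample indices by their class label."""
    index = defaultdict(list)
    for i, y in enumerate(labels):
        index[y].append(i)
    return index

def sample_pair_batch(labels, batch_size, rng=None):
    """For each anchor index, draw a partner index from the same class,
    mimicking the two batches with matching labels in Algorithm 1."""
    rng = rng or random.Random(0)
    index = make_label_index(labels)
    anchors = rng.sample(range(len(labels)), batch_size)
    partners = [rng.choice(index[labels[i]]) for i in anchors]
    return anchors, partners
```

In practice the partner would be drawn excluding the anchor itself when the class has more than one sample; the sketch above omits that detail for brevity.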

2 Class-wise self-knowledge distillation

In this section, we introduce a new regularization technique named class-wise self-knowledge distillation (CS-KD). Throughout this paper, we focus on fully-supervised classification tasks and denote x as an input and y as its ground-truth label. Suppose that a softmax classifier is used to model a posterior predictive distribution, i.e., given the input x, the predictive distribution is:

P(y | x; θ, T) = exp(f_y(x; θ) / T) / Σ_i exp(f_i(x; θ) / T),

where f_i denotes the logit of the DNN for class i, θ denotes the network parameters, and T > 0 is the temperature scaling parameter.
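The temperature-scaled softmax above can be illustrated concretely (a minimal NumPy sketch, not the authors' code):

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """P(y|x; theta, T): softmax over logits divided by temperature T.
    A higher T flattens the distribution, exposing the relative scores
    of non-target classes (the 'dark knowledge')."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

For example, raising T from 1 to 4 shrinks the gap between the top class and the rest while keeping the ranking intact.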

2.1 Class-wise regularization

We consider matching the predictive distributions on samples of the same class, which distills their dark knowledge from the model itself. To this end, we propose a class-wise regularization loss that enforces consistent predictive distributions within the same class. Formally, given an input x and another randomly sampled input x′ having the same label y, it is defined as follows:

L_cls(x, x′; θ, T) = KL( P(y | x′; θ̃, T) ‖ P(y | x; θ, T) ),

where KL denotes the Kullback–Leibler (KL) divergence, and θ̃ is a fixed copy of the parameters θ. As suggested by Miyato et al. [31], the gradient is not propagated through θ̃ to avoid the model collapse issue. Similar to the original knowledge distillation method (KD; [22]), the proposed loss matches two predictions. While the original KD matches predictions of a single sample from two networks, we match predictions of different samples from a single network, i.e., self-knowledge distillation. Namely, the total training loss is defined as follows:

L_tot(x, x′, y; θ, T) = L_CE(x, y; θ) + λ_cls · T² · L_cls(x, x′; θ, T),    (1)

where L_CE is the standard cross-entropy loss, and λ_cls is a loss weight for the class-wise regularization. Note that we multiply by the square of the temperature, T², following the original KD [22]. The full training procedure with the proposed loss is summarized in Algorithm 1.
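The total loss above can be sketched numerically (a NumPy illustration with plain arrays standing in for network outputs; in a real framework the target branch would carry a stop-gradient, which plain arrays trivially satisfy):

```python
import numpy as np

def _softmax(z, T):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cs_kd_loss(logits, logits_pair, label, lam=1.0, T=4.0):
    """Cross-entropy on the first sample plus
    lam * T^2 * KL(P(.|x'; theta_tilde, T) || P(.|x; theta, T)).
    `logits_pair` plays the role of the fixed-copy (no-gradient) branch."""
    p = _softmax(logits, T)        # student distribution at temperature T
    q = _softmax(logits_pair, T)   # target distribution (treated as constant)
    ce = -np.log(_softmax(logits, 1.0)[label])
    kl = float(np.sum(q * (np.log(q) - np.log(p))))
    return ce + lam * (T ** 2) * kl
```

When the two samples produce identical logits the KL term vanishes and the loss reduces to the plain cross-entropy, which matches the intuition that the regularizer only penalizes inconsistent predictions within a class.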

2.2 Effects of class-wise regularization

The proposed CS-KD is arguably the simplest way to achieve two goals, preventing overconfident predictions and reducing the intra-class variations, via a single mechanism. To avoid overconfident predictions, it utilizes the model-prediction of other samples as the soft-label. It is more ‘realistic’ than the label-smoothing method [32, 43], which generates ‘artificial’ soft-labels. Besides, ours directly minimizes the distance between two logits within the same class, and it would reduce intra-class variations.

We also examined whether the proposed method forces DNNs to produce meaningful predictions. To this end, we investigate the prediction values of softmax scores, i.e., P(y | x; θ, T), from PreAct ResNet-18 [20] trained on the CIFAR-100 dataset [26] using the standard cross-entropy loss and the proposed CS-KD loss. Specifically, we analyze the predictions of two concrete misclassified samples in the CIFAR-100 dataset. As shown in Figure 1(b), CS-KD not only relaxes the overconfident predictions but also enhances the prediction values of classes correlated to the ground-truth class. This implies that CS-KD induces meaningful predictions by forcing DNNs to produce similar predictions on similar inputs. To evaluate the prediction quality, we also report log-probabilities of the softmax scores on the predicted class and the ground-truth class for samples that are commonly misclassified by both the cross-entropy baseline and our method. As shown in Figure 2(a), our method produces less confident predictions on misclassified samples compared to the cross-entropy baseline. Interestingly, our method increases ground-truth scores for misclassified samples, as reported in Figure 2(b). In our experiments, we found that classification accuracy and calibration can be improved by forcing DNNs to produce such meaningful predictions (see Sections 3.2 and 3.4).

Model Method CIFAR-100 TinyImageNet CUB-200-2011 Stanford Dogs MIT67
ResNet-18 Cross-entropy 24.71±0.24 43.53±0.19 46.00±1.43 36.29±0.32 44.75±0.80
AdaCos 23.71±0.36 42.61±0.20 35.47±0.07 32.66±0.34 42.66±0.43
Virtual-softmax 23.01±0.42 42.41±0.20 35.03±0.51 31.48±0.16 42.86±0.71
Maximum-entropy 22.72±0.29 41.77±0.13 39.86±1.11 32.41±0.20 43.36±1.62
Label-smoothing 22.69±0.28 43.09±0.34 42.99±0.99 35.30±0.66 44.40±0.71
CS-KD (ours) 21.99±0.13 (-11.0%) 41.62±0.38 (-4.4%) 33.28±0.99 (-27.7%) 30.85±0.28 (-15.0%) 40.45±0.45 (-9.6%)
DenseNet-121 Cross-entropy 22.23±0.04 39.22±0.27 42.30±0.44 33.39±0.17 41.79±0.19
AdaCos 22.17±0.24 38.76±0.23 30.84±0.38 27.87±0.65 40.25±0.68
Virtual-softmax 23.66±0.10 41.58±1.58 33.85±0.75 30.55±0.72 43.66±0.30
Maximum-entropy 22.87±0.45 38.39±0.33 37.51±0.71 29.52±0.74 43.48±1.30
Label-smoothing 21.88±0.45 38.75±0.18 40.63±0.24 31.39±0.46 42.24±1.23
CS-KD (ours) 21.69±0.49 (-2.4%) 37.96±0.09 (-3.2%) 30.83±0.39 (-27.1%) 27.81±0.13 (-16.7%) 40.02±0.91 (-4.2%)
Table 1: Top-1 error rates (%) on various image classification tasks and model architectures. We report the mean and standard deviation over three runs with different random seeds. Values in parentheses indicate relative error rate reductions from the cross-entropy, and the best results are indicated in bold.

Method CIFAR-100 TinyImageNet CUB-200-2011 Stanford Dogs MIT67
Cross-entropy 24.71±0.24 43.53±0.19 46.00±1.43 36.29±0.32 44.75±0.80
DDGSD 23.85±1.57 41.48±0.12 41.17±1.28 31.53±0.54 41.17±2.46
BYOT 23.81±0.11 44.02±0.57 40.76±0.39 34.02±0.14 44.88±0.46
CS-KD (ours) 21.99±0.13 (-11.0%) 41.62±0.38 (-4.4%) 33.28±0.99 (-27.7%) 30.85±0.28 (-15.0%) 40.45±0.45 (-9.6%)
Table 2: Top-1 error rates (%) of ResNet-18 with self-distillation methods on various image classification tasks. We report the mean and standard deviation over three runs with different random seeds. Values in parentheses indicate relative error rate reductions from the cross-entropy, and the best results are indicated in bold. The self-distillation methods are re-implemented under our code-base.

3 Experiments

3.1 Experimental setup

Datasets. To demonstrate our method under general situations of data diversity, we consider various image classification tasks, including conventional classification and fine-grained classification tasks. (Code is available at https://github.com/alinlab/cs-kd.) Specifically, we use the CIFAR-100 [26] and TinyImageNet (https://tiny-imagenet.herokuapp.com/) datasets for conventional classification tasks, and the CUB-200-2011 [46], Stanford Dogs [25], and MIT67 [38] datasets for fine-grained classification tasks. The fine-grained image classification tasks have visually similar classes and consist of fewer training samples per class compared to conventional classification tasks. ImageNet [10] is used for a large-scale classification task.

Network architecture. We consider two state-of-the-art convolutional neural network architectures: ResNet [19] and DenseNet [23]. We use standard ResNet-18 with 64 filters and DenseNet-121 with a growth rate of 32 for image size 224×224. For CIFAR-100 and TinyImageNet, we use PreAct ResNet-18 [20], which modifies the first convolutional layer (we used the reference implementation at https://github.com/kuangliu/pytorch-cifar) to kernel size 3×3, stride 1, and padding 1, instead of kernel size 7×7, stride 2, and padding 3, for image size 32×32, following [56]. We use the DenseNet-BC structure [23], and the first convolutional layer of the network is also modified in the same way as in PreAct ResNet-18 for image size 32×32.

Hyper-parameters. All networks are trained from scratch and optimized by stochastic gradient descent (SGD) with momentum 0.9, weight decay 0.0001, and an initial learning rate of 0.1. The learning rate is divided by 10 after epochs 100 and 150 for all datasets, and the total number of epochs is 200. We set the batch size to 128 for conventional and 32 for fine-grained classification tasks. We use the standard data augmentation techniques for ImageNet [10], i.e., flipping and random cropping. For our method, the temperature T and the loss weight λ_cls are chosen to minimize the top-1 error rate on the validation set. More detailed ablation studies on these hyper-parameters are provided in the supplementary material.

Baselines. We compare our method with prior regularization methods such as the state-of-the-art angular-margin based methods [5, 58], entropy regularization [13, 32, 36, 43] and self-distillation methods [53, 57]. They also regularize predictive distributions like ours.

  • AdaCos [58]. AdaCos dynamically scales the cosine similarities between training samples and corresponding class center vectors to maximize the angular margin. (We used the reference implementation at https://github.com/4uiiurz1/pytorch-adacos.)

  • Virtual-softmax [5]. Virtual-softmax injects an additional virtual class to maximize angular-margin.

  • Maximum-entropy [13, 36]. Maximum-entropy is a typical entropy regularization, which maximizes the entropy of the predictive distribution.

  • Label-smoothing [32, 43]. Label-smoothing uses soft labels that are a weighted average of the one-hot labels and the uniform distribution.

  • DDGSD [53]. Data-distortion guided self-distillation (DDGSD) is one of the consistency regularization techniques, which forces the consistent outputs across different augmented versions of the data.

  • BYOT [57]. Be Your Own Teacher (BYOT) transfers the knowledge in the deeper portion of the networks into the shallow ones.

Evaluation metric. For evaluation, we measure the following metrics:

  • Top-1 / 5 error rate. The top-k error rate is the fraction of test samples for which the correct label is not among the model's k most confident predictions. We measure top-1 and top-5 error rates to evaluate generalization performance.

  • Expected Calibration Error (ECE). ECE [16, 33] approximates the difference in expectation between confidence and accuracy. It is calculated by partitioning predictions into equally-spaced confidence bins and taking a weighted average of each bin's difference between confidence and accuracy, i.e., ECE = Σ_m (|B_m| / n) · |acc(B_m) − conf(B_m)|, where n is the number of samples, B_m is the set of samples whose confidence falls into the m-th interval, and acc(B_m), conf(B_m) are the accuracy and the average confidence of B_m, respectively. We measure ECE with 20 bins to evaluate whether the model represents the true correctness likelihood.

  • Recall at K (R@K). Recall at K is the percentage of test samples that have at least one sample from the same class among their K nearest neighbors in the feature space. To measure the distance between two samples, we use the ℓ2-distance between their pooled features at the penultimate layer. We compare recall at K scores to evaluate intra-class variations of learned features.
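The ECE metric above can be computed directly from per-sample confidences (a minimal NumPy sketch; for simplicity, a confidence of exactly 0 falls into no bin):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=20):
    """ECE: partition predictions into equally-spaced confidence bins and
    take the sample-weighted average of |accuracy - mean confidence| per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    n = len(confidences)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()     # fraction correct in this bin
            conf = confidences[mask].mean()  # mean confidence in this bin
            ece += mask.sum() / n * abs(acc - conf)
    return ece
```

For instance, a model that is always right but only 80% confident has an ECE of 0.2, reflecting under-confidence rather than over-confidence.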

Method CIFAR-100 TinyImageNet CUB-200-2011 Stanford Dogs MIT67
Cross-entropy 24.71±0.24 43.53±0.19 46.00±1.43 36.29±0.32 44.75±0.80
CS-KD (ours) 21.99±0.13 41.62±0.38 33.28±0.99 30.85±0.28 40.45±0.45
Mixup 21.67±0.34 41.57±0.38 37.09±0.27 32.54±0.04 41.67±1.05
Mixup + CS-KD (ours) 20.40±0.31 40.71±0.32 30.71±0.64 29.93±0.14 39.65±0.85
Table 3: Top-1 error rates (%) of ResNet-18 with Mixup regularization on various image classification tasks. We report the mean and standard deviation over three runs with different random seeds, and the best results are indicated in bold.
Method CIFAR-100 TinyImageNet CUB-200-2011 Stanford Dogs MIT67
Cross-entropy 26.72±0.33 46.61±0.22 48.36±0.61 38.96±0.40 44.75±0.62
CS-KD (ours) 25.80±0.10 44.67±0.12 39.12±0.09 34.07±0.46 41.54±0.67
KD 25.84±0.07 43.31±0.11 39.32±0.65 34.23±0.42 41.47±0.79
KD + CS-KD (ours) 25.58±0.16 42.82±0.33 34.47±0.17 32.59±0.50 40.27±0.78
Table 4: Top-1 error rates (%) of ResNet-10 (student) with knowledge distillation (KD) on various image classification tasks. Teacher networks are DenseNet-121 models pre-trained with CS-KD. We report the mean and standard deviation over three runs with different random seeds, and the best results are indicated in bold.
Model Method Top-1 (1-crop)
ResNet-50 Cross-entropy 24.0
CS-KD (ours) 23.6
ResNet-101 Cross-entropy 22.4
CS-KD (ours) 22.0
ResNeXt-101-32x4d Cross-entropy 21.6
CS-KD (ours) 21.2
Table 5: Top-1 error rates (%) on ImageNet dataset with various model architectures trained for 90 epochs with batch size 256. The best results are indicated in bold.
(a) Cross-entropy
(b) Virtual-softmax
(c) AdaCos
(d) CS-KD (ours)
Figure 3: Visualization of various feature embeddings on the penultimate layer using t-SNE on PreAct ResNet-18 for CIFAR-100. The proposed method (d) shows the smallest intra-class variation that leads to the best top-1 error rate.
(a) Cross-entropy
(b) CS-KD (ours)
(c) Top-1 error rates (%)
Figure 4: Experimental results of ResNet-18 on the mixed dataset. The hierarchical classification accuracy (%) of models trained by (a) the cross-entropy and (b) our method. One can observe that the model trained with CS-KD confuses classes across different domains less. (c) Top-1 error rates (%) of fine-grained label classification.
Measurement Method CIFAR-100 TinyImageNet CUB-200-2011 Stanford Dogs MIT67
Top-5 (↓) Cross-entropy 6.91±0.09 22.21±0.29 22.30±0.68 11.80±0.27 19.25±0.53
AdaCos 9.99±0.20 22.24±0.11 15.24±0.66 11.02±0.22 19.05±2.33
Virtual-softmax 8.54±0.11 24.15±0.17 13.16±0.20 8.64±0.21 19.10±0.20
Maximum-entropy 7.29±0.12 21.53±0.50 19.80±1.21 10.90±0.31 20.47±0.90
Label-smoothing 7.18±0.08 20.74±0.31 22.40±0.85 13.41±0.40 19.53±0.75
CS-KD (ours) 5.69±0.03 19.21±0.04 13.07±0.26 8.55±0.07 17.46±0.38
CS-KD-E (ours) 5.93±0.06 19.12±0.34 13.74±0.91 8.57±0.13 18.21±0.45
ECE (↓) Cross-entropy 15.45±0.33 14.08±0.76 18.39±0.76 15.05±0.35 17.99±0.72
AdaCos 73.76±0.35 55.09±0.41 63.39±0.06 65.38±0.33 54.00±0.52
Virtual-softmax 8.02±0.55 4.60±0.67 11.68±0.66 7.91±0.38 11.21±1.00
Maximum-entropy 56.41±0.36 42.68±0.31 50.52±1.20 51.53±0.28 42.41±1.74
Label-smoothing 13.20±0.60 2.67±0.48 15.70±0.81 11.60±0.40 8.79±2.47
CS-KD (ours) 5.17±0.40 7.26±0.93 15.44±0.92 10.46±1.08 15.56±0.29
CS-KD-E (ours) 4.69±0.56 3.79±0.35 8.75±0.49 4.70±0.18 8.06±1.90
R@1 (↑) Cross-entropy 61.38±0.64 30.59±0.42 33.92±1.70 47.51±1.02 31.42±1.00
AdaCos 67.95±0.42 44.66±0.52 54.86±0.24 58.37±0.43 42.39±1.91
Virtual-softmax 68.35±0.48 44.69±0.58 55.56±0.74 59.71±0.56 44.20±0.90
Maximum-entropy 71.51±0.29 39.18±0.79 48.66±2.10 60.05±0.45 38.06±3.32
Label-smoothing 71.44±0.03 34.79±0.67 41.59±0.94 54.48±0.68 35.15±1.54
CS-KD (ours) 71.15±0.15 47.15±0.40 59.06±0.38 62.67±0.07 46.74±1.48
CS-KD-E (ours) 70.57±0.57 45.52±0.35 58.44±1.09 62.03±0.30 44.82±1.22
Table 6: Top-5 error, ECE, and Recall at 1 (R@1) rates (%) of ResNet-18 on various image classification tasks. We denote our method combined with the sample-wise regularization by CS-KD-E. The arrow next to each evaluation metric indicates whether lower (↓) or higher (↑) values are better. We report the mean and standard deviation over three runs with different random seeds, and the best results are indicated in bold.
(a) Cross-entropy
(b) Virtual-softmax
(c) AdaCos
(d) Maximum-entropy
(e) Label-smoothing
Figure 5: Reliability diagrams [9, 34] show accuracy as a function of confidence, for PreAct ResNet-18 trained on CIFAR-100 using (a) Cross-entropy, (b) Virtual-softmax, (c) AdaCos, (d) Maximum-entropy, and (e) Label-smoothing. All methods are compared with our proposed method, CS-KD. Perfect calibration [16] is plotted by dashed diagonals (Optimal) for all.

3.2 Classification accuracy

Comparison with output regularization methods. We measure the top-1 error rates of the proposed method (denoted by CS-KD) against Virtual-softmax, AdaCos, Maximum-entropy, and Label-smoothing on various image classification tasks. Table 1 shows that CS-KD consistently outperforms the other baselines. In particular, CS-KD improves the top-1 error rate of the cross-entropy loss from 46.00% to 33.28% on the CUB-200-2011 dataset. We also observe that the top-1 error rates of other baselines are often worse than the cross-entropy loss, e.g., Virtual-softmax, Maximum-entropy, and Label-smoothing under MIT67 and DenseNet-121. As shown in Table 6, the top-5 error rates of CS-KD outperform the other regularization methods, as it encourages meaningful predictions. In particular, CS-KD improves the top-5 error rate of the cross-entropy loss from 6.91% to 5.69% on the CIFAR-100 dataset, while the top-5 error rate of AdaCos is even worse than that of the cross-entropy loss. These results imply that our method induces better predictive distributions than the baseline methods.

Comparison with self-distillation methods. We also compare our method with recently proposed self-distillation techniques such as DDGSD [53] and BYOT [57]. As shown in Table 2, CS-KD achieves better top-1 error rates on ResNet-18 overall. For example, CS-KD achieves a top-1 error rate of 33.28% on the CUB-200-2011 dataset, while DDGSD and BYOT achieve 41.17% and 40.76%, respectively. All tested self-distillation methods utilize the regularization effects of knowledge distillation. The superiority of CS-KD could be explained by its unique effect of reducing intra-class variations.

Evaluation on large-scale datasets. To verify the scalability of our method, we evaluate it on the ImageNet dataset with various model architectures such as ResNet-50, ResNet-101, and ResNeXt-101-32x4d [52]. As reported in Table 5, our method consistently improves the top-1 error rates by 0.4% across all tested architectures. A 0.4% improvement is comparable to, e.g., adding 51 more layers to ResNet-101 (i.e., ResNet-152) [19].

Compatibility with other regularization methods. We investigate orthogonal usage with other types of regularization methods such as Mixup [56] and knowledge distillation (KD) [22]. Mixup utilizes convex combinations of input pairs and corresponding label pairs for training. We combine our method with Mixup regularization by applying the class-wise regularization loss to mixed inputs and mixed labels, instead of standard inputs and labels. Table 3 shows the effectiveness of our method combined with Mixup regularization. Interestingly, this simple idea significantly improves performance on fine-grained classification tasks. In particular, our method improves the top-1 error rate of Mixup regularization from 37.09% to 30.71%, where the top-1 error rate of the cross-entropy loss is 46.00%, under ResNet-18 on the CUB-200-2011 dataset.
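The Mixup operation referred to above can be sketched as follows (a minimal NumPy sketch of Mixup itself [56], not of the combined objective):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Mixup: a convex combination of an input pair and their one-hot
    labels, using the same mixing coefficient lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng(0)
    lam = float(rng.beta(alpha, alpha))
    x = lam * np.asarray(x1, float) + (1.0 - lam) * np.asarray(x2, float)
    y = lam * np.asarray(y1, float) + (1.0 - lam) * np.asarray(y2, float)
    return x, y, lam
```

Applying the class-wise regularization loss to such mixed inputs and mixed labels, as described above, is then a direct substitution of (x, y) in the loss.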

KD regularizes the predictive distribution of a student network to learn the dark knowledge of a teacher network. We combine our method with KD to learn dark knowledge from the teacher and from the model itself simultaneously. Table 4 shows that our method achieves performance similar to KD, although ours does not use an additional teacher network. Moreover, combining our method with KD further improves the top-1 error rate of KD from 39.32% to 34.47%, where the top-1 error rate of the cross-entropy loss is 48.36%, under ResNet-10 trained on the CUB-200-2011 dataset. These results show the wide applicability of our method, which is compatible with other regularization methods.

3.3 Ablation study

Feature embedding analysis. One can expect that intra-class variations can be reduced by forcing DNNs to produce meaningful predictions. To verify this, we analyze the feature embedding of the penultimate layer of ResNet-18 trained on the CIFAR-100 dataset using the t-SNE [30] visualization method. As shown in Figure 3, the intra-class variations are significantly decreased by our method (Figure 3(d)) compared to other baselines, including Virtual-softmax (Figure 3(b)) and AdaCos (Figure 3(c)), which are designed to reduce intra-class variations. We also provide quantitative results using the Recall at 1 (R@1) metric introduced in Section 3.1. We remark that a larger value of R@1 implies smaller intra-class variations in the feature embedding [50]. As shown in Table 6, R@1 values are significantly improved when ResNet-18 is trained with our method. In particular, R@1 of CS-KD is 47.15% on the TinyImageNet dataset, while the R@1 values of AdaCos, Virtual-softmax, and the cross-entropy loss are 44.66%, 44.69%, and 30.59%, respectively.
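The R@1 quantity used above can be computed directly from penultimate-layer features (a minimal NumPy sketch; quadratic in the number of samples, so for illustration only):

```python
import numpy as np

def recall_at_1(features, labels):
    """R@1: fraction of samples whose nearest neighbor (in l2 distance,
    excluding the sample itself) shares the same class label."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels)
    # pairwise l2 distance matrix
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude self-matches
    nn = d.argmin(axis=1)        # index of each sample's nearest neighbor
    return float((y[nn] == y).mean())
```

Tightly clustered classes yield R@1 close to 1.0, which is why the metric serves as a proxy for small intra-class variation.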

Hierarchical image classification. By producing more semantic predictions, i.e., increasing the correlation between similar classes in predictions, we expect the trained classifier to capture a hierarchical (or clustering) structure of the label space. To verify this, we evaluate the proposed method on a mixed dataset with 387 fine-grained labels and three hierarchy labels, i.e., bird (CUB-200-2011; 200 labels), dog (Stanford Dogs; 120 labels), and indoor (MIT67; 67 labels). Specifically, we randomly choose 30 samples per fine-grained label for training, and the original test datasets are used for testing. For evaluation, we train ResNet-18 to classify the fine-grained labels and measure the hierarchical classification accuracy by predicting the hierarchy label (bird, dog, or indoor) as that of the predicted fine-grained label.

First, we extract the hierarchical structure as confusion matrices, where each element indicates the hierarchical image classification accuracy. As shown in Figures 4(a) and 4(b), our method captures the hierarchical structure of the mixed dataset almost perfectly, i.e., showing a nearly identity confusion matrix. In particular, our method enhances the hierarchical image classification accuracy significantly, up to 99.3% in the bird hierarchy (CUB-200-2011). Moreover, as shown in Figure 4(c), our method also improves the top-1 error rates of fine-grained label classification significantly. Interestingly, the error rate on CUB-200-2011 is even lower than the errors reported in Table 1. This is because the model learns additional information by utilizing the dark knowledge of more labels.

3.4 Calibration effects

In this section, we also evaluate the calibration effects of the proposed regularization method. Specifically, we provide reliability diagrams [9, 34], which plot the expected sample accuracy as a function of confidence of PreAct ResNet-18 for the CIFAR-100 dataset in Figure 5. We remark that the plotted identity function (dashed diagonal) implies perfect calibration [16], and our method is the closest one among the baselines, as shown in Figure 5. Moreover, we evaluate our method by ECE [16, 33], which is a quantitative metric of calibration, in Table 6. The results demonstrate that our method outperforms the cross-entropy loss consistently. In particular, CS-KD enhances ECE of the cross-entropy from 15.45% to 5.17% under the CIFAR-100 dataset, while AdaCos and Maximum-entropy are significantly worse than the cross-entropy with 73.76% and 56.41%, respectively.

As a natural extension of CS-KD, we also consider combining our method with an existing consistency loss [2, 7, 31, 37, 44], which regularizes the output distributions of a given sample and its augmented version. Specifically, for a given training sample x and another sample x′ having the same label, the combined regularization loss is defined as follows:

L_tot(x, x′, y; θ, T) + λ_E · T² · KL( P(y | x̃; θ̃, T) ‖ P(y | x; θ, T) ),

where L_tot is the CS-KD objective in (1), x̃ is an augmented sample generated by a standard data augmentation technique (we use flipping and random-sized cropping for all tested methods in this paper), and λ_E is the loss weight for balancing. The corresponding results are reported in Table 6, denoted by CS-KD-E. We find that CS-KD-E significantly enhances the calibration performance of CS-KD, and also consistently outperforms the baseline methods on top-1 and top-5 error rates. In particular, CS-KD-E enhances the ECE of CS-KD from 5.17% to 4.69% on the CIFAR-100 dataset. We think that investigating the effect of such combined regularization could be an interesting direction to explore in the future, e.g., utilizing other augmentation methods such as cutout [12] and auto-augmentation [8].

4 Related work

Regularization techniques. Numerous techniques have been introduced to prevent overfitting of neural networks, including early stopping [3], ℓ1/ℓ2-regularization [35], dropout [42], and batch normalization [40]. Alternatively, regularization methods for the predictive distribution have also been explored: Szegedy et al. [43] proposed label-smoothing, which is a mixture of the ground-truth and the uniform distribution, and Zhang et al. [56] proposed a data augmentation method called Mixup, which linearly interpolates a random pair of training samples and corresponding labels. Müller et al. [32] investigated label-smoothing and empirically showed that it improves not only generalization but also model calibration in various tasks, such as image classification and machine translation. Similarly, Pereyra et al. [36] proposed penalizing low-entropy predictive distributions, which improved exploration in reinforcement learning and supervised learning. Moreover, several works [2, 7, 37, 44] investigated consistency regularizers between the predictive distributions of corrupted samples and original samples for semi-supervised learning. We remark that our method enjoys orthogonal usage with the prior methods, i.e., it can be combined with them to further improve generalization performance.

Knowledge distillation. Knowledge distillation [22] is an effective learning method for transferring the knowledge of a powerful teacher model to a student. This pioneering work showed that one can use a softmax with temperature scaling to match soft targets, thereby transferring the dark knowledge contained in the non-target labels. There are numerous follow-up studies that distill knowledge in this teacher-student framework. Recently, several self-distillation approaches [53, 57], which distill a network's knowledge into itself, have been proposed. The data-distortion guided self-distillation method [53] transfers knowledge between differently augmented versions of the same training data, while Be Your Own Teacher [57] ensembles predictions from multiple branches to improve performance. We remark that our method shares a component with these knowledge distillation methods, namely the use of a soft target distribution, but ours only reduces intra-class variations. Joint usage of our method with these prior knowledge distillation methods is also possible.
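
The temperature-scaled soft-target matching of [22] can be sketched as follows; the interpolation weight `alpha` and the default temperature are illustrative choices, not values prescribed by the original work:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, y, T=4.0, alpha=0.9):
    """Hinton et al. [22]-style objective: cross-entropy on hard labels
    plus T^2-scaled KL between temperature-softened teacher and student."""
    p_s = softmax(student_logits, T)
    p_t = softmax(teacher_logits, T)
    # KL(teacher || student) on the softened distributions carries the
    # dark knowledge of the non-target labels.
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1).mean()
    # Ordinary cross-entropy on the hard labels (T = 1).
    hard = -np.log(softmax(student_logits)[np.arange(len(y)), y]).mean()
    # T^2 rescaling keeps the soft-target gradients comparable in magnitude.
    return (1 - alpha) * hard + alpha * (T ** 2) * kl
```

When the student's logits equal the teacher's, the KL term vanishes and only the hard-label term remains.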

Margin-based softmax losses. There have been recent efforts to boost recognition performance by enlarging inter-class margins and reducing intra-class variation. Several approaches use metric-based methods that measure similarities between features with Euclidean distances, such as the triplet [48] and contrastive [6] losses. To make models extract discriminative features, center loss [49] and range loss [51] were proposed to minimize the distances between samples belonging to the same class. More recently, angular-margin based losses were proposed for further improvement. L-softmax [29] and A-softmax [28] combine angular-margin constraints with the softmax loss to encourage the model to generate more discriminative features. CosFace [47], AM-softmax [14], and ArcFace [11] introduce angular margins for a similar purpose by reformulating the softmax loss. Different from L-softmax and A-softmax, Virtual-softmax [5] encourages large margins among classes by injecting an additional virtual negative class.
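
To make the margin idea concrete, the following is a hedged sketch of additive-margin logits in the spirit of CosFace/AM-softmax [14, 47]: cosine similarities of L2-normalized features and class weights, with a fixed margin m subtracted from the target-class logit before scaling. The scale and margin defaults are illustrative:

```python
import numpy as np

def am_softmax_logits(features, weights, labels, s=30.0, m=0.35):
    """Additive-margin logits: s * (cos(theta_y) - m) for the target
    class, s * cos(theta_j) for all other classes."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w  # (batch, classes), entries in [-1, 1]
    cos[np.arange(len(labels)), labels] -= m  # enforce the target margin
    return s * cos
```

Feeding these logits to a standard softmax cross-entropy forces the target-class cosine to exceed the others by at least m, which enlarges inter-class margins and shrinks intra-class variation.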

5 Conclusion

In this paper, we presented a simple regularization method to enhance the generalization performance of deep neural networks. The proposed regularization term penalizes the difference between the predictive distributions of different samples of the same label by minimizing their Kullback-Leibler divergence. We remark that our idea regularizes the dark knowledge (i.e., the knowledge on wrong predictions) itself and encourages the model to produce more meaningful predictions. Moreover, we demonstrated that the proposed method is useful for both the generalization and the calibration of neural networks. We believe the proposed regularization technique could enjoy a broad range of applications, such as exploration in deep reinforcement learning [17], transfer learning [1], face verification [11], and detection of out-of-distribution samples [27].

Acknowledgments

This work was supported by Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No.2016-0-00563, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion) and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)). We also thank Sungsoo Ahn and Hankook Lee for helpful discussions.

References

  • [1] Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D Lawrence, and Zhenwen Dai. Variational information distillation for knowledge transfer. In CVPR, 2019.
  • [2] Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In NeurIPS, 2014.
  • [3] Christopher Bishop. Regularization and complexity control in feed-forward networks. In ICANN, 1995.
  • [4] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. In ICLR, 2019.
  • [5] Binghui Chen, Weihong Deng, and Haifeng Shen. Virtual class enhanced discriminative embedding learning. In NeurIPS, 2018.
  • [6] Sumit Chopra, Raia Hadsell, Yann LeCun, et al. Learning a similarity metric discriminatively, with application to face verification. In CVPR, 2005.
  • [7] Kevin Clark, Minh-Thang Luong, Christopher D Manning, and Quoc V Le. Semi-supervised sequence modeling with cross-view training. In EMNLP, 2018.
  • [8] Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation policies from data. In CVPR, 2019.
  • [9] Morris H DeGroot and Stephen E Fienberg. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society: Series D (The Statistician), 32(1-2):12–22, 1983.
  • [10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
  • [11] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In CVPR, 2019.
  • [12] Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
  • [13] Abhimanyu Dubey, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Maximum-entropy fine grained classification. In NeurIPS, 2018.
  • [14] Feng Wang, Weiyang Liu, Haijun Liu, and Jian Cheng. Additive margin softmax for face verification. IEEE Signal Processing Letters, 25(7):926–930, 2018.
  • [15] Tommaso Furlanello, Zachary C Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. Born again neural networks. In ICML, 2018.
  • [16] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In ICML, 2017.
  • [17] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In ICML, 2018.
  • [18] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In ICCV, 2017.
  • [19] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, 2016.
  • [21] Hessam Bagherinezhad, Maxwell Horton, Mohammad Rastegari, and Ali Farhadi. Label refinery: Improving imagenet classification through label progression. In ECCV, 2018.
  • [22] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
  • [23] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, 2017.
  • [24] Harini Kannan, Alexey Kurakin, and Ian Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018.
  • [25] Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In CVPR, 2011.
  • [26] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
  • [27] Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. In ICLR, 2018.
  • [28] Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. Sphereface: Deep hypersphere embedding for face recognition. In CVPR, 2017.
  • [29] Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. Large-margin softmax loss for convolutional neural networks. In ICML, 2016.
  • [30] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605, 2008.
  • [31] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training : A regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979–1993, 2018.
  • [32] Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. When does label smoothing help? In NeurIPS, 2019.
  • [33] Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In AAAI, 2015.
  • [34] Alexandru Niculescu-Mizil and Rich Caruana. Predicting good probabilities with supervised learning. In ICML, 2005.
  • [35] Steven J Nowlan and Geoffrey E Hinton. Simplifying neural networks by soft weight-sharing. Neural computation, 4(4):473–493, 1992.
  • [36] Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing confident output distributions. In ICLR workshops, 2017.
  • [37] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848, 2019.
  • [38] Ariadna Quattoni and Antonio Torralba. Recognizing indoor scenes. In CVPR, 2009.
  • [39] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. In ICLR, 2015.
  • [40] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
  • [41] Suraj Srinivas and François Fleuret. Knowledge transfer with jacobian matching. In ICML, 2018.
  • [42] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958, 2014.
  • [43] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.
  • [44] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In NeurIPS, 2017.
  • [45] Giorgos Tolias, Ronan Sicre, and Hervé Jégou. Particular object retrieval with integral max-pooling of cnn activations. In ICLR, 2016.
  • [46] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
  • [47] Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. Cosface: Large margin cosine loss for deep face recognition. In CVPR, 2018.
  • [48] Kilian Q Weinberger and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10(Feb):207–244, 2009.
  • [49] Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. A discriminative feature learning approach for deep face recognition. In ECCV, 2016.
  • [50] Wengang Zhou, Houqiang Li, and Qi Tian. Recent advance in content-based image retrieval: A literature survey. arXiv preprint arXiv:1706.06064, 2017.
  • [51] Xiao Zhang, Zhiyuan Fang, Yandong Wen, Zhifeng Li, and Yu Qiao. Range loss for deep face recognition with long-tail. In ICCV, 2017.
  • [52] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In CVPR, 2017.
  • [53] Ting-Bing Xu and Cheng-Lin Liu. Data-distortion guided self-distillation for deep neural networks. In AAAI, 2019.
  • [54] Sergey Zagoruyko and Nikos Komodakis. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In ICLR, 2017.
  • [55] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In ICLR, 2017.
  • [56] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In ICLR, 2018.
  • [57] Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, and Kaisheng Ma. Be your own teacher: Improve the performance of convolutional neural networks via self distillation. In ICCV, 2019.
  • [58] Xiao Zhang, Rui Zhao, Yu Qiao, Xiaogang Wang, and Hongsheng Li. Adacos: Adaptively scaling cosine logits for effectively learning deep face representations. In CVPR, 2019.

Appendix A Effects of hyper-parameters

To examine the effect of the main hyper-parameters T (the temperature) and λ_cls (the loss weight), we additionally test PreAct ResNet-18 on the CIFAR-100 dataset across T ∈ {0.1, 0.5, 1, 2, 3, 4, 10, 20} and λ_cls ∈ {0.1, 0.5, 1, 4, 10, 20}. The results are presented in Table 7. Except for the hyper-parameters under consideration, we keep all settings the same as in Section 3.1. Overall, we found our method is fairly robust to T and λ_cls, except for some extreme cases, such as small values of λ_cls and large values of T.

λ_cls \ T   0.1    0.5    1      2      3      4      10     20
0.1         25.16  24.03  23.91  24.38  24.05  24.21  24.39  27.61
0.5         24.14  24.05  24.15  23.49  23.78  23.23  23.90  25.96
1           24.15  23.32  22.80  22.26  22.87  23.18  24.35  25.58
4           22.87  22.03  21.66  22.45  22.68  22.81  32.25  35.45
10          22.68  22.36  21.98  22.04  21.95  31.76  31.80  37.50
20          22.96  22.39  22.03  22.37  22.00  22.39  30.23  24.05
Table 7: Top-1 error rates (%) of PreAct ResNet-18 on the CIFAR-100 dataset over various hyper-parameters T (columns) and λ_cls (rows). The best result is 21.66%.

Appendix B Qualitative analysis of CS-KD

To examine the effect of our method, we investigate the softmax prediction scores, i.e., P(y|x), of PreAct ResNet-18 trained with the standard cross-entropy loss and with our method on the TinyImageNet dataset. We report samples commonly misclassified by both the cross-entropy baseline and our method in Figure 6; the softmax scores of these samples show that our method not only moderates overconfident predictions but also increases the prediction values of classes correlated with the ground-truth class.

Figure 6: Predictive distributions on misclassified samples. We use PreAct ResNet-18 trained on TinyImageNet dataset. For misclassified samples, softmax scores of the ground-truth class are increased by training DNNs with class-wise regularization.

Moreover, we additionally compare our method with the cross-entropy baseline by plotting log-probabilities of the softmax scores on commonly misclassified samples for the TinyImageNet, CUB-200-2011, Stanford Dogs, and MIT67 datasets. The corresponding results are reported in Figures 7, 8, 9, and 10. The log-probabilities of the softmax scores on the predicted class show how overconfident the predictions are, and our method produces less confident predictions on the misclassified samples than the cross-entropy baseline across all datasets. On the other hand, the log-probabilities of the softmax scores on the ground-truth class reveal the relation between the predictions and the ground-truth class, and our method increases the ground-truth scores across all datasets. These results imply that our method induces meaningful predictions that are more related to the ground-truth class than those of the cross-entropy baseline.
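
The two quantities plotted in these histograms can be extracted with a small NumPy sketch; the function name is illustrative:

```python
import numpy as np

def misclassified_log_probs(probs, labels):
    """For each misclassified sample, return the log-probability of the
    predicted (top-1) class and of the ground-truth class."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    preds = probs.argmax(axis=1)
    wrong = preds != labels          # keep only misclassified samples
    idx = np.arange(len(labels))
    log_pred = np.log(probs[idx, preds][wrong])   # overconfidence measure
    log_true = np.log(probs[idx, labels][wrong])  # relation to ground truth
    return log_pred, log_true
```

Histogramming `log_pred` and `log_true` for a baseline model and a CS-KD model reproduces the kind of comparison shown in the figures above.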

Figure 7: Histogram of log-probabilities of (a) the predicted label, i.e., top-1 softmax score, and (b) the ground-truth label on misclassified samples by networks trained by the cross-entropy (baseline) and CS-KD. The networks are trained on PreAct ResNet-18 for TinyImageNet.
Figure 8: Histogram of log-probabilities of (a) the predicted label, i.e., top-1 softmax score, and (b) the ground-truth label on misclassified samples by networks trained by the cross-entropy (baseline) and CS-KD. The networks are trained on ResNet-18 for CUB-200-2011.
Figure 9: Histogram of log-probabilities of (a) the predicted label, i.e., top-1 softmax score, and (b) the ground-truth label on misclassified samples by networks trained by the cross-entropy (baseline) and CS-KD. The networks are trained on ResNet-18 for Stanford Dogs.
Figure 10: Histogram of log-probabilities of (a) the predicted label, i.e., top-1 softmax score, and (b) the ground-truth label on misclassified samples by networks trained by the cross-entropy (baseline) and CS-KD. The networks are trained on ResNet-18 for MIT67.