Lifelong Intent Detection via Multi-Strategy Rebalancing

08/10/2021 · by Qingbin Liu, et al.

Conventional Intent Detection (ID) models are usually trained offline, relying on a fixed dataset and a predefined set of intent classes. However, in real-world applications, online systems usually involve continually emerging new user intents, which poses a great challenge to the offline training paradigm. Recently, lifelong learning has received increasing attention and is considered the most promising solution to this challenge. In this paper, we propose Lifelong Intent Detection (LID), which continually trains an ID model on new data to learn newly emerging intents while avoiding catastrophic forgetting of old data. Nevertheless, we find that existing lifelong learning methods usually suffer from a serious imbalance between old and new data in the LID task. Therefore, we propose a novel lifelong learning method, Multi-Strategy Rebalancing (MSR), which consists of cosine normalization, hierarchical knowledge distillation, and an inter-class margin loss to alleviate the multiple negative effects of the imbalance problem. Experimental results demonstrate the effectiveness of our method, which significantly outperforms previous state-of-the-art lifelong learning methods on the ATIS, SNIPS, HWU64, and CLINC150 benchmarks.


1. Introduction

Intent Detection (ID) aims to accurately understand the user intent behind a user utterance in order to guide downstream dialogue policy decisions (Hemphill et al., 1990; Coucke et al., 2018; Yan et al., 2020). It is an essential component of dialogue systems and is therefore widely used in real-world applications, such as personal assistants and customer service. In these systems, ID models usually classify a user utterance into an intent class. For example, an ID model should be able to recognize the intent of “booking a flight” from the utterance “I am flying to Chicago next Wednesday”.

Figure 1. Lifelong Intent Detection: The lifelong learning method (Lifelong Learner) continually trains an ID model when new data becomes available.

Existing ID models usually adopt an offline learning paradigm, which performs once-and-for-all training on a fixed dataset. This paradigm can only handle a fixed number of user intents. However, online dialogue systems typically need to handle continually emerging new user intents, which makes previous ID models impractical in real-world applications. Recently, lifelong learning has received increasing attention and is considered to be the most promising approach to address this problem (Ring, 1995; Thrun, 1998). Therefore, to handle continually emerging new intents, we propose the Lifelong Intent Detection (LID) task, which introduces lifelong learning into the ID task. As shown in Figure 1, the LID task continually trains an ID model using only new data to learn newly emerging intents. At any time, the updated ID model should be able to perform accurate classifications for all classes observed so far. In this task, it is infeasible to retrain the ID model from scratch every time new data becomes available due to storage budgets and computational costs (Cao et al., 2020).

A plain lifelong learning method is to fine-tune a model pre-trained on old data directly on new data. However, this method faces a serious challenge, namely catastrophic forgetting, where models fine-tuned on new data usually suffer from a significant performance degradation on old data (McCloskey and Cohen, 1989; French, 1999). To address this issue, the current mainstream lifelong learning methods either identify and retain parameters that are important to the old data (Kirkpatrick et al., 2017; Aljundi et al., 2018), or maintain a memory that reserves a small number of old training samples, known as replay-based methods (Rebuffi et al., 2017; Wang et al., 2019). At each step, replay-based methods combine the reserved old data with the new data to retrain the model. Due to their simplicity and effectiveness, replay-based methods have become a popular solution for lifelong learning in natural language processing (Han et al., 2020; Cao et al., 2020).

Figure 2. Illustrations of the multiple negative effects caused by the data imbalance problem in the LID task and our solutions.

However, when adapting existing replay-based methods to lifelong intent detection, we find that they suffer from a data imbalance problem. Specifically, at each step of the lifelong learning process, there is generally a large amount of new-class data but only a small amount of reserved old data, leading to a significant imbalance between old and new data. Under such circumstances, training is significantly biased towards the new classes, which leads to a series of negative effects in the ID model, as shown in Figure 2: (1) Magnitude Imbalance: the magnitude of the feature vectors and class embeddings of new classes is significantly larger than that of old classes; (2) Knowledge Deviation: the knowledge of the previous model, i.e., the feature distribution and the probability distribution of old classes, is not well preserved; (3) Class Confusion: the class embeddings of new classes and those of old classes lie very close to each other in the high-dimensional vector space. These adverse effects severely mislead the ID model, causing it to favor new classes while catastrophically forgetting old classes.

Our work is inspired by lifelong learning in image classification tasks (Hou et al., 2019; Castro et al., 2018; Tao et al., 2020), which also targets the data imbalance problem. In this paper, we find multiple adverse effects caused by the imbalance problem in the LID task and propose corresponding solutions.

To address the data imbalance problem, we propose a novel lifelong learning framework, Multi-Strategy Rebalancing (MSR), which aims to learn a balanced ID model. Specifically, MSR contains three components, each alleviating one of the above adverse effects: (1) Cosine Normalization, which balances the magnitude of feature vectors and class embeddings between old and new classes by constraining these vectors to a high-dimensional sphere, eliminating the bias caused by differences in magnitude; (2) Hierarchical Knowledge Distillation, which preserves the knowledge of the previous model at both the feature level and the prediction level, retaining the feature distribution and the probability distribution of old classes; (3) Inter-Class Margin Loss, which enforces a large margin separating the new class embeddings from the old class embeddings. With multi-strategy rebalancing, the ID model can effectively handle the adverse effects caused by data imbalance. To systematically compare different lifelong learning methods, we construct four benchmarks for the LID task based on four widely used ID datasets (Hemphill et al., 1990; Coucke et al., 2018; Liu et al., 2019; Larson et al., 2019). Experimental results show that our proposed framework significantly outperforms previous state-of-the-art lifelong learning methods on these benchmarks.

In summary, the contributions of this work are as follows:

  • To the best of our knowledge, we are the first to propose the Lifelong Intent Detection task. We also construct four benchmarks from four widely used ID datasets: ATIS, SNIPS, HWU64, and CLINC150.

  • We propose the Multi-Strategy Rebalancing framework, which can effectively handle the data imbalance problem in the LID task through cosine normalization, hierarchical knowledge distillation, and inter-class margin loss.

  • Experimental results show that our method outperforms previous lifelong learning methods and achieves state-of-the-art performance. The source code and benchmarks will be released for further research (http://anonymous).

2. Task Formulation

Intent detection is usually formulated as a multi-class classification task, which predicts an intent class for a given user utterance (Hemphill et al., 1990; Coucke et al., 2018; Zhang et al., 2019; E et al., 2019). In real-world applications, online systems inevitably face continually emerging new user intents. Therefore, we propose the Lifelong Intent Detection task, which continually trains the ID model on emerging data to learn new classes. In this task, there is a sequence of data $\{D_1, D_2, \dots, D_K\}$. Each $D_k$ ($k = 1, \dots, K$) has its own label set $C_k$, i.e., one or more intent classes, and training/validation/testing sets ($D_k^{train}$, $D_k^{valid}$, $D_k^{test}$). At each step $k$, the lifelong learning framework trains the ID model on the new training set $D_k^{train}$ to learn the new classes in $C_k$. The LID task requires that the ID model perform well on all observed classes. Therefore, after training on $D_k$, the updated ID model is evaluated on all observed testing sets (i.e., $\bigcup_{i=1}^{k} D_i^{test}$) and uniformly classifies each sample into all known classes (i.e., $\tilde{C}_k = \bigcup_{i=1}^{k} C_i$).
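To make the protocol concrete, the following sketch outlines one way the class-incremental loop could be organized; the `StepData` container and the `add_classes`/`fit`/`evaluate` methods are illustrative placeholders, not part of the paper.

```python
# Hypothetical sketch of the Lifelong Intent Detection protocol.
# Each step k brings a new label set C_k with its own train/valid/test split;
# the model is trained only on the new training data (plus any replayed
# exemplars) and is then evaluated on the union of all test sets seen so far.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StepData:
    classes: List[str]                 # label set C_k
    train: List[Tuple[str, str]]       # (utterance, intent) pairs
    valid: List[Tuple[str, str]]
    test: List[Tuple[str, str]]

def lifelong_intent_detection(model, steps: List[StepData]):
    seen_classes, seen_tests = [], []
    for k, step in enumerate(steps, start=1):
        seen_classes.extend(step.classes)   # \tilde{C}_k = C_1 ∪ ... ∪ C_k
        model.add_classes(step.classes)     # add new class embeddings
        model.fit(step.train)               # train on D_k^train only
        seen_tests.extend(step.test)
        acc_k = model.evaluate(seen_tests, candidate_labels=seen_classes)
        print(f"step {k}: accuracy on all observed classes = {acc_k:.4f}")
```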

3. Method

In this work, we propose Multi-Strategy Rebalancing to handle the data imbalance problem in the LID task. In this section, we first review a typical replay-based method, iCaRL (Rebuffi et al., 2017), as background. We then analyze the data imbalance problem in depth and introduce the proposed solutions, which are shown in Figure 3.

Figure 3. Illustrations of our method for lifelong intent detection. At each step, our method combines Cosine Normalization, Hierarchical Knowledge Distillation (KD), and Inter-Class Margin Loss to learn the imbalanced data.

3.1. Background

A typical ID model contains two components: an encoder and a set of class embeddings. The encoder can be a recurrent neural network or a pre-trained model (Cho et al., 2014; Devlin et al., 2019). We adopt BERT (Devlin et al., 2019), one of the strongest available encoders, as our encoder. BERT is a multi-layer Transformer (Vaswani et al., 2017) pre-trained on large-scale unlabeled corpora. It encodes each sample into a sentence-level feature vector $f$, i.e., the hidden state of the “[CLS]” token. The ID model then computes the dot-product similarity between the feature vector and the class embeddings $\{w_c\}$ as the class logits. The loss of the ID model is the standard cross-entropy loss:

$\mathcal{L}_{CE} = -\sum_{c \in \tilde{C}_k} y_c \log p_c \qquad (1)$

where $\tilde{C}_k$ is the set of all observed classes, $y_c$ is the one-hot ground-truth label, and $p_c$ is the class probability obtained by softmax.
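For concreteness, a minimal sketch of such a baseline ID model is given below; it follows the description above (BERT encoder, “[CLS]” feature, dot-product logits, cross-entropy), but the initialization scale, model size, and demo utterance are assumptions rather than the authors' released implementation.

```python
# Sketch of the baseline ID model: BERT encodes the utterance, the "[CLS]"
# hidden state is the sentence feature, and dot-product similarity with the
# class embeddings gives the logits used in the cross-entropy loss (Eq. 1).
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class IntentDetector(nn.Module):
    def __init__(self, num_classes: int, bert_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(bert_name)
        hidden = self.encoder.config.hidden_size
        # one embedding vector per observed intent class (assumed init scale)
        self.class_emb = nn.Parameter(torch.randn(num_classes, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        feature = out.last_hidden_state[:, 0]      # "[CLS]" hidden state
        logits = feature @ self.class_emb.t()      # dot-product similarity
        return logits, feature

if __name__ == "__main__":
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = IntentDetector(num_classes=5)
    batch = tokenizer(["i am flying to chicago next wednesday"],
                      return_tensors="pt", padding=True)
    logits, _ = model(batch["input_ids"], batch["attention_mask"])
    loss = nn.functional.cross_entropy(logits, torch.tensor([0]))
    print(loss.item())
```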

To overcome catastrophic forgetting of old data, iCaRL (Rebuffi et al., 2017) maintains a bounded memory to store a few representative old samples, which introduces important information about the data distribution of previous classes into the training process. The memory can be denoted as $\mathcal{M} = \{M_1, M_2, \dots\}$, where $M_j$ is the set of samples reserved for the $j$-th class. After training on the new data, iCaRL selects the most representative samples of each new class via a class prototype (Snell et al., 2017), calculated by averaging the feature vectors of all training samples of that class. Based on the distance between the feature vector of each training sample and the prototype, iCaRL sorts the training samples of each class and stores the top $\lfloor B / |\tilde{C}_k| \rfloor$ nearest samples as exemplars, where $B$ is the memory size and $|\tilde{C}_k|$ is the number of all observed classes. To allocate space for the current classes, iCaRL also shrinks the exemplar set of each old class to $\lfloor B / |\tilde{C}_k| \rfloor$ samples, removing the samples farthest from the prototype according to the sorted list. In this way, the most representative samples are reserved in the memory.
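A simplified sketch of this prototype-based exemplar selection is shown below (a plain nearest-to-prototype rule with an assumed per-class budget, not iCaRL's exact herding procedure):

```python
# Sketch of prototype-based exemplar selection for a bounded replay memory:
# the samples whose features are closest to the class mean (prototype) are
# kept; when new classes arrive, each old class keeps only a smaller budget.
import torch

def select_exemplars(features: torch.Tensor, per_class_budget: int) -> torch.Tensor:
    """features: (n_samples, dim) feature vectors of one class.
    Returns indices of the per_class_budget samples nearest to the prototype."""
    prototype = features.mean(dim=0, keepdim=True)        # class prototype
    dists = torch.cdist(features, prototype).squeeze(1)   # distance to prototype
    order = torch.argsort(dists)                          # nearest first
    return order[:per_class_budget]

def shrink_exemplars(sorted_indices: torch.Tensor, new_budget: int) -> torch.Tensor:
    # Keep only the new_budget samples nearest to the prototype;
    # the farthest ones are dropped to free space for new classes.
    return sorted_indices[:new_budget]
```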

In addition, iCaRL combines the cross-entropy loss with a knowledge distillation (KD) loss (Hinton et al., 2015) to retrain the model. The distillation loss enables the model at the current step to learn the probability distribution of the model trained at the last step:

$\mathcal{L}_{KD} = -\sum_{c \in \tilde{C}_{k-1}} \hat{q}_c \log q_c \qquad (2)$

where $\hat{q}_c$ and $q_c$ are the soft labels over the old classes ($\tilde{C}_{k-1}$) predicted by the last model and the current model, respectively, computed from the logits (i.e., the results before the softmax layer) as $q_c = \frac{\exp(z_c / T)}{\sum_{j \in \tilde{C}_{k-1}} \exp(z_j / T)}$. Here $T$ is the temperature scalar, which is used to increase the weight of small probability values. The KD loss is an effective way to alleviate catastrophic forgetting by learning the soft labels of the last model.

However, at each step, the new data is usually far more abundant than the reserved old data, leading to a serious data imbalance problem. This makes previous methods tend to predict new classes and catastrophically forget old classes.

3.2. Multi-Strategy Rebalancing

In this work, we address the data imbalance problem from multiple aspects by incorporating three components: cosine normalization, hierarchical knowledge distillation, and an inter-class margin loss.

3.2.1. Cosine Normalization

We find that the magnitude of both the feature vectors and the class embeddings of new classes is significantly larger than that of old classes, which may make the current model tend to predict new classes. To solve this problem, we replace the original dot-product similarity with cosine normalization:

$p_c = \frac{\exp(\eta \cos(f, w_c))}{\sum_{j \in \tilde{C}_k} \exp(\eta \cos(f, w_j))} \qquad (3)$

where $\cos(f, w_c) = \frac{f^{\top} w_c}{\lVert f \rVert \, \lVert w_c \rVert}$ measures the cosine similarity between the feature vector $f$ and the class embedding $w_c$. The hyper-parameter $\eta$ controls the peak of the softmax distribution, since the cosine similarity ranges between -1 and 1. Geometrically, we constrain these vectors to a high-dimensional sphere, which effectively eliminates the bias caused by the imbalanced magnitudes.
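A minimal sketch of this cosine-normalized classifier is given below; the default value of the scaling factor $\eta$ is an assumption.

```python
# Cosine-normalized classifier: both the feature vector and the class
# embeddings are L2-normalized, so the logits depend only on the angle
# between them, not on their magnitudes.
import torch
import torch.nn.functional as F

def cosine_logits(features: torch.Tensor, class_emb: torch.Tensor, eta: float = 16.0):
    """features: (batch, dim); class_emb: (n_classes, dim)."""
    f = F.normalize(features, dim=-1)     # project onto the unit sphere
    w = F.normalize(class_emb, dim=-1)
    return eta * (f @ w.t())              # scaled cosine similarity

# probabilities are then obtained with the usual softmax:
# probs = torch.softmax(cosine_logits(features, class_emb), dim=-1)
```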

3.2.2. Hierarchical Knowledge Distillation

The knowledge (i.e., the feature distribution and the probability distribution) of the model trained on new data usually deviates heavily from that of the model trained on old data. This makes the model forget important information about old classes. We propose hierarchical knowledge distillation to preserve the previous knowledge at two levels.

In the Feature-Level KD, we preserve the geometric structure of the feature vector of the current model by reducing the angle between it and the feature vector of the last model:

$\mathcal{L}_{FKD} = 1 - \cos(f, \hat{f}) \qquad (4)$

where $\hat{f}$ is the feature vector extracted by the last model. $\mathcal{L}_{FKD}$ encourages the features extracted by the current model to stay close to those extracted by the last model on the high-dimensional sphere. In addition, we fix the old class embeddings to preserve their spatial structure.
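The following sketch illustrates this feature-level term, together with one possible way to freeze the old class embeddings via a gradient hook (an assumed mechanism, not necessarily the authors' implementation):

```python
# Feature-level KD (Eq. 4) plus freezing the old class embeddings.
import torch
import torch.nn.functional as F

def feature_level_kd(cur_feat: torch.Tensor, old_feat: torch.Tensor) -> torch.Tensor:
    # Reduce the angle between the current feature and the feature produced
    # by the last model: loss = 1 - cos(f, f_hat), averaged over the batch.
    return (1.0 - F.cosine_similarity(cur_feat, old_feat, dim=-1)).mean()

def freeze_old_class_embeddings(class_emb: torch.nn.Parameter, n_old: int):
    # Zero out gradients flowing into the first n_old rows so that the old
    # class embeddings keep their spatial structure during the new step.
    def hook(grad):
        grad = grad.clone()
        grad[:n_old] = 0.0
        return grad
    class_emb.register_hook(hook)
```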

Figure 4. Performance ($A_k$, the accuracy on all observed classes) as the number of classes increases on the ATIS, SNIPS, HWU64, and CLINC150 benchmarks (panels a–d). Training time (measured on a GeForce RTX 2080Ti) is shown in brackets.

In the Prediction-Level KD, we encourage the current model to preserve the probability distribution of the last model through a knowledge distillation loss $\mathcal{L}_{PKD}$, as in Eq. 2, which learns the soft labels predicted by the last model.

3.2.3. Inter-Class Margin Loss

Another negative effect of the imbalance problem is class confusion, i.e., new and old class embeddings are usually mixed together in the high-dimensional space. This is because the large number of new training samples is likely to activate neighboring samples with different labels (Hou et al., 2019; Tao et al., 2020). To solve this problem, we introduce an inter-class margin loss to separate these class embeddings:

$\mathcal{L}_{ICM} = \sum_{i \in C_k} \sum_{j \in \tilde{C}_k, \, j \neq i} \max\big(0, \cos(w_i, w_j) - \cos m\big) \qquad (5)$

where $m$ is the margin. This loss encourages the angle between each pair of class embeddings $(w_i, w_j)$ to be greater than $m$. Through this loss, the class embeddings can be distributed over the high-dimensional sphere without confusion.
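The sketch below implements one plausible pairwise form of this loss; the default margin value and the toy vectors in the usage example are assumptions.

```python
# Inter-class margin loss on the unit sphere: pairs of new/old class
# embeddings whose angle is smaller than the margin m (in radians) are
# penalized; well-separated pairs contribute zero loss.
import math
import torch
import torch.nn.functional as F

def inter_class_margin_loss(new_emb, old_emb, margin=0.5):
    cos = F.normalize(new_emb, dim=-1) @ F.normalize(old_emb, dim=-1).t()
    # angle(w_i, w_j) > m  <=>  cos(w_i, w_j) < cos(m); penalize violations
    return F.relu(cos - math.cos(margin)).mean()

if __name__ == "__main__":
    close = torch.tensor([[1.0, 0.05]])   # almost parallel to the old embedding
    far = torch.tensor([[0.0, 1.0]])      # orthogonal to the old embedding
    old = torch.tensor([[1.0, 0.0]])
    print(inter_class_margin_loss(close, old))  # > 0: confusion is penalized
    print(inter_class_margin_loss(far, old))    # 0: already separated
```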

3.3. Training

At each step of LID, our MSR framework combines the above losses to train the ID model on the new data and the reserved old data. The overall loss is defined as follows:

$\mathcal{L} = \mathcal{L}_{CE} + \lambda_1 \mathcal{L}_{FKD} + \lambda_2 \mathcal{L}_{PKD} + \lambda_3 \mathcal{L}_{ICM} \qquad (6)$

where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are hyper-parameters that balance the performance between old and new classes. $\mathcal{L}_{CE}$, $\mathcal{L}_{FKD}$, and $\mathcal{L}_{PKD}$ are calculated on both the new data and the reserved old data, while $\mathcal{L}_{ICM}$ is calculated over all new class embeddings.
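Putting the pieces together, a hedged sketch of the overall objective on one mixed batch is shown below; `model` is assumed to return (logits, feature) and expose a `class_emb` parameter, and the $\lambda$ values, temperature, and margin defaults are placeholders.

```python
# Sketch of the overall MSR objective (Eq. 6) on one batch of new data plus
# replayed exemplars; old_model is a frozen copy of the model from the last step.
import math
import torch
import torch.nn.functional as F

def msr_loss(model, old_model, batch, labels, n_old,
             T=2.0, m=0.5, lambda1=1.0, lambda2=1.0, lambda3=1.0):
    logits, feat = model(**batch)                  # cosine logits (Eq. 3) + [CLS] feature
    with torch.no_grad():
        old_logits, old_feat = old_model(**batch)  # predictions of the last model
    ce = F.cross_entropy(logits, labels)           # Eq. 1 on new + reserved old data
    fkd = (1.0 - F.cosine_similarity(feat, old_feat, dim=-1)).mean()         # Eq. 4
    pkd = -(F.softmax(old_logits[:, :n_old] / T, dim=-1)
            * F.log_softmax(logits[:, :n_old] / T, dim=-1)).sum(-1).mean()   # Eq. 2
    w = F.normalize(model.class_emb, dim=-1)
    cos = w[n_old:] @ w[:n_old].t()                # new vs. old class embeddings
    icm = F.relu(cos - math.cos(m)).mean()         # Eq. 5
    return ce + lambda1 * fkd + lambda2 * pkd + lambda3 * icm
```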

4. Experiment

4.1. Lifelong Intent Detection Benchmarks

Since we are the first to propose the LID task, we construct four benchmarks as follows: for an ID dataset, we arrange its classes in a fixed random order, where each class has its own data. In a class-incremental manner, the lifelong learning methods continually train an ID model on one or multiple new classes. Based on four widely used datasets, ATIS (Hemphill et al., 1990), SNIPS (Coucke et al., 2018), HWU64 (Liu et al., 2019), and CLINC150 (Larson et al., 2019), we construct four benchmarks. To provide a comprehensive evaluation, we set different numbers of new classes per step in different benchmarks: 1, 1, 5, and 15 new classes per step on the ATIS, SNIPS, HWU64, and CLINC150 benchmarks, respectively. Since the class data in ATIS and HWU64 follows a long-tail distribution, we use the data of the top 10 and top 50 most frequent classes, respectively. The statistics of the four benchmarks are shown in Appendix A.
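The following sketch shows one way such a class-incremental benchmark could be derived from a standard ID dataset; the shuffling seed and the helper names are assumptions, not the paper's released splits.

```python
# Build a class-incremental LID benchmark from a flat intent dataset:
# classes are shuffled into a fixed random order and grouped into steps.
import random
from collections import defaultdict

def build_benchmark(samples, classes_per_step, seed=0):
    """samples: list of (utterance, intent) pairs; returns a list of steps,
    each a dict with its own label set and data."""
    by_class = defaultdict(list)
    for utt, intent in samples:
        by_class[intent].append((utt, intent))
    order = sorted(by_class)                # fixed random order of classes
    random.Random(seed).shuffle(order)
    steps = []
    for i in range(0, len(order), classes_per_step):
        labels = order[i:i + classes_per_step]
        data = [ex for c in labels for ex in by_class[c]]
        steps.append({"classes": labels, "data": data})
    return steps
```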

4.2. Implementation Details

At each step of the LID task, we report the accuracy on the testing data of all observed classes, denoted as $A_k$. After the last step, we report Average Acc, which is the average of $A_k$ over all steps, and Whole Acc, which is the accuracy on the whole testing data of all classes. We use BERT from HuggingFace's Transformers library. All hyper-parameters, including the learning rate, the loss weights $\lambda_1$, $\lambda_2$, and $\lambda_3$, the scaling factor $\eta$, the temperature $T$, and the margin $m$, are obtained by a grid search on the validation set. The batch size is 64 and the memory size is 200.
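For clarity, the two metrics can be computed as in the small sketch below (helper names are illustrative):

```python
# Average Acc: mean of the per-step accuracies A_1..A_K, where A_k is measured
# on the test data of all classes observed up to step k.
# Whole Acc: accuracy on the whole testing data of all classes after the last step.
def average_acc(per_step_acc):
    return sum(per_step_acc) / len(per_step_acc)

def whole_acc(predictions, gold_labels):
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)
```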

4.3. Baselines

In this work, we propose a model-agnostic lifelong learning method to handle the LID task. Therefore, we adopt other model-agnostic lifelong learning methods that achieve state-of-the-art performance on related tasks as our baselines. EWC (Kirkpatrick et al., 2017) adopts a regularization loss to slow down the update of important parameters. LwF (Li and Hoiem, 2017) uses knowledge distillation to learn the soft labels of the last model. EMR (Wang et al., 2019) randomly stores some old samples. iCaRL (Rebuffi et al., 2017) combines knowledge distillation and prototype-based sample selection. EEIL (Castro et al., 2018) handles the data imbalance problem by resampling a balanced subset. EMAR (Han et al., 2020) uses K-Means to select samples and consolidates the model with old prototypes. FineTune directly fine-tunes the pre-trained model on new data. UpperBound uses the training data of all observed classes to train the model and is regarded as the upper bound.

4.4. Main Results

Figure 4 shows the accuracy ($A_k$) during the whole lifelong learning process. We also list Average Acc and Whole Acc after the last step in Appendix B. From the results, we can see that: (1) Our MSR achieves state-of-the-art performance, outperforming the best baseline by 2.27%, 1.68%, 3.16%, and 3.57% whole accuracy on the ATIS, SNIPS, HWU64, and CLINC150 benchmarks, respectively. These baselines either ignore the data imbalance problem or handle it with a simple resampling approach, which leads to catastrophic forgetting. (2) Compared to EMAR, our method also saves computation time, since its rebalancing strategies are more lightweight. (3) There is still a gap between our method and the upper bound, which indicates that challenges remain to be addressed.

4.5. Ablation Study

In this section, we perform ablation studies on the three proposed components. The results are shown in Appendix C. Removing any component causes a performance degradation, which shows that our method alleviates catastrophic forgetting through multi-strategy rebalancing, addressing the multiple adverse effects caused by the data imbalance problem.

5. Conclusion

In this paper, we propose the lifelong intent detection task to handle continually emerging user intents. In addition, we propose multi-strategy rebalancing to address multiple adverse effects caused by the data imbalance problem. Experimental results on four constructed benchmarks demonstrate the effectiveness of our method.

References

  • R. Aljundi, F. Babiloni, M. Elhoseiny, M. Rohrbach, and T. Tuytelaars (2018) Memory aware synapses: learning what (not) to forget. In Proceedings of ECCV, pp. 139–154.
  • P. Cao, Y. Chen, J. Zhao, and T. Wang (2020) Incremental event detection via knowledge consolidation networks. In Proceedings of EMNLP 2020, pp. 707–717.
  • F. M. Castro, M. J. Marín-Jiménez, N. Guil, C. Schmid, and K. Alahari (2018) End-to-end incremental learning. In Computer Vision – ECCV 2018, Vol. 11216, pp. 241–257.
  • K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014) Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of EMNLP, pp. 1724–1734.
  • A. Coucke, A. Saade, A. Ball, T. Bluche, A. Caulier, D. Leroy, C. Doumouro, T. Gisselbrecht, F. Caltagirone, T. Lavril, M. Primet, and J. Dureau (2018) Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. CoRR abs/1805.10190.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019, pp. 4171–4186.
  • H. E, P. Niu, Z. Chen, and M. Song (2019) A novel bi-directional interrelated model for joint intent detection and slot filling. In Proceedings of the 57th ACL, pp. 5467–5471.
  • R. M. French (1999) Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences 3(4), pp. 128–135.
  • X. Han, Y. Dai, T. Gao, Y. Lin, Z. Liu, P. Li, M. Sun, and J. Zhou (2020) Continual relation learning via episodic memory activation and reconsolidation. In Proceedings of the 58th ACL, pp. 6429–6440.
  • C. T. Hemphill, J. J. Godfrey, and G. R. Doddington (1990) The ATIS spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop.
  • G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
  • S. Hou, X. Pan, C. C. Loy, Z. Wang, and D. Lin (2019) Learning a unified classifier incrementally via rebalancing. In Proceedings of CVPR, pp. 831–839.
  • J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al. (2017) Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences 114(13), pp. 3521–3526.
  • S. Larson, A. Mahendran, J. J. Peper, C. Clarke, A. Lee, P. Hill, J. K. Kummerfeld, K. Leach, M. A. Laurenzano, L. Tang, and J. Mars (2019) An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of EMNLP-IJCNLP 2019, pp. 1311–1316.
  • Z. Li and D. Hoiem (2017) Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence 40(12), pp. 2935–2947.
  • X. Liu, A. Eshghi, P. Swietojanski, and V. Rieser (2019) Benchmarking natural language understanding services for building conversational agents. In Increasing Naturalness and Flexibility in Spoken Dialogue Interaction – 10th IWSDS, Vol. 714, pp. 165–183.
  • M. McCloskey and N. J. Cohen (1989) Catastrophic interference in connectionist networks: the sequential learning problem. In Psychology of Learning and Motivation, Vol. 24, pp. 109–165.
  • S. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert (2017) iCaRL: incremental classifier and representation learning. In Proceedings of CVPR, pp. 2001–2010.
  • M. B. Ring (1995) Continual learning in reinforcement environments. Ph.D. Thesis, University of Texas at Austin, TX, USA.
  • J. Snell, K. Swersky, and R. Zemel (2017) Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077–4087.
  • X. Tao, X. Hong, X. Chang, S. Dong, X. Wei, and Y. Gong (2020) Few-shot class-incremental learning. In Proceedings of CVPR, pp. 12180–12189.
  • S. Thrun (1998) Lifelong learning algorithms. In Learning to Learn, pp. 181–209.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
  • H. Wang, W. Xiong, M. Yu, X. Guo, S. Chang, and W. Y. Wang (2019) Sentence embedding alignment for lifelong relation extraction. In Proceedings of NAACL-HLT 2019, pp. 796–806.
  • G. Yan, L. Fan, Q. Li, H. Liu, X. Zhang, X. Wu, and A. Y. S. Lam (2020) Unknown intent detection using Gaussian mixture model with an application to zero-shot intent classification. In Proceedings of the 58th ACL, pp. 1050–1060.
  • C. Zhang, Y. Li, N. Du, W. Fan, and P. Yu (2019) Joint slot filling and intent detection via capsule neural networks. In Proceedings of the 57th ACL, pp. 5259–5267.

Appendix A Statistics of benchmarks

In this section, we show the statistics of the four constructed benchmarks in Table 1.

Benchmark Training Validation Test Classes Steps
ATIS 4384 490 817 10 10
SNIPS 13084 700 700 7 7
HWU64 14465 4827 4845 50 10
CLINC150 15000 3000 3000 150 10
Table 1. Statistics of the ATIS, SNIPS, HWU64, and CLINC150 benchmarks. “Training” is the number of training samples.

Appendix B Results on the four benchmarks

In this section, we list the results after the last step in Table 2. The average accuracy of all steps and the whole accuracy of the whole testing data are shown in different columns. In both metrics, our method MSR significantly outperforms the baselines and achieves state-of-the-art performance on the four benchmarks. It implies that our method is effective in handling the LID task via multi-strategy rebalancing.

Method ATIS SNIPS HWU64 CLINC150
Average Acc Whole Acc Average Acc Whole Acc Average Acc Whole Acc Average Acc Whole Acc
FineTune 83.91 77.48 38.37 17.71 19.49 2.72 30.15 10.37
UpperBound 99.78 99.27 99.27 97.71 71.57 68.34 97.25 95.63
LwF 85.28 79.12 70.23 33.86 24.30 8.72 40.57 22.73
EWC 87.97 81.76 80.84 47.57 29.92 11.66 54.33 31.03
EMR 96.83 94.55 96.07 88.29 56.38 45.97 85.12 71.30
iCaRL 97.07 95.23 94.31 85.57 56.98 46.54 85.27 73.47
EEIL 97.50 95.42 95.26 85.86 58.63 48.98 86.74 74.43
EMAR 97.87 95.53 96.93 91.89 56.28 44.69 85.14 72.80
MSR (Ours) 99.03 97.80 97.64 93.57 60.81 52.14 89.53 78.00
Table 2. Average Acc and Whole Acc after the last step.
Method ATIS SNIPS HWU64 CLINC150
Average Acc Whole Acc Average Acc Whole Acc Average Acc Whole Acc Average Acc Whole Acc
MSR (Ours) 99.03 97.80 97.64 93.57 60.81 52.14 89.53 78.00
- CN 98.88 96.94 97.54 93.36 60.34 51.95 89.46 77.10
- FKD 98.31 96.21 97.51 93.14 59.75 50.03 89.41 76.77
- PKD 98.61 96.82 97.31 92.57 59.61 51.02 89.08 75.70
- HKD 98.23 95.84 96.63 92.00 59.60 49.14 88.54 74.27
- ICML 98.52 96.70 97.04 92.29 59.56 48.24 89.26 76.90
- CN and HKD 97.79 95.23 96.29 91.43 58.97 47.78 87.34 72.23
- MSR 96.83 94.55 96.07 88.29 56.38 45.97 85.12 71.30
Table 3. Ablation studies of multi-strategy rebalancing. We compare MSR with variants employing different components.

Appendix C Ablation Study

Our method consists of three components: cosine normalization, hierarchical knowledge distillation, and inter-class margin loss. We show the ablation studies of the three components in Table 3. For “- CN”, we replace cosine normalization with the dot-product similarity. For “- FKD”, we remove the feature-level knowledge distillation. For “- PKD”, the prediction-level knowledge distillation is removed. For “- HKD”, the model does not adopt the proposed hierarchical knowledge distillation. For “- ICML”, the model removes the inter-class margin loss. For “- CN and HKD”, we remove both cosine normalization and hierarchical knowledge distillation. The model without multi-strategy rebalancing (“- MSR”, i.e., the EMR model) is shown in the last row. All of these variants achieve lower performance than the full model, indicating that simultaneously utilizing these multiple strategies is highly effective.