Speech Representation Learning Through Self-supervised Pretraining And Multi-task Finetuning

10/18/2021
by   Yi-Chen Chen, et al.
National Taiwan University
Nvidia

Speech representation learning plays a vital role in speech processing. Among the various approaches, self-supervised learning (SSL) has become an important research direction: an SSL-pretrained model can achieve excellent performance on a wide range of downstream speech processing tasks. Supervised multi-task learning (MTL) is another representation learning paradigm, which has proven effective in computer vision (CV) and natural language processing (NLP). However, there has been no systematic study of general representation models trained with supervised MTL in speech processing. In this paper, we show that MTL finetuning can further improve SSL pretraining. We also analyze the generalizability of supervised MTL finetuning by examining whether the speech representations learned with MTL finetuning transfer to unseen tasks.



1 Introduction

Recently, many SSL approaches have been proposed to pretrain models for speech processing tasks, including pretraining with generative losses [1, 2, 3, 4] or discriminative losses [5, 6, 7, 8, 9]. Some approaches use multi-task learning with multiple SSL objectives [10, 11]. After a shared model is pretrained with SSL to extract general representations, it can be specialized to downstream tasks with task-specific head models and simple finetuning. This method achieves state-of-the-art performance in many applications.

To fairly evaluate the generalizability of SSL approaches without heavy downstream task-specific finetuning, the SUPERB benchmark [12] was proposed. SUPERB measures the performance of a shared model across a wide range of speech processing tasks without heavy finetuning. Ten tasks are included to investigate four aspects of speech: content, speaker, semantics, and paralinguistics. To evaluate a general model trained with SSL, the pretrained model parameters are frozen, and the fixed representations are extracted and fed into each task-specific prediction head (a small downstream model) for training. During evaluation, the pretrained shared model and the trained prediction heads are used on all tasks. In this scenario, some self-supervised models show outstanding performance on all ten tasks in SUPERB.

Supervised multi-task learning (MTL) trains a shared model on various downstream tasks [13, 14, 15, 16, 17]. This paper investigates whether MTL on various downstream tasks can further improve the representations from SSL. In CV [18, 19] and NLP [20, 21], general models trained with MTL can be evaluated on benchmarks that include various tasks. However, in speech, there has not been a systematic study of general representation learning models trained by MTL over various speech processing tasks. SpeechNet [22] proposes a general modularized model that performs a variety of speech processing tasks, but its purpose is to let different tasks utilize different subsets of the general modules in SpeechNet, rather than to provide one shared model that extracts general representations for training the task-specific heads of downstream tasks.

In this paper, we investigate two MTL training scenarios and one task transfer learning scenario. For the two MTL scenarios, we select a state-of-the-art SSL pretrained shared model on the SUPERB benchmark as the starting point for MTL. Then we train the shared model with MTL in two different scenarios:

  • All-task MTL Finetuning: Finetune the SSL pretrained shared model with all tasks in SUPERB. It serves as a strong baseline for SSL approaches and the following scenarios.

  • Leave-one-out MTL Finetuning: Finetune the SSL pretrained shared model with all but one task in SUPERB. We can observe the influence of removing one task on the learned representations and their performance on the other tasks.

To further examine if the representations learned with supervised MTL can generalize to an unseen new task, we have an additional Task Transfer Learning scenario.

  • We take a shared model from the Leave-one-out MTL scenario and freeze its parameters. We extract the representations with this shared model for training the prediction head of the remaining task that is not involved in MTL finetuning.

Through the different training scenarios above, we perform a preliminary study of the generalizability of representation learning by MTL of various speech processing tasks on a standard benchmark. The code is released for reproduction and future extension (https://github.com/s3prl/s3prl/tree/multi-task-distributed).

Figure 1: Four different training scenarios. Scenarios (b-1) and (b-2) require the pretrained shared model from (a). Scenario (c) requires the finetuned shared model from (b-2). The parameters of the shared model in scenarios (a) and (c) are frozen.

2 Training Scenarios

In this section, we describe four related training scenarios in the following subsections: SSL Pretraining, All-task MTL Finetuning, Leave-one-out MTL Finetuning, and Task Transfer Learning. The two MTL finetuning scenarios require the SSL pretrained shared model, and the Task Transfer Learning scenario requires the shared model from the Leave-one-out MTL Finetuning scenario.

2.1 SSL Pretraining

Many SSL approaches are evaluated and compared on the ten tasks in SUPERB [12]. For each SSL approach, we first pretrain a model with SSL objectives. Then we use this pretrained model as the shared model to extract representations for all downstream tasks. The parameters of the pretrained model are frozen. Then we train each task-specific prediction head (small downstream model) with the fixed representations, as illustrated in Figure 1 (a).
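
A minimal sketch of this scenario in PyTorch is shown below; the names (upstream, head, task_loss) and the head learning rate are illustrative placeholders rather than the actual s3prl interfaces.

```python
import torch

def train_head_on_frozen_upstream(upstream, head, loader, task_loss,
                                  num_steps, lr=1e-3):
    """Scenario (a): freeze the SSL-pretrained shared model, train only the head."""
    upstream.eval()
    for p in upstream.parameters():
        p.requires_grad = False                  # shared model stays frozen
    optimizer = torch.optim.Adam(head.parameters(), lr=lr)  # lr is a placeholder

    step = 0
    while step < num_steps:
        for wav, label in loader:
            with torch.no_grad():                # representations are fixed features
                representation = upstream(wav)
            loss = task_loss(head(representation), label)
            optimizer.zero_grad()
            loss.backward()                      # gradients only update the head
            optimizer.step()
            step += 1
            if step >= num_steps:
                break
    return head
```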

2.2 All-task MTL Finetuning

We take the shared model pretrained in Subsection 2.1 as the starting point. Then we further finetune the shared model with MTL by jointly training it with downstream task-specific heads of all tasks in SUPERB, as illustrated in Figure 1 (b-1). In this way, the shared model can be updated by the gradients of all tasks to fit the respective objectives of each task. Therefore, the representations extracted from the shared model can perform well on the tasks involved in MTL. It serves as a strong baseline for SSL approaches and the following scenarios. To further examine the generalizability of representation learning by supervised MTL, we have two additional training scenarios below.
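
A minimal sketch of the joint update is shown below; the names are illustrative, and the batching strategy (one batch per task per step with an unweighted sum of losses) is an assumption for illustration rather than the exact training recipe. The leave-one-out variant of Subsection 2.3 corresponds to running the same loop on a subset of the task dictionaries.

```python
import torch

def _infinite(loader):
    """Cycle through a data loader indefinitely."""
    while True:
        for batch in loader:
            yield batch

def mtl_finetune(upstream, heads, loaders, losses, num_steps, lr=1e-5):
    """Scenarios (b-1)/(b-2): jointly update the shared model and the task heads.

    `heads`, `loaders`, and `losses` are dicts keyed by task name; dropping one
    task from these dicts gives the leave-one-out variant.
    """
    params = list(upstream.parameters())
    for head in heads.values():
        params += list(head.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    iters = {task: _infinite(loader) for task, loader in loaders.items()}

    upstream.train()
    for _ in range(num_steps):
        optimizer.zero_grad()
        total_loss = 0.0
        for task, head in heads.items():         # one batch per task per step
            wav, label = next(iters[task])
            representation = upstream(wav)        # gradients reach the shared model
            total_loss = total_loss + losses[task](head(representation), label)
        total_loss.backward()
        optimizer.step()
    return upstream, heads
```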

2.3 Leave-one-out MTL Finetuning

Similarly, we take the shared model pretrained in Subsection 2.1 as the starting point. Then we further finetune the shared model with MTL by jointly training it with the downstream task-specific heads of all but one task in SUPERB, as illustrated in Figure 1 (b-2). Compared to the finetuned shared model in Subsection 2.2, we can observe the influence of removing one task on the learned representations and their performance on the other tasks.

2.4 Task Transfer Learning

We take the finetuned shared model in Subsection 2.3 as the pretrained shared model in this scenario. Then we freeze the shared model, and only train the downstream head model of the remaining task that is not used in MTL finetuning, as illustrated in Figure 1 (c).

Comparing the representations in this scenario with those learned with only SSL approaches in Subsection 2.1, we can observe the generalizability of the representations learned with MTL on a new task compared to SSL only. On the other hand, in comparison with the representations learned with All-task MTL in Subsection 2.2, we can observe how the performance of a task is influenced if this task is not involved in the MTL finetuning.

3 Experimental Setup

| Scenario | Tasks for MTL Finetuning | ASR WER↓ | PR PER↓ | SF F1↑ | SF CER↓ | SD DER↓ | ER ACC↑ | IC ACC↑ | KS ACC↑ | ASV EER↓ | SID ACC↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| (a) SSL | N/A | 6.42 | 5.41 | 88.53 | 25.20 | 5.88 | 64.24 | 98.34 | 96.30 | 5.11 | 81.42 |
| (b-1) SSL+MTL | all | 6.22 | 3.61 | 87.56 | 26.76 | 4.93 | 67.28 | 99.60 | 97.34 | 6.76 | 90.86 |
| (b-2) SSL+MTL | all but ASR | X | 3.63 | 87.28 | 27.11 | 4.89 | 65.07 | 99.63 | 97.57 | 7.78 | 90.69 |
| (b-2) SSL+MTL | all but PR | 6.79 | X | 86.94 | 27.66 | 4.81 | 66.73 | 99.66 | 97.44 | 7.94 | 91.16 |
| (b-2) SSL+MTL | all but SF | 6.10 | 3.39 | X | X | 4.73 | 65.71 | 99.58 | 97.18 | 7.61 | 90.70 |
| (b-2) SSL+MTL | all but SD | 6.28 | 3.54 | 87.94 | 26.31 | X | 66.73 | 99.63 | 97.11 | 7.49 | 90.79 |
| (b-2) SSL+MTL | all but ER | 6.17 | 3.40 | 87.45 | 26.90 | 4.77 | X | 99.55 | 97.27 | 7.19 | 90.51 |
| (b-2) SSL+MTL | all but IC | 6.13 | 3.34 | 87.65 | 26.94 | 4.78 | 66.08 | X | 97.27 | 6.74 | 90.55 |
| (b-2) SSL+MTL | all but KS | 6.17 | 3.55 | 87.83 | 26.88 | 4.91 | 66.27 | 99.71 | X | 7.86 | 90.67 |
| (b-2) SSL+MTL | all but ASV | 5.90 | 2.79 | 87.88 | 26.52 | 3.61 | 64.88 | 99.58 | 97.44 | X | 85.06 |
| (b-2) SSL+MTL | all but SID | 5.95 | 3.25 | 87.33 | 27.39 | 4.50 | 68.66 | 99.55 | 97.27 | 9.00 | X |
| (c) Task Transfer | N/A | 6.27 | 5.79 | 88.14 | 26.24 | 5.80 | 64.24 | 97.42 | 96.33 | 7.55 | 62.05 |
Table 1: Experimental results of the training scenarios described in Section 2. Numbers in Scenario (b-2) are marked in bold if they are better than the corresponding numbers in (b-1) and underlined if they are worse.
Figure 2: Two relation graphs of tasks (panels: the improvement relations of tasks; the hurt relations of tasks). If a task A performs worse/better after removing a task B from MTL, we connect an edge from B to A in the improvement/hurt graph, indicating that B improves/hurts the performance of A. The width of an edge is thin/medium/thick if the relative score change falls in [0.5%, 2%), [2%, 8%), or above 8%. If the relative score change is less than 0.5%, we consider it negligible and connect no edge.
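
For illustration, the sketch below derives a Figure 2 edge and its width from a pair of Table 1 scores; treating the Scenario (b-1) score as the base of the relative change is our assumption.

```python
def edge_from_scores(score_b1, score_b2_without_B, higher_is_better):
    """Derive a Figure-2-style edge B -> A for a task A.

    score_b1:            A's score under all-task MTL (Scenario (b-1))
    score_b2_without_B:  A's score when task B is left out (Scenario (b-2))
    Returns (graph_name, edge_width) or None if the change is negligible.
    """
    rel_change = abs(score_b2_without_B - score_b1) / abs(score_b1)
    if rel_change < 0.005:
        return None                                   # < 0.5%: no edge
    a_improves_without_b = (score_b2_without_B > score_b1) == higher_is_better
    graph = "hurt" if a_improves_without_b else "improvement"
    if rel_change < 0.02:
        width = "thin"
    elif rel_change < 0.08:
        width = "medium"
    else:
        width = "thick"
    return graph, width

# Example from Table 1: ASR WER (lower is better) is 6.22 with all tasks and
# 6.79 without PR, so PR improves ASR and gets an edge in the improvement graph.
print(edge_from_scores(6.22, 6.79, higher_is_better=False))  # ('improvement', 'thick')
```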

3.1 Tasks In SUPERB

Ten tasks in SUPERB can be used to investigate four aspects of speech: content (Phoneme Recognition (PR), Automatic Speech Recognition (ASR), Keyword Spotting (KS), and Query by Example Spoken Term Detection (QbE)), speaker (Speaker Identification (SID), Automatic Speaker Verification (ASV), and Speaker Diarization (SD)), semantics (Intent Classification (IC) and Slot Filling (SF)), and paralinguistics (Emotion Recognition (ER)). Since no downstream model training is required in QbE, we only perform MTL experiments and compare the results on the other nine tasks.

  • PR converts an utterance into a sequence of phonemes. Alignment modeling is included in the PR task to avoid potentially inaccurate forced alignment. The evaluation metric is the phone error rate (PER); a minimal error-rate sketch is given after this task list.

  • ASR transcribes an utterance into a sequence of words. While PR analyzes the performance of modeling phonetics, ASR reflects the performance of recognizing more common text units in a real-world scenario. The evaluation metric is the word error rate (WER).

  • KS identifies preregistered keywords in an utterance by classifying the utterance into a predefined set of words. The task is important for on-device speech processing and requires low response time. The evaluation metric is the accuracy (ACC).

  • SID classifies the speaker identity of an utterance in a multi-class classification setting, where the set of speakers is the same for training and testing. The evaluation metric is the accuracy (ACC).

  • ASV verifies whether the speakers of a pair of utterances match in a binary classification setting. Different from SID, the speakers in the testing set may not appear in the training set. Therefore, ASV is more challenging than SID. The evaluation metric is the equal error rate (EER).

  • SD segments an utterance and classifies the segments into speaker identities, i.e., who is speaking when. Multiple speakers can speak simultaneously, so rich and varied speaker characteristics should be encoded in the extracted representation of each frame to represent mixtures of signals. The evaluation metric is the diarization error rate (DER).

  • IC classifies an utterance into predefined classes of speaker intents. The evaluation metric is the accuracy (ACC).

  • SF converts an utterance into a sequence of semantic slot-type classes. For example, FromLocation can be a slot-type for a spoken word Taipei, which is known as a slot-value. Both slot-types and slot-values are essential for an SLU system. Therefore, we use two evaluation metrics for slot-types and slot-values respectively: the slot-type F1 score (F1) and the slot-value character error rate (CER).

  • ER predicts an emotion class for each utterance. The evaluation metric is accuracy (ACC).
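
PER, WER, and the slot-value CER above are all normalized edit distances. The sketch below is our own illustration of this computation, not the SUPERB scoring code.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences."""
    dp = list(range(len(hyp) + 1))             # distances against the empty reference
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            cur = min(dp[j] + 1,               # deletion
                      dp[j - 1] + 1,           # insertion
                      prev + (r != h))         # substitution (or match)
            prev, dp[j] = dp[j], cur
    return dp[-1]

def error_rate(ref_tokens, hyp_tokens):
    """WER, PER, or CER depending on whether tokens are words, phonemes, or characters."""
    return edit_distance(ref_tokens, hyp_tokens) / max(len(ref_tokens), 1)

# WER example: one substitution and one insertion over a 3-word reference -> 2/3
print(error_rate("the cat sat".split(), "the cat sit on".split()))
```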

As for the datasets and splits used for each task, we follow the original settings in SUPERB (PR [23], ASR [23], KS [24], SID [25], ASV [25], SD [26], IC [27], SF [28], and ER [29]).

3.2 The SSL Pretraining Approach In Experiments

Many SSL approaches are evaluated and compared on the tasks in SUPERB. Among them, HuBERT [9] achieves the overall best performance. Therefore, we select HuBERT as the SSL pretraining approach across all of our experiments.

HuBERT utilizes an offline clustering algorithm on hidden representations to provide aligned target labels for BERT-like masked prediction [30]. The clustered labels at the masked locations serve as the prediction targets. As in SUPERB, we use a weighted sum of the hidden representations of all layers in the HuBERT model as the representations for the downstream heads.
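
A sketch of such a learnable weighted sum over layer outputs is shown below; it illustrates the idea rather than reproducing the s3prl implementation.

```python
import torch
import torch.nn as nn

class LayerWeightedSum(nn.Module):
    """Learnable weighted sum over the hidden states of all upstream layers."""

    def __init__(self, num_layers):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))   # one scalar per layer

    def forward(self, hidden_states):
        # hidden_states: list of per-layer tensors, each of shape (batch, time, dim)
        stacked = torch.stack(hidden_states, dim=0)             # (layers, batch, time, dim)
        norm_weights = torch.softmax(self.weights, dim=0)       # weights sum to 1
        return (norm_weights.view(-1, 1, 1, 1) * stacked).sum(dim=0)
```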

3.3 Model Architecture and Implementation Details

Since MTL requires more computational resources than single-task training, we adopt HuBERT Base rather than HuBERT Large from SUPERB as our shared model architecture. For the task-specific head architectures, we simply follow the settings in SUPERB. We use a batch size of 2 for MTL finetuning in the training scenarios described in Subsections 2.2 and 2.3, and a batch size of 8 for downstream head training in the scenario described in Subsection 2.4. Each model is trained with the Adam optimizer; the learning rate is linearly warmed up from 0 to 1e-5 over the first 5,000 steps and then linearly decayed to 0 over the remaining 195,000 steps. For more implementation details, please refer to the released code.
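
This schedule can be written, for example, with PyTorch's LambdaLR; the stand-in model below is a placeholder, and the peak learning rate of 1e-5 follows the description above.

```python
import torch

def warmup_then_linear_decay(step, warmup_steps=5000, total_steps=200000):
    """Multiplier on the peak LR: 0 -> 1 over warmup, then 1 -> 0 by total_steps."""
    if step < warmup_steps:
        return step / warmup_steps
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

model = torch.nn.Linear(768, 768)            # stand-in for the shared model and heads
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_then_linear_decay)
# scheduler.step() is called once per training step, after optimizer.step()
```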

4 Experimental Results

All experimental results are presented in Table 1. An up-/down-arrow beside an evaluation metric indicates whether a higher or lower value of that metric is better. The results are grouped into four training scenarios corresponding to the subsections of Section 2.

4.1 The Performance Of All-task MTL Finetuning

Comparing Scenario (a) with Scenario (b-1), we observe that the shared model finetuned with All-task MTL outperforms SSL pretraining on ASR, PR, SD, ER, IC, KS, and SID, and is worse only on SF and ASV. This indicates that MTL is a strong baseline for SSL pretraining and other representation learning approaches.

One thing worth noting is that, in terms of validation scores during training, none of the tasks except ASV suffers from overfitting; the model, however, is prone to overfitting on ASV. In Scenario (a), the checkpoint of each downstream head can be selected by the best validation score of that task. Yet in Scenario (b-1), since the shared model is jointly trained with the downstream heads of all tasks, it is hard to select a single checkpoint based on the validation scores of all tasks. In this paper, we simply take the last checkpoint after 200,000 training steps for all tasks, which may explain the worse performance of MTL finetuning on ASV. We leave exploring a better checkpoint selection method for MTL to future work.

4.2 The Influence Of Removing One Task In MTL Finetuning

The rows in Scenario (b-2) show the results of finetuning the shared model with Leave-one-out MTL. To make the comparison with Scenario (b-1) clearer, numbers in Scenario (b-2) are marked in bold if they are better than the corresponding (b-1) numbers and underlined if they are worse. Furthermore, we calculate the relative score increases/decreases of Scenario (b-2) compared to Scenario (b-1) and plot two relation graphs in Figure 2 accordingly. For SF, we use CER to plot the edges because the relative changes in F1 are small. If a task A improves/regresses after removing another task B from MTL, it means that B hurts/helps A in MTL.

  • For ASR, PR is an important auxiliary task and SD helps a little; ASR performs better after removing any of the remaining tasks.

  • For PR, ASR helps in MTL while the other tasks hurt the performance of PR. We can observe that content recognition tasks such as ASR and PR are hurt the most by speaker recognition tasks such as ASV and SID.

  • For SF, ASR, PR, ER, and SID help in MTL. IC and KS help a little in terms of F1 but hurt in terms of CER. SD and ASV hurt SF.

  • For SD, all of the other tasks hurt SD. SD is also hurt the most by speaker recognition tasks such as ASV and SID. Although SD, ASV and SID are all related to speaker characteristics, SID and ASV focus on the utterance-level embedding, while SD aims to distinguish frame-level speaker characteristics. Therefore, the fine-grained information needed by SD may be lost when jointly trained with ASV or SID.

  • For ER, all of the other tasks except for SID help ER.

  • For IC and KS, the influences of the other tasks are negligible.

  • For ASV, all of the other tasks except for IC help ASV. As discussed in Subsection 4.1, ASV suffers from severe overfitting. Therefore, jointly learning ASV with other tasks can mitigate this issue, especially with SID.

  • For SID, all of the other tasks except for PR help SID. ASV especially helps a lot while the others help a little.

From another perspective, the results may provide a different insight beyond evaluating supervised MTL as a representation learning approach. If we focus on a certain primary task, we may select suitable auxiliary tasks for it based on these MTL experimental results. For example, if we want better performance on ER, we can jointly train ER with all of the other tasks except SID.

The transitive relations of tasks in MTL still need to be verified. For example, suppose we know that jointly training a task A with a task B improves the performance of A, and that jointly training A with a task C also improves the performance of A. We still cannot conclude that jointly training A with both B and C improves A's performance. We leave a more in-depth study of MTL and its optimization on speech processing tasks to future work.

4.3 The Performance Of Task Transfer Learning

To further examine the generalizability of representation learning by MTL, we present the results of the Task Transfer Learning scenario in Table 1 (c). Compared to Table 1 (a), all tasks except ASR and KS perform worse in Scenario (c). This indicates that SSL pretraining is still a more generalizable representation learning approach for a new downstream task. On the other hand, compared to Table 1 (b-1), all tasks except SF perform worse in Scenario (c). This indicates that whether a task is involved in MTL finetuning is crucial to its performance.

To obtain better generalizability, it is worth training the shared model with SSL and MTL objectives simultaneously, as a form of semi-supervised MTL representation learning. We leave this exploration to future work.

5 Conclusion And Discussion

In this work, we investigate different training scenarios of supervised MTL as a speech representation learning approach along with SSL pretraining on a benchmark with various speech processing tasks. We analyze the generalizability of representations learned with supervised MTL empirically.

This paper is only a preliminary study of MTL across various speech processing tasks. The performance of MTL depends on many factors, such as the amount of data, task relationships, and noise. These factors should be isolated and investigated with more theoretical analysis and empirical experiments in the future.

References

  • [1] Yu-An Chung, Wei-Ning Hsu, Hao Tang, and James Glass, “An Unsupervised Autoregressive Model for Speech Representation Learning,” in Interspeech, 2019, pp. 146–150.
  • [2] Andy T. Liu, Shu-wen Yang, Po-Han Chi, Po-chun Hsu, and Hung-yi Lee, “Mockingjay: Unsupervised speech representation learning with deep bidirectional transformer encoders,” ICASSP, 2020.
  • [3] Andy T Liu, Shang-Wen Li, and Hung-yi Lee, “Tera: Self-supervised learning of transformer encoder representation for speech,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 2351–2366, 2021.
  • [4] Shaoshi Ling and Yuzong Liu, “DeCoAR 2.0: Deep contextualized acoustic representations with vector quantization,” arXiv preprint arXiv:2012.06659, 2020.
  • [5] Aaron van den Oord, Yazhe Li, and Oriol Vinyals, “Representation learning with contrastive predictive coding,” arXiv preprint arXiv:1807.03748, 2018.
  • [6] Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli, “wav2vec: Unsupervised pre-training for speech recognition.,” in Interspeech, 2019.
  • [7] Alexei Baevski, Steffen Schneider, and Michael Auli, “vq-wav2vec: Self-supervised learning of discrete speech representations,” in ICLR, 2020.
  • [8] Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli, “wav2vec 2.0: A framework for self-supervised learning of speech representations,” in NeurIPS, 2020.
  • [9] Wei-Ning Hsu, Yao-Hung Hubert Tsai, Benjamin Bolte, Ruslan Salakhutdinov, and Abdelrahman Mohamed, “Hubert: How much can a bad teacher benefit asr pre-training?,” in ICASSP. IEEE, 2021, pp. 6533–6537.
  • [10] Santiago Pascual, Mirco Ravanelli, Joan Serrà, Antonio Bonafonte, and Yoshua Bengio, “Learning problem-agnostic speech representations from multiple self-supervised tasks,” in Interspeech, 2019, pp. 161–165.
  • [11] Mirco Ravanelli, Jianyuan Zhong, Santiago Pascual, Pawel Swietojanski, Joao Monteiro, Jan Trmal, and Yoshua Bengio, “Multi-task self-supervised learning for robust speech recognition,” in ICASSP, 2020, pp. 6989–6993.
  • [12] Shu-wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y. Lin, Andy T. Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, Tzu-Hsien Huang, Wei-Cheng Tseng, Ko-tik Lee, Da-Rong Liu, Zili Huang, Shuyan Dong, Shang-Wen Li, Shinji Watanabe, Abdelrahman Mohamed, and Hung-yi Lee, “SUPERB: Speech Processing Universal PERformance Benchmark,” in Proc. Interspeech 2021, 2021, pp. 1194–1198.
  • [13] Alex Kendall, Yarin Gal, and Roberto Cipolla, “Multi-task learning using uncertainty to weigh losses for scene geometry and semantics,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7482–7491.
  • [14] Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich, “GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks,” in International Conference on Machine Learning. PMLR, 2018, pp. 794–803.
  • [15] Ozan Sener and Vladlen Koltun, “Multi-task learning as multi-objective optimization,” in NeurIPS, 2018, pp. 525–536.
  • [16] Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn, “Gradient surgery for multi-task learning,” NeurIPS, vol. 33, 2020.
  • [17] Zirui Wang, Yulia Tsvetkov, Orhan Firat, and Yuan Cao, “Gradient vaccine: Investigating and improving multi-task optimization in massively multilingual models,” in ICLR, 2020.
  • [18] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus, “Indoor segmentation and support inference from rgbd images,” in European conference on computer vision. Springer, 2012, pp. 746–760.
  • [19] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman, “The pascal visual object classes (voc) challenge,” International journal of computer vision, vol. 88, no. 2, pp. 303–338, 2010.
  • [20] Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher, “The natural language decathlon: Multitask learning as question answering,” arXiv preprint arXiv:1806.08730, 2018.
  • [21] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman, “Glue: A multi-task benchmark and analysis platform for natural language understanding,” in ICLR, 2018.
  • [22] Yi-Chen Chen, Po-Han Chi, Shu-wen Yang, Kai-Wei Chang, Jheng-hao Lin, Sung-Feng Huang, Da-Rong Liu, Chi-Liang Liu, Cheng-Kuang Lee, and Hung-yi Lee, “Speechnet: A universal modularized model for speech processing tasks,” arXiv preprint arXiv:2105.03070, 2021.
  • [23] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, “Librispeech: An ASR corpus based on public domain audio books,” in ICASSP, 2015, pp. 5206–5210.
  • [24] Pete Warden, “Speech commands: A public dataset for single-word speech recognition.,” Dataset available online, 2017.
  • [25] Arsha Nagrani, Joon Son Chung, Weidi Xie, and Andrew Zisserman, “Voxceleb: Large-scale speaker verification in the wild,” Computer Speech & Language, vol. 60, pp. 101027, 2020.
  • [26] Joris Cosentino, Manuel Pariente, Samuele Cornell, Antoine Deleforge, and Emmanuel Vincent, “Librimix: An open-source dataset for generalizable speech separation,” arXiv preprint arXiv:2005.11262, 2020.
  • [27] Loren Lugosch, Mirco Ravanelli, Patrick Ignoto, Vikrant Singh Tomar, and Yoshua Bengio, “Speech model pre-training for end-to-end spoken language understanding,” in Interspeech, 2019, pp. 814–818.
  • [28] Cheng-I Lai, Yung-Sung Chuang, Hung-Yi Lee, Shang-Wen Li, and James Glass, “Semi-supervised spoken language understanding via self-supervised speech and language model pretraining,” in ICASSP, 2021.
  • [29] Carlos Busso et al., “Iemocap: Interactive emotional dyadic motion capture database,” Language resources and evaluation, vol. 42, no. 4, pp. 335–359, 2008.
  • [30] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” in NAACL, 2019, pp. 4171–4186.