Context-aware Neural-based Dialog Act Classification on Automatically Generated Transcriptions

02/28/2019
by Daniel Ortega, et al.
University of Stuttgart

This paper presents our latest investigations on dialog act (DA) classification on automatically generated transcriptions. We propose a novel approach that combines convolutional neural networks (CNNs) and conditional random fields (CRFs) for context modeling in DA classification. We explore the impact of transcriptions generated by different automatic speech recognition systems, such as hybrid TDNN/HMM and End-to-End systems, on the final performance. Experimental results on two benchmark datasets (MRDA and SwDA) show that the combination of CNNs and CRFs consistently improves the accuracy. Furthermore, they show that although the word error rates are comparable, the End-to-End ASR system seems to be more suitable for DA classification.


1 Introduction

According to Austin’s theory [1], every utterance in a dialog has an illocutionary force, which causes an effect over the course of the conversation. Utterances can then be grouped into dialog act (DA) categories depending on the relationship between words and the meaning of the expression [2]. A DA conveys the intention of the speaker rather than the literal meaning of words for each utterance in a dialog.

Automatic DA classification is a crucial preprocessing step for language understanding and dialog systems. This task has been approached using traditional statistical algorithms, for instance hidden Markov models [3] and conditional random fields [4]; more recently, deep learning (DL) models such as convolutional neural networks [5], recurrent neural networks [6, 7] and attention mechanisms (AM) [8, 7] achieve state-of-the-art results.

Several works have shown that context, i.e. the preceding utterances, plays an important role in automatically determining the DA of the current utterance [5, 7, 8]. This is also supported by the detailed analysis of the influence of context on DA recognition presented in [9], whose main conclusion is that contextual information helps DA classification as long as such information is distinguishable from the current utterance information.

In alignment with the aforementioned approaches, we present a model that employs the preceding utterances together with the current one. The particularity of this model, however, lies in using a linear-chain CRF on top of a CNN architecture to predict the DA sequence at the utterance level. Using linear-chain CRF layers on top of neural network (NN) models has already been introduced for sequence labeling tasks at the word level, such as named entity recognition [10], part-of-speech tagging [11], or joint entity recognition and relation classification [12].

To the best of our knowledge, all work on DA classification has been done using only manual transcriptions (MTs). Nonetheless, this type of data differs substantially from real data, i.e. automatic transcriptions (ATs) generated by automatic speech recognition (ASR) systems. In this paper, we explore the effect of training and testing the proposed model on ATs. Our goal is to bring the DA classification task into a more realistic scenario.

In sum, we introduce a model that combines CNNs and CRFs for automatic DA classification. We train and test our model in different scenarios to contrast the effect of using manual transcriptions and automatically generated transcriptions from two different ASR architectures (hybrid time-delay neural network (TDNN)/HMM and End-to-End (E2E) ASR systems). Our results show that the combination of CNNs and CRFs consistently improves the accuracy of the model, achieving state-of-the-art performance on MRDA and SwDA. Furthermore, results on ASR outputs reveal that, although the word error rates are comparable, the E2E ASR system seems to be more suitable for DA classification.

2 Dialog Act Classification

The DA classification model proposed in this paper, depicted in Figure 1, consists of two parts: a CNN that generates vector representations for consecutive utterances, and a CRF that performs DA sequence labeling.

Figure 1: Model architecture. The concatenation operation is marked with a dedicated operator in the figure.

2.1 Utterance representation

Based on [8], the grid-like representations of the current utterance and previous ones are concatenated and used as input for a CNN that generates a vector representation for each of the utterances.

CNNs perform a discrete convolution using a set of different filters on an input matrix $M \in \mathbb{R}^{d \times |u|}$, where each column of the matrix is the word embedding of the corresponding word. We use 2D filters $F$ (with width $w$) spanning over all $d$ embedding dimensions, as described by the following equation:

$$(M * F)(i) = \sum_{k=1}^{d} \sum_{j=1}^{w} F[k, j]\, M[k, i + j - 1] \qquad (1)$$

After convolution, an utterance-wise max pooling operation is applied in order to extract the highest activation. Then, the feature maps are concatenated, resulting in one vector per utterance; these utterance vectors, denoted here as $u_{t-1}$ and $u_t$, are the representations shown in Figure 1.
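To make this concrete, the following is a minimal PyTorch sketch of such an utterance encoder, assuming 300-dimensional word embeddings and filter widths 3, 4, 5 with 100 filters each (the settings later listed in Table 2). The class and variable names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class UtteranceCNN(nn.Module):
    """Sketch of the Section 2.1 encoder: 2D filters spanning all embedding
    dimensions, followed by utterance-wise max pooling."""

    def __init__(self, emb_dim=300, filter_widths=(3, 4, 5), n_filters=100):
        super().__init__()
        self.convs = nn.ModuleList([
            # each kernel covers `width` consecutive words over the full embedding dimension
            nn.Conv2d(1, n_filters, kernel_size=(width, emb_dim))
            for width in filter_widths
        ])

    def forward(self, emb):
        # emb: (batch, utt_len, emb_dim), one padded utterance per batch row
        x = emb.unsqueeze(1)                                   # (batch, 1, utt_len, emb_dim)
        maps = [torch.relu(conv(x)).squeeze(3) for conv in self.convs]
        # utterance-wise max pooling keeps the highest activation per feature map
        pooled = [m.max(dim=2).values for m in maps]
        return torch.cat(pooled, dim=1)                        # (batch, 3 * n_filters)
```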

2.2 CRF-based DA sequence labeling

Given that a dialog is a sequence of utterances, we approach DA classification as a sequence labeling problem. Therefore, we employ CRFs for this task. The first step is to generate the score vectors, depicted in Figure 1 as $s_{t-1}$ and $s_t$, by means of a linear function at each time step $t$:

$$s_t = W u_t + b \qquad (2)$$

where $W$ (a weight matrix) and $b$ (a bias) are trainable parameters. Using the score vectors as input, we perform sequence labeling with a CRF layer.

CRFs are probabilistic models that calculate the likelihood of a possible output $y$ given an observation $x$. They are commonly represented as factor graphs, in which each factor $\Psi(x, y)$ computes the aforementioned likelihood. Mathematically, the resulting probability is defined as:

$$p(y \mid x) = \frac{\Psi(x, y)}{Z(x)} \qquad (3)$$

where $Z(x)$, a normalization function, is the sum over all possible outputs $y'$ for each observation $x$, i.e. $Z(x) = \sum_{y'} \Psi(x, y')$.

To perform sequence labeling, we consider a linear-chain CRF. Analogous to Equation 3, the probability of an output sequence $y = (y_1, \dots, y_T)$ given a sequence of observations $x = (x_1, \dots, x_T)$ is:

$$p(y \mid x) = \frac{1}{Z(x)} \prod_{t=1}^{T} \Psi(x_t, y_t)\, \Psi(y_{t-1}, y_t) \qquad (4)$$

In Equation 4, not only the factors $\Psi(x_t, y_t)$ associating input and output are calculated, but also the factors $\Psi(y_{t-1}, y_t)$ between adjacent labels, where $y_{t-1}$ and $y_t$ are neighbors. In this case the normalization function $Z(x)$ takes the whole sequence as input.
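For illustration, the NumPy sketch below shows Viterbi decoding of a linear-chain CRF over the per-utterance score vectors $s_t$ of Equation 2, with a transition matrix playing the role of the adjacent-label factors in Equation 4. It is only the inference step under these assumptions; CRF training (maximizing the likelihood in Equation 4) is not shown, and the function name and plain transition-matrix parameterization are ours, not the authors'.

```python
import numpy as np


def viterbi_decode(scores, transitions):
    """Most likely DA label sequence under a linear-chain CRF.

    scores:      (T, C) emission scores s_t, one row per utterance (Eq. 2)
    transitions: (C, C) scores for adjacent label pairs (y_{t-1} -> y_t)
    """
    T, C = scores.shape
    best = np.zeros((T, C))
    backptr = np.zeros((T, C), dtype=int)
    best[0] = scores[0]
    for t in range(1, T):
        # candidate[i, j]: best path ending in label i at t-1, then moving to label j
        candidate = best[t - 1][:, None] + transitions + scores[t][None, :]
        backptr[t] = candidate.argmax(axis=0)
        best[t] = candidate.max(axis=0)
    # backtrack the highest-scoring label sequence
    path = [int(best[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]
```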

3 Automatic Speech Recognition

In recent times, deep learning techniques have boosted ASR performance significantly [13]. In this section, we introduce the two types of ASR architectures used to generate the ATs.

3.1 Hybrid Tdnn/Hmm architecture

In hybrid ASR systems, NNs are used to predict the emission probabilities of an HMM given speech frames. Recently, various DL models have been proposed and developed to improve ASR performance; most of them are variations of CNNs or RNNs [13, 14]. [15] presented a hybrid TDNN/HMM system trained with lattice-free maximum mutual information, which is fast to train and significantly outperforms other models on many ASR tasks. To the best of our knowledge, it is one of the best hybrid ASR systems available for research and was therefore selected for our experiments.

3.2 End-to-End architecture

More recently, the E2E architecture was introduced, which simplifies the training process and achieves competitive results on several benchmark datasets [16]. Many studies have proposed E2E architectures based on either connectionist temporal classification (CTC) [17] or AM [18].

ESPnet, an End-to-End speech processing toolkit, benefits from two major E2E ASR implementations, based on CTC and on an attention-based encoder-decoder network [16]. It employs a multiobjective learning framework to improve robustness and achieve faster convergence. For decoding, ESPnet performs joint decoding by combining the attention-based and CTC scores in a one-pass beam search algorithm to eliminate irregular alignments. The training loss function is defined in Equation 5, where $L_{\text{ctc}}$ and $L_{\text{att}}$ are the CTC-based and attention-based cross entropies, respectively, and $\lambda$ is the tuning parameter that linearly interpolates both objective functions.

$$L = \lambda L_{\text{ctc}} + (1 - \lambda) L_{\text{att}} \qquad (5)$$

During beam search, the following combination of the attention and CTC log probabilities is computed:

$$\lambda \log p_{\text{ctc}}(y_t \mid y_{1:t-1}, X) + (1 - \lambda) \log p_{\text{att}}(y_t \mid y_{1:t-1}, X) \qquad (6)$$

where $y_t$ is a hypothesis of the output label at position $t$ given its history $y_{1:t-1}$ and the encoder output $X$ [16].
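As a minimal sketch of Equations 5 and 6, the two combinations can be written as below. The weight value 0.3 is only an illustrative assumption, not necessarily the setting used in these experiments, and this is not ESPnet's actual code.

```python
def multiobjective_loss(loss_ctc, loss_att, lam=0.3):
    """Training objective of Eq. (5): linear interpolation of the
    CTC-based and attention-based cross-entropy losses."""
    return lam * loss_ctc + (1.0 - lam) * loss_att


def joint_decoding_score(logp_ctc, logp_att, lam=0.3):
    """Beam-search score of Eq. (6): combined CTC and attention
    log probabilities for a hypothesized label given its history."""
    return lam * logp_ctc + (1.0 - lam) * logp_att
```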

4 Experimental setup

4.1 Data for DA classification

We evaluate our model on two DA labeled corpora: 1) MRDA: ICSI Meeting Recorder Dialog Act Corpus [19, 20, 21], a dialog corpus of multiparty meetings. The 5-tag-set used in this work was introduced by [22], and 2) SwDA: NXT-format Switchboard Corpus [23], a dialog corpus of 2-speaker conversations.

Train, validation and test splits on both datasets were taken as defined in [5] (concerning SwDA, the data setup in [5] was preferred over that of [3] because the latter does not clearly specify which conversations belong to each split). Table 1 presents statistics about the corpora. Both datasets contain a highly unbalanced distribution of classes, with a single majority class accounting for a large share of the utterances in each corpus.

Dataset C V Train Validation Test
MRDA 5 12k 78k 16k 15k
SwDA 42 20k 193k 23k 5k
Table 1: Data statistics. C: number of classes, V: vocabulary size and Train/Validation/Test: number of utterances.

4.1.1 Hyperparameters and training

In Table 2, we present the model hyperparameters for both corpora. Most of them were taken from [8]; however, we tuned the optimizer, the learning rate and the mini-batch size. We found the most effective hyperparameters by changing one at a time while keeping the others fixed, based on the model performance on the validation split.

Hyperparameter MRDA SwDA
Activation function ReLU
Dropout rate 0.5
Filter width 3, 4, 5
Filters per width 100
Learning rate 0.01 0.07
Mini-batch size 70 170
Optimizer SGD AdaGrad
Pooling size utterance-wise
Word embeddings word2vec (dim. 300)
Table 2: Hyperparameters.

Training was done with context lengths ranging from 1 to 5. After tuning different stochastic learning algorithms with several learning rates, stochastic gradient descent (SGD) [24] seemed to work best on MRDA and the adaptive gradient algorithm (AdaGrad) [25] on SwDA. The learning rate was initialized at 0.01 on MRDA and 0.07 on SwDA. All parameter tuning was done only on the validation split. Word vectors were initialized with the 300-dimensional pretrained word2vec vectors [26] and fine-tuned during training.
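A minimal PyTorch sketch of the tuned optimizer setup from Table 2 is shown below (SGD with learning rate 0.01 on MRDA, AdaGrad with 0.07 on SwDA). The helper function is an illustrative assumption, not the authors' training script.

```python
import torch


def build_optimizer(model, corpus):
    """Optimizer choice per corpus, following the settings in Table 2."""
    if corpus == "MRDA":
        return torch.optim.SGD(model.parameters(), lr=0.01)
    if corpus == "SwDA":
        return torch.optim.Adagrad(model.parameters(), lr=0.07)
    raise ValueError(f"unknown corpus: {corpus}")
```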

4.2 Data for automatic speech recognition

We employed KALDI [27] to build the hybrid TDNN/HMM ASR system. In this recipe, 40 Mel-frequency cepstral coefficients (MFCCs) were computed at each time step, and a 100-dimensional iVector was appended to the 40-dimensional MFCC input for each frame. Speaker-adaptive feature transforms and data augmentation techniques were applied. A Gaussian mixture model (GMM)/HMM system generated the alignments for NN training [15]. For the Switchboard Dialog Act Corpus (SwDA), we interpolated a 3-gram language model trained on the transcriptions with the 4-gram Fisher model [28]. For the ICSI Meeting Recorder Dialog Act Corpus (MRDA), we employed a 3-gram language model trained on the MTs.

The End-to-End Speech Processing Toolkit (ESPnet) was used to build the E2E ASR system. 80-bin log-Mel filterbank features with speed perturbation were used to train a VGG+BLSTM model with a five-layer encoder of 1024 units and a one-layer decoder of 1024 units [16]. The language model utilized 100 subword units based on the byte-pair encoding technique, which seems to perform better than a character-level language model [29].

Both the hybrid TDNN/HMM and the E2E ASR system were trained on the same train and validation splits and were later used to generate the automatic transcriptions of all splits (train, validation and test) for the DA classification model. Table 3 shows the performance of the hybrid TDNN/HMM and E2E ASR systems on seen data (train and validation splits) and unseen data (test split) for SwDA and MRDA.

Dataset ASR System Train WER Validation WER Test WER
SwDA TDNN/HMM 13.8 14.29 18.02
E2E 2.90 8.90 18.80
MRDA TDNN/HMM 9.89 19.28 21.48
E2E 2.30 16.80 18.80
Table 3: ASR performance in WER(%) on train, validation and test splits from SwDA and MRDA.
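For reference, the WER values in Table 3 follow the standard definition: word-level edit distance (substitutions, deletions and insertions) divided by the reference length. The sketch below illustrates that computation; it is not the scoring pipeline actually used by Kaldi or ESPnet.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```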

5 Experimental results

5.1 Experiments on manual transcriptions

Table 4 shows the results of a baseline model and of our proposed model trained on MTs with context length varying from 1 to 5. The baseline model is a CNN that receives one utterance at a time as input, followed by a max pooling operation and a softmax layer.

Context MRDA SwDA
0 (baseline) 80.2 (80.4, 80.0) 72.0 (72.2, 71.6)
1 84.6 (84.6, 84.7) 74.1 (73.2, 74.9)
2 84.7 (84.6, 84.7) 74.6 (74.5, 74.8)
3 84.6 (84.5, 84.6) 74.5 (74.2, 74.8)
4 84.7 (84.4, 84.8) 74.1 (73.6, 74.6)
5 84.6 (84.4, 84.8) 74.2 (73.8, 74.5)
Table 4: Accuracy (%) of the baseline model and the proposed model. For the latter, we report results for context lengths from 1 to 5. Results are given as average (minimum, maximum) over 5 runs.

On average, the best results for MRDA were obtained with contexts 2 and 4, achieving 84.7%, whereas for SwDA the model with context 2 achieves the highest performance, i.e. 74.6%. To the best of our knowledge and under the setup in [5], these are state-of-the-art results on MRDA and SwDA, outperforming [8]. For further experimentation in this paper, the context is fixed to 2.

5.2 Experiments on automatic transcriptions

We tested the pretrained models on ATs from both ASR systems in order to see the impact on accuracy (see Table 5). As expected, the performance dropped dramatically due to the WER and the lack of punctuation. On both datasets, the negative impact was higher when the model was tested on TDNN/HMM transcriptions.

Transcriptions MRDA SwDA
MTs 84.7 (84.6, 84.7) 74.6 (74.5, 74.8)
TDNN/HMM 59.2 (58.9, 59.7) 65.7 (65.4, 66.0)
E2E 66.1 (65.7, 66.3) 67.4 (66.6, 67.9)
Table 5: Accuracy (%) of the model trained on MTs with context 2 and tested on MTs and ATs.

Afterwards, we retrained the DA model with ATs. Tables 6 and 7 show the accuracy when training with TDNN/HMM and E2E transcriptions, respectively. Training on ATs increases the accuracy when testing on ATs and, as expected, decreases it when testing on MTs. In the case of MRDA, the accuracy is slightly worse when training on ATs from one system and testing on the other. However, in the case of SwDA, the accuracy is always better when testing on the ATs generated by the E2E system. Overall, we observed the best performance when training and testing on ATs generated by the E2E system on both datasets (76.6% on MRDA and 68.7% on SwDA; see Table 7).

Transcriptions MRDA SwDA
MTs 64.2 (62.8, 65.7) 66.9 (64.4, 69.5)
TDNN/HMM 74.0 (73.9, 74.1) 67.9 (67.5, 68.2)
E2E 71.1 (70.8, 71.7) 68.6 (68.1, 68.8)
Table 6: Accuracy (%) of the model trained on TDNN/HMM transcriptions with context 2 and tested on MTs and ATs.
Transcriptions MRDA SwDA
MTs 70.9 (68.3, 72.7) 66.6 (65.3, 70.0)
TDNN/HMM 73.2 (73.1, 73.3) 67.1 (66.2, 67.6)
E2E 76.6 (76.5, 76.7) 68.7 (68.4, 69.0)
Table 7: Accuracy (%) of the model trained on E2E transcriptions with context 2 and tested on MTs and ATs.

One of the main differences between MTs and ATs is that the latter have no punctuation. In [7], it was shown that punctuation provides strong lexical cues. Therefore, we retrained the model on MRDA's MTs without punctuation. SwDA was not considered because the NXT-format SwDA has no punctuation.

MRDA transcriptions With punctuation Without punctuation
MTs 84.7 (84.6, 84.7) 81.3 (81.1, 81.5)
TDNN/HMM 59.2 (58.9, 59.7) 69.3 (69.3, 69.4)
E2E 66.1 (65.7, 66.3) 76.2 (76.0, 76.4)
Table 8: Accuracy (%) of the model with context 2 trained on MRDA's MTs without punctuation and tested on MTs and ATs.

It can be seen from Table 8 that punctuation is a strong cue for DA classification. Nonetheless, it has a high negative impact when testing on ATs without punctuation. If MTs are used to train a model that will be applied to ATs, it is advisable to remove punctuation: according to our results, doing so yields an improvement of roughly 10 percentage points in accuracy on both ASR transcriptions of MRDA.
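As a rough sketch of the preprocessing implied by this experiment, the snippet below strips punctuation from manual transcriptions so that the training data better matches the punctuation-free ASR output. The exact normalization used here is not specified in the paper, so this is only one plausible variant.

```python
import string


def strip_punctuation(utterance):
    """Remove punctuation marks so MTs resemble punctuation-free ASR output."""
    table = str.maketrans("", "", string.punctuation)
    return " ".join(utterance.translate(table).split())


# example
print(strip_punctuation("Yeah, that's right -- isn't it?"))  # -> "Yeah thats right isnt it"
```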

6 Conclusion

We explored dialog act classification on MTs with a novel approach for context modeling that combines CNNs and CRFs, reaching state-of-the-art results on two benchmark datasets (MRDA and SwDA). We also investigated the impact of ATs from two different automatic speech recognition systems (hybrid TDNN/HMM and End-to-End) on the final performance. Experimental results showed that although the WERs are comparable, the End-to-End ASR system might be more suitable for dialog act classification. Moreover, the results confirm that punctuation provides central cues for the task, suggesting that punctuation should be integrated into the ASR output in future work.

References

  • [1] J. L. Austin, How to Do Things with Words, Oxford university press, 1976.
  • [2] K. Bach and R. M. Harnish, Linguistic Communication and Speech Acts, MIT Press, 1979.
  • [3] A. Stolcke et al., “Dialogue act modeling for automatic tagging and recognition of conversational speech,” Computational Linguistics, vol. 26, pp. 339–373, 2000.
  • [4] M. Zimmermann, “Joint segmentation and classification of dialog acts using conditional random fields,” in Proc. of ISCA, 2009.
  • [5] J. Y. Lee and F. Dernoncourt, “Sequential short-text classification with recurrent and convolutional neural networks,” in Proc. of NAACL, 2016.
  • [6] N. Kalchbrenner and P. Blunsom, “Recurrent convolutional neural networks for discourse compositionality,” in Proc. of CVSC, 2013.
  • [7] D. Ortega and N. T. Vu, “Lexico-acoustic neural-based models for dialog act classification,” in Proc. of ICASSP, 2018.
  • [8] D. Ortega and N. T. Vu, “Neural-based context representation learning for dialog act classification,” in Proc. of SIGDIAL, 2017.
  • [9] E. Ribeiro, R. Ribeiro, and D. Martins de Matos, “The influence of context on dialogue act recognition,” CoRR, 2015.
  • [10] G. Lample et al., “Neural architectures for named entity recognition,” in Proc. of NAACL, 2016.
  • [11] D. Andor et al., “Globally normalized transition-based neural networks,” in Proc. of ACL, 2016.
  • [12] H. Adel et al., “Global normalization of convolutional neural networks for joint entity and relation classification,” in Proc. of EMNLP, 2017.
  • [13] W. Xiong et al., “The microsoft 2017 conversational speech recognition system,” in Proc. of ICASSP, 2018.
  • [14] A. Graves et al., “Speech recognition with deep recurrent neural networks,” in Proc. of ICASSP, 2013.
  • [15] D. Povey et al., “Purely sequence-trained neural networks for ASR based on lattice-free MMI,” in Proc. of INTERSPEECH, 2016.
  • [16] S. Watanabe et al., “ESPnet: end-to-end speech processing toolkit,” in Proc. of INTERSPEECH, 2018.
  • [17] Y. Miao et al., “EESEN: end-to-end speech recognition using deep RNN models and wfst-based decoding,” in Proc. of IEEE ASRU, 2015.
  • [18] D. Bahdanau et al., “End-to-end attention-based large vocabulary speech recognition,” in Proc. of ICASSP, 2016.
  • [19] A. Janin et al., “The ICSI meeting corpus,” in Proc. of ICASSP, 2003.
  • [20] E. Shriberg et al., “The ICSI meeting recorder dialog act (MRDA) corpus,” in Proc. of SIGdial Workshop on Discourse and Dialogue at HLT-NAACL, 2004.
  • [21] R. Dhillon et al., “Meeting recorder project: Dialog act labeling guide,” Tech. Rep., ICSI Berkeley CA, 2004.
  • [22] J. Ang et al., “Automatic dialog act segmentation and classification in multiparty meetings,” in Proc. of ICASSP, 2005.
  • [23] S. Calhoun et al., “The NXT-format switchboard corpus: A rich resource for investigating the syntax, semantics, pragmatics and prosody of dialogue,” Language Resources and Evaluation, vol. 44, pp. 387–419, 2010.
  • [24] B. T. Polyak et al., “Acceleration of stochastic approximation by averaging,” SICON, vol. 30, pp. 838–855, 1992.
  • [25] J. Duchi et al., “Adaptive subgradient methods for online learning and stochastic optimization,” Journal of Machine Learning Research, vol. 12, pp. 2121–2159, 2011.
  • [26] T. Mikolov et al., “Efficient estimation of word representations in vector space,” in Proc. of ICLR, 2013.
  • [27] D. Povey et al., “The Kaldi speech recognition toolkit,” in Proc. of IEEE ASRU, 2011.
  • [28] C. Cieri et al., “The Fisher corpus: a resource for the next generations of speech-to-text,” in Proc. of LREC, 2004.
  • [29] Z. Xiao et al., “Hybrid ctc-attention based end-to-end speech recognition using subword units,” in Proc. of ISCSLP, 2018.