Dialogs constitute a crucial communication channel for completing a broad range of tasks, such as weather queries, flight and restaurant booking, movie booking, and IT helpdesk support. Compared to chit-chat systems, which are usually modeled with single-turn context-response pairs, task-oriented dialog systems involve retrieving information from knowledge bases and reasoning over multiple dialog turns. This makes it especially important for a system to be able to produce responses that are grounded in task goals and user intents. To support human-computer interactions, task-oriented dialog systems have been built to allow users to converse with a computer system using natural language, such as Siri (https://www.apple.com/siri/), Google Assistant (https://assistant.google.com/), Amazon Alexa (https://developer.amazon.com/en-US/alexa), and Microsoft XiaoIce Zhou et al. (2020). Traditionally, a task-oriented dialog system uses a modularized pipeline with four modules that execute sequentially Gao et al. (2019). A natural language understanding (NLU) module identifies user intents and extracts associated information, such as slots and their corresponding values, from user input. A dialog state tracker (DST) infers the belief state (or user goal) from the dialog history. The belief state is often used to query a task-specific database (DB) to obtain the DB state, such as the number of entities that match the user goal. The dialog state and DB state are then passed to a dialog policy (POL) module to select the next system action. A natural language generation (NLG) module converts the action to a natural language response.
The human ability to converse is general, flexible, and robust. In contrast, most popular tools for dialog system development that adopt the above modular design are built for specific tasks and struggle with out-of-scope data. If we aspire to move beyond extensively hand-crafted rules and annotated data for each individual domain or task, it is critical to develop a more unified, efficient, and robust model that can quickly learn to execute a range of tasks in different domains.
To fuel research in this direction, we present the Raddle benchmark. It includes a collection of task-oriented dialog tasks spanning diverse domains and task types (e.g., end-to-end modeling, dialog state tracking). The benchmark also has a companion online platform for model evaluation, comparison, and robustness analysis. Importantly, Raddle exhibits two unique advantages that pave the way for building more pragmatic dialog systems. First, the limited data setting is the major focus of Raddle, to evaluate the generalization ability of models. It aims to simulate real-world application scenarios where only a very limited amount of labelled data is available for new domains. Given this focus, Raddle is a favorable benchmark for evaluating recent models in the pre-training and fine-tuning paradigm, which learn to represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge transfer. Second, robustness analysis is introduced to study model performance in various challenging scenarios, where models are evaluated with anomalous user input such as language variations, speech errors, unseen entities, and out-of-domain utterances. Failing to handle these inputs often produces inappropriate responses, leading to a frustrating user experience. These scenarios are common for deployed systems in the real world, but are largely ignored in existing dialog benchmarks. To the best of our knowledge, Raddle is the first work to fill this gap.
To better understand the challenges posed by Raddle, we conduct experiments with simple baselines and state-of-the-art task-oriented dialog models. We find that grounded pre-trained models with a unified multi-task learning objective outperform models separately trained on each domain. Moreover, even the best performing model (Soloist Peng et al. (2020a)) in our evaluation achieves a fairly low score in robustness analysis. This suggests that our baseline models can handle common inputs with strong regularities, but struggle with anomalous inputs that require deeper reasoning.
In summary, our key contributions are: (i) a novel dialog benchmark with an emphasis on limited data and multiple domains/tasks, which formally creates a scenario to evaluate the grounding and generalization ability of pre-trained models; (ii) a crowd-sourced diagnostic evaluation dataset covering a broad range of real-world sophistication to study model robustness; (iii) an online evaluation platform and leaderboard to track research progress, with human evaluation services granted to top-ranked submissions on a bi-monthly basis; and (iv) baseline results for major existing approaches to task-oriented dialog.
2 Related Work
2.1 Dialog Benchmarks
To drive progress in building dialog systems with data-driven approaches, a number of conversational corpora have been released. They can be roughly grouped into two categories: (i) corpora with structured semantic labels Wen et al. (2016); Shah et al. (2018). These datasets are often specifically annotated and used to study an individual module in the dialog pipeline. For example, DialoGLUE Mehri et al. (2020) is a recently proposed benchmark with a focus on NLU and DST tasks. (ii) Corpora with an implicit user goal Lowe et al. (2015). These datasets often lack semantic labels but can be used in end-to-end (E2E) dialog modeling Li et al. (2016); Zhu (2020); Wu et al. (2019); Zhu et al. (2019a). For example, ConvLab Lee et al. (2019); Zhu et al. (2020) is a recent platform for multi-domain E2E evaluation.
MultiWOZ Budzianowski et al. (2018) is the work most related to Raddle. It is a large-scale multi-turn conversational corpus across several domains. It can be used to develop individual dialog modules as separate tasks for existing modular-based methods, or serve as a benchmark for E2E dialog modeling methods. Raddle inherits the advantages of MultiWOZ in its flexibility for separate/joint task modeling and its comprehensiveness in multi-domain data coverage, but differs significantly in two aspects: an emphasis on limited data settings and a unique robustness checklist. Both are essential qualities in building task bots at scale.
Further, Raddle provides an online platform for model evaluation and fair comparison based on privately-held test data, inspired by GLUE Wang et al. (2018). To the best of our knowledge, Raddle is the first online platform for DST and E2E tasks in the dialog community. This reduces the inconsistency caused by different researchers/teams using varying processing and evaluation scripts, which obscures where gains come from.
| Setting | Standard / Language Variations / Speech Errors | Unseen | OOD |
| Task | Dialog State Tracking / End-to-End Modeling | DST / IC | DST / OOD |
| Metrics | Joint Goal Accuracy / Combined Score | JGA / Acc. | JGA / F1 |
2.2 Evaluation of Pre-trained Models
Pre-trained language models (PLMs) have substantially advanced the state of the art across a variety of language understanding and generation tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Liu et al. (2019); Radford et al. (2019); Keskar et al. (2019); Dong et al. (2019); Peng et al. (2020b, c); Li et al. (2020a). PLMs are often trained to predict words based on their context on massive text data, and the learned models can be fine-tuned to quickly adapt to various downstream tasks, exhibiting strong generalization capacity even with just a few in-domain training examples. Building task bots at scale requires models to cope with limited data in each domain, making this setting a natural testbed for evaluating the generalization ability of PLMs. To this end, we limit the number of task-specific training examples in Raddle to evaluate the sample-efficiency of models.
Meanwhile, task-oriented dialogs pose a unique set of challenges for PLMs Gao et al. (2020): a dialog is intrinsically goal-driven, multi-turn, and often informal/noisy. Indeed, dialog-specific PLMs have been proposed Wu et al. (2020a); Peng et al. (2020a). However, the robustness of PLMs to linguistic perturbations that often occur in dialog settings (see Section 4 for details) is largely unexplored. Note that our notion of robustness emphasizes natural language variations, which is different from adversarial examples/training that aim to fool a trained model Nie et al. (2019). From this perspective, Raddle provides a unique benchmark for assessing PLMs with a robustness orientation.
Raddle is centered on five English dialog scenarios in daily life, which cover a broad range of data collection schemes, task types, and complexities. As the first goal of Raddle is to spur development of generalizable dialog systems, we design the benchmark such that good performance requires a model to leverage substantial knowledge (e.g., pre-trained parameters) learned from its previous life cycle, while still maintaining some task-specific components Coope et al. (2020); Henderson et al. (2020); Peng et al. (2020a); Wu et al. (2020b). Specifically, we deliberately keep a small number of training examples for each scenario. This is consistent with the common practice that only limited labelled data is provided when deploying a dialog system to new domains. Table 1 shows the data statistics. Four domains in the standard setting are sampled from MultiWOZ2.0 Budzianowski et al. (2018). Reminder is intentionally used only for unseen entity tracking: it is a human-machine corpus with a relatively small action space, so the impact of policy learning on models is largely alleviated, and performance on this corpus mostly reflects a model's capability of tracking unseen entities. Note that the number of training examples is limited to 50, a labelling scale that users can reasonably provide. Though it is possible to train a single model for each task from scratch without outside sources of knowledge, we expect that our focus on data-scarce settings will render this approach uncompetitive.
Furthermore, a typical task-oriented dialog system uses a modularized pipeline that has four modules and executes sequentially. Recent research has shown promising results on parameterizing the modularized pipeline using a single neural auto-regressive model, and training it in an end-to-end manner Peng et al. (2020a); Ham et al. (2020); Hosseini-Asl et al. (2020). In fact, a single auto-regressive model can significantly ease the workflow of training and deploying dialog systems for new tasks, compared to existing modularized tools and methods. Therefore, we design the benchmark to allow evaluations on end-to-end dialog modeling, in addition to the modularized evaluation on dialog state tracking. To reveal the gap between the complexity of dialogues in lab environments and that in real scenarios, we construct a suite of tasks to study the robustness of models. We describe these tasks below and in Table 1.
On the evaluation front, we concentrate on simulation-based methodologies, in order to facilitate automation. Although we only offer human-based evaluations Gao et al. (2019) to top-ranked submissions at this point, we emphasize realistic scenarios in pursuit of system robustness (see Section 4).
Task 1: Dialog State Tracking
A robust NLU and DST is the first step towards building a reliable dialog system. The dialog state is a summary of the entire conversation up to the current turn. In a task-oriented system, it is represented in the form of slot-value pairs, where the slot indicates the category/attribute of the user goal expressed in the utterance, and the value is the corresponding information. For the evaluation metric, we report joint goal accuracy, which indicates the proportion of dialog turns where all of the user's search goal constraints are correctly identified Mrksic et al. (2017). To specifically study NLU performance, we consider intent classification, which aims to automatically extract meaning from a natural language utterance in order to understand the user's goal Hemphill et al. (1990); Zhu et al. (2019b).
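The metric above can be sketched concretely as follows (a minimal illustration; the slot names and example states are hypothetical, not taken from the benchmark). A turn counts as correct only if the entire predicted belief state matches the gold state exactly:

```python
def joint_goal_accuracy(predicted_states, gold_states):
    """Fraction of turns where the *entire* predicted belief state
    (all slot-value pairs) exactly matches the gold state."""
    assert len(predicted_states) == len(gold_states)
    correct = sum(1 for pred, gold in zip(predicted_states, gold_states)
                  if pred == gold)
    return correct / len(gold_states)

# Hypothetical two-turn dialog: the prediction misses the "area" slot
# at the second turn, so only 1 of 2 turns is jointly correct.
gold = [{"restaurant-food": "italian"},
        {"restaurant-food": "italian", "restaurant-area": "centre"}]
pred = [{"restaurant-food": "italian"},
        {"restaurant-food": "italian"}]
print(joint_goal_accuracy(pred, gold))  # 0.5
```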
Task 2: End-to-end Modeling
An end-to-end (E2E) dialog model takes the dialog history as input and produces a natural language response. It jointly implements the dialog management (including DST and POL) and response generation (i.e., NLG) components. Following Budzianowski et al. (2018), Inform, Success, and BLEU scores are reported. The first two metrics evaluate dialog task completion: Inform measures whether the system provides a correct entity (inform rate), while Success measures whether the system answers all the requested information (success rate) and whether the answered information matches the user's goal. BLEU evaluates how fluent the generated responses are compared to human-written responses. A combined score, Combined = (Inform + Success) × 0.5 + BLEU, is also reported as an overall quality measure, as suggested in Budzianowski et al. (2018).
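The combined score used in the MultiWOZ literature is a simple weighted sum, which can be written out as follows (the input numbers below are illustrative, not results from the benchmark):

```python
def combined_score(inform, success, bleu):
    """MultiWOZ-style overall quality measure:
    Combined = (Inform + Success) * 0.5 + BLEU,
    with Inform/Success as percentages and BLEU on a 0-100 scale."""
    return (inform + success) * 0.5 + bleu

# Illustrative (made-up) numbers: 80% inform, 70% success, BLEU 15.
print(combined_score(80.0, 70.0, 15.0))  # 90.0
```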
4 Robustness Diagnostic Checklist
Existing benchmarks assume a world of a "perfect" user who always provides precise, concise, and semantically unambiguous utterances. These goal-oriented dialog datasets are largely collected by crowd-sourcing, where a crowd-sourced worker enacts the part of a real user by following a set of template instructions provided for the task. This method results in datasets where most user utterances are straightforward, stick to the goal, and leave out the variations and errors commonly found in real-world conversational data. To address this, we collect a suite of language variations that reveal the sophistication of dialogs in the real world, and use it to measure the robustness of dialog models.
Figure 1: Examples of robustness diagnostic settings: (a) standard dialog session; (b) paraphrase; (c) verbosity; (d) simplification; (e) typos; (f) speech errors; (g) unseen entities; (h) out-of-domain utterance.
It is well known that humans communicate using language with fairly large variations, such as different ways of expression or personalized styles Sacks et al. (1978), while template-based crowd-sourcing fails to cover these linguistic variations Schegloff et al. (1977); Moore and Arar (2019). Specifically, we consider four types of variations in Raddle: Paraphrase, which widely exists among different users, who may restate the meaning of a message using other words; Verbosity, where users express their intents using more words than needed; Simplification, where users express their intents using fewer words to be concise; and Typos, which often result from illegitimate abbreviations and misspellings. In Figure 1(b)-(e), we provide examples to illustrate these language variations.
It is desirable that dialog systems can leverage automatic speech recognition (ASR) techniques to serve the speech modality, as in Amazon Alexa. However, almost all dialog systems assume that user input is written text, in the hope that the system will seamlessly integrate with speech inputs. It has recently been shown empirically in Gopalakrishnan et al. (2020) that dialog systems trained on written data are very sensitive to various types of synthetic and actual ASR hypotheses in the dialog history. To bring attention to this gap, Raddle promotes speech robustness as an evaluation criterion. For example, in Figure 1(f), "what's available" can be transcribed as "once available" due to ASR deficiency, and a robust dialog system is expected to still correctly perceive user intents.
Most existing DST methods are not designed to handle slot values that are not known to the tracker. The assumption that a pre-defined ontology exists for the dialog and one can enumerate all possible values for each slot is often not valid in real-world scenarios. Even if such lists or dictionaries exist, they can be very large in size and highly dynamic Xu and Hu (2018). Therefore, unseen entities are common in dialogs, i.e., entities that are not observed during training, but appear in the testing stage. In Figure 1(g), the entity Bellevue downtown is in the knowledge base but never appears in model training, a robust DST should be able to recognize it as a city/place, via generalizing from other similar entities learned during training.
Most deployed task-oriented dialog systems are built for a closed set of target domains. Thus, they are fragile when dealing with out-of-domain (OOD) utterances Lee and Shalyminov (2019). Failure to detect OOD utterances often prevents the model from responding with an appropriate fallback action, leading to a frustrating user experience. Therefore, it is important to endow task bots with the ability to detect OOD utterances for special handling Larson et al. (2019). For example, in Figure 1(h), the user suggests an excursion to a task bot trained for college consulting, which is out of the bot's scope. The bot is expected to flag the utterance as an outlier and guide the user back to the current domain.
The standard setting is sampled from MultiWOZ2.0 Budzianowski et al. (2018) but re-purposed in a few-shot learning setting. The language variations corpus is then created by workers on Amazon Mechanical Turk based on the standard corpus. To maximize quality, we require workers to be in the US locale and to have a minimum prior approval rate of 90%. Assignments are constructed at the turn level: given a user utterance and the associated dialog history, workers answer four questions, producing the paraphrased, typo-ridden, verbose, and simplified versions of the user utterance. Moreover, in each assignment, workers are instructed to mention the slot values exactly in their answers if the given user utterance contains them.

For the speech recognition errors setting, we employ audio-level error simulation Gopalakrishnan et al. (2020), which generates audio signals from text, adds noise to the audio, and then decodes the audio with an ASR model to obtain hypotheses. In particular, we use the Microsoft Cognitive Services text-to-speech service to synthesize audio signals. After injecting background noise into the audio signals, we use the speech recognition service to obtain a corpus with a word error rate (WER) of 30%.

For the Reminder domain used for unseen entity evaluation, we first simulate several dialogs as seed scenarios using an agenda-based simulator and then randomly replace the slot values in the dialogs with new values. Similar to the construction of the language variations corpus, we then hire workers to rewrite the corpus to be as diverse and realistic as possible.

Finally, the out-of-domain corpus is developed following Lee and Shalyminov (2019). We randomly choose 50% of the utterances in DSTC Henderson et al. (2014) for the Attraction domain as the training set. For the test set, besides utterances from DSTC, we also introduce utterances from a diverse set of domains, such as Stanford Eric and Manning (2017), Reddit, and Twitter Sordoni et al. (2015), to evaluate the capability of handling different out-of-domain utterances.
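For reference, the WER quoted above is the standard word-level edit distance normalized by reference length. A minimal sketch (not the simulation pipeline itself, which relies on TTS and ASR services) is:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / len(reference),
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / len(ref)

# The ASR confusion from Figure 1(f): "what's available" -> "once available"
print(word_error_rate("what's available", "once available"))  # 0.5
```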
For baselines, we consider three representative methods, holding state-of-the-art positions on existing benchmarks such as MultiWoZ Budzianowski et al. (2018).
GPT-2 represents a single multi-task learning model with impressive results on general language understanding and generation tasks. GPT-2 is an auto-regressive language model that leverages 12-24 layers of masked, multi-head self-attention Transformers. It is pre-trained on the massive OpenWebText corpus (Radford et al., 2019) and has demonstrated superior performance at characterizing the distribution of human language data and at knowledge transfer. Given text prompts, GPT-2 can often generate fluent sentences. Its ancestor GPT (with a smaller model size and less training data) has shown impressive results on language understanding tasks. In this paper, GPT-2 refers to the approach of directly fine-tuning the pre-trained GPT-2 on a specific domain. Hence, GPT-2 can be viewed as SOLOIST without grounded pre-training, and serves as a strong baseline for both the DST and E2E tasks.
SOLOIST represents recent model variants Ham et al. (2020); Hosseini-Asl et al. (2020) that parameterize the dialog system as a single auto-regressive model. SOLOIST subsumes different dialog modules (e.g., state tracker, dialog policy, response generator) into a single Transformer model. It has similar capability to GPT-2 in understanding and generating natural language sentences, but is pre-trained on large heterogeneous dialog corpora to gain the additional capability of grounding text responses in user goals and real-world knowledge for task completion Peng et al. (2020a); Gao et al. (2020).
We leverage the pre-trained checkpoints from the corresponding work and fine-tune them on Raddle. Each domain is trained separately. We train our models with Adam, using an initial learning rate of 5e-5 and a batch size of 1, for 20 epochs. To allow for fair comparisons between the two models, we do not tune hyperparameters or training settings for each model.
The Raddle benchmark follows the same evaluation model as GLUE Wang et al. (2018) or Kaggle (https://www.kaggle.com/). To evaluate a system on the benchmark, one must run the system on the provided test data for the tasks, then upload the results to the website http://aka.ms/raddle for scoring. The benchmark site shows per-task scores and a macro-average of those scores to determine a system's position on the leaderboard. The website also provides fine- and coarse-grained results on the robustness diagnostic datasets. We will provide human evaluation services for top-ranked submissions on a bimonthly basis. The human evaluation protocol follows (Peng et al., 2020a; Li et al., 2020b).
6 Benchmark Results
6.1 Overall Results
We first present the results of baseline methods across all tasks on the Raddle benchmark in Table 2. As shown, GPT-2 fine-tuned with domain-specific dialog corpora outperforms the strong modular-based method DAMD. This highlights the efficacy of pre-trained language models. Soloist is the best-performing model, improving upon GPT-2 by over 10 points in terms of average score and consistently performing better than GPT-2 across all tasks. These strong results indicate that large-scale task-specific pre-training on dialog corpora is crucial for effective and robust task adaptation.
6.2 Robustness Diagnostic Checklist Results
Table 2 shows the overall performance of DST and E2E modeling under different variation settings.
It is noticeable that all the models incur significant performance drops under each type of variation. Among all variation types, Typos has the most substantial impact on both JGA and combined score, resulting in a performance drop of 10 to 20 points. This is expected, as misspelled keywords pose significant challenges for state tracking. The influence of the other three types of variations is also prominent. The results reveal that existing SOTA dialog models trained on limited task-specific examples are not robust enough to handle various types of user utterances.
We observe a clear degradation in all metrics for all models. This shows that during inference, models trained on textual data are sensitive and not robust to actual ASR hypotheses introduced in dialog history.
Without task-specific pre-training, GPT-2 achieves a JGA of less than 30% and a dialog act accuracy of only 51.20, even on a simple domain where most entity values are common. Soloist performs significantly better than GPT-2, achieving 69.05% JGA and 96.98 dialog act accuracy, but remains imperfect. These results imply that task-specific pre-training can improve the generalization capability of models but is still far from sufficient for production environments.
It is non-trivial for conventional modular-based dialog systems to handle out-of-domain detection, as it often requires an additional component to classify whether a user utterance is in-domain or not. As such, we omit the result of DAMD in our experiments. We observe that pre-trained models handle out-of-domain detection relatively well. GPT-2 achieves an 83.96 F1 score while Soloist reaches a 96.18 F1 score, which shows that task-specific pre-training can improve the robustness of models to out-of-domain utterances.
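The F1 scores above treat OOD as the positive class, balancing precision (how many flagged utterances are truly OOD) against recall (how many OOD utterances are caught). A minimal sketch, with made-up predictions for illustration:

```python
def ood_f1(predictions, labels):
    """F1 for out-of-domain (OOD) detection, with OOD (True) as the
    positive class: harmonic mean of precision and recall."""
    tp = sum(1 for p, g in zip(predictions, labels) if p and g)
    fp = sum(1 for p, g in zip(predictions, labels) if p and not g)
    fn = sum(1 for p, g in zip(predictions, labels) if not p and g)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical detector: catches 3 of 4 OOD utterances, one false alarm.
preds  = [True, True, True, False, True,  False]
labels = [True, True, True, True,  False, False]
print(round(ood_f1(preds, labels), 2))  # 0.75
```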
6.3 Robustness detailed case studies
First, to better understand the impact of different language variations, we evaluated Soloist on corpora at different variation levels. Env-0 denotes the standard corpus, while Env-1, Env-2, and Env-3 represent settings where 10%, 50%, and 80% of the standard corpus, respectively, is replaced with language variation examples. Table 3 lists the detailed results on the end-to-end task, and Table 4 shows the performance of state tracking. In general, performance drops as the variation level increases, for all types of variations across the four domains. Even at the smallest variation level, Env-1 (10%), performance drops significantly. We found that the degradation is mainly due to incorrectly tracked dialog states. Moreover, as depicted in Fig. 2 and shown in Table 3, although the combined score drops on Env-2 and Env-3, Soloist still maintains good BLEU scores. These observations indicate that the policy and response generation of Soloist are relatively robust to language variations, and that dialog state tracking capability is the major bottleneck towards robust dialog models. An intriguing possibility for improving robustness is to apply adversarial training Liu et al. (2020) during task-specific pre-training.
Next, similar to the experiments on language variations, we evaluated Soloist on corpora with different levels of speech errors. Results are shown in Table 5. We observe that, compared with language variations, speech errors have a smaller impact on the performance of Soloist. It is noteworthy that the evaluation corpora we chose have considerably higher word error rates, ranging from 10% to 30%, than a modern speech recognizer, which usually has a single-digit word error rate in quiet environments. We speculate that pre-trained dialog models trained on textual data have the potential to be deployed to smart home devices like Amazon Alexa and Apple HomePod. However, they still have defects when used in noisy environments, such as smart assistants in cars or outdoor usage. There is little work on jointly pre-training speech and text modalities in the dialog community. We believe that adding the speech modality to dialog pre-training may enhance robustness to speech errors.
Evaluation results on unseen entities are listed in Table 7. We observe that GPT-2 is able to handle unseen entities like Name and Time to some extent in this controlled experiment but fails in tracking Day properly, leading to inferior results in terms of joint goal accuracy and action selection. In contrast, with task-specific pre-training, Soloist substantially improves the performance in all the metrics. Nevertheless, for this simple task, string matching can effortlessly achieve near 100% accuracy. Therefore, 69.05 joint goal accuracy is insufficient to affirm that Soloist is robust to unseen entities. Incorporating knowledge into pre-training can be a solid basis for further research to improve robustness to unseen entities.
We also present the evaluation results of out-of-domain detection using varying sizes of training examples on different target domains in Table 6. In the homologous DSTC domain Lee and Shalyminov (2019), GPT-2 performs similarly to Soloist. Both are able to identify out-of-domain utterances with near 100% F1 score when injecting 50% OOD data. However, Soloist leads by 3 points in F1 score when trained on only 10% of the data. In the other, heterogeneous domains, Soloist performs consistently better than GPT-2. In the Reddit and Twitter domains, which are distinct from DSTC, Soloist outperforms GPT-2 by over 20 points in F1 score when trained on 50% of the data, showing that Soloist is more robust than GPT-2 to out-of-domain utterances. An inspiring observation is that injecting out-of-domain data can increase the performance of state tracking. While task-specific pre-training helps with OOD detection, involving open-domain data in pre-training, or initializing from open-domain dialog models such as DialoGPT Zhang et al. (2019b), might further enhance the robustness of dialog models.
Figure 3: (a) DSTC-8; (b) DSTC-9.
Finally, it is worth pointing out some important trends in the dialog research community, based on the DSTC challenges Kim et al. (2019); Gunasekara et al. (2020) over the last two years (Figure 3). In DSTC-8 Kim et al. (2019), the winning submission by Team 5 is the only one that uses pre-trained models (GPT-2). When moving from corpus evaluation to human evaluation, it exhibits the smallest performance drop relative to other submissions, which is strong evidence of the robustness of pre-trained models. By the time of DSTC-9 Gunasekara et al. (2020), the community had witnessed a general shift from modular systems to pre-trained end-to-end architectures. However, the significant performance gap between corpus evaluation and human evaluation indicates that pre-trained methods remain sensitive to noisy inputs. Such observations underscore the importance of robustness-oriented design and evaluation, for which Raddle fills a major void.
We introduce Raddle, a platform and collection of resources for evaluating and analyzing task-oriented dialog systems. We confirm the utility of grounded pre-training and transfer learning methods in dialog systems: pre-training improves generalization in a limited data setting, but still leaves room for improvement. When evaluating these models on our diagnostic dataset, we find that they fail (often spectacularly) on many robustness test cases, suggesting possible avenues for future work. In summary, the question of how to design unified, efficient, robust models remains largely unexplored, and we believe that Raddle can provide fertile soil for addressing this challenge.
- Multiwoz - a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. arXiv preprint arXiv:1810.00278. Cited by: §2.1, §3, §3, §4, §5.
- Span-ConveRT: few-shot span extraction for dialog with pretrained conversational representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, D. Jurafsky, J. Chai, N. Schluter, and J. R. Tetreault (Eds.), pp. 107–121. Cited by: §3.
- BERT: pre-training of deep bidirectional transformers for language understanding. NAACL. Cited by: §2.2.
- Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, pp. 13042–13054. Cited by: §2.2.
- Key-value retrieval networks for task-oriented dialogue. arXiv preprint arXiv:1705.05414. Cited by: §4.
- Neural approaches to conversational ai. Foundations and Trends® in Information Retrieval 13 (2-3), pp. 127–298. Cited by: §1, §3.
- Robust conversational ai with grounded text generation. arXiv preprint arXiv:2009.03457. Cited by: §2.2, §5.
- Are neural open-domain dialog systems robust to speech recognition errors in the dialog history? an empirical study. arXiv preprint arXiv:2008.07683. Cited by: §4, §4.
- Overview of the ninth dialog system technology challenge: dstc9. arXiv preprint arXiv:2011.06486. Cited by: §6.3.
- End-to-end neural pipeline for goal-oriented dialogue system using gpt-2. ACL. Cited by: §3, §5.
- The ATIS spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990. Cited by: §3.
- ConveRT: efficient and accurate conversational representations from transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020, Online Event, 16-20 November 2020, T. Cohn, Y. He, and Y. Liu (Eds.), pp. 2161–2174. Cited by: §3.
- The second dialog state tracking challenge. In Proceedings of the 15th annual meeting of the special interest group on discourse and dialogue (SIGDIAL), pp. 263–272. Cited by: §4.
- A simple language model for task-oriented dialogue. arXiv preprint arXiv:2005.00796. Cited by: §3, §5.
- CTRL: a conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. Cited by: §2.2.
- The eighth dialog system technology challenge. arXiv preprint arXiv:1911.06394. Cited by: §6.3.
- An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 1311–1316. Cited by: §4.
- Contextual out-of-domain utterance handling with counterfeit data augmentation. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7205–7209. Cited by: §4, §4, §6.3.
- ConvLab: multi-domain end-to-end dialog system platform. CoRR abs/1904.08637. Cited by: §2.1.
- Optimus: organizing sentences via pre-trained modeling of a latent space. arXiv preprint arXiv:2004.04092. Cited by: §2.2.
- Results of the multi-domain task-completion dialog challenge. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, Eighth Dialog System Technology Challenge Workshop. Cited by: §5.
- A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 110–119. Cited by: §2.1.
- Adversarial training for large neural language models. arXiv preprint arXiv:2004.08994. Cited by: §6.3.
- RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: §2.2.
- The ubuntu dialogue corpus: a large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909. Cited by: §2.1.
- DialoGLUE: a natural language understanding benchmark for task-oriented dialogue. arXiv preprint arXiv:2009.13570. Cited by: §2.1.
- Conversational UX design: a practitioner’s guide to the natural conversation framework. ACM. Cited by: §4.
- Neural belief tracker: data-driven dialogue state tracking. In ACL. Cited by: §3.
- Adversarial NLI: a new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599. Cited by: §2.2.
- SOLOIST: few-shot task-oriented dialog with a single pre-trained auto-regressive model. arXiv preprint arXiv:2005.05298. Cited by: §1, §2.2, §3, §5.
- Few-shot natural language generation for task-oriented dialog. arXiv preprint arXiv:2002.12328. Cited by: §2.2.
- Data augmentation for spoken language understanding via pretrained models. arXiv preprint arXiv:2004.13952. Cited by: §2.2.
- Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Cited by: §2.2.
- Language models are unsupervised multitask learners. OpenAI technical report. Cited by: §2.2, §5.
- A simplest systematics for the organization of turn-taking for conversation. In Studies in the Organization of Conversational Interaction. Cited by: §4.
- The preference for self-correction in the organization of repair in conversation. Language. Cited by: §4.
- Building a conversational agent overnight with dialogue self-play. arXiv preprint arXiv:1801.04871. Cited by: §2.1.
- A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714. Cited by: §4.
- GLUE: a multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Cited by: §2.1, §5.
- A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562. Cited by: §2.1.
- ToD-BERT: pre-trained natural language understanding for task-oriented dialogues. arXiv preprint arXiv:2004.06871. Cited by: §2.2, §3.
- Alternating recurrent dialog model with large-scale pre-trained language models. arXiv preprint arXiv:1910.03756. Cited by: §2.1.
- An end-to-end approach for handling unknown slot values in dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, pp. 1448–1457. Cited by: §4.
- XLNet: generalized autoregressive pretraining for language understanding. NeurIPS. Cited by: §2.2.
- Task-oriented dialog systems that consider multiple appropriate responses under the same context. arXiv preprint arXiv:1911.10484. Cited by: §5.
- DialoGPT: large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536. Cited by: §6.3.
- The design and implementation of XiaoIce, an empathetic social chatbot. Computational Linguistics 46 (1), pp. 53–93. Cited by: §1.
- Multi-task learning for natural language generation in task-oriented dialogue. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 1261–1266. Cited by: §2.1.
- SIM: a slot-independent neural model for dialogue state tracking. arXiv preprint arXiv:1909.11833. Cited by: §3.
- Boosting naturalness of language in task-oriented dialogues via adversarial training. arXiv preprint arXiv:2004.14565. Cited by: §2.1.
- ConvLab-2: an open-source toolkit for building, evaluating, and diagnosing dialogue systems. CoRR abs/2002.04793. Cited by: §2.1.