
Multi-Domain Dialogue State Tracking – A Purely Transformer-Based Generative Approach

by Yan Zeng, et al.

We investigate the problem of multi-domain Dialogue State Tracking (DST) with an open vocabulary. Existing approaches combine a BERT encoder with a copy-based RNN decoder: the encoder first predicts the state operation, and the decoder then generates the new slot values. However, in this stacked encoder-decoder structure, the operation prediction objective only affects the BERT encoder, while the value generation objective mainly affects the RNN decoder. In this paper, we propose a purely Transformer-based framework that uses BERT as both the encoder and the decoder. In this way, the operation prediction objective and the value generation objective jointly optimize the model for DST. At the decoding step, we re-use the hidden states of the encoder in the self-attention mechanism of the corresponding decoder layer, constructing a flat model structure for effective parameter updating. Experimental results show that our approach substantially outperforms the existing state-of-the-art framework and achieves performance competitive with the best ontology-based approaches.
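The key architectural idea, re-using encoder hidden states inside the decoder's self-attention so both objectives update the same stack, can be illustrated with a minimal sketch. This is a simplified, dependency-free illustration written for this page, not the authors' released code: it uses single-head scaled dot-product attention over plain Python lists, and the function names (`attention`, `flat_decoder_self_attention`) are hypothetical.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Numerically stable softmax over the attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum of the value vectors.
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)]
    return out, weights

def flat_decoder_self_attention(encoder_states, decoder_states):
    """Flat-structure decoding step (simplified, single layer).

    Instead of a separate cross-attention module, each decoder position
    attends over the concatenation of the corresponding encoder layer's
    hidden states and the decoder's own states, so one self-attention
    pass covers both the dialogue context and the generated prefix.
    """
    memory = encoder_states + decoder_states
    return [attention(q, memory, memory)[0] for q in decoder_states]

# Toy example: two encoder positions, one decoder position, dim 2.
enc = [[1.0, 0.0], [0.0, 1.0]]
dec = [[0.5, 0.5]]
updated = flat_decoder_self_attention(enc, dec)
```

In the full model this sharing happens layer by layer (decoder layer *i* sees encoder layer *i*'s states), which is what lets the value generation gradients flow through the same BERT parameters as the operation prediction loss.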


