
Modeling Inter-Speaker Relationship in XLNet for Contextual Spoken Language Understanding

by Jonggu Kim et al.

We propose two methods to capture relevant history information in a multi-turn dialogue by modeling the inter-speaker relationship for spoken language understanding (SLU). Our methods are tailored for, and therefore compatible with, XLNet, a state-of-the-art pretrained model, so we verify models built on top of XLNet. In our experiments, all of our models achieve higher accuracy than state-of-the-art contextual SLU models on two benchmark datasets. Analysis of the results demonstrates that the proposed methods effectively improve the SLU accuracy of XLNet. By identifying important dialogue history, these methods help alleviate ambiguity in SLU of the current utterance.
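The abstract does not spell out how the inter-speaker relationship is injected into the model, but one common way to expose speaker structure to a transformer-style encoder is to add a speaker-role embedding to every token of the dialogue history. The sketch below illustrates that idea only; the function names, the toy random embeddings, and the sum-of-embeddings scheme are assumptions for illustration, not the paper's actual method or the XLNet API.

```python
# Toy sketch: speaker-aware encoding of a multi-turn dialogue.
# Each token vector is the sum of a token embedding and a speaker-role
# embedding, so a downstream model can tell user tokens from system tokens.
import random

random.seed(0)
DIM = 8  # toy embedding dimension

def embed(symbol, table):
    # Lazily assign a fixed random vector to each new symbol.
    if symbol not in table:
        table[symbol] = [random.uniform(-1.0, 1.0) for _ in range(DIM)]
    return table[symbol]

token_table, role_table = {}, {}

def encode_dialogue(turns):
    """Encode a list of (speaker, utterance) pairs.

    Returns one vector per token: token embedding + speaker-role embedding.
    The role embedding is the simplest stand-in for inter-speaker modeling.
    """
    encoded = []
    for speaker, utterance in turns:
        role_vec = embed(speaker, role_table)
        for tok in utterance.split():
            tok_vec = embed(tok, token_table)
            encoded.append([t + r for t, r in zip(tok_vec, role_vec)])
    return encoded

dialogue = [
    ("system", "which city do you want"),
    ("user", "book a flight to denver"),
]
vectors = encode_dialogue(dialogue)
print(len(vectors), len(vectors[0]))  # one DIM-sized vector per token
```

In a real system these summed vectors would be fed to the pretrained encoder in place of (or added to) its ordinary input embeddings, letting self-attention weigh history tokens differently depending on who uttered them.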




Decay-Function-Free Time-Aware Attention to Context and Speaker Indicator for Spoken Language Understanding

To capture salient contextual information for spoken language understand...

A Result based Portable Framework for Spoken Language Understanding

Spoken language understanding (SLU), which is a core component of the ta...

Speaker Role Contextual Modeling for Language Understanding and Dialogue Policy Learning

Language understanding (LU) and dialogue policy learning are two essenti...

Dynamic Time-Aware Attention to Speaker Roles and Contexts for Spoken Language Understanding

Spoken language understanding (SLU) is an essential component in convers...

Learning Context-Sensitive Time-Decay Attention for Role-Based Dialogue Modeling

Spoken language understanding (SLU) is an essential component in convers...

Speaker-Sensitive Dual Memory Networks for Multi-Turn Slot Tagging

In multi-turn dialogs, natural language understanding models can introdu...

Energy-based Self-attentive Learning of Abstractive Communities for Spoken Language Understanding

Abstractive Community Detection is an important Spoken Language Understa...