Dynamic Time-Aware Attention to Speaker Roles and Contexts for Spoken Language Understanding

09/30/2017
by   Po-Chun Chen, et al.

Spoken language understanding (SLU) is an essential component in conversational systems. Most SLU components treat each utterance independently, and the following components then aggregate multi-turn information in separate phases. In order to avoid error propagation and effectively utilize contexts, prior work leveraged dialogue history for contextual SLU. However, the previous model only attended to the content of history utterances without considering their temporal information or speaker roles. In a dialogue, the most recent utterances should matter more than the least recent ones. Furthermore, users usually attend to 1) their own history for reasoning and 2) others' utterances for listening, so the speaker of an utterance may provide informative cues for understanding. Therefore, this paper proposes an attention-based network that additionally leverages temporal information and speaker roles for better SLU, where the attention to contexts and speaker roles is learned automatically in an end-to-end manner. Experiments on the benchmark Dialogue State Tracking Challenge 4 (DSTC4) dataset show that the time-aware dynamic role attention networks significantly improve understanding performance.
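To make the idea of time-aware, role-conditioned attention concrete, below is a minimal illustrative sketch in PyTorch. It is not the paper's exact architecture: it assumes a bilinear content score between the current utterance and each history utterance, a learned exponential decay over turn distance with one rate per speaker role, and a softmax combination of the two terms. All class, parameter, and variable names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TimeAwareRoleAttention(nn.Module):
    """Illustrative sketch of time-aware, role-conditioned attention over
    dialogue history (an assumption-laden simplification, not the authors'
    exact model). Each history utterance is scored by (1) its content
    relevance to the current utterance and (2) a learned time-decay term
    over turn distance, with a separate decay rate per speaker role."""

    def __init__(self, hidden_dim: int, n_roles: int = 2):
        super().__init__()
        # Content attention: bilinear relevance score (hypothetical choice).
        self.content_scorer = nn.Bilinear(hidden_dim, hidden_dim, 1)
        # One learned decay rate per speaker role (e.g., guide vs. tourist).
        self.decay_rate = nn.Parameter(torch.ones(n_roles))

    def forward(self, current, history, roles, distances):
        """
        current:   (hidden_dim,)   encoding of the current utterance
        history:   (T, hidden_dim) encodings of T history utterances
        roles:     (T,) long       speaker-role id of each history utterance
        distances: (T,) float      turn distance from the current turn
        """
        # Content relevance of each history utterance to the current one.
        content = self.content_scorer(
            current.expand_as(history), history
        ).squeeze(-1)                                             # (T,)
        # Time-aware term: more recent turns receive larger scores.
        decay = -F.softplus(self.decay_rate[roles]) * distances   # (T,)
        weights = torch.softmax(content + decay, dim=-1)
        # Role- and time-weighted summary of the dialogue history.
        return weights @ history                                  # (hidden_dim,)


if __name__ == "__main__":
    dim, turns = 64, 5
    attn = TimeAwareRoleAttention(dim)
    context = attn(
        torch.randn(dim),
        torch.randn(turns, dim),
        torch.tensor([0, 1, 0, 1, 0]),            # alternating speaker roles
        torch.tensor([5.0, 4.0, 3.0, 2.0, 1.0]),  # older turns are farther away
    )
    print(context.shape)  # torch.Size([64])
```

The resulting context vector could then be concatenated with the current utterance encoding and fed to a tagger or intent classifier; the key point the sketch illustrates is that both the content weights and the per-role decay rates are differentiable and can be learned end-to-end.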


Related research

- Learning Context-Sensitive Time-Decay Attention for Role-Based Dialogue Modeling (09/05/2018)
- Speaker Role Contextual Modeling for Language Understanding and Dialogue Policy Learning (09/30/2017)
- Modeling Inter-Speaker Relationship in XLNet for Contextual Spoken Language Understanding (10/28/2019)
- Decay-Function-Free Time-Aware Attention to Context and Speaker Indicator for Spoken Language Understanding (03/20/2019)
- Speaker-Sensitive Dual Memory Networks for Multi-Turn Slot Tagging (11/29/2017)
- A Survey on Temporal Reasoning for Temporal Information Extraction from Text (Extended Abstract) (05/13/2020)
- Capture Salient Historical Information: A Fast and Accurate Non-Autoregressive Model for Multi-turn Spoken Language Understanding (06/24/2022)
