An Effective Non-Autoregressive Model for Spoken Language Understanding

08/16/2021
by   Lizhi Cheng, et al.

Spoken Language Understanding (SLU), a core component of task-oriented dialogue systems, demands low inference latency because users are impatient. Non-autoregressive SLU models markedly increase inference speed but suffer from the uncoordinated-slot problem caused by the lack of sequential dependency information among slot chunks. To overcome this shortcoming, in this paper we propose a novel non-autoregressive SLU model, the Layered-Refine Transformer, which contains a Slot Label Generation (SLG) task and a Layered Refine Mechanism (LRM). SLG is defined as generating the next slot label given the token sequence and the slot labels generated so far. With SLG, the non-autoregressive model can efficiently obtain dependency information during training while spending no extra time at inference. LRM predicts preliminary SLU results from the Transformer's intermediate states and uses them to guide the final prediction. Experiments on two public datasets indicate that our model significantly improves SLU performance (by 1.5% overall accuracy) while substantially speeding up inference (by more than 10x) over the state-of-the-art baseline.
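The contrast between non-autoregressive inference and the SLG training signal can be sketched roughly as follows. This is a hypothetical pure-Python illustration, not the paper's method: the paper uses a Transformer, while `classify`, the toy lexicon, and the function names here are invented stand-ins chosen only to show the data flow.

```python
# Hypothetical sketch: parallel (non-autoregressive) slot inference vs. the
# SLG-style auxiliary training task that injects label-to-label dependencies.
# All names here are invented stand-ins for the paper's Transformer components.

def classify(token, context):
    """Toy per-token classifier standing in for a Transformer slot head."""
    lexicon = {"boston": "B-fromloc", "denver": "B-toloc"}
    return lexicon.get(token.lower(), "O")

def predict_slots_parallel(tokens):
    """Non-autoregressive inference: every slot label is produced
    independently in one pass, so latency does not grow with sequence
    length -- but labels cannot see each other (the uncoordinated-slot risk)."""
    return [classify(tok, context=tokens) for tok in tokens]

def slg_training_examples(tokens, gold_labels):
    """SLG-style auxiliary task: at training time, learn to emit the next
    slot label given the token sequence and the labels generated so far.
    This exposes sequential dependency information during training only,
    leaving inference fully parallel."""
    examples = []
    for i in range(len(gold_labels)):
        prefix = gold_labels[:i]  # slot labels "generated" before step i
        examples.append((tokens, prefix, gold_labels[i]))
    return examples
```

For example, `predict_slots_parallel(["flight", "from", "Boston", "to", "Denver"])` labels every token in one parallel pass, while `slg_training_examples` expands the same sentence into one next-label prediction example per position, each conditioned on a growing label prefix.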


Related research

06/24/2022
Capture Salient Historical Information: A Fast and Accurate Non-Autoregressive Model for Multi-turn Spoken Language Understanding
Spoken Language Understanding (SLU), a core component of the task-orient...

10/06/2020
SlotRefine: A Fast Non-Autoregressive Model for Joint Intent Detection and Slot Filling
Slot filling and intent detection are two main tasks in spoken language ...

11/22/2022
A Scope Sensitive and Result Attentive Model for Multi-Intent Spoken Language Understanding
Multi-Intent Spoken Language Understanding (SLU), a novel and more compl...

06/04/2019
Improving Long Distance Slot Carryover in Spoken Dialogue Systems
Tracking the state of the conversation is a central component in task-or...

02/19/2020
Non-Autoregressive Dialog State Tracking
Recent efforts in Dialogue State Tracking (DST) for task-oriented dialog...

06/05/2019
Memory Consolidation for Contextual Spoken Language Understanding with Dialogue Logistic Inference
Dialogue contexts are proven helpful in the spoken language understandin...

04/02/2022
Inverse is Better! Fast and Accurate Prompt for Few-shot Slot Tagging
Prompting methods recently achieve impressive success in few-shot learni...
