Robust Spoken Language Understanding with RL-based Value Error Recovery
Spoken Language Understanding (SLU) aims to extract structured semantic representations (e.g., slot-value pairs) from speech-recognized texts, which suffer from errors of Automatic Speech Recognition (ASR). To alleviate the problems caused by ASR errors, previous works either apply input adaptations to the speech-recognized texts or correct ASR errors in predicted values by searching for the most phonetically similar candidates. However, these two methods have been applied separately and independently. In this work, we propose a new robust SLU framework that guides the SLU input adaptation with a rule-based value error recovery module. The framework consists of a slot tagging model and a rule-based value error recovery module. We pursue an adapted slot tagging model that can extract potential slot-value pairs mentioned in ASR hypotheses and is suitable for the existing value error recovery module. After value error recovery, we obtain a supervision signal (reward) by comparing the refined slot-value pairs with the annotations. Since the operations of value error recovery are non-differentiable, we exploit policy-gradient-based Reinforcement Learning (RL) to optimize the SLU model. Extensive experiments on the public CATSLU dataset show the effectiveness of the proposed approach, which improves the robustness of SLU and outperforms the baselines by significant margins.
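The training idea described above, sampling slot tags, running the non-differentiable rule-based value error recovery, and using the agreement with annotated slot-value pairs as a reward, can be illustrated with a small policy-gradient (REINFORCE) sketch. The snippet below is not the authors' implementation; the tagger architecture and the helpers `decode_pairs`, `value_error_recovery`, and `pair_f1` are hypothetical stand-ins for the components the abstract describes.

```python
# Minimal REINFORCE-style sketch (assumptions noted above): sample a tag sequence,
# convert it to slot-value pairs, run rule-based value error recovery, and use the
# overlap with the annotated pairs as a reward weighting the sequence log-likelihood.
import torch
import torch.nn as nn


class SlotTagger(nn.Module):
    """Toy BiLSTM slot tagger over word ids (assumed BIO tagging scheme)."""

    def __init__(self, vocab_size: int, num_tags: int, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * dim, num_tags)

    def forward(self, word_ids):                          # (B, T) -> (B, T, num_tags)
        hidden, _ = self.rnn(self.emb(word_ids))
        return self.out(hidden)


def reinforce_step(tagger, optimizer, word_ids, gold_pairs,
                   decode_pairs, value_error_recovery, pair_f1):
    """One policy-gradient update.

    Assumed helper signatures (hypothetical, not from the paper):
      decode_pairs(tag_ids, word_ids)   -> set of (slot, value) pairs
      value_error_recovery(pairs)       -> pairs with values mapped onto the ontology
      pair_f1(pred_pairs, gold_pairs)   -> scalar reward in [0, 1]
    """
    logits = tagger(word_ids)                              # (B, T, num_tags)
    dist = torch.distributions.Categorical(logits=logits)
    sampled_tags = dist.sample()                           # explore tag sequences
    log_prob = dist.log_prob(sampled_tags).sum(dim=1)      # (B,) sequence log-likelihood

    # The recovery step is rule-based and non-differentiable, so it only enters
    # the loss through the scalar reward.
    rewards = []
    for tags, words, gold in zip(sampled_tags.tolist(), word_ids.tolist(), gold_pairs):
        refined = value_error_recovery(decode_pairs(tags, words))
        rewards.append(pair_f1(refined, gold))
    reward = torch.tensor(rewards, dtype=torch.float)

    # REINFORCE: maximizing expected reward = minimizing -(reward * log_prob);
    # subtracting the batch-mean reward is a simple variance-reduction baseline.
    loss = -((reward - reward.mean()) * log_prob).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The batch-mean baseline is only one common variance-reduction choice; the abstract does not specify how the reward is normalized, so that detail is an assumption of this sketch.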