Using multiple ASR hypotheses to boost i18n NLU performance

12/07/2020
by Charith Peris, et al.

Current voice assistants typically use only the top hypothesis produced by their Automatic Speech Recognition (ASR) module as input to their Natural Language Understanding (NLU) module, discarding helpful information that may be contained in lower-ranked ASR hypotheses. We explore how the performance of NLU-associated tasks changes when the five-best ASR hypotheses are used instead of the status quo, on two language datasets, German and Portuguese. To harvest information from the ASR five-best list, we leverage extractive summarization and joint extractive-abstractive summarization models for Domain Classification (DC) experiments, and a sequence-to-sequence model with a pointer-generator network for Intent Classification (IC) and Named Entity Recognition (NER) multi-task experiments. On the DC full test set, we observe significant improvements of up to 7.2 for German and Portuguese. In cases where the best ASR hypothesis was not an exact match to the transcribed utterance (the mismatched test set), we see improvements of up to 6.7 for German and Portuguese. For the IC and NER multi-task experiments, when evaluating on the mismatched test set, we see improvements across all domains in German and in 17 out of 19 domains in Portuguese (improvements measured by change in SeMER scores). Our results suggest that using multiple ASR hypotheses, as opposed to one, can lead to significant performance improvements on the DC task for these non-English datasets, and to significant improvements on the IC and NER tasks in cases where the ASR model makes mistakes.
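As a minimal sketch of the idea (not the paper's summarization or pointer-generator models), one simple way to expose an ASR n-best list to a downstream NLU model is to rank the hypotheses by ASR confidence and concatenate them with a separator token, so that a classifier can attend to evidence from lower-ranked hypotheses. The `<sep>` token and the function name below are illustrative assumptions, not from the paper.

```python
# Illustrative sketch: flatten an ASR n-best list into one NLU input.
# Assumption: each hypothesis arrives as a (text, confidence) pair;
# "<sep>" is a hypothetical separator token the NLU model is trained with.

SEP = " <sep> "

def combine_nbest(hypotheses, n=5):
    """Rank hypotheses by ASR confidence (descending), keep the top n,
    and join their texts with the separator token."""
    ranked = sorted(hypotheses, key=lambda h: h[1], reverse=True)[:n]
    return SEP.join(text for text, _ in ranked)

nbest = [
    ("play some music", 0.62),
    ("play some muse", 0.21),
    ("plays a music", 0.09),
]
print(combine_nbest(nbest))
# -> play some music <sep> play some muse <sep> plays a music
```

A concatenated input like this is the usual baseline against which structured approaches (summarization over the n-best, or copying tokens from multiple hypotheses via a pointer-generator) are compared.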


