Recent Advances in End-to-End Spoken Language Understanding

09/29/2019
by Natalia Tomashenko, et al.

This work investigates spoken language understanding (SLU) systems in the scenario where semantic information is extracted directly from the speech signal by a single end-to-end neural network model. Two SLU tasks are considered: named entity recognition (NER) and semantic slot filling (SF). For these tasks, to improve model performance, we explore several techniques, including speaker adaptation, a modification of the connectionist temporal classification (CTC) training criterion, and sequential pretraining.
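
Since the approach trains a single network on the speech signal with a CTC-style criterion, the sketch below shows how a standard (unmodified) CTC loss can be attached to a simple speech encoder whose output vocabulary mixes characters with semantic tags. This is a minimal PyTorch sketch under assumed settings; the encoder architecture, feature and vocabulary sizes, and variable names are all hypothetical, and the paper's modified CTC criterion is not reproduced here.

```python
# Minimal sketch (assumed setup): baseline CTC training for end-to-end SLU,
# where the output symbols are characters plus NER/slot tags.
import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """Toy acoustic encoder: a BiLSTM over acoustic features (hypothetical)."""
    def __init__(self, feat_dim=40, hidden=256, vocab_size=100):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=3,
                           bidirectional=True, batch_first=True)
        # Output layer covers characters + semantic tags + the CTC blank symbol.
        self.proj = nn.Linear(2 * hidden, vocab_size)

    def forward(self, feats):
        out, _ = self.rnn(feats)
        return self.proj(out).log_softmax(dim=-1)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)

feats = torch.randn(8, 300, 40)           # (batch, frames, features), dummy data
log_probs = SpeechEncoder()(feats)        # (batch, frames, vocab)
log_probs = log_probs.transpose(0, 1)     # CTCLoss expects (frames, batch, vocab)

targets = torch.randint(1, 100, (8, 40))  # dummy character/tag label sequences
input_lengths = torch.full((8,), 300, dtype=torch.long)
target_lengths = torch.full((8,), 40, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```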

Related research

02/14/2020 · Dialogue history integration into end-to-end signal-to-concept spoken language understanding systems
This work investigates the embeddings for representing dialog history in...

06/24/2021 · Where are we in semantic concept extraction for Spoken Language Understanding?
Spoken language understanding (SLU) topic has seen a lot of progress the...

01/28/2022 · Improving End-to-End Models for Set Prediction in Spoken Language Understanding
The goal of spoken language understanding (SLU) systems is to determine ...

06/08/2021 · Sequential End-to-End Intent and Slot Label Classification and Localization
Human-computer interaction (HCI) is significantly impacted by delayed re...

12/14/2021 · On the Use of External Data for Spoken Named Entity Recognition
Spoken language understanding (SLU) tasks involve mapping from speech au...

09/30/2020 · End-to-End Spoken Language Understanding Without Full Transcripts
An essential component of spoken language understanding (SLU) is slot fi...

08/06/2020 · Semantic Complexity in End-to-End Spoken Language Understanding
End-to-end spoken language understanding (SLU) models are a class of mod...
