Adapting Pretrained Transformer to Lattices for Spoken Language Understanding

11/02/2020
by   Chao-Wei Huang, et al.

Lattices are compact representations that encode multiple hypotheses, such as speech recognition results or different word segmentations. Encoding lattices, as opposed to the 1-best results generated by an automatic speech recognizer (ASR), has been shown to boost the performance of spoken language understanding (SLU). Recently, pretrained language models built on the transformer architecture have achieved state-of-the-art results on natural language understanding, but their ability to encode lattices has not been explored. Therefore, this paper adapts pretrained transformers to lattice inputs in order to perform understanding tasks specific to spoken language. Our experiments on the benchmark ATIS dataset show that fine-tuning pretrained transformers with lattice inputs yields clear improvements over fine-tuning with 1-best results. Further evaluation demonstrates the effectiveness of our methods under different acoustic conditions. Our code is available at https://github.com/MiuLab/Lattice-SLU
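One common way to feed a lattice to a transformer is to linearize its nodes in topological order and restrict self-attention so that a token only attends to tokens that co-occur on at least one lattice path. The sketch below illustrates that idea with a reachability-based attention mask; it is a minimal illustration under assumed data structures (a node-labeled DAG), not the paper's actual implementation.

```python
from collections import defaultdict

def lattice_attention_mask(tokens, edges):
    """Build an attention mask for a linearized word lattice.

    tokens: list of words, one per lattice node, in topological order.
    edges:  list of (i, j) arcs with i < j (DAG in topological order).
    Returns an NxN boolean matrix: True means attention is allowed,
    i.e. the two tokens lie on a common path through the lattice.
    """
    n = len(tokens)
    succ = defaultdict(set)
    for i, j in edges:
        succ[i].add(j)

    # Transitive closure: reach[i] = all nodes reachable from i.
    # Processing nodes in reverse topological order lets us reuse
    # the closures of successors.
    reach = [set() for _ in range(n)]
    for i in reversed(range(n)):
        for j in succ[i]:
            reach[i].add(j)
            reach[i] |= reach[j]

    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        mask[i][i] = True  # a token always attends to itself
        for j in reach[i]:
            mask[i][j] = True
            mask[j][i] = True  # symmetric: allow attention both ways

    return mask

# Toy lattice: "cheap" and "keep" are competing ASR hypotheses for the
# same audio span, so they never share a path and must not attend to
# each other, while words on a common path may.
tokens = ["find", "cheap", "keep", "flights"]
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
mask = lattice_attention_mask(tokens, edges)
```

In this example `mask[1][2]` is `False` (the competing hypotheses are isolated from each other), while `mask[0][3]` is `True` (both words appear on every path). Such a mask can then be passed to a transformer's self-attention layers in place of the usual causal or padding mask.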


Related research

07/06/2020 — Learning Spoken Language Representations with Neural Lattice Language Modeling
Pre-trained language models have achieved huge improvement on many NLP t...

11/19/2021 — Lattention: Lattice-attention in ASR rescoring
Lattices form a compact representation of multiple hypotheses generated ...

07/12/2022 — PLM-ICD: Automatic ICD Coding with Pretrained Language Models
Automatically classifying electronic health records (EHRs) into diagnost...

12/16/2022 — Effectiveness of Text, Acoustic, and Lattice-based representations in Spoken Language Understanding tasks
In this paper, we perform an exhaustive evaluation of different represen...

10/28/2022 — Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE
Figurative language (e.g., "he flew like the wind") is challenging to un...

07/19/2022 — Benchmarking Transformers-based models on French Spoken Language Understanding tasks
In the last five years, the rise of the self-attentional Transformer-bas...

02/02/2022 — RescoreBERT: Discriminative Speech Recognition Rescoring with BERT
Second-pass rescoring is an important component in automatic speech reco...
