Disfluency Detection with Unlabeled Data and Small BERT Models

04/21/2021
by Johann C. Rocholl, et al.

Disfluency detection models now approach high accuracy on English text. However, little work has explored reducing model size and inference time. At the same time, automatic speech recognition (ASR) models are moving from server-side inference to local, on-device inference, so supporting models in the transcription pipeline (such as disfluency detection) must follow suit. In this work we concentrate on the disfluency detection task, focusing on small, fast, on-device models based on the BERT architecture. We demonstrate that it is possible to train disfluency detection models as small as 1.3 MiB while retaining high performance. We build on previous work that showed the benefit of data augmentation approaches such as self-training. We then evaluate the effect of domain mismatch between conversational and written text on model performance, and find that domain adaptation and data augmentation strategies have a more pronounced effect on these smaller models than on conventional BERT models.
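Disfluency detection is commonly framed as token-level tagging: each token in a transcript is labeled fluent or disfluent, and the disfluent spans (fillers, repetitions, self-corrections) are dropped to produce a fluent transcript. The toy sketch below is not the paper's BERT model; it uses a hand-written filler list and a repeated-word heuristic (both assumptions for illustration) purely to show the input/output contract such a tagger satisfies:

```python
def tag_disfluencies(tokens):
    """Toy disfluency tagger: label 1 = disfluent, 0 = fluent.
    Marks filler words and the first copy of an immediate word
    repetition; a real system would use a trained sequence model."""
    fillers = {"uh", "um", "er", "hmm"}  # illustrative filler list
    labels = [0] * len(tokens)
    for i, tok in enumerate(tokens):
        if tok.lower() in fillers:
            labels[i] = 1
        elif i + 1 < len(tokens) and tok.lower() == tokens[i + 1].lower():
            labels[i] = 1  # reparandum: first copy of "want want"
    return labels

def remove_disfluencies(tokens):
    """Drop tokens tagged disfluent to yield a fluent transcript."""
    labels = tag_disfluencies(tokens)
    return [t for t, lab in zip(tokens, labels) if lab == 0]

tokens = "I want want uh a flight to Boston".split()
print(remove_disfluencies(tokens))
# → ['I', 'want', 'a', 'flight', 'to', 'Boston']
```

In the paper's setting, the heuristic labeler would be replaced by a small BERT encoder with a per-token classification head; the surrounding interface (tokens in, fluent/disfluent labels out) stays the same.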


