Effects of Layer Freezing when Transferring DeepSpeech to New Languages

02/08/2021
by Onno Eberhard, et al.

In this paper, we train Mozilla's DeepSpeech architecture on German and Swiss German speech datasets and compare the results of different training methods. We first train the models from scratch on both languages and then improve upon these results by using an English pretrained version of DeepSpeech for weight initialization, experimenting with the effects of freezing different layers during training. We find that freezing even a single layer already improves the results dramatically.
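The freezing technique itself is simple to express in code. The paper's experiments use Mozilla's TensorFlow-based DeepSpeech implementation; the PyTorch sketch below only illustrates the general mechanism, and the layer names, sizes, and checkpoint file name are illustrative assumptions rather than values taken from the paper.

    import torch
    import torch.nn as nn

    class DeepSpeechLike(nn.Module):
        """DeepSpeech-style network: three dense layers, a recurrent
        layer, one more dense layer, and a character-level output.
        Sizes are placeholders, not the exact Mozilla values."""
        def __init__(self, n_features=26, n_hidden=2048, n_chars=29):
            super().__init__()
            self.dense1 = nn.Linear(n_features, n_hidden)
            self.dense2 = nn.Linear(n_hidden, n_hidden)
            self.dense3 = nn.Linear(n_hidden, n_hidden)
            self.lstm = nn.LSTM(n_hidden, n_hidden, batch_first=True)
            self.dense5 = nn.Linear(n_hidden, n_hidden)
            self.out = nn.Linear(n_hidden, n_chars)

        def forward(self, x):
            x = torch.relu(self.dense1(x))
            x = torch.relu(self.dense2(x))
            x = torch.relu(self.dense3(x))
            x, _ = self.lstm(x)
            x = torch.relu(self.dense5(x))
            return self.out(x)

    model = DeepSpeechLike()

    # Transfer learning: initialize from an English-pretrained
    # checkpoint ("english_pretrained.pt" is a hypothetical name).
    # model.load_state_dict(torch.load("english_pretrained.pt"))

    # Freeze the first layer: its weights keep their pretrained
    # values and are excluded from gradient updates while training.
    for param in model.dense1.parameters():
        param.requires_grad = False

    # Hand the optimizer only the parameters that remain trainable.
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )

Freezing additional layers follows the same pattern; the paper's comparison is over which and how many layers are frozen during fine-tuning.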


Related research

Scribosermo: Fast Speech-to-Text models for German and other Languages (10/15/2021)
Recent Speech-to-Text models often require a large amount of hardware re...

Paraphrase Detection on Noisy Subtitles in Six Languages (09/21/2018)
We perform automatic paraphrase detection on subtitle data from the Opus...

Cross-Corpus Multilingual Speech Emotion Recognition: Amharic vs. Other Languages (07/20/2023)
In a conventional Speech emotion recognition (SER) task, a classifier fo...

Homophonic Quotients of Linguistic Free Groups: German, Korean, and Turkish (08/10/2018)
In 1993, the homophonic quotient groups for French and English (the quot...

Spoken Language Identification using ConvNets (10/09/2019)
Language Identification (LI) is an important first step in several speec...

BERT Cannot Align Characters (09/20/2021)
In previous work, it has been shown that BERT can adequately align cross...

DWReCO at CheckThat! 2023: Enhancing Subjectivity Detection through Style-based Data Sampling (07/07/2023)
This paper describes our submission for the subjectivity detection task ...
