Multilingual transfer of acoustic word embeddings improves when training on languages related to the target zero-resource language

06/24/2021
by Christiaan Jacobs, et al.

Acoustic word embedding models map variable-duration speech segments to fixed-dimensional vectors, enabling efficient speech search and discovery. Previous work has explored how embeddings can be obtained in zero-resource settings where no labelled data is available in the target language. The current best approach uses transfer learning: a single supervised multilingual model is trained using labelled data from multiple well-resourced languages and then applied to a target zero-resource language (without fine-tuning). However, it is still unclear how the specific choice of training languages affects downstream performance. Concretely, we ask here whether it is beneficial to use training languages related to the target. Using data from eleven languages spoken in Southern Africa, we experiment with adding data from different language families while controlling for the amount of data per language. In word discrimination and query-by-example search evaluations, we show that training on languages from the same family as the target gives large improvements. Through finer-grained analysis, we show that training on even just a single related language gives the largest gain. We also find that adding data from unrelated languages generally does not hurt performance.
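To make the core idea concrete, here is a minimal sketch of an acoustic word embedding encoder: a recurrent network that maps a variable-length sequence of acoustic frames (e.g. MFCCs) to a single fixed-dimensional vector, after which query-by-example search reduces to nearest-neighbour lookup over those vectors. The GRU architecture, dimensions, and final-state pooling here are illustrative assumptions, not the specific model or training objective used in the paper.

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

class AWEEncoder(nn.Module):
    """Map variable-duration frame sequences to fixed-dimensional vectors."""

    def __init__(self, feat_dim=13, hidden_dim=256, embed_dim=128):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, embed_dim)

    def forward(self, feats, lengths):
        # feats: (batch, max_frames, feat_dim); lengths: true frame counts.
        packed = pack_padded_sequence(
            feats, lengths, batch_first=True, enforce_sorted=False
        )
        _, h_n = self.rnn(packed)   # final hidden state: (1, batch, hidden_dim)
        return self.proj(h_n[-1])   # (batch, embed_dim)

# Two "spoken words" of different durations map to same-size vectors.
encoder = AWEEncoder()
feats = torch.randn(2, 80, 13)      # batch padded to 80 frames of 13-dim MFCCs
lengths = torch.tensor([80, 55])    # actual segment durations in frames
embeddings = encoder(feats, lengths)
print(embeddings.shape)             # torch.Size([2, 128])

# Query-by-example search: rank archive segments by cosine similarity
# to the query's embedding (here, one query against a one-item archive).
query, archive = embeddings[0], embeddings[1:]
scores = torch.nn.functional.cosine_similarity(query.unsqueeze(0), archive)

In the multilingual transfer setting the paper studies, such an encoder would be trained with supervision on the well-resourced languages and then applied unchanged to segments from the zero-resource target language.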

