Transfer Learning from Audio-Visual Grounding to Speech Recognition

by Wei-Ning Hsu, et al.

Transfer learning aims to reduce the amount of data required to excel at a new task by reusing knowledge acquired from related tasks. This paper proposes a novel transfer learning scenario that distills robust phonetic features from grounding models, which are trained to determine whether an image and a spoken caption are semantically correlated, without using any textual transcripts. Because the semantics of speech are largely determined by its lexical content, grounding models learn to preserve phonetic information while discarding uncorrelated factors such as speaker and channel. To study the properties of features distilled from different layers, we use each layer's features separately as input to train a speech recognition model. Empirical results demonstrate that layers closer to the input retain more phonetic information, while later layers exhibit greater invariance to domain shift. Moreover, whereas most previous studies include the speech recognition training data when training the feature extractor, our grounding models are never trained on any of that data, suggesting broader applicability to new domains.
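The layer-wise feature distillation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the "speech branch" is a stand-in stack of random affine + ReLU layers, and the input frames are random placeholders for log-mel filterbank features. The idea shown is simply that each layer's activations can be tapped and used as alternative input representations for a downstream recognizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical speech branch of an audio-visual grounding model:
# a stack of affine + ReLU layers. Weights are random stand-ins,
# not trained grounding-model parameters.
layer_dims = [40, 256, 256, 256]  # 40-dim filterbank input, three hidden layers
weights = [rng.standard_normal((d_in, d_out)) * 0.1
           for d_in, d_out in zip(layer_dims[:-1], layer_dims[1:])]

def encode_with_taps(frames):
    """Run frames through the encoder, recording every layer's output."""
    taps = []
    h = frames
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # affine + ReLU
        taps.append(h)
    return taps

# 100 frames of 40-dim features (random placeholder audio).
frames = rng.standard_normal((100, 40))
taps = encode_with_taps(frames)

# Features "distilled" from layer k become the recognizer's input;
# the study compares speech recognizers trained on different k.
k = 1
asr_inputs = taps[k]
print(asr_inputs.shape)  # (100, 256)
```

In this setup, comparing recognizers trained on `taps[0]`, `taps[1]`, and `taps[2]` mirrors the paper's probe of which layers retain phonetic detail versus domain invariance.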



