Embodied Self-supervised Learning by Coordinated Sampling and Training

06/20/2020
by Yifan Sun, et al.

Self-supervised learning can significantly improve the performance of downstream tasks; however, the dimensions of the learned representations normally lack explicit physical meanings. In this work, we propose a novel self-supervised approach to solving inverse problems by employing the corresponding physical forward process, so that the learned representations have explicit physical meanings. The proposed approach works in an analysis-by-synthesis manner, learning an inference network by iteratively sampling and training. At the sampling step, given observed data, the inference network is used to approximate the intractable posterior, from which we sample input parameters and feed them to the physical process to generate data in the observational space; at the training step, the same network is optimized on the sampled paired data. We demonstrate the feasibility of the proposed method by tackling the acoustic-to-articulatory inversion problem, inferring articulatory information from speech. Given an articulatory synthesizer, an inference model can be trained entirely from scratch with random initialization. Our experiments show that the proposed method converges steadily and that the network learns to control the articulatory synthesizer to speak like a human. We also demonstrate that trained models generalize well to unseen speakers and even new languages, and that performance can be further improved through self-adaptation.
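
To make the coordinated sampling-and-training loop concrete, below is a minimal sketch in PyTorch under simplifying assumptions: the articulatory synthesizer is replaced by a toy black-box forward process, the approximate posterior is a diagonal Gaussian, and the names (InferenceNet, forward_process, the dimensions, the loss) are illustrative stand-ins rather than the authors' implementation.

```python
# Minimal sketch of the sampling/training loop described in the abstract.
# All shapes, names, and the toy forward process are assumptions for illustration.
import torch
import torch.nn as nn

OBS_DIM, PARAM_DIM = 80, 12                  # e.g. spectral frame vs. articulatory parameters
FIXED_MIX = torch.randn(PARAM_DIM, OBS_DIM)  # fixed weights for the stand-in "physics"

def forward_process(params):
    """Stand-in for the physical forward process (the articulatory synthesizer):
    maps sampled parameters to observations; treated as a black box (no gradients)."""
    with torch.no_grad():
        return torch.tanh(params @ FIXED_MIX)

class InferenceNet(nn.Module):
    """Approximates the posterior over physical parameters given an observation."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU())
        self.mu = nn.Linear(128, PARAM_DIM)        # posterior mean
        self.log_std = nn.Linear(128, PARAM_DIM)   # posterior log-std

    def forward(self, obs):
        h = self.backbone(obs)
        return self.mu(h), self.log_std(h)

net = InferenceNet()                              # random initialization, trained from scratch
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
observed = torch.randn(256, OBS_DIM)              # placeholder for real speech features

for step in range(1000):
    # Sampling step: approximate the posterior for the observed data,
    # sample input parameters, and push them through the forward process.
    with torch.no_grad():
        mu, log_std = net(observed)
        sampled_params = mu + log_std.exp() * torch.randn_like(mu)
        synthetic_obs = forward_process(sampled_params)

    # Training step: optimize the same network on the sampled (observation, parameter) pairs.
    mu_hat, _ = net(synthetic_obs)
    loss = ((mu_hat - sampled_params) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the actual method the forward process would be the articulatory synthesizer, the observations real speech features, and the objective the paper's own training loss; the sketch only illustrates how the sampling and training steps share a single inference network.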

