Semi-supervised Learning for Singing Synthesis Timbre

11/05/2020
by Jordi Bonada, et al.

We propose a semi-supervised singing synthesizer that can learn new voices from audio data alone, without annotations such as phonetic segmentation. Our system is an encoder-decoder model with two encoders, linguistic and acoustic, and one (acoustic) decoder. In the first step, the system is trained in a supervised manner on a labelled multi-singer dataset. Here, we ensure that the embeddings produced by the two encoders are similar, so that the model can later be used with either acoustic or linguistic input features. To learn a new voice in an unsupervised manner, the pretrained acoustic encoder is used to train a decoder for the target singer. Finally, at inference time, the pretrained linguistic encoder is combined with the decoder of the new voice to produce acoustic features from linguistic input. We evaluate our system with a listening test and show that the results are comparable to those obtained with an equivalent supervised approach.
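The three-stage pipeline described in the abstract can be sketched in a few lines. The sketch below is purely illustrative: the layer sizes, the random linear "encoders", and the squared-error term used to align the two embedding spaces are all assumptions for demonstration, standing in for the paper's neural networks and training losses.

```python
# Toy sketch of the dual-encoder / single-decoder pipeline.
# Stage 1: align linguistic and acoustic embeddings (supervised, multi-singer).
# Stage 2: train a new decoder from audio only, via the frozen acoustic encoder.
# Stage 3: infer with the linguistic encoder + the new singer's decoder.
import random

random.seed(0)

def linear(in_dim, out_dim):
    """A stand-in 'network': a random linear map stored as a weight matrix."""
    return [[random.uniform(-0.1, 0.1) for _ in range(in_dim)]
            for _ in range(out_dim)]

def apply(w, x):
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def mse(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / len(a)

# Stage 1 (supervised): both encoders map into the same embedding space.
# Training would drive the agreement term below toward zero; here we
# only compute it once to show where it enters the loss.
ling_enc = linear(4, 3)   # linguistic/phonetic features -> embedding
ac_enc   = linear(6, 3)   # acoustic features -> embedding

linguistic_feats = [1.0, 0.0, 0.5, 0.2]
acoustic_feats   = [0.3, 0.1, 0.9, 0.4, 0.2, 0.7]

z_ling = apply(ling_enc, linguistic_feats)
z_ac   = apply(ac_enc, acoustic_feats)
embedding_match_loss = mse(z_ling, z_ac)

# Stage 2 (unsupervised, target singer): the frozen acoustic encoder
# supplies embeddings, so a new decoder can be trained from audio alone.
decoder = linear(3, 6)    # embedding -> acoustic features
recon = apply(decoder, z_ac)
recon_loss = mse(recon, acoustic_feats)

# Stage 3 (inference): because the embedding spaces were aligned in
# stage 1, the new decoder accepts the linguistic embedding directly.
output = apply(decoder, z_ling)
print(len(output))  # length of the generated acoustic feature vector
```

The key design point the sketch mirrors is that the decoder never sees which encoder produced its input: aligning the two embedding spaces in stage 1 is what lets a decoder trained only on acoustic embeddings (stage 2) be driven by linguistic input at inference (stage 3).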
