Singing is widely employed in most cultures as a means of entertainment and self-expression; it is also an important way to convey linguistic information. Singing voice conversion, the task of converting a song sung by a source singer into the voice of a target singer, has many practical applications. For instance, a user can sing a song and then replace his or her voice with another person's voice, a fun and creative way to generate unique, collaborative content. Conversely, a user can appear to be singing a song by replacing the original singer's voice with his or her own and sharing the result on social media platforms.
Singing voice conversion and conventional speech voice conversion are similar tasks. Both need to divide content into person-dependent and person-independent parts: they switch the person-dependent content from source to target while retaining the person-independent content. However, in speech voice conversion, the manner of speaking (including the speech pattern, pitch, dynamics, duration of words, etc.) carries important information about the speaker. The manner of speaking therefore belongs to the person-dependent content and needs to be modeled and changed from the source speaker to the target speaker. In singing voice conversion, on the other hand, the manner of singing is primarily determined by the song itself. Consequently, the manner of singing should be treated as person-independent content and preserved through the conversion; only characteristics of voice identity, such as timbre, are person-dependent and need to be replaced.
Various singing voice conversion methods have been proposed to convert a singing voice from one singer to another, and parallel data is generally required to model the conversion. While voice conversion has recently gained popularity, typical voice conversion training likewise requires parallel data. Since parallel speech data is often difficult to obtain, various techniques for non-parallel voice conversion training have been proposed. Experimental results show that the performance of these techniques is inferior to that of voice conversion systems trained on parallel data, an outcome that may be explained by the difficulty of accurately performing alignment on non-parallel data. More recently, a newly proposed approach mapped electromagnetic articulography (EMA) features to speech for a foreign accent conversion task. Other authors proposed using Phoneme State Posterior Probabilities (PSPP) for speaker-independent content modeling and lip motion animation, and proposed using Phonetic PosteriorGrams (PPG) to encode speaker-independent content and map this feature to speech for voice conversion.
In this paper, we propose a parallel-data-free, many-to-one technique that uses phonetic posteriors as the principal person-independent content for singing voice conversion. To our knowledge, this is the first study to train singing voice conversion models on non-parallel data. In the training stage, we use only unlabeled target speech data, which is relatively easy to obtain. We first decode the speech data into phonetic posterior probability sequences using a robust Automatic Speech Recognition (ASR) engine. These sequences capture only the content of the speech data and carry no speaker identity information. From the same speech data we also extract acoustic features via parameter analysis; these features contain both the speech content and the speaker characteristics needed to reconstruct the speech via a vocoder. The phonetic posteriors and acoustic features are used as input and output, respectively, to train a Recurrent Neural Network (RNN) with a Deep Bidirectional Long Short-Term Memory (DBLSTM) structure. As a result, this process builds a mapping from the person-independent phonetic posteriors to acoustic features that contain both person-dependent and person-independent content. In the first step of the conversion stage, a phonetic posterior sequence encoding the person-independent content is generated by decoding the singing voice through the ASR. In the second step, the trained DBLSTM-RNN maps these phonetic posteriors to the acoustic features of the target singing voice. F0 and aperiodicity are extracted from the original singing voice and used together with the mapped acoustic features to reconstruct the target singing voice through a vocoder.
The paper is organized as follows. In Section 2, we describe the Deep Neural Network (DNN) model for singing voice recognition. In Section 3, we describe the DBLSTM-RNN structure used to model the mapping between encoded phonetic posteriors and acoustic features. Section 4 offers an in-depth description of the training and conversion stages of the proposed method. We then describe our experimental setup and present subjective evaluation results in Section 5, and conclude our work in Section 6. A selection of samples is available at https://sites.google.com/site/singingvoiceconversion2018/.
2 Deep Neural Network acoustic model for singing content recognition
Prior work has proposed using various classifiers such as a Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and Gaussian Mixture Model (GMM) for phoneme recognition on a singing voice, observing that recognition accuracy improved with harmonics analysis. Other authors proposed the use of a traditional HMM/GMM acoustic model with MLLR adaptation to enable automatic recognition of lyrics expressed in a singing voice. Recently, Deep Neural Network (DNN) based acoustic modeling has demonstrated superior performance compared to these traditional methods and has become the state-of-the-art technology in speech recognition. It is therefore of interest to apply this technology to transcribe a singing voice into a phonetic posterior probability sequence in order to encode its content.
As a brief review, a DNN is a multi-layer perceptron (MLP) with many hidden layers. Each hidden layer computes the activations of conditionally independent units given the activations of the previous layer. If we denote the input vector of hidden layer $l$ as $x_l$, then the output vector $y_l$ of the layer can be computed as

$$y_l = \sigma_l(W_l x_l + b_l)$$

where $W_l$ and $b_l$ are the weight matrix and bias vector of layer $l$, and $\sigma_l$ is the predefined activation function. Choosing $\sigma_l$ as a nonlinear function allows the network to model nontrivial problems, and multiple layers are stacked in order to model complex signals such as speech.

To model the probabilities for the phoneme class vector $p$, the softmax activation function is predominantly used in the last layer of a DNN:

$$p_i = \frac{\exp(a_i)}{\sum_{j} \exp(a_j)}$$

Since the vector $p$ sums to one and all its elements are between zero and one, it represents a categorical probability distribution.
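As a toy illustration, the hidden-layer computation and softmax output described above can be sketched in NumPy. The layer sizes follow the ASR configuration given in Section 5.1 (221-dimensional input, 2048 hidden units, 39 phoneme classes); the weights here are random stand-ins, not trained parameters:

```python
import numpy as np

def hidden_layer(x, W, b, activation=np.tanh):
    """One fully connected layer: y = activation(W x + b)."""
    return activation(W @ x + b)

def softmax(a):
    """Numerically stable softmax producing a categorical distribution."""
    e = np.exp(a - np.max(a))
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.standard_normal(221)                      # stacked-MFCC input frame
W1, b1 = rng.standard_normal((2048, 221)) * 0.01, np.zeros(2048)
W2, b2 = rng.standard_normal((39, 2048)) * 0.01, np.zeros(39)

h = hidden_layer(x, W1, b1)                       # hidden activations
p = softmax(W2 @ h + b2)                          # 39 phoneme posteriors
```

The resulting `p` is one frame of the phonetic posterior sequence: non-negative and summing to one.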
3 Deep Bidirectional LSTM - Recurrent Neural Network (RNN)
A Long Short-Term Memory (LSTM) cell was used as the memory block. Inspired by the recent success of deep learning models, a DBLSTM with multiple stacked layers was introduced and has yielded good performance in speech recognition and voice conversion. We first offer a brief review of this approach:
Given an input sequence, the recurrent neural network calculates the hidden vector sequence and output sequence by iterating layer by layer, as illustrated in Figure 1. Each layer contains both a forward and a backward pass, and multiple layers are generally stacked between the input and output layers. Each cell represents a memory block, here a Long Short-Term Memory (LSTM) block; an illustration of the LSTM block is presented in Figure 2. This architecture uses purpose-built memory cells to store information and exploit long-range context, and is very powerful for representing the mapping between encoded phoneme sequences and the corresponding acoustic features.
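The memory block described above can be sketched as a single step of a standard LSTM cell in NumPy. This is the textbook formulation (input, forget, cell, and output gates), which may differ in minor details from the exact block variant in the cited figure; the dimensions are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM memory block.
    W, U, b hold the input, forget, cell, and output gate parameters
    stacked along the first axis (4*H rows)."""
    H = h_prev.size
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    g = np.tanh(z[2*H:3*H])      # candidate cell state
    o = sigmoid(z[3*H:4*H])      # output gate
    c = f * c_prev + i * g       # memory cell carries long-range context
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(1)
D, H = 39, 8                     # posterior input dim, toy hidden size
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.standard_normal((5, D)):   # a short posterior sequence
    h, c = lstm_step(x, h, c, W, U, b)
```

A bidirectional layer runs this recurrence once forward and once backward over the sequence and concatenates the two hidden streams; stacking several such layers gives the DBLSTM.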
4 Singing Voice Conversion
In this section we describe the technical steps of our singing voice conversion method. Since our method is parallel data free, we can use any speech data from the target to train a model in the training stage; this data is fully independent of the singing voice to be converted. We call this process the "many-to-one method" because it can be applied to any source singing voice after the model is trained.
4.1 Training stage
In the training stage, we train a DBLSTM model to map the encoded phonetic posteriors to the target speaker's acoustic features; a separate model is trained for each target singer. As shown in Figure 3, we use only unlabeled target voice data for training. For each utterance, a speaker-independent DNN phoneme acoustic model is first used to extract the posterior probabilities of each phoneme at each frame, and this information is encoded as a matrix representing the content of the given segment of voice data. Second, an acoustic feature parameter extraction tool is used to extract Mel Cepstral (MCEP) features. We collect a dataset of paired encoded content information and corresponding acoustic features and use it to train the mapping with the DBLSTM.
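The data preparation described above amounts to building frame-aligned (posterior, MCEP) pairs from each utterance. The sketch below uses toy stand-in functions (hypothetical; the paper uses a DNN ASR for posteriors and a vocoder analysis tool for MCEP extraction) to show that the two feature streams are paired frame by frame:

```python
import numpy as np

rng = np.random.default_rng(2)

def decode_posteriors(frames):
    """Toy stand-in for the ASR: one 39-dim posterior per frame."""
    logits = rng.standard_normal((len(frames), 39))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def extract_mcep(frames):
    """Toy stand-in for vocoder analysis: 40-dim MCEP per frame."""
    return rng.standard_normal((len(frames), 40))

# Two fake utterances of 120 and 80 frames.
utterances = [rng.standard_normal(n) for n in (120, 80)]
pairs = []
for utt in utterances:
    X = decode_posteriors(utt)   # person-independent content (network input)
    Y = extract_mcep(utt)        # speaker-dependent acoustics (network target)
    assert len(X) == len(Y)      # frame-aligned by construction
    pairs.append((X, Y))
```

Because both feature streams come from the same frames of the same utterance, no explicit alignment step is needed, which is what makes the non-parallel training possible.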
4.2 Conversion stage
With a trained DBLSTM model, we can map the person-independent content to the target singer's acoustic features and use them to synthesize a new singing voice. As shown in Figure 4, given a specific singing voice clip, a robust automatic speech recognition (ASR) engine is used to generate the encoded phonetic posteriors that contain the singer-independent content of the clip. This phonetic posterior sequence is then mapped to the corresponding MCEP acoustic features of the target singer using the DBLSTM model trained for that singer in the training stage. From the source singing voice clip, F0 and aperiodicity information is also extracted using a parameter extraction tool and is kept unchanged. Taking these three pieces of information, a vocoder is used to synthesize a singing voice that carries the voice identity of the target speaker while retaining the lyrics and melody performed by the original singer.
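The conversion stage can be sketched as follows. All components are hypothetical stand-ins (the trained DBLSTM is replaced by a random linear map, and synthesis is omitted); the point is the data flow: MCEPs come from the model, while F0 and aperiodicity pass through unchanged from the source clip:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 100  # frames in the source singing clip

# Singer-independent content from the ASR: one posterior row per frame.
asr_posteriors = rng.dirichlet(np.ones(39), size=T)

def dblstm_map(P):
    """Stand-in for the trained DBLSTM: posteriors -> target MCEP."""
    return P @ rng.standard_normal((39, 40))

# Extracted from the SOURCE clip and kept unchanged (toy values).
source_f0 = np.abs(rng.standard_normal(T)) * 100 + 100
source_ap = rng.random((T, 5))

target_mcep = dblstm_map(asr_posteriors)

# A vocoder would synthesize audio from (f0, aperiodicity, MCEP);
# here we only check the three streams are frame-aligned.
assert source_f0.shape[0] == source_ap.shape[0] == target_mcep.shape[0] == T
```

Keeping F0 and aperiodicity from the source is what preserves the melody and singing style, while the mapped MCEPs carry the target speaker's timbre.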
5 Experiments
In this section we describe the technical details of our experiments and the subjective evaluation of our results.
5.1 Automatic Speech Recognition (ASR) module
In our experiments, the TIMIT database is used to train the DNN phoneme recognition module. The training set consists of 3696 utterances from 462 speakers, covering 39 phonemes. The acoustic features are 13 MFCCs extracted with a 5 ms shift. A context of 17 frames of features, processed with mean-variance normalization, is used as the DNN input, giving an input dimension of 221. The ASR DNN contains four fully connected layers of 2048 hidden nodes each, and its output layer covers the 39 phoneme classes. TNet is used to conduct the training. The frame accuracy on the validation set reaches 70.7%.
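The 221-dimensional input arises from stacking the 17-frame context of 13 MFCCs. A minimal sketch of such context stacking is below; the edge-padding scheme (repeating the first/last frame) is an assumption, as the paper does not specify how boundaries are handled:

```python
import numpy as np

def stack_context(mfcc, context=17):
    """Stack a symmetric window of `context` frames around each center
    frame, padding the edges by repeating the first/last frame."""
    half = context // 2
    T, D = mfcc.shape
    padded = np.vstack([np.repeat(mfcc[:1], half, axis=0),
                        mfcc,
                        np.repeat(mfcc[-1:], half, axis=0)])
    return np.stack([padded[t:t + context].ravel() for t in range(T)])

mfcc = np.random.default_rng(4).standard_normal((50, 13))  # 13 MFCCs/frame
X = stack_context(mfcc)   # each row is a 13 * 17 = 221-dim DNN input
```

Mean-variance normalization would then be applied to these stacked vectors before they are fed to the DNN.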
5.2 DBLSTM module
In our proposed method, we train a DBLSTM model for each target voice. Here we use the CMU Arctic corpus for training, and the female voice SLT is used for the experiments in this paper. There are a total of 1132 utterances; only the speech data from the corpus is used. We choose Mel Cepstral features (MCEP) as the acoustic features to be modeled by the DBLSTM and set their dimension to 40; the features are extracted using WORLD. The input dimension of the DBLSTM is 39, representing the 39 phonemes. We train two models: 128N, trained with 400 utterances and containing 4 stacked layers with 128 hidden nodes each; and 512N, trained with all available utterances and containing 4 stacked layers with 512 hidden nodes each. The networks are trained using CURRENNT.
5.3 Subjective evaluation
Our subjective evaluation is performed by 12 people listening to the original singing content and the converted singing content, together with the target speaker's voice. The subjects are not professionals in singing voice conversion.
Several different types of source singing voices are used; the samples are randomly picked from the MIR-1K dataset, including S01 (female) and S04 (male), who both sing in Mandarin, S02 (male), who raps in English, and S03 (male), a general singing voice in English. We evaluated MOS on naturalness and similarity (scoring criteria shown in Table 1), on a scale from 1 to 5.
|1||same as the original|
|2||similar to the original|
|4||similar to the target|
|5||same as the target|
As shown in Figure 5, the average MOS score of the 128N model is 3.2, compared to 3.4 for the 512N model, indicating that a more complex model with more training data generates slightly better naturalness. The average similarity score of the 128N model is 3.3 versus 3.4 for the 512N model, showing that our proposed approach is able to shift the voice identity towards the target speaker. Overall, the 512N model outperforms the 128N model, confirming that more data and a more complex model can help improve performance.
We found that the rap clip (S02) achieves the lowest MOS score; a possible explanation is that the rapid change of speech content due to the fast speaking rate drags down the ASR performance. We also found that the male singers in our experiment tend to have higher similarity scores, which could be due to the larger difference between these sources and the target speaker, a female voice.
6 Conclusion
We propose a novel parallel-data-free, many-to-one voice conversion system for singing voice conversion. A speaker-independent ASR is first used to extract a phonetic posterior sequence representing the person-independent content, and a DBLSTM model maps this person-independent content to the target speaker's acoustic features. These acoustic features are used to synthesize the target singing voice via a vocoder, together with the F0 and aperiodicity information extracted from the source content.
To our knowledge, this is the first attempt to use non-parallel data to train a model for singing voice conversion. Additionally, subjective evaluation reveals that the proposed method is effective without using parallel data.
For future enhancement, we would like to collect more singing voice data for model adaptation in speech recognition to further improve performance. In future work, we also hope to explore neural vocoders such as WaveNet, a recently proposed method that demonstrates superior performance compared to traditional vocoders.
-  Kazuhiro Kobayashi, Tomoki Toda, Graham Neubig, Sakriani Sakti, and Satoshi Nakamura, “Statistical singing voice conversion based on direct waveform modification with global variance,” Proc. INTERSPEECH, pp. 2754–2758, 2015.
-  Kazuhiro Kobayashi, Tomoki Toda, Graham Neubig, Sakriani Sakti, and Satoshi Nakamura, “Statistical Singing Voice Conversion with direct Waveform modification based on the Spectrum Differential,” Interspeech 2014, 2014.
-  Fernando Villavicencio and Jordi Bonada, “Applying Voice Conversion To Concatenative Singing-Voice Synthesis,” Interspeech 2010, 2010.
-  Y. Stylianou, O. Cappe, and E. Moulines, “Continuous probabilistic transform for voice conversion,” IEEE Transactions on Speech and Audio Processing, 1998.
-  T. Toda, A. W. Black, and K. Tokuda, “Voice conversion based on maximum-likelihood estimation of spectral parameter trajectory,” IEEE Transactions on Audio, Speech, and Language Processing, 2007.
-  Z. Wu, T. Virtanen, T. Kinnunen, E. S. Chng, and H. Li, “Exemplar-based voice conversion using non-negative spectrogram deconvolution,” Proc. 8th ISCA Speech Synthesis Workshop, 2013.
-  T. Nakashika, R. Takashima, T. Takiguchi, and Y. Ariki, “Voice conversion in high-order eigen space using Deep Belief Nets,” Proc. Interspeech, 2013.
-  D. Erro, A. Moreno, and A. Bonafonte, “INCA algorithm for training voice conversion systems from nonparallel corpora,” IEEE Transactions on Audio, Speech, and Language Processing, 2010.
-  Sandesh Aryal and Ricardo Gutierrez-Osuna, “Articulatory-based conversion of foreign accents with deep neural networks,” in Sixteenth Annual Conference of the International Speech Communication Association, 2015.
-  Yilong Liu, Feng Xu, Jinxiang Chai, Xin Tong, Lijuan Wang, and Qiang Huo, “Video-audio driven real-time facial animation,” ACM Transactions on Graphics (TOG), vol. 34, no. 6, pp. 182, 2015.
-  Lifa Sun, Kun Li, Hao Wang, Shiyin Kang, and Helen Meng, “Phonetic posteriorgrams for many-to-one voice conversion without parallel data training,” ICME, 2016.
-  Matthias Gruhne, Konstantin Schmidt, and Christian Dittmar, “Phoneme recognition in popular music,” Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pp. 369–370, 2007.
-  A Mesaros, Singing Voice Recognition for Music Information Retrieval, vol. 1064, 2012.
-  Annamaria Mesaros and Tuomas Virtanen, “Adaptation of a speech recognizer for singing voice,” European Signal Processing Conference, , no. 1, pp. 1779–1783, 2009.
-  Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012.
-  Alex Graves and Jürgen Schmidhuber, “Framewise phoneme classification with bidirectional lstm and other neural network architectures,” Neural Networks, vol. 18, no. 5, pp. 602–610, 2005.
-  S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
-  Alex Graves, Navdeep Jaitly, and Abdel Rahman Mohamed, “Hybrid speech recognition with Deep Bidirectional LSTM,” 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2013 - Proceedings, pp. 273–278, 2013.
-  K. Vesely, L. Burget, and F. Grezl, “Parallel training of neural networks for speech recognition,” Proc. of Interspeech, 2010.
-  Masanori Morise, Fumiya Yokomori, and Kenji Ozawa, “World: a vocoder-based high-quality speech synthesis system for real-time applications,” IEICE TRANSACTIONS on Information and Systems, vol. 99, no. 7, pp. 1877–1884, 2016.
-  F. Weninger, J. Bergmann, and B. Schuller, “Introducing CURRENNT: the Munich open-source CUDA RecurREnt Neural Network Toolkit,” Journal of Machine Learning Research, 2015.
-  Chao-Ling Hsu and Jyh-Shing Roger Jang, “On the improvement of singing voice separation for monaural recordings using the mir-1k dataset,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 2, pp. 310–319, 2010.
-  A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu, “WaveNet: A generative model for raw audio,” arXiv preprint, 2016.