Autoencoder Based Architecture For Fast & Real Time Audio Style Transfer

12/18/2018 · by Dhruv Ramani, et al.

Recently, there has been great interest in the field of audio style transfer, where a stylized audio is generated by imposing the style of a reference audio on the content of a target audio. We improve on current approaches, which use neural networks to extract the content and the style of the audio signal, and propose a new autoencoder based architecture for the task. This network generates a stylized audio for a content audio in a single forward pass. The proposed network architecture proves advantageous in terms of both the quality of the audio produced and the time taken to train the network. The network is evaluated on speech signals to confirm the validity of our proposal.


1 Introduction

The task of artistic style transfer has been widely studied and implemented for generating stylized images. It provides the insight that the content and style representations of visual imagery are separable. Style transfer in images can be described as imposing the style extracted from a reference image onto the content of a target image. The seminal works of Gatys et al. [Gatys et al.2016] and Johnson et al. [Johnson et al.2016] show the use of convolutional neural networks (CNNs) for the task. CNNs prove advantageous because of the representations learned in their deeper layers: these deep features can be used to represent the content and the style of an image separately. This has led to a great increase in research on style transfer that incorporates neural networks to transfer the "style" of one image (e.g., a painting) to another (e.g., a photograph).

The task of audio style transfer has recently been gaining popularity as an area of research because of its wide applications in audio editing and sound generation. The meaning of style and content for an audio signal differs from that for an image. The current consensus is that style refers to the speaker's identity, accent and intonation, while content refers to the linguistic information encoded in the signal, such as phonemes and words. Over time, various methods have been proposed which apply models used for image style transfer to audio. This involves converting the raw audio into a spectrogram and using neural networks to extract the required features. Waveforms are then generated which match the high level network activations of a content signal while simultaneously matching low level statistics computed from lower level activations of a style signal. In this paper, we propose a novel architecture which uses similar approaches to stylize an audio. Unlike previously proposed methods, our architecture stylizes an audio in a single network pass and is thus extremely useful for real time audio style transfer. The network architecture is carefully crafted to ensure faster training time and low computational usage. In this paper, we explore previous methods which have been proposed for artistic style transfer in images and audio, propose a new architecture and analyze its performance.

Figure 1:

A framework for audio style transfer in which a single convolutional autoencoder, trained on spectrograms of speech signals together with a single style signal, is used to generate stylized audio. The signal is pre-processed by applying the Short-Time Fourier Transform (STFT) to the raw input audio to generate an audio spectrogram. This spectrogram is passed through the transformation network to generate the stylized spectrogram. To retrieve the audio from the stylized spectrogram, the Griffin-Lim algorithm is used to convert it into the required stylized audio.

2 Related Work

The work of Gatys et al. [Gatys et al.2016] shows the advantageous use of convolutional neural networks (CNNs) for stylizing a target image. In it, the content of an image is conceptualized as a high level representation obtained from the deeper layers of a CNN trained for image classification. The style representation of an image is taken as a linear combination of the Gram matrices of the feature maps of different layers of the same network.

Let the filter response tensor of the l-th layer of the network be F^l(s), where s is the style image and F^l_ik(s) denotes the activation of the i-th filter at position k; then the Gram matrix representation of this layer is given as:

G^l_ij(s) = Σ_k F^l_ik(s) · F^l_jk(s)    (1)
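As a concrete illustration, a minimal PyTorch sketch of this Gram matrix computation is given below; batching and the (channels × positions) flattening are the usual conventions and are assumptions here, not details taken from Gatys et al.

```python
import torch

def gram_matrix(features):
    """Gram matrix of a convolutional feature map, as in Eq. (1).

    features: tensor of shape (batch, channels, height, width).
    Returns a (batch, channels, channels) tensor whose (i, j) entry is the
    inner product between the flattened responses of filters i and j.
    """
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)            # F^l: one row per filter response
    return torch.bmm(f, f.transpose(1, 2))    # G^l_ij = sum_k F^l_ik * F^l_jk
```

In practice the Gram matrix is often additionally normalized by the number of elements in the feature map, so that layers of different resolution contribute comparably to the style loss.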

The above method was slow and had to be iteratively optimized for each content-style pair to obtain a single stylized output. Johnson et al Johnson:16 proposed a transformation network based architecture. In this, they train a transformation network on content images, imposing the style of a single style image to generate the stylized output. The network learns a mapping from the content images to stylized images, which are biased towards a single style image. The stylized outputs are generated in a single forward pass of the network, hence, this method has aptly been named fast neural style transfer and is extremely useful for real time style transfer applications.

The loss for the network is defined by taking into account measures of content and style similar to those defined by Gatys et al. [Gatys et al.2016]. A VGG network [Simonyan and Zisserman2015], pre-trained for image classification, is used for extracting content and style from the respective images. The content and style representations are obtained in the same way as mentioned before. The loss function captures both high and low level information of the image. The method also accounts for a total variation loss, which improves spatial smoothness in the output image. The total loss is given by,

L_total(ŷ, c, s) = α · L_content(ŷ, c) + β · L_style(ŷ, s) + γ · L_TV(ŷ)    (2)

where ŷ is the output of the transformation network, c is the content image and s is the style image. This loss is minimized by backpropagation using Stochastic Gradient Descent (SGD) as the optimizer.
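The total variation term mentioned above can be sketched as the summed absolute differences between neighbouring pixels of the output. The PyTorch snippet below is one such sketch, assuming a (batch, channel, height, width) tensor layout; it is an illustration, not code from either paper.

```python
import torch

def total_variation_loss(y):
    """Total variation regularizer encouraging spatial smoothness.

    y: output image (or spectrogram) of shape (batch, channels, height, width).
    """
    tv_h = torch.abs(y[:, :, 1:, :] - y[:, :, :-1, :]).sum()  # row-wise differences
    tv_w = torch.abs(y[:, :, :, 1:] - y[:, :, :, :-1]).sum()  # column-wise differences
    return tv_h + tv_w
```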

The work of Ulyanov [Ulyanov et al.2016] on audio style transfer used a similar optimization framework to Gatys et al. [Gatys et al.2016], but instead of a deep pre-trained neural network they opted for a shallow network (a single layer with 4096 random filters). The output spectrogram is initialized as random noise and then iteratively optimized until the loss between the features of the output audio and those of the content and style audio is minimized. However, the results were limited and the preservation of content and style was not of a high degree. The work by Grinstein et al. [Grinstein et al.2018] adopted a similar notion of style; however, since the original audio itself was being modified, only the style loss was considered and the content loss was omitted.

So far, audio style transfer has been explored through iterative optimization based approaches using neural networks, which opens up scope for research into real time approaches that can generate the stylized audio in a single forward pass of a feed-forward neural network.

Figure 2:

An autoencoder based architecture for the transformation network and the loss network. The number of filters and the kernel size for each layer appear above and below the layer blocks respectively in the diagram.

3 Problem Definition and Formulation

We aim to solve the problem of neural audio style transfer using a transformation network and a loss network.

Given a content audio c and a style audio s, our task is to find the audio y which satisfies the equation,

y = argmin_y′ [ α · MSE(Con(y′), Con(c)) + β · MSE(Sty(y′), Sty(s)) ]    (3)

Here, Con(·) represents the content of an audio and Sty(·) represents the style of an audio. α and β are parameters which signify the amount of content or style we require in our output audio y. A higher value of α relative to β would result in a predominance of content in the audio y.

4 Our Proposed Architecture

The general framework we propose to adopt for the purpose of real time audio style transfer is illustrated in Figure 1.

The raw speech signal contains all the information in the temporal domain. The signal is pre-processed so that it can later be reconstructed using the Griffin-Lim algorithm [Griffin and Lim1984]. Initially, the Short-Time Fourier Transform (STFT) is applied to bring the raw audio from the time domain to the frequency domain. This helps us understand the frequency ranges that the signal emphasizes; this relative emphasis helps in shaping high level features like phonemes or emotion. The frequency domain signal is then converted into the magnitude-spectral domain by taking the magnitude of the result of the STFT. The magnitude is chosen over the phase as it provides richer information about the high level features and allows easier reconstruction of the signal. The spectrum of this signal is obtained by taking the log of the magnitude, with time as the horizontal axis and frequency as the vertical axis. The frequency is transformed to the log scale to visualize features related to the human perception of natural sound. The obtained spectrum of the speech signal is known as an audio spectrogram. It can be thought of as an image representation of the audio signal, except that translation along the frequency axis can change high-level features like emotion while leaving features like the words spoken unchanged. Mathematically, let x[n] be the input raw audio signal. The spectrogram of the signal for a window function w[n] is given by,

S_x(m, ω) = log |STFT{x}(m, ω)|    (4)

where the function STFT is given by,

STFT{x}(m, ω) = Σ_{n=−∞}^{∞} x[n] · w[n − m] · e^(−jωn)    (5)
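As a rough sketch of this pre- and post-processing pipeline, the snippet below uses librosa to compute a log-magnitude spectrogram and to invert a (possibly stylized) spectrogram with the Griffin-Lim algorithm. The FFT size, hop length and number of Griffin-Lim iterations are illustrative assumptions; the paper does not report these settings.

```python
import numpy as np
import librosa

# Hypothetical STFT parameters chosen for illustration only.
N_FFT = 1024
HOP_LENGTH = 256

def audio_to_spectrogram(path, sr=16000):
    """Load raw audio, apply the STFT and keep the log-magnitude spectrogram."""
    x, _ = librosa.load(path, sr=sr)                      # time-domain signal at 16 kHz
    stft = librosa.stft(x, n_fft=N_FFT, hop_length=HOP_LENGTH)
    return np.log1p(np.abs(stft))                         # magnitude-spectral domain, log-compressed

def spectrogram_to_audio(log_mag):
    """Invert a log-magnitude spectrogram back to a waveform with Griffin-Lim."""
    mag = np.expm1(log_mag)                               # undo the log compression
    return librosa.griffinlim(mag, n_iter=60,
                              hop_length=HOP_LENGTH, n_fft=N_FFT)
```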

A network architecture similar to that proposed by Johnson et al. [Johnson et al.2016] is adopted. A transformation network T, parameterized by its weights and biases, is utilized to find a mapping from the input space of content audio spectrograms to the output space of stylized audio spectrograms. To calculate the loss, a loss network L, parameterized by its own weights and biases, is used to extract the content from the respective spectrograms. Subsequently, the same loss network is used to extract the style from the spectrograms. The loss network is pretrained to extract a hierarchy of representations from the audio spectrogram, incorporating both low level and high level features.

Figure 3: A framework for training the spectrogram transformation network T, and the loss calculation used for backpropagation through T.

4.1 Loss Network

We adopt an encoder-decoder architecture [Perez et al.2017], illustrated in Figure 2, for the loss network. It consists of 4 convolutional layers and 4 transposed convolutional layers. A ReLU non-linearity followed by Batch Normalization is applied to all the layers except the last. The network is treated as an autoencoder: it compresses the input spectrogram to a lower dimensional latent space and then tries to reconstruct the same input. As a consequence of this process, the encoder part of the autoencoder learns to capture the high level features of the input spectrogram and is able to represent them in lower dimensions, called the latent embedding. The decoder part is used to reconstruct the spectrogram from the latent embedding. The network is optimized with backpropagation to ensure that the reconstructed spectrogram is similar to the input.

The feature activation map of the latent embedding is used to model the content of a spectrogram, and a linear combination of the Gram matrices of the feature activation maps of the first, second and third convolutional layers is used to model the style of a spectrogram.
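A minimal PyTorch sketch of such an encoder-decoder is given below, assuming 2-D convolutions over single-channel spectrograms. The channel counts, kernel sizes and strides reported in Figure 2 are not reproduced here, so the values used are illustrative placeholders.

```python
import torch
import torch.nn as nn

class SpectrogramAutoencoder(nn.Module):
    """Encoder-decoder with 4 convolutional and 4 transposed convolutional
    layers; ReLU followed by Batch Normalization is applied after every layer
    except the last, as described in Section 4.1.  Channel counts, kernel
    sizes and strides are placeholders, not the values from Figure 2."""

    def __init__(self, channels=(1, 16, 32, 64, 128), kernel_size=3):
        super().__init__()
        enc = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            enc += [nn.Conv2d(c_in, c_out, kernel_size, stride=2, padding=1),
                    nn.ReLU(inplace=True),
                    nn.BatchNorm2d(c_out)]
        dec = []
        rev = channels[::-1]
        for i, (c_in, c_out) in enumerate(zip(rev[:-1], rev[1:])):
            dec += [nn.ConvTranspose2d(c_in, c_out, kernel_size, stride=2,
                                       padding=1, output_padding=1)]
            if i < len(rev) - 2:                  # no ReLU/BatchNorm on the last layer
                dec += [nn.ReLU(inplace=True), nn.BatchNorm2d(c_out)]
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        # x: (batch, 1, freq, time) log-magnitude spectrogram whose spatial
        # dimensions are assumed to be divisible by 16 (four stride-2 layers).
        z = self.encoder(x)                       # latent embedding (content representation)
        return self.decoder(z), z
```

Both the loss network and the transformation network instantiate this same module; only their training objectives differ.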

4.2 Transformation Network

We use the same encoder-decoder architecture, illustrated in Figure 2, for the transformation network. Instead of training the transformation network from scratch, pretrained weights from the loss network are used. This approach is advantageous because it utilizes the weights of a pretrained neural network which has already learned the distribution of audio spectrograms, and therefore the content representation does not have to be re-learned. The network is only optimized to accommodate the low level features of a single spectrogram of a given style, which need not be related to the samples used to train the loss network. This makes the training process comparatively faster while ensuring proper results in under one epoch of training. Using the same architecture also ensures homogeneity within the dimensions of the spectrogram. A framework for training this network is illustrated in Figure 3.
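As an illustrative sketch (the checkpoint file name is hypothetical), warm-starting the transformation network from the pretrained loss network and freezing the loss network could look as follows:

```python
import torch

# Both networks share the SpectrogramAutoencoder architecture sketched in Section 4.1.
loss_net = SpectrogramAutoencoder()
loss_net.load_state_dict(torch.load("loss_network_pretrained.pt"))  # hypothetical checkpoint
loss_net.eval()
for p in loss_net.parameters():
    p.requires_grad = False       # the loss network stays frozen during style training

transform_net = SpectrogramAutoencoder()
transform_net.load_state_dict(loss_net.state_dict())  # warm-start from pretrained weights
```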

An input spectrogram C is passed through the spectrogram transformation network T to generate an output spectrogram Y. The weights and biases of the loss network L are frozen. L is used to calculate the content of the output, Con(Y), the style of the output, Sty(Y), the content of the content spectrogram, Con(C), and the style of the style spectrogram, Sty(S). Since we want to preserve the content of the input spectrogram, the input spectrogram and C here are the same spectrogram. The loss is calculated as:

loss = α × MSE(Con(Y), Con(C)) + β × MSE(Sty(Y), Sty(S))

This loss is minimized by optimizing the weights and biases of T using backpropagation.
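A hedged PyTorch sketch of this loss computation is shown below. It reuses the SpectrogramAutoencoder and gram_matrix sketches from earlier sections; the conv_activations helper and the default values of α and β are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def conv_activations(encoder, x):
    """Collect the activations produced by each Conv2d layer of the encoder."""
    feats = []
    for layer in encoder:
        x = layer(x)
        if isinstance(layer, torch.nn.Conv2d):
            feats.append(x)
    return feats

def style_transfer_loss(transform_net, loss_net, content_spec, style_spec,
                        alpha=1.0, beta=1.0):
    """loss = alpha * MSE(Con(Y), Con(C)) + beta * MSE(Sty(Y), Sty(S))."""
    stylized, _ = transform_net(content_spec)          # Y = T(C)

    # Content term: latent embeddings of Y and C from the frozen loss network.
    _, z_y = loss_net(stylized)
    _, z_c = loss_net(content_spec)
    content_loss = F.mse_loss(z_y, z_c)

    # Style term: Gram matrices of the first three conv-layer activations.
    feats_y = conv_activations(loss_net.encoder, stylized)[:3]
    feats_s = conv_activations(loss_net.encoder, style_spec)[:3]
    style_loss = sum(F.mse_loss(gram_matrix(fy), gram_matrix(fs))
                     for fy, fs in zip(feats_y, feats_s))

    return alpha * content_loss + beta * style_loss
```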

5 Experiments

To train the proposed architecture on speech signals, we use the publicly available CSTR VCTK Corpus [Yamagishi and Junichi2012]. The VCTK corpus provides text labels for the speech and is widely used in text-to-speech synthesis. However, since we employ an autoencoder based architecture, the text labels aren't used. The corpus contains clean speech from 109 speakers, each reading out 400 sentences, the majority of whom have British accents. We downsampled the audio to 16 kHz for our convenience.

To convert the raw audio signal to a spectrogram, we apply the Short-Time Fourier Transform (STFT) to bring the speech utterance into the frequency domain. We then apply the log function to the magnitude of the signal to convert it into the magnitude-spectral domain. The signal is now in the form of an audio spectrogram. The loss network is trained on the spectrograms of audios from the VCTK corpus. We optimize the weights and biases using backpropagation to minimize the mean squared error between the reconstructed spectrogram and the provided input. The loss is minimized using the Adam [Kingma and Ba2015] optimizer with a learning rate of , a weight decay of and a batch size of . The activations of this network are used to represent the content and style of a signal, separately.
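A minimal sketch of this reconstruction training is given below; spectrogram_dataset and every hyperparameter value are placeholders, since the paper's exact learning rate, weight decay and batch size are not reproduced here.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

loss_net = SpectrogramAutoencoder()
# Placeholder hyperparameters, not the values used in the paper.
optimizer = torch.optim.Adam(loss_net.parameters(), lr=1e-3, weight_decay=1e-5)
loader = DataLoader(spectrogram_dataset, batch_size=16, shuffle=True)  # hypothetical dataset of VCTK log-magnitude spectrograms

for spec in loader:                    # spec: (batch, 1, freq, time)
    recon, _ = loss_net(spec)
    loss = F.mse_loss(recon, spec)     # reconstruction objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```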

While training the transformation network, the weights and biases of the loss network are kept frozen. We use the pre-trained weights of the loss network to initialize the transformation network for faster learning. The pre-processed signals from the corpus are used as content samples and a single style sample is used for training the transformation network. As a result, the network becomes biased towards generating stylized output spectrograms pertaining to a single style for any content spectrogram. The loss defined in Section 4.2 is minimized using the Adam [Kingma and Ba2015] optimizer, with a learning rate of and exponential decay rates of 0.999 and 0.99. The value of α in the loss function is taken as 100 and β is taken as . These values were chosen after extensive experimentation.

The models have been implemented using the PyTorch deep learning framework and trained on a single Nvidia GTX 1070 Ti GPU.

(a) The original content audio spectrogram.
(b) The style audio spectrogram.
(c) The stylized output audio spectrogram.
Figure 4: Audio spectrograms in magnitude-spectral domain

6 Results

The key finding is that low level statistical information from a style audio spectrogram, which is kept constant while the spectrogram transformation network is trained, can be transferred to a target spectrogram in a single forward pass of the network at test time, while preserving the content of the target. The qualitative results, in the form of spectrograms of the content utterance, the style utterance and the stylized output generated using this architecture, are shown in Figures 4(a), 4(b) and 4(c). From the spectrograms, it can be observed that the content is retained while the output exhibits very different properties such as pitch and accent. The texture of the style audio spectrogram is present in the output audio spectrogram, whereas the content, defined by the lightly shaded regions within the dark background of the original content audio spectrogram, is retained. The stylized output audio spectrogram may be converted back into a raw audio signal by post-processing with the Griffin-Lim algorithm for further auditory analysis to support this claim.

7 Conclusion

In this work, we propose a new architecture for real time audio style transfer. We experimented with and evaluated the proposed model on several speech utterances, and the model shows promising results. Since a single transformation network is trained for stylizing the content audio with a specific style audio, the style audio isn't needed during testing. In future work, further research and experimentation on the meaning and representation of "style" in audio may lead to the discovery of features which help generate various forms of stylized audio. Moreover, as the loss network is separate from the transformation network, we may incorporate several other features into the loss calculation, such as accent or music, so that only specific features are transferred to the generated audio while several others are preserved.

Acknowledgments

We would like to thank Innovation Garage, NIT Warangal for their invaluable help in providing us with necessary computing capabilities which made our research possible.

References

  • [Gatys et al.2016] Leon A. Gatys, Alexander S. Ecker, Matthias Bethge. 2016. Image Style Transfer Using Convolutional Neural Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016.
  • [Johnson et al.2016] Justin Johnson, Alexandre Alahi, Li Fei-Fei. 2016. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. Proceedings of the European Conference on Computer Vision (ECCV) 2016.
  • [Grinstein et al.2018] Eric Grinstein, Ngoc Duong, Alexey Ozerov, Patrick Pérez. 2018. Audio Style Transfer. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2018.
  • [Ulyanov et al.2016] Dmitry Ulyanov. 2016. Audio Texture Synthesis and Style Transfer. https://dmitryulyanov.github.io/audio-texture-synthesis-and-style-transfer/.
  • [Simonyan and Zisserman2015] Karen Simonyan, Andrew Zisserman. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. Proceedings of the International Conference on Learning Representations (ICLR) 2015.
  • [Deng et al.2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2009.
  • [Kingma and Ba2015] Diederik P. Kingma, Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. Proceedings of the International Conference on Learning Representations (ICLR) 2015.
  • [Yamagishi and Junichi2012] Junichi Yamagishi. 2012. English Multi-speaker Corpus for CSTR Voice Cloning Toolkit. http://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html.
  • [Perez et al.2017] Anthony Perez, Chris Proctor, Archa Jain. 2017. Style Transfer for Prosodic Speech. http://web.stanford.edu/class/cs224s/reports/Anthony_Perez.pdf.
  • [Griffin and Lim1984] D. Griffin, Jae Lim. 1984. Signal Estimation from Modified Short-Time Fourier Transform. IEEE Transactions on Acoustics, Speech, and Signal Processing 1984.