Generally, in Music Information Retrieval (MIR) we develop dedicated systems for specific tasks. Facing new (but similar) tasks requires developing new (but similar) specific systems. This is the case for data-driven music source separation systems. Source separation aims to isolate the different instruments that appear in an audio mixture (a mixed music track), i.e., to reverse the mixing process. Data-driven methods use supervised learning, where both the mixture signals and the isolated instruments are available for training. The usual approach is to build a dedicated model for each instrument to isolate [1, 19]. This has been shown to produce great results. However, since isolating each instrument requires a specific system, we quickly run into scaling issues (100 instruments = 100 systems). Besides, these models do not exploit the commonalities between instruments. If we modify them to perform several tasks at once, i.e., adding more filters to the last layers and fixing the number of outputs, their performance drops. Conditioning learning has appeared as a solution to problems that need to integrate multiple sources of information: concretely, when we want to process one in the context of another, i.e., modulating the computation of a system by the presence of external data. Conditioning learning divides a problem into two elements: a generic system and a control mechanism that governs it according to external data. Although it is used in a large diversity of domains, it has been developed mainly in the image processing field for tasks such as visual reasoning or style transfer, where it has proved very effective, improving state-of-the-art results [13, 3, 20]. This paradigm can be integrated into source separation by creating a generic model that adapts to isolate a particular instrument via a control mechanism. We also believe that this paradigm can benefit a great diversity of MIR tasks such as multi-pitch estimation, music transcription or music generation.
In this work, we propose the application of conditioning learning to music source separation. Our system relies on a standard U-Net that is not specialized in a specific task but rather in finding a set of generic source separation filters, which we control differently to isolate a particular instrument, as illustrated in Figure 1. Our system takes as input the spectrogram of the mixed audio signal and a control vector, and produces a single output: the separated instrument defined by the control vector. The main advantages of our approach are: a direct use of the commonalities between different instruments; a constant number of parameters no matter how many instruments the system deals with; and a scalable architecture, in the sense that new instruments can potentially be added without training a new system from scratch. Our key contributions are:
- The Conditioned-U-Net (C-U-Net), a joint model that changes its behavior depending on external data and performs, for any task, as well as a dedicated model trained for it. The C-U-Net has a fixed number of parameters no matter the number of output sources.
- The proof that conditioning learning (via Feature-wise Linear Modulation (FiLM) layers) is an efficient way of inserting external information into MIR problems.
- A new FiLM layer that works as well as the original one but with a lower cost (fewer parameters).
2 Related work
We review only works related to conditioning in audio and to data-driven source separation methods.
Conditioning in audio. Conditioning has mainly been explored in speech generation. In the WaveNet approach [22, 23], the speaker identity is fed to a conditional distribution by adding a learnable bias to the gated activation units. A modified WaveNet version conditions the time-domain waveform generation on a sequence of Mel spectrograms computed from an input character sequence (using a recurrent sequence-to-sequence network with attention). In speech recognition, conditions have been used to apply conditional normalisation to a deep bidirectional LSTM (Long Short-Term Memory), dynamically generating the parameters of the normalisation layer; this model adapts itself to different acoustic scenarios. In that case, the conditions do not come from any external source but rather from utterance information of the model itself. Conditions have also been used in music generation, for accompaniments conditioned on melodies or for incorporating history information (melody and chords) from previous measures in a generative adversarial network (GAN). Finally, conditioning has also proved very efficient for piano transcription: the pitch-onset detection is internally concatenated to the frame-wise pitch prediction, controlling whether a new pitch starts or not; onset detection and frame-wise prediction are trained together.
Source separation based on supervised learning. An extensive overview of the different source separation techniques exists in the literature; here we review only the data-driven approaches, where neural networks have taken the lead. Although architectures such as RNNs or CNNs have been studied, the most successful ones use a deep U-Net architecture. Applied to a spectrogram, the U-Net has been used to separate the vocal and accompaniment components, training a specific model for each task. Since the output is a spectrogram, the audio signal has to be reconstructed, which potentially leads to artifacts. For this reason, Wave-U-Net applies the U-Net directly to the audio waveform. It is also adapted to isolate several sources at once by adding to the dedicated version as many outputs as sources to separate. However, this multi-instrument version performs worse than the dedicated one (for vocal isolation) and has to be retrained for different source combinations. The closest work to ours proposes to use multi-channel audio as input to a Variational Auto-Encoder (VAE) to separate four different speakers; the VAE is conditioned on the ID of the speaker to be separated, and the proposed method outperforms its baseline.
3 Conditioning learning methodology
3.1 Conditioning mechanism.
There are many ways to condition a network, but most of them can be formalized as affine transformations denoted by the acronym FiLM (Feature-wise Linear Modulation). FiLM permits the modulation of any neural network architecture by inserting one or several FiLM layers at any depth of the original model. A FiLM layer conditions the network computation by applying an affine transformation to intermediate features:

FiLM(x) = γ(z) · x + β(z)

where x is the input of the FiLM layer (i.e., the intermediate feature we want to modify), and γ(z) and β(z) are parameters to be learned. They scale and shift x based on the external information z. The output of a FiLM layer has the same dimension as the intermediate feature input x. FiLM layers can be inserted at any depth in the controlled network. As described in Figure 2, the original FiLM layer applies an independent affine transformation to each feature map c (or element-wise): γ_c and β_c. We call this a FiLM complex layer (Co). We propose a simpler version that applies the same γ and β to all the feature maps (therefore γ and β do not depend on c). We call it a FiLM simple layer (Si). The FiLM simple layer decreases the degrees of freedom of the transformations to be carried out, forcing them to be generic and less specialized. It also drastically reduces the number of parameters to be trained. As FiLM layers do not change the shape of x, FiLM is transparent and can be used in any particular architecture, providing flexibility to the network by adding a control mechanism.
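The two FiLM variants can be sketched in a few lines of numpy. This is an illustrative implementation (the function name `film` and shapes are our choice, not from the paper): a FiLM complex layer passes one (γ, β) pair per feature map, a FiLM simple layer passes a single scalar pair for all of them.

```python
import numpy as np

def film(x, gamma, beta):
    """Feature-wise Linear Modulation: scale and shift intermediate features.

    x     : feature maps, shape (channels, height, width)
    gamma : scale -- a scalar (FiLM simple) or shape (channels,) (FiLM complex)
    beta  : shift -- same convention as gamma
    """
    gamma = np.asarray(gamma, dtype=float)
    beta = np.asarray(beta, dtype=float)
    if gamma.ndim == 1:  # complex variant: one affine transform per feature map
        gamma = gamma[:, None, None]
        beta = beta[:, None, None]
    return gamma * x + beta  # output keeps the shape of x

x = np.ones((4, 2, 2))                             # 4 feature maps of 2x2
y_simple = film(x, 2.0, 1.0)                       # same transform everywhere
y_complex = film(x, np.arange(4.0), np.zeros(4))   # per-channel transform
```

Note that the output shape always equals the input shape, which is what makes FiLM transparent to the surrounding architecture.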
3.2 Conditioning architecture.
A conditioning architecture has two components:
- The conditioned network.
It is the network that carries out the core computation and obtains the final output. It is usually a generic network that we want to behave differently according to external data. Its behavior is altered by the condition parameters γ and β via FiLM layers.
- The control mechanism - condition generator.
It is the system that produces the parameters (γ's and β's) for the FiLM layers with respect to the external information z: the input conditions. It codifies the task at hand and provides the instructions to control the conditioned network. The condition generator can be trained jointly [13, 20] or separately with the conditioned network.
This paradigm clearly separates the task description and control instructions from the core computation.
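The two components above can be sketched together. This is a deliberately minimal numpy illustration, not the paper's architecture: the condition generator here is a single linear layer with hypothetical random weights, whereas the paper uses deeper embeddings (Section 4.2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Condition generator: maps the external condition z to FiLM parameters.
# Hypothetical weights; 4 possible conditions -> one scalar gamma and beta.
W_gamma = rng.normal(size=4)
W_beta = rng.normal(size=4)

def condition_generator(z):
    return W_gamma @ z, W_beta @ z

# Conditioned network: a generic computation modulated by (gamma, beta).
def conditioned_layer(x, gamma, beta):
    return gamma * x + beta

z = np.array([0.0, 1.0, 0.0, 0.0])   # external data selecting task 2 of 4
gamma, beta = condition_generator(z)
out = conditioned_layer(np.ones(3), gamma, beta)
```

The point of the split is that the same `conditioned_layer` computation serves every task; only the generated (γ, β) pair changes.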
4 Conditioned-U-Net for multitask source separation
We formalize source separation as a multi-task problem where one task corresponds to the isolation of one instrument. We assume that, while the tasks are different, they share many similarities; hence they benefit from a conditioned architecture. We name our approach the Conditioned-U-Net (C-U-Net). It differs from previous works where a dedicated model is trained for a single task or where the model has a fixed number of outputs. As in [1, 19], our conditioned network is a standard U-Net that computes a set of generic source separation filters, which we use to separate the various instruments. It adapts itself through the control mechanism (the condition generator) with FiLM layers inserted at different depths. Our external data is a condition vector z (a one-hot encoding) which specifies the instrument to be separated: for example, the component associated with the drums set to 1 and the rest to 0. The vector z is the input to the condition generator, which has to learn the best γ and β values such that, when they modify the feature maps (in the FiLM layers), the C-U-Net separates the indicated instrument, i.e., it decides which feature-map information is useful for each instrument. The condition generator is itself a neural network that embeds z into the best γ and β. The conditioned network and the condition generator are trained jointly. A diagram is shown in Figure 3.
Our C-U-Net can perform different instrument source separations as it alters its behavior depending on the value of the external condition vector z. The inputs of our system are the mixture and the vector z. There is only one output, which corresponds to the isolated instrument defined by z. While training, the output corresponds to the desired isolated instrument that matches the z activation.
4.1 Conditioned network: U-Net architecture
We used the U-Net architecture proposed for vocal separation, which is an adaptation of the U-Net for microscopic images. The input and output are magnitude spectrograms of the monophonic mixture and of the instrument to isolate, respectively. The U-Net follows an encoder-decoder architecture and adds skip connections to it.
The encoder creates a compressed and deep representation of the input by reducing its dimensionality while preserving the information relevant to the separation. It consists of a stack of convolutional layers, where each layer halves the size of the input but doubles the number of channels.
The decoder reconstructs and interprets the deep features, transforming them into the final spectrogram. It consists of a stack of deconvolutional layers.
As the encoder and decoder are symmetric, i.e., feature maps at the same depth have the same shape, the U-Net adds skip connections between encoder and decoder layers of the same depth. This refines the reconstruction by progressively providing finer-grained information from the encoder to the decoder: the feature maps of an encoder layer are concatenated to the equivalent ones in the decoder.
The final layer is a soft mask f(X; Θ) ∈ [0, 1], which is applied element-wise to the input magnitude X to get the isolated source Ŷ = f(X; Θ) ⊙ X. The loss of the U-Net is defined as:

L(X, Y; Θ) = ‖f(X; Θ) ⊙ X − Y‖_{1,1}

where Θ are the parameters of the system and Y is the target magnitude.
Architecture details. Our implementation mimics the original one. The encoder
consists of six encoder blocks. Each one is made of a 2D convolution with 5×5 filters, stride 2, batch normalisation, and leaky rectified linear units (Leaky ReLU) with leakiness 0.2. The first layer has 16 filters and we double this number for each new block. The decoder mirrors the encoder, with six decoder blocks of strided deconvolution (stride 2 and a 5×5 kernel), batch normalisation, plain ReLU, and a 50% dropout in the first three. The final one, the soft mask, uses a sigmoid activation. The model is trained using the ADAM optimiser and a 0.001 learning rate. As in previous work, we downsample to 8192 Hz and compute the Short-Time Fourier Transform with a window size of 1024 and a hop length of 768 samples. The input is a patch of 128 frames (roughly 11 seconds) from the magnitude spectrogram, normalised per song to [0, 1], for both the mixture and the isolated instrument.
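The soft-mask formulation and the L1 loss above can be illustrated directly on magnitude patches. A hedged numpy sketch: `l1_mask_loss` is our name, the patch shape (512 frequency bins × 128 frames) follows the settings above (a 1024-sample window actually yields 513 bins; 512 is an illustrative crop), and the data are random stand-ins, not spectrograms.

```python
import numpy as np

def l1_mask_loss(mix_mag, target_mag, mask):
    """L1 loss between the masked mixture and the target magnitude,
    matching the soft-mask loss |f(X) * X - Y| described above."""
    est = mask * mix_mag                 # element-wise soft mask in [0, 1]
    return np.abs(est - target_mag).mean()

rng = np.random.default_rng(0)
X = rng.random((512, 128))               # mixture magnitude, normalised to [0, 1]
Y = 0.5 * X                              # toy "isolated instrument" target
loss_perfect = l1_mask_loss(X, Y, np.full_like(X, 0.5))  # the ideal mask
loss_bad = l1_mask_loss(X, Y, np.ones_like(X))           # pass-through mask
```

With the ideal mask the loss vanishes; any other mask leaves a positive residual, which is what drives the training.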
Inserting FiLM. The U-Net has two well-differentiated stages: the encoder and the decoder. The encoder transforms the mixture magnitude input into a deep representation capturing the key elements needed to isolate an instrument. The decoder interprets this representation to reconstruct the final audio. We hypothesise that, if we have a different way of encoding each instrument, i.e., obtaining different deep representations, we can use a common 'universal' decoder to interpret all of them. Following this reasoning, we decided to condition only the encoder part of the U-Net. In the C-U-Net, a FiLM layer is inserted inside each encoding block, after the batch normalisation and before the Leaky ReLU, as described in Figure 4. This decision relies on previous works where features are modified after the normalisation [13, 3, 10]. Batch normalisation normalises each feature map so that it has zero mean and unit variance. Applying FiLM afterwards re-scales and re-shifts the normalised feature maps, which allows the network to specialise itself for different tasks. As the output of each encoding block is transformed by its FiLM layer, the data that flows through the skip connections also carries these transformations. If we use FiLM complex layers, the condition generator needs to generate 2016 parameters (1008 γ's and 1008 β's). On the other hand, FiLM simple layers imply only 12 parameters: one γ and one β for each of the 6 encoding blocks, i.e., 2004 fewer parameters than FiLM complex layers.
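The counts above follow directly from the encoder shape described in Section 4.1 (channels doubling from 16 over six blocks). A quick arithmetic check:

```python
# Channel widths of the six encoder blocks: 16, 32, 64, 128, 256, 512.
channels = [16 * 2**i for i in range(6)]

# FiLM complex: one (gamma, beta) pair per feature map of every block.
complex_params = 2 * sum(channels)

# FiLM simple: one (gamma, beta) pair per block, shared by all its maps.
simple_params = 2 * len(channels)

print(complex_params, simple_params)  # 2016 12
```

The difference (2004 parameters) is exactly what the condition generator no longer has to produce in the simple variant.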
4.2 Condition generator: Embedding nets
The control mechanism/condition generator computes the γ's and β's that modify our standard U-Net behavior. Its architecture has to be flexible and robust to generate the best possible parameters, and it has to be able to find relationships between instruments. That is to say, we want it to produce similar γ's and β's for instruments that have similar spectrogram characteristics. Hence, we explore two different embeddings: a fully-connected version and a convolutional one (CNN). Each one is adapted both for the FiLM complex layer and for the FiLM simple layer. In every condition generator configuration, the last layer is always two concatenated fully-connected layers, each with as many parameters (γ's or β's) as needed. With this distinction we can control γ and β individually (different activations).
- Fully-Connected embedding (F):
the condition vector z is processed by a stack of fully-connected blocks, the last ones with 256 and 1024 neurons for the FiLM complex version (a smaller configuration for FiLM simple). All neurons have ReLU activations. The last fully-connected block is connected to the final condition generator layer, i.e., the two fully-connected ones (γ's and β's). We call the C-U-Nets that use these architectures C-U-Net-SiF and C-U-Net-CoF.
- CNN embedding (C):
similarly to the previous one, this embedding consists of a 1D convolution followed by two convolution blocks (each a 1D convolution with 50% dropout and batch normalisation). The first two convolutions have 'same' padding and the last one 'valid'. Activations are also ReLU. The numbers of filters of the three convolutions are 16, 32 and 64 for the FiLM simple version and 32, 64 and 252 for the FiLM complex one. Again, the last CNN block is connected to the two final fully-connected layers. The C-U-Nets that use these architectures are called C-U-Net-SiC and C-U-Net-CoC. This embedding is specially designed for dealing with several instruments because it seems more appropriate for finding common γ and β values for similar instruments.
Table 1. Number of parameters (in millions):

| Model | Fix-U-Net (×4) | C-U-Net-SiC | C-U-Net-CoC | C-U-Net-SiF | C-U-Net-CoF |
|---|---|---|---|---|---|
| PARAM (M) | 39.30 (4 tasks × 9.825) | 9.85 | 12 | 9.84 | 10.42 |
The various condition generators introduce only a small number of parameters on top of the standard U-Net architecture, and this number remains constant regardless of the number of instruments to separate (Table 1). Additionally, they make direct use of the commonalities between instruments.
5 Experiments
Our objective is to prove that conditioned learning via FiLM (generic model + control) allows us to transform the U-Net into a multi-task system without losing performance. In Section 5.1 we review our experiment design and in Section 5.2 we detail the experiment that validates the multi-task capability of the C-U-Net.
5.1 Evaluation protocol
Table 2. Results (mean ± std over the four tasks); -np = normal training, -p = progressive training:

| Model | SIR | SAR | SDR |
|---|---|---|---|
| Fix-U-Net (×4) | 7.31 ± 4.04 | 5.70 ± 3.10 | 2.36 ± 3.96 |
| C-U-Net-SiC-np | 7.35 ± 4.13 | 5.74 ± 3.18 | 2.34 ± 3.69 |
| C-U-Net-SiC-p | 8.00 ± 4.37 | 5.74 ± 3.63 | 2.54 ± 4.07 |
| C-U-Net-CoC-np | 7.27 ± 4.24 | 5.60 ± 2.88 | 2.36 ± 3.81 |
| C-U-Net-CoC-p | 7.49 ± 4.54 | 5.67 ± 3.03 | 2.42 ± 4.21 |
| C-U-Net-SiF-np | 7.23 ± 3.97 | 5.59 ± 3.01 | 2.22 ± 3.67 |
| C-U-Net-SiF-p | 7.64 ± 4.05 | 5.73 ± 2.88 | 2.46 ± 3.88 |
| C-U-Net-CoF-np | 7.42 ± 4.20 | 5.59 ± 3.07 | 2.32 ± 3.85 |
| C-U-Net-CoF-p | 7.52 ± 4.04 | 5.71 ± 2.99 | 2.42 ± 3.97 |
We use the Musdb18 dataset. It consists of 150 tracks with a defined split of 100 tracks for training and 50 for testing. From the 100 tracks, we use 95 (randomly assigned) for training and the remaining 5 as the validation set, which is used for early stopping. The performance is evaluated on the 50 test tracks. In Musdb18, mixtures are divided into four different sources: Vocals, Bass, Drums and Rest of instruments. The 'Rest' task mixes every instrument that is not vocals, bass or drums. Consequently, the C-U-Net is trained for four tasks (one per instrument) and its condition vector z has four elements.
- Evaluation metrics.
We evaluate the separation performance using the mir_eval toolbox. We compute three metrics: Source-to-Interference Ratio (SIR), Source-to-Artifact Ratio (SAR) and Source-to-Distortion Ratio (SDR). To compute the three measures we also need the predicted 'accompaniment' (the part of the mixture that does not correspond to the target source). Each task has a different accompaniment, e.g., for the drums the accompaniment is rest+vocals+bass. We create the accompaniments by adding the audio signals of the needed sources.
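To give a feel for what these ratios measure, here is a deliberately simplified SDR in numpy. The paper uses the full BSS Eval decomposition from mir_eval (which separates interference, artifact and distortion terms); this sketch keeps only the overall energy ratio between the target and the total error, and the 440 Hz / 123 Hz signals are arbitrary test data.

```python
import numpy as np

def sdr(reference, estimate):
    """Simplified Source-to-Distortion Ratio in dB:
    energy of the target over energy of the total error."""
    err = reference - estimate
    return 10 * np.log10(np.sum(reference**2) / np.sum(err**2))

# A clean tone as "reference" and the same tone with a small interference.
t = np.linspace(0, 1, 8192, endpoint=False)
ref = np.sin(2 * np.pi * 440 * t)
est = ref + 0.01 * np.cos(2 * np.pi * 123 * t)
quality = sdr(ref, est)   # high value: the estimate is close to the target
```

An interference 100× smaller in amplitude than the target gives roughly 40 dB here; real separation systems, as Tables 2 and 3 show, operate far below that.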
- Audio Reconstruction method.
The system works exclusively on the magnitude of audio spectrograms. The output magnitude is obtained by applying the mask to the mixture magnitude. As in previous work, the final predicted source (the isolated audio signal) is reconstructed by concatenating temporally (without overlap) the output magnitude spectra and using the original mixture phase unaltered. We compute the predicted accompaniment by subtracting the predicted isolated signal from the original mixture. Although there are better phase reconstruction techniques, errors due to this step are common to both methods (U-Net and C-U-Net) and do not affect our main goal: to validate conditioning learning for source separation.
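The reconstruction step described above can be sketched in the STFT domain. A hedged numpy illustration (`reconstruct` is our name, and the 8×4 complex array is a stand-in for a real mixture STFT): the predicted magnitude is recombined with the unaltered mixture phase, and the accompaniment is the mixture minus the estimate.

```python
import numpy as np

def reconstruct(mask, mix_stft):
    """Combine the predicted magnitude with the mixture phase; the
    accompaniment is the complex mixture minus the estimated source."""
    est_mag = mask * np.abs(mix_stft)                  # masked magnitude
    est_stft = est_mag * np.exp(1j * np.angle(mix_stft))  # keep mixture phase
    acc_stft = mix_stft - est_stft                     # predicted accompaniment
    return est_stft, acc_stft

rng = np.random.default_rng(0)
mix = rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))
est, acc = reconstruct(np.full((8, 4), 0.25), mix)
```

By construction the estimate and accompaniment always sum back to the mixture, so any phase error in one appears, with opposite sign, in the other.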
- Activation function for and .
One of the most important design choices is the activation function for γ and β. We tested all possible combinations of three activation functions (linear, sigmoid and tanh) in the C-U-Net-SiF configuration. As in previous work, the C-U-Net works better when γ and β are linear. Hence, our γ's and β's always have linear activations.
- Training flexibility.
The conditioning mechanism gives the flexibility to have continuous values in the input vector z, which weight the target output by the same value. We call this training method progressive: in practice, while training, we randomly weight z and the target y by a value between 0 and 1 every 5 instances. This is a way of dealing with ablations by making the control mechanism robust to noise. As shown in Table 2, this training procedure (p) improves the models; thus, we adopt it in our training. Moreover, preliminary results (not reported) show that the C-U-Net can be trained for complex tasks like bass+drums or voice+drums. These complex tasks could benefit from a 'between-class learning' method where z would take different intermediate instrument combinations.
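The progressive weighting is simple to express. A toy numpy sketch under our assumptions (the function name, the redraw schedule being applied per call rather than every 5 instances, and the slot order of z are all illustrative): both the condition vector and the target magnitude are scaled by the same random value in [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

def progressive_weight(z, y, rng):
    """Scale the condition vector and the target by one shared random
    weight in [0, 1] (the paper redraws it every 5 training instances)."""
    w = rng.uniform(0.0, 1.0)
    return w * z, w * y

z = np.array([0.0, 0.0, 1.0, 0.0])   # one-hot condition (slot order hypothetical)
y = np.ones((512, 128))              # target magnitude patch
zw, yw = progressive_weight(z, y, rng)
```

Because input condition and target are attenuated together, the network learns that a weaker activation in z should produce a proportionally weaker output, which is what makes the control robust to noisy conditions.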
Table 3. Results per task (mean ± std; SDR median in parentheses):

Vocals
| Model | SIR | SAR | SDR |
|---|---|---|---|
| Fix-U-Net (×4) | 10.70 ± 4.26 | 5.39 ± 3.58 | 3.52 ± 4.88 (4.72) |
| C-U-Net-CoF | 10.76 ± 4.39 | 5.32 ± 3.27 | 3.50 ± 4.37 (4.65) |
| Wave-U-Net-D | - | - | 0.55 ± 13.67 (4.58) |
| Wave-U-Net-M | - | - | -2.10 ± 15.41 (3.0) |

Drums
| Model | SIR | SAR | SDR |
|---|---|---|---|
| Fix-U-Net (×4) | 10.08 ± 4.28 | 6.42 ± 3.28 | 4.28 ± 3.65 (4.13) |
| C-U-Net-CoF | 10.03 ± 4.34 | 6.80 ± 3.25 | 4.30 ± 3.81 (4.38) |
| Wave-U-Net-M | - | - | 2.88 ± 7.68 (4.15) |

Bass
| Model | SIR | SAR | SDR |
|---|---|---|---|
| Fix-U-Net (×4) | 4.64 ± 4.76 | 6.51 ± 2.68 | 1.46 ± 4.31 (2.48) |
| C-U-Net-CoF | 5.30 ± 4.73 | 6.29 ± 2.39 | 1.65 ± 4.07 (2.60) |
| Wave-U-Net-M | - | - | -0.30 ± 13.50 (2.91) |

Rest
| Model | SIR | SAR | SDR |
|---|---|---|---|
| Fix-U-Net (×4) | 3.83 ± 2.84 | 4.47 ± 2.85 | 0.19 ± 3.00 (0.97) |
| C-U-Net-CoF | 4.00 ± 2.70 | 4.37 ± 3.06 | 0.24 ± 3.64 (1.71) |
| Wave-U-Net-M | - | - | 1.68 ± 6.14 (2.03) |
5.2 Multitask experiment
We want to prove that a single C-U-Net can isolate the Vocals, Drums, Bass, and Rest as well as four dedicated U-Nets trained specifically for each task (with the same learning rate and optimizer as the C-U-Nets). We call this set of dedicated U-Nets the Fix-U-Nets. Each C-U-Net version (one model) is compared with the Fix-U-Net set (four models). We review the results in Table 2 and show a comparison per task in Table 3. Results in Table 2 for all four instruments highlight that FiLM simple layers work as well as the complex ones. This is quite interesting because it means that applying 6 affine transformations with just 12 scalars (6 γ's and 6 β's) at precise points allows the C-U-Net to perform several source separations. With FiLM complex layers it is intuitive to think that treating each feature map individually lets the C-U-Net learn several deep representations in the encoder; however, we have no intuitive explanation for FiLM simple layers. We performed a Tukey test, which finds no significant differences between the Fix-U-Nets and the C-U-Nets for any task and metric. Another remark is that the four C-U-Nets benefit from the progressive training. Nevertheless, it impacts the simple layers more than the complex ones. We think that the restriction of the former (fewer parameters) helps them to find an optimal state. However, these results neither prove nor discard a significant similarity between systems. To demonstrate that, we carried out a Pearson correlation experiment; the results are detailed in Figure 5.
The Pearson coefficient measures the linear relationship between two sets of results (+1 implies an exact linear relationship). It also yields a p-value indicating the probability that uncorrelated systems produced them. Our distinct C-U-Net configurations have a high global Pearson coefficient and a very low p-value, which means that there is always a strong correlation between the performance of the four dedicated U-Nets and the (various) conditional versions, and almost no probability that a C-U-Net version is uncorrelated with the dedicated ones. We have also computed the Pearson coefficient and p-value per task and per metric, with the same outcome. Figure 5 shows a strong correlation between the Fix-U-Net results and the distinct C-U-Nets (independently of the task or metric): if one works well, the others do too, and vice versa. In Table 3 we detail the results per task and metric for the Fix-U-Net and the C-U-Net-CoF, which is not the best C-U-Net but the one with the highest correlation with the dedicated ones. There we can see that their performances are almost identical. Nevertheless, our vocal isolation (in either case) is not as good as previously reported results; we believe this is mainly due to the lack of data. These results can only be compared with the Wave-U-Net. Although it reports results (only the SDR) for the four tasks in the multi-instrument version (multiple output layers), it only has a dedicated version for vocals. For vocal separation, the performance of the multi-instrument version decreases by more than 2.5 dB in the mean and 1.5 dB in the median, and the std increases by almost 2 dB. Furthermore, the C-U-Net performs better than the multi-instrument version in three out of four tasks (vocals, bass, and drums); note that our experimental conditions differ from Wave-U-Net in training data size (95 vs 75 tracks) and sampling rate (8192 Hz vs 22050 Hz). For the 'Rest' task, the multi-instrument Wave-U-Net outperforms our C-U-Nets. This is expected because the dedicated U-Net already has problems with this class and the C-U-Nets inherit the same issues; we believe they come from the vague definition of this class, which mixes many different instrument combinations at once. Overall, this proves that the various C-U-Nets behave in the same way as the dedicated U-Nets for each task and metric. It also demonstrates that conditioned learning via FiLM is robust to diverse condition generators and FiLM layers, and that it does not introduce limitations of its own: remaining errors are due to other factors.
6 Conclusions and future work
We have applied conditioning learning to the problem of instrument source separation by adding a control mechanism to the U-Net architecture. The C-U-Nets can perform several source separation tasks without losing performance, as the conditioning does not introduce any limitation and makes use of the commonalities between the distinct instruments. The model has a fixed number of parameters (much lower than the dedicated approach), independently of the number of instruments to separate. Finally, we showed that progressive training improves the C-U-Nets and introduced the FiLM simple layer, a new conditioning layer that works as well as the original one but requires fewer γ's and β's. Conditioning learning approaches problems by providing a generic model and a control mechanism. This gives flexibility to the systems but introduces new challenges. We plan to extend the C-U-Net to more instruments to find its limits and to explore its performance on complex tasks, i.e., separating combinations of two or more instruments (e.g., vocals+drums). Likewise, we are exploring ways of adding new conditions (namely new instrument isolations) to a trained C-U-Net and how to detach the joint training. Additionally, we intend to integrate the approach into other source separation architectures such as Wave-U-Net. Lastly, we believe that conditioning learning via FiLM will benefit many MIR problems because it defines a transparent and direct way of inserting external data to modify the behavior of a network.
Acknowledgement. This research has received funding from the French National Research Agency under the contract ANR-16-CE23-0017-01 (WASABI project). Implementation and audio examples are available at https://github.com/gabolsgabs/cunet
-  A. Jansson, E. J. Humphrey, N. Montecchio, R. Bittner, A. Kumar, and T. Weyde. Singing voice separation with deep U-Net convolutional networks. In Proc. of ISMIR (International Society for Music Information Retrieval), Suzhou, China, 2017.
-  P. Chandna, M. Miron, J. Janer, and E. Gómez. Monoaural audio source separation using deep convolutional neural networks. In Proc. of LVA/ICA (International Conference on Latent Variable Analysis and Signal Separation), Grenoble, France, 2017.
-  H. de Vries, F. Strub, J. Mary, H. Larochelle, O. Pietquin, and A. C. Courville. Modulating early visual processing by language. In Proc. of NIPS (Annual Conference on Neural Information Processing Systems), Long Beach, CA, USA, 2017.
-  V. Dumoulin, E. Perez, N. Schucher, F. Strub, H. de Vries, Aaron Courville, and Y. Bengio. Feature-wise transformations. Distill, 2018. https://distill.pub/2018/feature-wise-transformations.
-  C. Hawthorne, E. Elsen, J. Song, A. Roberts, I. Simon, C. Raffel, J. Engel, S. Oore, and D. Eck. Onsets and frames: Dual-objective piano transcription. In Proc. of ISMIR (International Society for Music Information Retrieval), Paris, France, 2018.
-  C. A. Huang, A. Vaswani, J. Uszkoreit, N. Shazeer, C. Hawthorne, A. M. Dai, M. D. Hoffman, and D. Eck. An improved relative self-attention mechanism for transformer with application to music generation. CoRR, abs/1809.04281, 2018.
-  P.-S. Huang, M. Kim, M. Hasegawa-Johnson, and P. Smaragdis. Joint optimization of masks and deep recurrent neural networks for monaural source separation. IEEE/ACM TASLP (Transactions on Audio Speech and Language Processing), 23(12), 2015.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. of ICML (International Conference on Machine Learning), 2015.
-  H. Kameoka, Li Li, S. Inoue, and S. Makino. Semi-blind source separation with multichannel variational autoencoder. CoRR, abs/1808.00892, 2018.
-  T. Kim, I. Song, and Y. Bengio. Dynamic layer normalization for adaptive neural acoustic modeling in speech recognition. CoRR, abs/1707.06065, 2017.
-  Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proc. of ICLR (International Conference on Learning Representations), Banff, Canada, 2014.
-  F. Mayer, D. Williamson, P. Mowlaee, and D. Wang. Impact of phase estimation on single-channel speech separation based on time-frequency masking. The Journal of the Acoustical Society of America, 141:4668–4679, 2017.
-  E. Perez, F. Strub, H. de Vries, V. Dumoulin, and A. C. Courville. FiLM: Visual reasoning with a general conditioning layer. In Proc. of AAAI (Conference on Artificial Intelligence), New Orleans, LA, USA, 2018.
-  C. Raffel, B. Mcfee, E. J. Humphrey, J. Salamon, O. Nieto, D. Liang, and D. P. W. Ellis. mir_eval: a transparent implementation of common mir metrics. In Proc. of ISMIR (International Society for Music Information Retrieval), Porto, Portugal, 2014.
-  Z. Rafii, A. Liutkus, F.-R. Stöter, S. Ioannis Mimilakis, D. Fitzgerald, and B. Pardo. An Overview of Lead and Accompaniment Separation in Music. IEEE/ACM TASLP (Transactions on Audio Speech and Language Processing), 26(8), 2018.
-  Z. Rafii, A. Liutkus, F.-R. Stöter, S. I. Mimilakis, and R. Bittner. The MUSDB18 corpus for music separation, 2017. https://zenodo.org/record/1117372.
-  O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In Proc. of MICCAI (International Conference on Medical Image Computing and Computer Assisted Intervention), Munich, Germany, 2015.
-  J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. J. Skerry-Ryan, R. A. Saurous, Y. Agiomyrgiannakis, and Y. Wu. Natural TTS synthesis by conditioning wavenet on mel spectrogram predictions. In Proc. of ICASSP (International Conference on Acoustics, Speech and Signal Processing), Calgary, Canada, 2018.
-  D. Stoller, S. Ewert, and S. Dixon. Wave-u-net: A multi-scale neural network for end-to-end audio source separation. In Proc. of ISMIR (International Society for Music Information Retrieval), Paris, France, 2018.
-  F. Strub, M. Seurin, E. Perez, H. de Vries, J. Mary, P. Preux, A. C. Courville, and O. Pietquin. Visual reasoning with multi-hop feature modulation. In Proc. of ECCV (European Conference on Computer Vision), Munich, Germany, 2018.
-  Y. Tokozume, Y. Ushiku, and T. Harada. Between-class learning for image classification. In Proc. of CVPR (Conference on Computer Vision and Pattern Recognition), Salt Lake City, UT, USA, 2018.
-  A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. W. Senior, and K. Kavukcuoglu. Wavenet: A generative model for raw audio. CoRR, abs/1609.03499, 2016.
-  A. van den Oord, Y. Li, I. Babuschkin, K. Simonyan, O. Vinyals, K. Kavukcuoglu, G. van den Driessche, E. Lockhart, L. C. Cobo, F. Stimberg, N. Casagrande, D. Grewe, S. Noury, S. Dieleman, E. Elsen, N. Kalchbrenner, H. Zen, A. Graves, H. King, T. Walters, D. Belov, and D. Hassabis. Parallel wavenet: Fast high-fidelity speech synthesis. CoRR, abs/1711.10433, 2017.
-  E. Vincent, R. Gribonval, and C. Févotte. Performance measurement in blind audio source separation. IEEE/ACM TASLP (Transactions on Audio Speech and Language Processing), 14(4), 2006.
-  L.-C. Yang, S.-Y. Chou, and Y.-H. Yang. MidiNet: A convolutional generative adversarial network for symbolic-domain music generation using 1D and 2D conditions. CoRR, abs/1703.10847, 2017.