
Exploring Quality and Generalizability in Parameterized Neural Audio Effects

06/10/2020
by   William Mitchell, et al.

Deep neural networks have shown promise for music audio signal processing applications, often surpassing prior approaches, particularly as end-to-end models in the waveform domain. Yet results to date have tended to be constrained by low sample rates, noise, narrow domains of signal types, and/or lack of parameterized controls (i.e. "knobs"), limiting their suitability for professional audio engineering workflows. This work expands on previously published research on modeling nonlinear time-dependent signal processing effects associated with music production by means of a deep neural network, one which includes the ability to emulate the parameterized settings found on analog equipment, with the goal of eventually producing commercially viable, high quality audio, i.e. 44.1 kHz sampling rate at 16-bit resolution. The results in this paper highlight progress in modeling these effects through architecture and optimization changes, toward increasing computational efficiency, reducing output noise, and extending to a larger variety of nonlinear audio effects. Toward these ends, the strategies employed followed a three-pronged approach: model speed, model accuracy, and model generalizability. Most of the presented methods provide marginal or no increase in output accuracy over the original model, with the exception of dataset manipulation. We found that limiting the audio content of the dataset, for example to recordings of a single instrument, provided a significant improvement in model accuracy over models trained on more general datasets.



I Introduction

Computer technology and modeling have expanded rapidly over the last decade with improvements in computer processing power and techniques. With these advances in speed and power, along with the advent of using graphics processing units (GPUs) for batch computation, machine learning processes and neural network architectures have begun to tackle increasingly rigorous tasks and problems in almost every major research field, from medicine Deo (2015), to justice systems Hyatt and Berk (2015), to audio processing McFee et al. (2019); Martinez Ramirez and Reiss (2018); Défossez et al. (2018). However, neural networks and deep learning are still a relatively new research field, and better training techniques, architecture designs, loss functions, and learning processes are introduced to the academic community regularly. These features can then be further developed in new fields to determine their optimal purpose. Modeling audio effects is an especially interesting field of research because of both its complexity and its usefulness across multiple disciplines.

Being able to accurately and efficiently model audio effects provides numerous advantages, including portability, repeatability, and flexibility. Modeling analog audio effects in the digital domain makes those effects more portable: they require no physical space or weight and can be loaded onto numerous devices in numerous locations. Digital effects models also provide greater repeatability than analog effects, which may require calibration or degrade over time. Finally, digital effect models can be more flexible, with a greater range of input formats and an easier opportunity for modification. Similar state-of-the-art research in this field includes a variety of model types and practices, including generative adversarial networks Donahue et al. (2018); Engel et al. (2019); Marafioti et al. (2019), and multiple types of autoencoders. For further discussion of relevant applications see "Autoencoders for Music Sound Modeling" Roche et al. (2018).

The research in this paper presents progress made on already published results Hawley et al. (2019) on modeling nonlinear time-dependent signal processing effects associated with music production by means of a deep neural network in the waveform domain. This research has a specific focus on improving the speed, accuracy, and generalizability of our previously published SignalTrain model Hawley et al. (2019) on nonlinear effects. Linear systems possess two mathematical properties: homogeneity and additivity. If a given system or effect lacks one or both of these properties, the effect is considered nonlinear; examples include compression and distortion. A majority of past research in this field trains networks on spectrograms of the input and output audio, i.e. time-frequency representations obtained via the Fourier transform. The waveform domain was long considered too computationally expensive, but its usefulness has been explored more in recent years Défossez et al. (2019); Lluís et al. (2018). This work focuses on the waveform domain because it retains both the frequency and phase information of the audio, something spectrograms cannot do, in order to improve both the accuracy of training and the quality of output audio. This work also includes the application of trainable virtual "knobs" in the neural network architecture, which act as virtual versions of the analog knobs that control various settings on analog effects units. Most of the following results are from compressor effects specifically, largely because compression was found to be the hardest effect to model, but we also show that the model is trainable on a wide array of effects, both digital and analog. This area of research is significant because it presents a harder computational task than categorizing or recognizing audio content: the network must reproduce a specific effect that produces a desired change in the audio content.
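To make the linearity criterion above concrete, the short sketch below (illustrative only, and not taken from the SignalTrain code) numerically checks homogeneity and additivity for a hard-clipping distortion; the clip threshold and test signals are arbitrary choices.

import numpy as np

def hard_clip(x, threshold=0.5):
    # A simple memoryless distortion effect: clip samples to +/- threshold.
    return np.clip(x, -threshold, threshold)

rng = np.random.default_rng(0)
x1 = rng.uniform(-1.0, 1.0, 44100)   # one second of noise at 44.1 kHz
x2 = rng.uniform(-1.0, 1.0, 44100)
a = 2.0

# Homogeneity: a linear system T satisfies T(a*x) == a*T(x).
homogeneous = np.allclose(hard_clip(a * x1), a * hard_clip(x1))

# Additivity: a linear system T satisfies T(x1 + x2) == T(x1) + T(x2).
additive = np.allclose(hard_clip(x1 + x2), hard_clip(x1) + hard_clip(x2))

print(homogeneous, additive)   # both False, so hard clipping is nonlinear

Because distortion and compression fail these tests, they cannot be captured by a single impulse response and instead motivate the learned, nonlinear models studied here.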

Audio effect modeling is also an increasingly lucrative research area, with many commercially viable applications. Current commercial products, such as the Kemper Profiler Kemper GmbH and the Fractal Audio Axe-Fx II Systems, are leading the modern surge of commercial audio modeling. The general populace's access to increasingly powerful systems has created an opportunity for research in this field to become immediately usable and practical in a commercial sense. Advanced knowledge in these tasks will not only further the understanding and handling of audio waveforms and files, but will also further the understanding of how to better optimize, train, and structure neural network models for similar tasks. Learning how neural networks extract features and detect patterns in waveforms across these audio processes will inform both the audio technology and machine learning fields. Our research goal is to utilize the advancing power and adaptability of neural networks to effectively model commonly used audio effects (i.e. compression, echo, reverb) by training a neural network architecture on pre- and post-effect audio waveforms, with a focus on high quality output audio. All of the presented research can be applied to any quality of input audio, but this work focuses on high-quality CD-level audio, i.e. a 44.1 kHz sampling rate with 16-bit depth. The sections below present an overview of the SignalTrain architecture, along with the researched methods for improving the three areas of interest: speed, accuracy, and generalizability.

II Methods

SignalTrain implements a deep neural network with an architecture inspired by U-Net Ronneberger et al. (2015) and TFNet Lim et al. (2018). Like U-Net, SignalTrain utilizes an "hourglass" encoder-decoder architecture with skip connections spanning across the middle. Like TFNet, SignalTrain also works in both the time and spectral domains explicitly, as outlined in the model overview presented in Figure 1. Most of the original model architecture remains constant throughout the results presented in this paper, with only minor experimental changes that are detailed below. The code used is the original SignalTrain source (available at http://github.com/drscotthawley/signaltrain), subject to the modifications that follow. The front-end module contains two 1-D convolution operators that produce a single sub-space providing magnitude and phase features. These features are processed individually by two following deep neural networks which comprise the autoencoder module. These networks contain 7 fully connected, feed-forward layers, which are also conditioned by the control variables, or "knobs", of the audio effect module. The SignalTrain model learns a mapping from the un-processed audio to the audio processed by the effect being profiled, conditioned on the vector of the effect's controls. For a more complete description of the model see Hawley et al. (2019).

Figure 1: Overview of the original architecture of the SignalTrain model Hawley et al. (2019).
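For readers unfamiliar with this kind of conditioned hourglass model, the following PyTorch sketch illustrates the general idea: a convolutional front end, fully connected layers conditioned on a knob vector, and a final skip connection from input to output. All layer sizes, module names (frontend, backend, encoder, decoder), and the single-convolution front end are simplifications chosen for illustration; the actual SignalTrain configuration is defined in the source repository linked above.

import torch
import torch.nn as nn

class TinySignalTrainLike(nn.Module):
    """Toy sketch of an hourglass model conditioned on effect 'knobs'.
    Layer sizes are illustrative only; see the SignalTrain source for the real model."""
    def __init__(self, chunk_size=4096, n_knobs=4, latent=256):
        super().__init__()
        # Front end: a 1-D convolution standing in for the learnable transform layers
        self.frontend = nn.Conv1d(1, 32, kernel_size=64, stride=16, padding=24)
        feat = 32 * (chunk_size // 16)
        # "Autoencoder" of fully connected layers, conditioned on the knob vector
        self.encoder = nn.Sequential(nn.Linear(feat + n_knobs, latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent + n_knobs, feat), nn.ReLU())
        self.backend = nn.ConvTranspose1d(32, 1, kernel_size=64, stride=16, padding=24)

    def forward(self, x, knobs):
        # x: (batch, 1, chunk_size); knobs: (batch, n_knobs)
        h = self.frontend(x)
        flat = h.flatten(1)
        z = self.encoder(torch.cat([flat, knobs], dim=1))
        out = self.decoder(torch.cat([z, knobs], dim=1)).view_as(h)
        y = self.backend(out)
        return y + x[..., :y.shape[-1]]   # final skip connection from input to output

# Example usage (shapes only):
# model = TinySignalTrainLike()
# y = model(torch.randn(2, 1, 4096), torch.rand(2, 4))

Conditioning the fully connected layers on the knob vector is what lets a single trained model emulate the full range of an effect's settings rather than one fixed configuration.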

Datasets in the following experiments are made up of appropriate audio data taken either from pre-recorded samples available under Creative Commons licenses or from recordings we made ourselves using popular analog audio effects processors such as the Universal Audio LA-2A Audio. Digital effects courtesy of Dr. Eric Tarr's HackAudio library, applied to music-only datasets of amateur songs available under Creative Commons licenses, along with established datasets like the IDMT-SMT-Audio-Effects dataset Stein (2010), are used to learn both digital-only and analog effects. The IDMT-SMT-Audio-Effects dataset was also used in Martinez Ramirez et al. (2020), which allows a direct comparison. That dataset provided audio samples processed through a Leslie cabinet speaker and a Universal Audio 6176 Vintage channel strip. Modeling some of the most successful analog equipment will help us better understand both why it has been successful and how different circuit designs can affect processing in unexpected ways. Similar combinations of the popular FMA dataset Defferrard (2017) and the NSynth dataset Engel et al. (2017) were used in this experiment, and any audio used that is not already contained in a previously constructed dataset can be found on Zenodo under the "SignalTrain Concatenated Dataset" Mitchell (2020). It is expected that the power of deep neural networks using raw audio waveforms will provide the ability to model a myriad of nonlinear audio processes, not simply one facet such as compression or echo generation. After training on a dataset, new audio can be generated from the trained model checkpoint and compared to the original audio.

Many of the parameters of the model and datasets were subject to change and are detailed in the following sections. Unless otherwise specified, a log-cosh loss function Chen et al. (2019) was used across all trainings, and weight initialization was randomized except for the input/output transform layers, which were initialized using a discrete Fourier transform Winograd (1978). The model is PyTorch-based, utilizing raw audio waveforms as input and output data, and was trained on a single NVIDIA RTX 2080 Ti GPU with mixed precision. The remaining parameters were altered in order to further our goals in a three-pronged approach: 1) improving computation speed, 2) improving overall model accuracy, or 3) reducing the noise in the output audio (i.e., raising its signal-to-noise ratio) to achieve higher quality outputs. Altering these parameters gives a deeper insight into how the network is functioning and learning, which proved valuable in optimizing its performance.
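As one illustration of the loss used above, a log-cosh penalty over time-domain samples can be written as below; this is a generic, numerically stable formulation and may differ in detail from the implementation in the SignalTrain source.

import math
import torch
import torch.nn.functional as F

def log_cosh_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Log-cosh of the sample-wise error, averaged over all elements.

    Uses the identity log(cosh(d)) = d + softplus(-2d) - log 2,
    which avoids overflow of cosh for large |d|.
    """
    d = pred - target
    return torch.mean(d + F.softplus(-2.0 * d) - math.log(2.0))

The log-cosh loss behaves like a mean squared error for small residuals while growing only linearly for large ones, making it less sensitive to occasional large sample errors than a plain L2 loss.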

III Experimentation/Results

Audio examples of the results are available at https://tinyurl.com/signaltrain-exploring.

III.1 Speed

The first area of interest in the three-pronged approach to improving the model was increasing its efficiency, i.e. decreasing the time required to train on data while still achieving comparable accuracy. Methods that improve accuracy often increase training time, and those that decrease training time often bring a decrease in accuracy as well. Striking the right balance between speed and accuracy is often situation dependent, and for each of the following methods we discuss that balance. A model identical to that described in Hawley et al. (2019) was used to create baseline results for comparison. This model was trained on 200,000 datapoints of 5-second 16-bit 44.1 kHz audio for 1,000 epochs, using a compressor effect with 4 variable knobs, on an NVIDIA RTX 2080 GPU. Typical baseline runs averaged between 12 and 13 hours, and a single training run that lasted 12.32 hours and achieved a training loss of 4.899e-06 is used for direct comparison.

The first attempt at improving the speed of the model was freezing the transform layers. Fourier transform layers are used twice in the model to convert to and from spectrograms, between the input and output audio. These layers are initialized with Fourier weights but remain learnable and are updated throughout the training procedure. Without the extra computations needed to update these weights, the model trains faster. As shown in Table 1, freezing the Fourier transform weights decreased model training time significantly, by about 11.56 percent, but accuracy worsened considerably: the loss value was 8.72 times worse. A log-log graph of the validation loss values is displayed in Figure 2. While the model did become faster, such a large increase in error was considered too great, and layer freezing was not pursued further.

Description | Training Time (Hours) | Validation Loss
Baseline | 12.81 | 7.666e-6
Frozen Layers | 10.90 | 5.128e-5
No Skip Connection | 12.81 | 9.000e-6

Table 1: Comparison between a baseline SignalTrain model, one with frozen Fourier transform layers, and one with the final skip connection removed. All were trained on the same dataset.
Figure 2: Effects of changes to model architecture. Log-log graph comparison of the validation set loss values for the baseline, frozen transform layers (Conv1D layers in Figure 1), and removed final skip connection. Note that the 'hump' shapes in this and later plots result from the system's use of a "1cycle" learning rate schedule Hawley et al. (2019).
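In PyTorch, freezing the transform layers described above amounts to disabling gradient updates for the relevant parameters. The helper below is a minimal sketch; the keyword names used to select the transform layers ("frontend", "backend") are placeholders, not SignalTrain's actual module names.

import torch

def freeze_transform_layers(model: torch.nn.Module, keywords=("frontend", "backend")):
    """Set requires_grad=False on parameters whose names match the given keywords.
    The keyword names are illustrative; substitute the real transform-layer names."""
    for name, param in model.named_parameters():
        if any(k in name for k in keywords):
            param.requires_grad_(False)
    # Return only the parameters the optimizer should still update.
    return [p for p in model.parameters() if p.requires_grad]

# Example usage (with the toy model sketched in the Methods section):
# trainable = freeze_transform_layers(model)
# optimizer = torch.optim.Adam(trainable, lr=1e-4)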

The second attempt at increasing model efficiency was analyzing the effect of skip connections across the model. Skip connections are widely considered beneficial to neural network training Mao et al. (2016); Drozdzal et al. (2016), but their applicability across different deep learning problems varies. Also shown in Table 1 are the results of removing the final skip connection, between the final output and the initial input. As shown, including this skip connection adds essentially zero time to the training process but improves the model's training loss value by 24.4 percent, clearly a design choice that is purely beneficial in this application. Figure 2 also presents the validation loss results of the model trained without the final skip connection.

III.2 Accuracy

The second area of interest for improving the model was how accurate the model could become, i.e. how well it could learn to mimic the desired effect. As mentioned earlier, accuracy and speed often compete when implementing model changes, and becoming more accurate may require increasing the computational effort, which increases model training time. The first result listed in Table 2 comes from training the same baseline model for 10 times more epochs.

Description | Training Time (Hours) | Validation Loss
Baseline (1,000 epochs) | 12.81 | 7.666e-6
10,000 Epochs | 131.6 | 3.540e-6
Vocal Only Dataset | 12.54 | 2.838e-6
Vocal Dataset 16kHz | 11.57 | 5.425e-6

Table 2: Comparison of the baseline SignalTrain model, a model trained for 10,000 epochs, and two models trained on a dataset containing only vocal audio.

This model again used 200,000 16-bit, 44.1 kHz, 5-second audio datapoints on a compressor effect with 4 variable knobs, but this time trained for 10,000 epochs instead of 1,000. This was performed primarily to test the limits of the model and to see how low the loss value could become given more time to learn. As shown, the loss value of this extended training reached 3.826e-06, only marginally better than the baseline model, even with 131.6 hours of training. This indicates that training for significantly longer than 1,000 epochs produces negligibly better results. While one could train the model for significantly longer (e.g., an entire month), it is not obvious that this alone would provide the increase in quality suitable for pro audio workflows. Moreover, this was for a 'small' input window size of only 4096 samples; larger windows would require significantly more computational resources, beyond the scope of this academic study. Figure 3 shows the log-log graph of the validation loss values for these two trainings.

Figure 3: Effect of longer training. Log-log graph comparison of the validation set loss values of the baseline model run for 1,000 vs. 10,000 epochs. Although the loss is lower and the audio quality better for 10,000 vs. 1,000 epochs, the increase in quality seems disproportionately small compared to the factor-of-10 increase in training time.

Another attempt at improving the accuracy of the model was to restrict the variability of the audio used to construct datasets. Previous work utilized datasets of randomly generated audio, and the baseline described in this paper utilized amateur recordings of music of various types, with no restrictions on genre, instrumentation, or amplitude. As shown in Table 2, restricting datasets to single instruments, in this case only vocals, proved beneficial in lowering the loss value. The vocal-only dataset was constructed from amateur recordings of a cappella vocals in multiple languages, available under Creative Commons licenses. These same recordings were then downsampled to 16 kHz to create an otherwise identical dataset with 16 kHz audio. Figure 4 displays a log-log graph comparing the baseline, vocal-only, and 16 kHz vocal-only validation loss results. The vocal-only run achieved a training loss value 46.95 percent better than the baseline, and the 16 kHz run achieved a training loss value 11.1 percent worse than the baseline, although its validation loss was better than the baseline, and it decreased training time by just over half an hour.

Figure 4: Effects of restricting the dataset. Log-log graph comparison of the validation set loss values of the baseline dataset (consisting of a variety of sounds), a dataset of vocals only, and a vocal-only dataset downsampled to 16 kHz.
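The paper does not specify which tool was used to downsample the vocal recordings; the sketch below shows one common way to do it with torchaudio, with placeholder file paths.

import torchaudio

def downsample_to_16k(in_path: str, out_path: str) -> None:
    """Load a 44.1 kHz clip and write a 16 kHz copy, as was done to build the
    16 kHz vocal dataset. File paths are placeholders."""
    waveform, sr = torchaudio.load(in_path)
    resampler = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16000)
    torchaudio.save(out_path, resampler(waveform), sample_rate=16000)

# Example usage:
# downsample_to_16k("vocal_clip_44k.wav", "vocal_clip_16k.wav")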

Improving the accuracy of model training is important because it should correlate directly with improved output audio quality, although this is not always the case. Qualitatively, varying aspects of the training dataset can affect output audio quality with relatively little change in the loss value. Our current results indicate that an increased amount of silence in the dataset corresponds to more noise in the output audio without worsening the loss value. Similarly, datasets with more restricted contents do not generalize as well to other types of audio as datasets with more diverse contents. For example, models trained on single guitar notes produce better-sounding outputs when given only guitar notes than when given audio samples of full songs. Figure 5 shows waveforms demonstrating this phenomenon. In the lower right plot one sees that the predicted output contains severe distortion covering essentially the entire waveform, indicating that models trained only on guitar sounds will not generalize well to predicting full-band songs.

Figure 5: Waveforms showing a guitar note (left column) and a portion of a full-band song (right column) for the comp4c compressor effect. Top row: Target audio. Middle row: Difference between target and predictions made by a model trained on songs. Bottom row: Difference between target and predictions of a model trained on guitar notes. Of the two lower rows, those on the main diagonal (also shown in red) are models whose training dataset was not of the same type as the prediction, whereas the others (shown in blue) reflect training datasets similar to the prediction. Note that the model trained on guitar notes fails to generalize to the full song (bottom right). Despite the appearance of the plots in the left column, the model trained on guitars (bottom, blue) actually predicts the guitar note with lower log-cosh error and less audible noise than the model trained on songs (middle, red). Audio examples are available at https://tinyurl.com/signaltrain-exploring.

In addition to the approaches already mentioned, we also investigated alternative loss functions, such as the log-cosh of the difference in output spectrogram values (as opposed to the time-domain values) and a log-SNR loss Mimilakis et al. (2020), but we were unable to observe improvements in accuracy compared to the baseline.
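As a rough sketch of what such a spectrogram-domain loss could look like (the exact formulation we tested is not reproduced here), one can compare STFT magnitudes of the prediction and target under the same log-cosh penalty; the FFT size and hop length below are arbitrary choices.

import math
import torch
import torch.nn.functional as F

def spectral_log_cosh_loss(pred: torch.Tensor, target: torch.Tensor,
                           n_fft: int = 1024, hop: int = 256) -> torch.Tensor:
    """Log-cosh of the difference between spectrogram magnitudes of the predicted
    and target audio; one plausible reading of the spectral-loss variant above.
    pred/target: (batch, samples) waveforms."""
    window = torch.hann_window(n_fft, device=pred.device)

    def mag(x):
        return torch.stft(x, n_fft=n_fft, hop_length=hop, window=window,
                          return_complex=True).abs()

    d = mag(pred) - mag(target)
    # Numerically stable log(cosh(d)) = d + softplus(-2d) - log 2
    return torch.mean(d + F.softplus(-2.0 * d) - math.log(2.0))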

III.3 Generalization: Extending to Various Audio Effects

Up to this point in the paper we have only considered improvements for modeling a compressor effect. The final area of interest was exploring the ability of SignalTrain to model various audio effects beyond compressors. Previous work focused mainly on digital compressors because of their hard-to-model nature, although nothing required that the modeled effect be a compressor, much less a digital compressor. The first step in expanding the model's generalizability was to include effects that differ significantly from compressors, and to extend further into the analog domain. To achieve this, the freely available AudioMDPI dataset Martinez Ramirez et al. (2020) was used. This dataset contains 2-second single guitar or bass notes from the IDMT-SMT-Audio-Effects dataset Stein (2010) as its dry/input samples. These dry samples were then processed separately through the horn and woofer of a Leslie cabinet speaker Hammond USA, and through a Universal Audio 6176 Vintage channel strip.

Table 3 presents the training results on these datasets over 1,000 epochs for the chorus and tremolo effects. It should be noted that the increased training times were due to an increase in the input audio buffer size, which was necessary due to the nature of the effects. Compressors, as an effect, do not introduce any oscillatory characteristics into the audio; Leslie cabinets, however, have rotating horns that oscillate at some set frequency. One may expect the chorus effects to be more computationally intensive for our model to reproduce than the tremolo effects, as the former have rotation periods approximately 8 times longer than the latter, requiring proportionately larger input buffer sizes to accommodate them. Initial experimentation demonstrated that input buffer sizes that did not contain a complete cycle of the horn could not model the effect with any accuracy. Increasing the input buffer size from 8192 to 98304 samples proved sufficient with these datasets.

Description | Training Time (Hours) | Validation Loss
Baseline (1,000 epochs) | 12.81 | 7.666e-6
Leslie Horn Chorus | 48.15 | 6.817e-5
Leslie Woofer Chorus | 51.01 | 4.893e-5
Leslie Horn Tremolo | 48.33 | 2.104e-4
Leslie Woofer Tremolo | 51.35 | 1.225e-4

Table 3: Comparison between our baseline model and models trained on various analog effects from a Leslie cabinet. Increased training times are due to the increased input chunk size needed to account for the oscillatory nature of these effects.
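To make the buffer-size requirement from the preceding discussion concrete, the snippet below computes the smallest chunk (rounded up to a convenient multiple) that contains one full rotor cycle; the rotation rates shown are assumed typical Leslie rotor speeds used purely for illustration, not measurements from this dataset.

import math

def min_chunk_for_rotation(rotation_hz: float, sample_rate: int = 44100,
                           multiple: int = 4096) -> int:
    """Smallest chunk size (rounded up to a multiple of `multiple`) that contains
    at least one full rotation of the horn/woofer. Rotation rates are assumptions."""
    samples_per_cycle = sample_rate / rotation_hz
    return multiple * math.ceil(samples_per_cycle / multiple)

# Illustrative (assumed) rotor speeds: slow "chorale" ~0.8 Hz, fast "tremolo" ~6.7 Hz
print(min_chunk_for_rotation(0.8))   # ~55125 samples/cycle -> 57344
print(min_chunk_for_rotation(6.7))   # ~6582 samples/cycle  -> 8192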

It should be noted that the validation loss results presented in Table 3 are the final loss values and not necessarily the lowest values achieved during training. The model experienced significant over-fitting during training of the Leslie effects, most likely due to the relatively small size of the dataset. The AudioMDPI dataset contains approximately 251 MB of data per effect, whereas the datasets used to model the digital compressor effects discussed previously average near 30 GB. This indicates that the SignalTrain model requires much larger datasets than AudioMDPI to achieve qualitatively good results, a characteristic that can be further quantified in future research.

IV Conclusion and Future Work

This paper presented the improvement and optimization work done to make the SignalTrain model more efficient, more accurate, and capable of producing higher quality audio. First, it described methods for improving the efficiency of the model, which would decrease overall training time. It was found that freezing the Fourier transform layers improves speed, but not enough to justify the large loss in model accuracy. It was also shown that the skip connections in the original model do not increase training time and significantly help the model achieve lower loss values. Next it presented attempts to increase the model's accuracy by allowing the model to train for significantly longer and by restricting the variability of the datasets used for training. It was shown that for runs substantially over 1,000 epochs the model does improve its loss value, but only marginally, and no noticeable improvement in output audio quality was observed. It was also shown that restricting datasets to single instruments improved model training significantly without increasing training time. However, models trained on single instruments failed to generalize to other types of audio and were only effective when given the instruments they were trained on. Finally, results on expanding the generalizability of the SignalTrain model were shown. Both analog (Leslie cabinet) and digital (HackAudio compressor) effects were modeled effectively. These effects also vary significantly in how they alter audio, showing that the SignalTrain model can learn more than just compressors. It was also indicated that different effects require specialized parameters to produce the best results.

There are many potential avenues for future work using the SignalTrain model and in the field of signal-processing effects modeling. Future research could include the continued optimization of the SignalTrain model, effect-specific optimization, and further integration of audio plugins within the model. Belmont University, with a large supply of trained listeners, provides an excellent opportunity and space for qualitative listening tests. Overall, there are many paths and opportunities for the audio machine learning field going forward.

Acknowledgements.
William Mitchell wishes to thank Scott H. Hawley for his help and mentorship, and Dr. Eric Tarr, Benjamin Coulburn and Stylianos Ioannis Mimilakis for their prior contributions to SignalTrain.

References

  • [1] U. Audio Teletronix LA-2A Classic Leveling Amplifier. Note: https://www.uaudio.com/hardware/la-2a.html Cited by: §II.
  • P. Chen, G. Chen, and S. Zhang (2019) Log hyperbolic cosine loss improves variational auto-encoder. Note: https://openreview.net/forum?id=rkglvsC9Ym Cited by: §II.
  • M. Defferrard (2017) FMA: a dataset for music analysis. Note: http://doi.org/10.5281/zenodo.1066119 Cited by: §II.
  • A. Défossez, N. Usunier, L. Bottou, and F. Bach (2019) Music source separation in the waveform domain. External Links: 1911.13254 Cited by: §I.
  • A. Défossez, N. Zeghidour, N. Usunier, L. Bottou, and F. Bach (2018) SING: symbol-to-instrument neural generator. External Links: 1810.09785 Cited by: §I.
  • R. C. Deo (2015) Machine learning in medicine. Circulation. External Links: Link Cited by: §I.
  • C. Donahue, J. McAuley, and M. Puckette (2018) Adversarial audio synthesis. External Links: 1802.04208 Cited by: §I.
  • M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal (2016) The importance of skip connections in biomedical image segmentation. External Links: 1608.04117 Cited by: §III.1.
  • J. Engel, K. K. Agrawal, S. Chen, I. Gulrajani, C. Donahue, and A. Roberts (2019) GANSynth: adversarial neural audio synthesis. External Links: 1902.08710 Cited by: §I.
  • J. Engel, C. Resnick, A. Roberts, S. Dieleman, D. Eck, K. Simonyan, and M. Norouzi (2017) Neural audio synthesis of musical notes with wavenet autoencoders. External Links: 1704.01279 Cited by: §II.
  • [11] Hammond USA Leslie classic cabinets. Note: http://hammondorganco.com/products/leslie/classic-cabinets/ Cited by: §III.3.
  • S. Hawley, B. Colburn, and S. I. Mimilakis (2019) Profiling audio compressors with deep neural networks. In Audio Engineering Society Convention 147, External Links: Link Cited by: Exploring Quality and Generalizability in Parameterized Neural Audio Effects, §I, Figure 1, §II, Figure 2, §III.1.
  • J. Hyatt and R. Berk (2015) Machine learning forecasts of risk to inform sentencing decisions. Federal Sentencing Reporter 27, pp. 222–228. External Links: Document Cited by: §I.
  • [14] Kemper GmbH Kemper Amps. Note: https://www.kemper-amps.com/profiler/overview Cited by: §I.
  • T. Lim, R. Yeh, Y. Xu, M. Do, and M. Hasegawa-Johnson (2018) Time-frequency networks for audio super-resolution. pp. 646–650. External Links: Document Cited by: §II.
  • F. Lluís, J. Pons, and X. Serra (2018) End-to-end music source separation: is it possible in the waveform domain?. External Links: 1810.12187 Cited by: §I.
  • X. Mao, C. Shen, and Y. Yang (2016) Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. External Links: 1603.09056 Cited by: §III.1.
  • A. Marafioti, N. Perraudin, N. Holighaus, and P. Majdak (2019) Adversarial generation of time-frequency features with application in audio synthesis. In Proceedings of the 36th International Conference on Machine Learning, K. Chaudhuri and R. Salakhutdinov (Eds.), Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 4352–4362. External Links: Link Cited by: §I.
  • M. Martinez Ramirez, E. Benetos, and J. Reiss (2020) Deep learning for black-box modeling of audio effects. Applied Sciences 10, pp. 638. External Links: Document Cited by: §II, §III.3.
  • M. Martinez Ramirez and J. Reiss (2018) End-to-end equalization with convolutional neural networks. Cited by: §I.
  • B. McFee, J. Kim, M. Cartwright, J. Salamon, R. Bittner, and J. Bello (2019) Open-source practices for music signal processing research: recommendations for transparent, sustainable, and reproducible audio research. IEEE Signal Processing Magazine 36, pp. 128–137. External Links: Document Cited by: §I.
  • S. I. Mimilakis, K. Drossos, and G. Schuller (2020) Unsupervised interpretable representation learning for singing voice separation. In Proceedings of the 27th European Signal Processing Conference (EUSIPCO 2020), Cited by: §III.2.
  • W. Mitchell (2020) SignalTrain concatenated dataset. Note: http://doi.org/10.5281/zenodo.3755245 External Links: Link Cited by: §II.
  • F. Roche, T. Hueber, S. Limier, and L. Girin (2018) Autoencoders for music sound modeling: a comparison of linear, shallow, deep, recurrent and variational models. External Links: 1806.04096 Cited by: §I.
  • O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. External Links: 1505.04597 Cited by: §II.
  • M. Stein (2010) IDMT-SMT-Audio-Effects. Note: https://www.idmt.fraunhofer.de/en/business_units/m2d/smt/audio_effects.html Cited by: §II, §III.3.
  • [27] F. A. Systems Axe-Fx II XL+ preamp/fx processor. Note: https://www.fractalaudio.com/p-axe-fx-ii-preamp-fx-processor Cited by: §I.
  • S. Winograd (1978) On computing the discrete fourier transform. Mathematics of Computation 32, pp. 175–199. Cited by: §II.