The reverberation effect is present in all real-life enclosures and provides a listener with cues that relate to the properties of the room. These cues are the cumulative result of many acoustic reflections, which human listeners use to infer properties of the room. Similarly, machines can learn models that enable them to infer properties of their auditory environments. Training data available for learning properties of the reverberation effect often takes the form of Acoustic Impulse Responses (AIRs), measured in real rooms as Finite Impulse Response (FIR) filters. These AIRs are high-dimensional, typically consisting of thousands of coefficients, and they are small in number, as their measurement is time-consuming and often impractical. This limits the training of classifiers based on Deep Neural Networks (DNNs). The motivation of this paper is to address this issue for the task of room classification, where a machine is trained to predict the room in which a speech recording was made. This finds applications in smart homes, providing machines with an understanding of the location of the user in the home, and also in forensics.
This paper presents a novel method of data augmentation for room classification from reverberant speech. The method starts from measured AIRs and uses Generative Adversarial Networks (GANs) to generate additional artificial ones. To do so, one GAN is trained for each of the rooms considered in the classification and is then used to generate many artificial AIRs. This is an alternative to measuring many more AIRs by moving the source and receiver to various positions in the same real room. Repeating the process for a number of rooms expands the available dataset without the need for any additional data collection. A challenge to overcome during training relates to the motivation for this work, namely the high dimensionality of AIRs. This is overcome by using a proposed low-dimensional representation for acoustic environments. The representation describes the sparse early reflections using previously estimated parameters and uses established acoustic parameters to represent the late reverberation. Creating a low-dimensional representation also allows for the evaluation of the generated responses and their distribution across a set of parameters relevant to the task. Evaluating the data generated by GANs is typically not straightforward, which is a drawback in their use. In this work, the generated samples consist of a small and semantically meaningful parameter set, which allows for easier evaluation of the results. In the experiments shown, the data augmentation method improves the accuracy of a previously proposed CNN-RNN room classifier. To illustrate the effectiveness of the proposed low-dimensional representation of AIRs, the experiments compare it with the use of the raw FIR taps. The AIRs generated by the GANs find uses beyond the data augmentation of classifiers, such as artificial reverberation.
The remainder of this paper is organised as follows: Section 2 discusses data augmentation for classification and Section 3 presents the proposed method for generating artificial AIRs for room classification training. The experiments in Section 4 present the results of the proposed method. Finally, Section 5 provides a discussion of the results of the experiments and a conclusion.
2 Data augmentation for classifier training
The supervised training of classifiers relies on the collection of labelled data, serving as the examples the classifier learns from. In previous work, DNNs were presented with a set of AIRs in order to learn to discriminate between different rooms. In realistic scenarios, it is impossible for the training data to cover every point in the corresponding physical space. This means that unseen data will be presented to the classifier during inference, when a speech recording is made at source and receiver positions that were not part of the original data collection. Substantially expanding the training data set for room classification, and for other tasks such as Sound Event Detection (SED), is in general challenging. However, methods exist for increasing the amount of available training data without the need for additional data collection. This process is referred to as data augmentation and uses the available training data to provide the classifier with class-invariant transformations of already seen examples. It aims to increase the accuracy of the classifier during inference by improving its generalisation.
The concept of data augmentation has been studied extensively in the literature in order to improve the accuracy of classifying audio signals. A very simple yet representative example of this concept is the task of detecting bird song. In this example, any segment of audio containing bird song would be a positive sample. Still, any mixture of two or more positive samples would also be positive. This simple mechanism allows for the expansion of the amount of available training data with a simple overlap of two existing recordings. This manual method for data augmentation does not involve a statistical model but simple logical reasoning and human understanding of the task. Other such methods discussed in the literature include time-stretching of segments, pitch shifting and dynamic range compression. A data augmentation method for audio data that does not rely on such manual processes is proposed in this paper. The focus is the classification of reverberant rooms and the method relies on generative models, able to generate additional artificial AIRs.
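The label-preserving overlap described above can be sketched in a few lines. This is a minimal illustration of the idea, not the implementation used in the cited bird-song work; the clip lengths and signals are placeholders:

```python
import numpy as np

def mix_positive(a, b):
    """Overlap two positive-class audio segments; the mixture still contains the event."""
    n = max(len(a), len(b))
    out = np.zeros(n)
    out[:len(a)] += a
    out[:len(b)] += b
    return out

rng = np.random.default_rng(0)
clip_a = rng.standard_normal(16000)   # stands in for a 1 s positive clip at 16 kHz
clip_b = rng.standard_normal(8000)    # a second, shorter positive clip
augmented = mix_positive(clip_a, clip_b)
```

Because both inputs carry the positive label, the mixture can be added to the training set with the same label at no collection cost.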
The next Section discusses how DNNs are used to estimate generative models for different categories of reverberant rooms.
3 Generative model estimation for reverberant rooms
The above Sections have discussed the motivation for estimating generative models that allow the generation of artificial AIRs corresponding to real reverberant rooms and how this can improve the generalisation of room classifiers.
3.1 Estimation method
A generative model represents the joint probability distribution of the data, in contrast to classification DNNs, which estimate the posterior. Recent advancements in deep learning have led to the proposal of alternatives to traditional methods for the estimation of parametric model distributions. The two dominant methods in the modern literature are GANs and Variational Autoencoders (VAEs). Both follow a similar formulation that uses back-propagation to train network layers, which estimate the generative model by filtering noise drawn from a known prior. In the literature review conducted for this work, GANs were found to be widely adopted in the field of audio processing across tasks such as SED, speech recognition, speech enhancement and dereverberation [17, 18]. Furthermore, variants of the original GAN exist, such as Conditional GANs and DualGANs, which can be adapted in the future to extend the method proposed in this work. GANs are therefore chosen as the estimation mechanism for the generative models in this paper.
3.2 GAN training
GANs are composed of two networks that are posed as adversaries, playing the roles of the generator and the discriminator. The task of the discriminator is to judge whether a given sample comes from the original data distribution or not. The task of the generator, on the other hand, is to fool the discriminator into believing that the data samples it produces originate from the original data distribution.
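This adversarial game is commonly formalised as the two-player minimax objective of the original GAN formulation, with data samples x drawn from the data distribution and prior samples z:

```latex
\min_{G} \max_{D} \; V(D, G) =
  \mathbb{E}_{\mathbf{x} \sim p_{\mathrm{data}}}\!\left[\log D(\mathbf{x})\right]
+ \mathbb{E}_{\mathbf{z} \sim p_{\mathbf{z}}}\!\left[\log\!\left(1 - D(G(\mathbf{z}))\right)\right]
```

The discriminator D is trained to assign high probability to measured samples and low probability to generated ones, while G is trained to minimise the second term.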
The generator's input z represents a random vector variable, drawn from a known prior distribution.
The networks used in this work as the generator and discriminator of the GANs are shown in Figure 1. A simple DNN architecture is used, composed only of Feed-Forward (FF) layers. More complex architectures can be constructed that include convolutional and recurrent layers; the investigation of the benefits of using other types of layers, as well as techniques such as dropout, is reserved for future work. Given the small size of the network, LeakyReLU activations are used instead of standard Rectified Linear Units, to counter issues that result from learning from gradients when the ReLU activations are 0. Batch normalisation is used to improve the training, by normalising the mean and standard deviation of the activations. The networks are trained using back-propagation with Adam as the optimiser. Inputs to the network are scaled to lie within the range 0 to 1 and the outputs are denormalised to restore the original scales, with one normaliser-denormaliser pair designed for each input-output neuron pair. The training is run for a total of 6000 epochs. For reasons relating to training stability discussed in the literature, additive White Gaussian Noise (WGN) is added to the inputs of the discriminator.
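The normaliser-denormaliser pairs can be sketched as follows. A simple per-feature min-max scaling to [0, 1] is assumed here for illustration; the exact scaling used in the paper is one pair per input-output neuron:

```python
import numpy as np

def fit_minmax(X, eps=1e-12):
    """Fit one normaliser-denormaliser pair per feature (column) of X."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    scale = np.maximum(hi - lo, eps)   # guard against constant features
    normalise = lambda V: (V - lo) / scale
    denormalise = lambda V: V * scale + lo
    return normalise, denormalise

X = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
norm, denorm = fit_minmax(X)
X_scaled = norm(X)   # every feature now lies in [0, 1]
```

The denormaliser restores generated outputs to the original parameter scales before they are used downstream.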
As part of the data augmentation process proposed in this paper, a GAN is trained on a set of AIRs measured in a specific room and learns how to create new AIRs as if they were measured in the same room. Therefore, one GAN is trained for each room considered. The GAN's discriminator uses measured AIRs to learn what a real AIR looks like and the generator uses them to learn how to imitate them and create fake ones. The rest of this Section investigates two choices for the way that AIRs are presented to and output by the networks.
3.3 GANs using the taps of FIR filters
To enable the discriminator and generator of the GANs to respectively identify and generate realistic data, measured AIRs are presented to them during training. The simplest way to present AIRs to the networks is as the taps of FIR filters. The taps represent the sound pressure at the position of a receiver placed in the room, with the room excited by a source placed within its boundaries. This is the raw format in which AIRs are typically measured and distributed. An AIR measurement as an FIR filter is represented as a column vector h whose elements are the taps of the filter. The ideal discriminator's behaviour, in this case, is D(h) = 1 for AIRs measured in real rooms. The ideal generator's behaviour is to produce samples G(z) such that D(G(z)) = 1, where z is drawn from the prior.
3.4 GANs using a low-dimensional representation
An alternative to processing and generating AIRs as FIR filters is proposed in this paper. Describing the AIR as an FIR filter is a typical choice but leads to a sequence of thousands of taps to be processed by algorithms. An alternative low-dimensional representation of the acoustic environment leads to fewer parameters to be processed, potentially improving the efficiency and effectiveness of training. The proposed representation combines the early-reflection parameters, estimated using a previously proposed method, with a set of established parameters describing late reverberation to represent the training AIRs. With the training AIRs represented in this space, the trained GANs learn to model the distribution of each of the parameters, instead of modelling individually the thousands of FIR taps. Furthermore, generated AIRs will be in this low-dimensional space, reducing the complexity of the generator and discriminator. FIR filters that represent the generated AIRs can then be constructed from this low-dimensional representation.
The rest of this Section describes how AIRs as FIR filters are used to estimate the parameters of the proposed low-dimensional representation. Also, the inverse process is described, which uses the low-dimensional representation to construct FIR filters.
3.4.1 Proposed low-dimensional representation
The aim of the data augmentation method proposed in this paper is to improve the generalisation of room classifiers during inference. Additional examples are generated that are class-invariant transformations of the available training data. The task of room classification is to identify a known room from recordings made at unknown source and receiver positions. With the transformation being class-invariant, the available training AIRs from a room are used to artificially generate AIRs at new source and receiver positions in the same room. As highlighted in previous work, early reflections have a strong and distinct structure in AIRs, which is highly related to these positions. Therefore, manipulating the structure of early reflections corresponds to a manipulation of these positions. Using a parametric representation of the early reflections, this paper uses GANs to learn, from data measured within a room, the distribution of reflection parameters, which enables the generation of many artificial responses. These responses are generated as if they were measured in the same room but with varying source and receiver positions.
A method was previously proposed for estimating the Times-of-Arrival (ToAs) and scales of early reflections in an AIR, along with the excitation signal that was used to measure it. Modelling early reflections in this manner exploits knowledge about their sparse nature and enables the reconstruction of the original FIR taps as

h_e(n) = Σ_k β_k s(n − n_k),   (1)

where n_k and β_k are respectively the ToA and scale of the k-th reflection and s(n) is the excitation.
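A minimal sketch of this sparse early-reflection reconstruction follows: scaled copies of the excitation are placed at the reflection ToAs. The variable names and the impulse-like excitation below are illustrative assumptions, not those of the original method:

```python
import numpy as np

def early_part(toas_s, scales, excitation, length, fs):
    """Place scaled copies of the excitation at the reflection ToAs (in seconds)."""
    h = np.zeros(length)
    for toa, scale in zip(toas_s, scales):
        n0 = int(round(toa * fs))          # ToA in samples
        seg = excitation[:length - n0]
        h[n0:n0 + len(seg)] += scale * seg
    return h

fs = 16000
excitation = np.r_[1.0, np.zeros(9)]       # idealised impulse-like excitation
h_early = early_part([0.002, 0.005], [0.8, 0.4], excitation, length=200, fs=fs)
```

The sparsity of the result (a handful of non-zero taps) is what the low-dimensional representation exploits.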
The model can be complemented to represent the entire AIR. It was previously shown that, after a mixing point, defined as the tap with index n_m, replacing the original taps of AIRs with their reconstruction from a stochastic model leads to perceptually indistinguishable results. To construct a low-dimensional representation of the entire AIR, this paper combines (1) with a stochastic model for late reverberation that is based on Polack's model. Polack's model is described as

h_t(n) = b(n) e^(−n T_s / τ),   (2)
where T_s is the sampling period, T_60 is the reverberation time of the room and τ = T_60 / (3 ln 10). The expression shows a WGN process b(n), enveloped by an exponential decay term. This resembles the decaying sound level in a reverberant room after diffusion, which is commonly referred to as the reverberant tail of an AIR. The stochastic model used in this work is based on (2) and includes terms that account for the difference in the decay rates of sound energy at different frequencies. This is done by filtering the tail signal with an Infinite Impulse Response (IIR) filter, with numerator and denominator coefficients b and a respectively, estimated from the tail of the original AIR. Previous analysis of the zeros and poles at receiver positions in reverberant rooms has shown that both represent properties of the environment: poles describe properties of the enclosure, whereas zeros vary with position. An IIR filter with both zeros and poles is therefore designed to convey these properties to the reverberant tail. The IIR coefficients are applied to Polack's model to give the filtered reverberant tail.
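Polack's model of (2) can be sketched numerically. The decay constant below uses the relation τ = T_60 / (3 ln 10), under which the envelope drops by exactly 60 dB at t = T_60:

```python
import numpy as np

def polack_tail(t60, fs, length, rng):
    """WGN enveloped by an exponential decay reaching -60 dB at t = T60."""
    n = np.arange(length)
    tau = t60 / (3.0 * np.log(10.0))            # amplitude decay constant
    envelope = np.exp(-n / (fs * tau))
    return rng.standard_normal(length) * envelope, envelope

fs, t60 = 16000, 0.5
tail, envelope = polack_tail(t60, fs, int(2.1 * fs), np.random.default_rng(1))
level_at_t60_db = 20 * np.log10(envelope[int(t60 * fs)])
```

The frequency-dependent decay described in the text would then be imposed by passing this tail through the estimated IIR filter.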
A cross-fading mechanism is used to avoid abrupt discontinuities at sample n_m, where the early reflections are mixed with the stochastic model. The mechanism is applied to the tail, allowing it to fade in to a maximum of unity at n_m with symmetric values around it, giving the late reverberation model h_l(n).
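One possible realisation of such a cross-fade is a raised-cosine fade-in that reaches unity at the mixing sample. This particular window shape is an assumption for illustration, not necessarily the one used in the paper:

```python
import numpy as np

def fade_in(length, n_mix, width):
    """Raised-cosine fade-in: 0 before (n_mix - width), 1 at and after n_mix."""
    n = np.arange(length)
    start = n_mix - width
    ramp = 0.5 * (1.0 - np.cos(np.pi * (n - start) / width))
    return np.where(n < start, 0.0, np.where(n <= n_mix, ramp, 1.0))

w = fade_in(length=1000, n_mix=400, width=200)   # multiplies the stochastic tail
```

Multiplying the stochastic tail by such a window removes the discontinuity at the mixing point while leaving the late tail unchanged.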
Early reflections are described by (1) and the direct sound by

h_d(n) = β_0 s(n − n_0),

with n_0 and β_0 the ToA and scale of the direct sound. The early reflections and the late reverberation model are scaled according to Direct-to-Reverberant Ratio (DRR) values, which measure the energy ratio between the direct sound and the early reflections, and between the direct sound and the reverberant tail, in the original AIR; the same ratios are imposed on its reconstruction. The complete model, reconstructing the taps of the FIR filter representation of the AIR, is the sum of the direct sound, the scaled early reflections and the scaled late reverberation model.
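The DRR scaling can be sketched as follows: compute the energy ratio in dB and rescale the reverberant component so its ratio to the direct sound matches a target value. The function names are illustrative:

```python
import numpy as np

def drr_db(direct, reverberant):
    """Energy ratio between direct and reverberant components, in dB."""
    return 10.0 * np.log10(np.sum(direct**2) / np.sum(reverberant**2))

def impose_drr(direct, reverberant, target_db):
    """Scale the reverberant part so the pair reaches the target DRR."""
    gain = 10.0 ** ((drr_db(direct, reverberant) - target_db) / 20.0)
    return reverberant * gain

direct = np.r_[1.0, np.zeros(99)]                        # idealised direct path
tail = 0.1 * np.random.default_rng(2).standard_normal(100)
tail_scaled = impose_drr(direct, tail, target_db=6.0)
```

Applying this once for the early reflections and once for the tail imposes the two measured ratios on the reconstruction.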
This paper proposes the use of a low-dimensional representation of AIRs to train GANs, based on the formulation presented above. The parameters forming the representation are estimated from the original AIR taps and are able to reconstruct them. The parameters are the ToAs and scales of the direct sound and the early reflections, the reverberation time, the two DRR values and the coefficients of the IIR filter.
All of the above are defined as column vectors. One fixed-length column vector is used to represent each AIR; it is created by concatenating the above parameters and is used for training the GANs. Padding is used to account for the fact that the number of early reflections detected in each AIR varies.
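The fixed-length vector can be sketched as a concatenation with zero-padding for the variable number of reflections. The field ordering and names here are illustrative assumptions, not the exact layout of the paper:

```python
import numpy as np

def encode_air(toas, scales, t60, drr_early, drr_tail, iir_b, iir_a, max_refl):
    """Concatenate the representation into one fixed-length vector,
    zero-padding ToAs and scales up to max_refl entries."""
    pad = lambda v: np.pad(np.asarray(v, dtype=float), (0, max_refl - len(v)))
    return np.concatenate([pad(toas), pad(scales),
                           [t60, drr_early, drr_tail], iir_b, iir_a])

vec = encode_air(toas=[0.004, 0.007], scales=[0.5, 0.3], t60=0.5,
                 drr_early=6.0, drr_tail=8.0,
                 iir_b=np.ones(4), iir_a=np.ones(4), max_refl=20)
```

Every AIR, regardless of how many reflections it contains, maps to a vector of the same length, which is what the fixed-size GAN input and output layers require.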
The excitation signal in (1) accounts for the non-idealities of the method and equipment used to measure the AIR, and a method for its estimation has been previously proposed. Modelling the excitation is outside the scope of this work and it is therefore not used for training the GANs. Including the excitation signal in the construction of the FIR filter introduces the non-idealities of the measurement method into the constructed taps; however, this is useful for creating AIRs that resemble real-life measurements. For the data generated for augmentation in later experiments, the excitation used for the construction of an artificial AIR is replicated from a randomly chosen training AIR. Skipping this step and using a unit impulse as the excitation would represent the use of a source and receiver system with a linear response and infinite bandwidth, which is unrealistic.
The IIR filter with coefficients b and a involves zeros and poles. It conveys information about resonances and the spectrum of reverberant speech recorded in the room. Any poles of the filter that lie outside the unit circle lead to an unstable system. To prevent this, a zero-pole analysis is performed on the generated values of b and a, and any poles outside the unit circle are removed. With the focus being on low-dimensional representations, small filter orders are chosen; the order selection of IIR models for AIRs has been discussed in the literature.
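The stabilisation step can be sketched with a zero-pole analysis of the denominator: factor it, drop any poles outside the unit circle, and rebuild the polynomial from the remaining poles:

```python
import numpy as np

def stabilise_denominator(a):
    """Remove poles outside the unit circle from denominator coefficients a
    (monic, i.e. a[0] == 1) and rebuild the polynomial from the kept poles."""
    poles = np.roots(a)
    kept = poles[np.abs(poles) <= 1.0]
    return np.real_if_close(np.poly(kept))

a = np.poly([0.9, 1.5])          # one stable pole (0.9), one unstable (1.5)
a_stable = stabilise_denominator(a)
```

Dropping a pole changes the filter response, but it guarantees that every generated tail filter is stable before it is applied to Polack's model.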
The overall process that is described in this Section is summarised in Figure 2. The following Sections present experiments that evaluate the effectiveness of generating AIRs for data augmentation for the training of room classifiers. The experiments will first present and analyse the generated AIRs. Their usefulness will be measured later in terms of the gains in accuracy provided for the task of room classification.
4 Experiments
This paper proposes a novel method of data augmentation for the training of DNN room classifiers. The method relies on the training of GANs, which are used to generate artificial AIRs that increase the training data available to the classifiers. The experiments described in this Section illustrate the data generated by the GANs, and room classification experiments evaluate their efficacy for data augmentation. The proposed method relies on a low-dimensional representation of AIRs; to highlight its usefulness, it is compared with the use of the raw taps of FIR filters for the training of GANs.
The dataset used to train the GANs is the set of AIRs provided with the Acoustic Characterization of Environments (ACE) challenge database. A total of 658 responses are used to train the GANs, split evenly across 7 rooms. The AIRs are padded to a duration of 2.1 s, the length of the longest AIR in the training dataset. All data is downsampled to 16 kHz.
4.1 AIR generation
In this work, the generation of AIRs is based on training one GAN for each of the 7 rooms in the training database. Therefore, 7 GANs are trained and each one is used to generate a number of AIRs as if they were measured in the corresponding room. The above process is repeated twice, changing between runs the representation of the AIRs passed to the GANs during training. The two representations considered are the raw FIR taps and the parameters of the low-dimensional representation proposed in Section 3.4.1. The number of parameters composing the discriminator and generator of the GANs in each of the two cases is given in Table 1. In Figure 3, artificial responses generated by GANs trained using the two representations are visualised along with responses measured in the real rooms.
One of the critically discussed issues in the literature with regard to the training of GANs is the lack of established methods for evaluating the generated data. The data can always be evaluated for its usefulness for data augmentation in terms of the increase in classification accuracy, but a way to evaluate the generated samples directly avoids unnecessary classifier training time. Indeed, in this work it is also of interest to evaluate how realistic the generated responses are before using them to train a complex room classifier. Realism is difficult to quantify or even precisely define. However, using the proposed representation as the space for the GANs enables the inspection of semantically meaningful properties of the generated environments by humans. For instance, when training a GAN to learn how to generate responses from a meeting room, the visualisations of Figure 4 can be directly made. The Figure shows how the zeros of the IIR filters and other model parameters are distributed for the data generated by the GAN. The distributions are shown at two stages during training: at epoch 80 and then much later at epoch 960. Observing the plots shows that the distribution of the parameters starts from a near-random state and then becomes very similar to that of the data measured in the real room. This is positive evidence that the distribution of these parameters is realistic.
Using the raw FIR taps of AIRs directly to train a GAN provides no semantically meaningful information about the acoustic environment, in contrast to using the proposed representation. The generated waveforms of Figure 3(a) are the only evidence for the quality of the results and they are inspected directly to analyse them. Looking at the real AIRs, given by the blue lines in the Figure, shows that they are composed of a direct path, sparse reflections in the early part and a decaying envelope. The same does not apply to the generated AIRs, which resort to tracking the overall energy envelope as an approximation of the overall shape. The discriminator is therefore fooled by a simple imitation of the energy envelope of the inputs into believing that they are real AIRs. In reality, the generated responses fail to capture the sparsity of the early part. The opposite is true for the case of training GANs using the proposed representation, where the sparse nature of the early part is well captured and so is the tail, as shown in Figure 3(b).
Probing into the training process reveals important information that explains the above observations. Figure 6 shows the accuracy of 4 discriminators, covering the cases of training a GAN for 2 rooms using each of the 2 representations. In the case of training GANs using FIR taps, the discriminator becomes unable to discriminate between real and fake samples after a small number of epochs. The accuracy plateaus at 50%, which indicates that the discriminator is making a random decision between real and fake. A weak discriminator cannot yield a stronger generator, as Nash equilibrium is reached at this point. The opposite is true for the case of using the proposed representation, where the discriminator's accuracy increases almost continuously and reaches values as high as 90%. The failure in the FIR-tap case is attributed to the high dimensionality of the raw AIRs. This causes the GAN to scale up to more than 17 million parameters, and the small amount of training data does not allow for the training of the even larger networks that would result from adding more layers. This brings the work back to its original motivation, namely high dimensionality and the lack of large amounts of data, and reinforces the need for low-dimensional and informative representations of AIRs, such as the one proposed in this paper.
The experiments above have discussed how GANs are trained to generate AIRs for a set of rooms. One GAN is trained for each of the 7 rooms in the training data, and each network then generates a set of artificial responses as if they were measured in the real room. The following experiments show how the generated responses are used as a data augmentation dataset, tackling the limited availability of training data in order to improve the accuracy of a DNN room classifier.
4.2 Data augmentation for DNN room classifiers
Room classification using state-of-the-art classifiers was previously investigated, with DNNs used to classify a reverberant speech signal in terms of the room it was recorded in. The investigation showed that the limited availability of AIRs and their high dimensionality limit the performance of classifiers. This paper proposes a novel method that increases the availability of training data in the form of AIRs, with the aim of increasing the accuracy of DNN classifiers. The experiments above have shown how GANs are used to create artificial AIRs, given a set of real ones. These AIRs are used as part of the proposed data augmentation method.
4.2.1 Classifier DNN
The DNN classifier used in this experiment was previously proposed for the task of room classification. The model is a CNN-RNN and it is shown in Figure 6. The training process for the classifier network follows the original proposal.
The network is trained and evaluated in 3 configurations. The baseline configuration involves no data augmentation and uses only measured AIRs from the ACE database for training. The second configuration performs data augmentation using artificial AIRs generated by GANs trained on the raw FIR taps of AIRs. Finally, in the last configuration, data augmentation is done using the method proposed in this paper. Each configuration is evaluated based on its accuracy on a test set, which is discussed below. The training of classifiers in each of the 3 configurations is repeated 16 times on 16 different machines to average out the effect of different initialisations. The training and test data were the same across machines. The hardware used was NVIDIA Tesla K80 GPUs.
4.2.2 Training and test data
The ACE database AIRs are used for this experiment. They are segmented to reserve a test set prior to training. The ACE database consists of a total of 700 AIRs, recorded in 7 rooms. The 42 AIRs recorded using the Mobile microphone array are reserved for testing. The remaining 658 AIRs, 94 per room, are all used to train the relevant GANs and to artificially create further AIRs for data augmentation. The proposed data augmentation method involves the training of 1 GAN for each of the 7 rooms. Each GAN is trained using 94 AIRs and is used to generate an additional 100 AIRs. This results in an additional 700 AIRs in total, doubling the size of the ACE database. The measured training AIRs, along with the artificially created ones, form the training set in the proposed method.
The experiment investigates the classification of reverberant speech in terms of the room where the recording took place. All training AIRs are therefore convolved with 20 speech utterances each, of length 5 s, taken from the TIMIT database, following the previously used process. Training and test speech and speaker sets are separated and are not mixed, and each utterance contains only one speaker. Convolving new speech samples with the data augmentation AIRs would introduce an additional variable in the comparison of the results. To avoid this, the exact same speech utterances convolved with the measured AIRs for each room are convolved with the data augmentation AIRs for the corresponding rooms. The 42 test AIRs are convolved with 10 utterances each, again of length 5 s. The test and training reverberant speech are consistent throughout all the experiments and the only variable is the addition of the data augmentation AIRs. All data is sampled at 16 kHz.
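The construction of a reverberant training example is a convolution of an anechoic utterance with an AIR, sketched here with placeholder signals in place of TIMIT speech and measured responses:

```python
import numpy as np

def reverberant_speech(speech, air):
    """Simulate a room recording by convolving speech with the AIR."""
    return np.convolve(speech, air)

rng = np.random.default_rng(3)
fs = 16000
utterance = rng.standard_normal(5 * fs)                   # 5 s placeholder utterance
air = rng.standard_normal(1000) * np.exp(-np.arange(1000) / 200.0)
observed = reverberant_speech(utterance, air)             # length: 5*fs + len(air) - 1
```

Reusing the same utterances for the measured and the generated AIRs, as described above, keeps the AIR set as the only variable between configurations.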
The segmentation of the available data into test and training sets is discussed above. Neither the speaker, the speaker's position, nor the microphone array used to construct the test data were presented to the classifier during training. The artificial AIRs were generated as if they were measured in the same rooms as the training data but at different positions. The addition of these AIRs generated by the GANs aims to improve the classification test accuracy. Data augmentation performed in this fashion provides class-invariant transformations of the training data to the classifier. Therefore, this experiment evaluates the improvement in the generalisation of the classifier offered by the proposed data augmentation method.
The results of evaluating the proposed method are shown in Figure 7, in terms of the classification accuracy on the test set. The results show that the proposed method outperforms the baseline in all runs. The median accuracy of the baseline is 89.4%, that of the proposed method is 95.5% and that of the AIR-tap-based method is 87.15%. The proposed method therefore increases the accuracy of the room classifier. The increased accuracy is not attributed to an increase in speech data, as the exact same speech samples were used in all 3 cases. The use of the high-dimensional raw AIR taps proved even less effective than the baseline, with the trained GANs involving a total of around 17 million parameters; the proposed representation increased the classification accuracy while using only 0.29 million parameters.
The baseline training loss, with no data augmentation, shows that after 10 epochs the model starts to overfit: the training loss starts to substantially decrease while the validation loss increases. When using the proposed data augmentation, however, the validation loss continues to follow a decreasing trend for longer and the training loss decreases approximately monotonically. The data augmentation method therefore improves generalisation by providing a meaningful and realistic interpolation of the available AIRs in a low-dimensional manifold of the reverberation effect.
The presentation of the results of the experiments concludes this Section. The final Section will review the contributions of this paper and provide a conclusion.
5 Conclusion
This paper has proposed a novel method for data augmentation for the training of DNN room classifiers. The proposed method relies on the training of GANs, using AIRs in a proposed low-dimensional representation. The representation combines parameters of the early reflections and established parameters for late reverberation. The GANs are used to create artificial AIRs from a set of known rooms. The proposed method enabled GANs to generate artificial responses with realistic features, able to capture the sparse properties of the early reflections and the decaying tail. In the experiments presented, the proposed method increased the accuracy of a CNN-RNN room classifier from 89.4% to 95.5%, when compared to the case of using no data augmentation.
The training of GANs as proposed in this work uses AIRs measured in rooms in order to create a number of artificial but realistic AIRs. This process finds applications beyond room classification. Artificial reverberation applications  can benefit from such approaches, where a number of artificial environments with specific properties can be created by training GANs using a specific modality of acoustic environments. For instance, providing a GAN with enough AIRs from many concert halls will enable it to learn to generate many more artificial AIRs from many artificial concert halls. The possibilities for such methods are numerous.
-  I. Dokmanic, Y. Lu, and M. Vetterli, “Can one hear the shape of a room: The 2-D polygonal case,” in Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 321–324, May 2011.
-  C. Papayiannis, C. Evers, and P. A. Naylor, “Discriminative feature domains for reverberant acoustic environments,” in Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP), (New Orleans, Louisiana, USA), pp. 756–760, Mar. 2017.
-  A. Farina, “Simultaneous measurement of impulse response and distortion with a swept-sine technique,” in Proc. Audio Eng. Soc. (AES) Convention, pp. 1–23, Feb. 2000.
-  C. Papayiannis, C. Evers, and P. A. Naylor, “End-to-End Classification of Reverberant Rooms using DNNs,” arXiv preprint arXiv:1812.09324, 2018.
-  A. H. Moore, M. Brookes, and P. A. Naylor, “Room identification using roomprints,” in Proc. Audio Eng. Soc. (AES) Conf. on Audio Forensics, June 2014.
-  C. Papayiannis, C. Evers, and P. A. Naylor, “Sparse Parametric Modeling of the Early Part of Acoustic Impulse Responses,” in Proc. European Signal Processing Conf. (EUSIPCO), (Kos, Greece), pp. 708–712, Aug. 2017.
-  T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved Techniques for Training GANs,” arXiv preprint arXiv:1606.03498, 2016.
-  V. Välimäki, J. D. Parker, L. Savioja, J. O. Smith, and J. S. Abel, “Fifty years of artificial reverberation,” IEEE Trans. Audio, Speech, Lang. Process., vol. 20, pp. 1421–1448, July 2012.
-  J. Salamon and J. P. Bello, “Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification,” arXiv preprint arXiv:1608.04363, 2016.
-  I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
-  N. Takahashi, M. Gygli, B. Pfister, and L. V. Gool, “Deep Convolutional Neural Networks and Data Augmentation for Acoustic Event Detection,” arXiv preprint arXiv:1604.07160, 2016.
-  G. Parascandolo, H. Huttunen, and T. Virtanen, “Recurrent Neural Networks for Polyphonic Sound Event Detection in Real Life Recordings,” arXiv preprint arXiv:1604.00861, 2016.
-  J. Schlüter and T. Grill, “Exploring Data Augmentation for Improved Singing Voice Detection with Neural Networks,” in Intern. Soc. for Music Information Retrieval Conf. (ISMIR), (Malaga, Spain), pp. 121–126, Oct. 2015.
-  S. Mun, S. Park, D. Han, and H. Ko, “Generative Adversarial Network Based Acoustic Scene Training Set Augmentation and Selection Using SVM Hyper-Plane,” tech. rep., DCASE2017 Challenge, Sept. 2017.
-  A. Sriram, H. Jun, Y. Gaur, and S. Satheesh, “Robust Speech Recognition Using Generative Adversarial Networks,” arXiv preprint arXiv:1711.01567, 2017.
-  C. Donahue, B. Li, and R. Prabhavalkar, “Exploring Speech Enhancement with Generative Adversarial Networks for Robust Speech Recognition,” arXiv preprint arXiv:1711.05747, 2017.
-  C. Li, T. Wang, S. Xu, and B. Xu, “Single-channel Speech Dereverberation via Generative Adversarial Training,” arXiv e-prints, June 2018.
-  K. Wang, J. Zhang, S. Sun, Y. Wang, F. Xiang, and L. Xie, “Investigating Generative Adversarial Networks based Speech Dereverberation for Robust Speech Recognition,” arXiv preprint arXiv:1803.10132, 2018.
-  I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Networks,” arXiv preprint arXiv:1406.2661, 2014.
-  M. Mirza and S. Osindero, “Conditional Generative Adversarial Nets,” arXiv preprint arXiv:1411.1784, 2014.
-  Z. Yi, H. Zhang, P. Tan, and M. Gong, “DualGAN: Unsupervised Dual Learning for Image-to-Image Translation,” arXiv preprint arXiv:1704.02510, 2017.
-  S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” arXiv preprint arXiv:1502.03167, 2015.
-  D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  M. Arjovsky and L. Bottou, “Towards Principled Methods for Training Generative Adversarial Networks,” arXiv e-prints, Jan. 2017.
-  J. Eaton, N. D. Gaubitch, A. H. Moore, and P. A. Naylor, eds., Proceedings of the ACE Challenge Workshop, a satellite event of IEEE WASPAA, New Paltz, NY, USA, Oct. 2015.
-  A. Lindau, L. Kosanke, and S. Weinzierl, “Perceptual evaluation of model and signal-based predictors of the mixing time in binaural room impulse responses,” J. Audio Eng. Soc. (AES), vol. 60, pp. 887–898, Dec. 2012.
-  P. A. Naylor and N. D. Gaubitch, eds., Speech Dereverberation. Springer, 2010.
-  H. Kuttruff, Room Acoustics. London: CRC Press, 5th ed., 2009.
-  Y. Haneda, S. Makino, and Y. Kaneda, “Common acoustical pole and zero modeling of room transfer functions,” IEEE Trans. Speech Audio Process., vol. 2, no. 2, pp. 320–328, 1994.
-  T. W. Parks and C. S. Burrus, Digital Filter Design. Wiley, 1987.
-  M. Karjalainen, P. Antsalo, A. Mäkivirta, T. Peltonen, and V. Välimäki, “Estimation of modal decay parameters from noisy response measurements,” J. Audio Eng. Soc. (AES), vol. 50, no. 11, pp. 867–878, Nov. 2002.
-  C. Evers and J. Hopgood, “Multichannel online blind speech dereverberation with marginalization of static observation parameters in a Rao-Blackwellized particle filter,” J. of Signal Processing Systems, vol. 63, no. 3, pp. 315–332, 2011.