Environmental sound classification (ESC) has become a topic of great interest in signal processing due to its wide range of applications. Although many previous studies have shown promising results on ESC [1, 2, 3], largely through the introduction of deep learning methods, ESC still faces several challenges. First, different studies have identified combinations of feature extraction methods and neural network designs that work best for individual datasets [2, 4], but these combinations have failed to generalize well across different ESC tasks. Second, environmental sounds often have highly variable temporal characteristics (e.g., short duration for water drops but longer duration for sea waves). An ESC model therefore needs to isolate the features that are meaningful for classification within the acoustic signal instead of overfitting to the background sound. To address these problems we propose a multi-stream neural network that uses only the most fundamental audio representations (waveform, short-term Fourier transform (STFT), and spectral features) as inputs while relying on convolutional neural networks (CNNs) for feature learning.
In order to localize class-differentiating features in highly variable sound signals, we propose a temporal attention mechanism for CNNs that applies to all input streams. Compared with attention used for tasks such as neural machine translation, our proposed attention mechanism works synchronously with the CNN layers during feature learning. To handle signals of variable lengths with our fixed-dimensional CNN architecture, we propose a decision fusion strategy with uncertainty. We tested our model on three published datasets that vary in the number of classes (10 to 50) and audio signal length (from 5 seconds to 30 seconds). Our system meets or surpasses the state of the art on all sets without any changes in model architecture or feature extraction method. We include an ablation study that highlights the relative importance of each system component. The rest of this paper is structured as follows: In Section 2 we introduce previous related work. We describe our system in Section 3 and experimental results in Section 4. Section 5 provides an analysis of the attention function. Section 6 concludes.
2 Related Work
Multiple loss functions have also been used for the detection of rare sound events, and an ensemble network based on two input streams was proposed very recently.
3.1 Multi-Stream Network
We introduce a three-stream network that takes the raw audio waveform, short-term Fourier transform (STFT) coefficients, and delta spectrogram as inputs (Figure 1). The waveform carries both magnitude and phase information represented in the time domain. We first chunk the audio waveform into non-overlapping segments of 3.84 s and then calculate the STFT spectrogram for each segment. Different resolutions of the STFT highlight different details in the spectrogram, emphasizing either frequency detail or temporal detail. We choose a set of three resolutions corresponding to 32, 128 and 1024 FFT points with a hop length of 10 ms. The generated STFTs are scaled to the same dimensions and stacked together as the STFT set features. We further calculate delta features with a window size of 5 from each STFT layer and stack them to form a feature vector with the same dimensionality as the STFT set. We restrict ourselves to these basic audio representations instead of higher-level features since previous work did not show a significant benefit of manually engineered features.
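The multi-resolution STFT set and its delta features can be sketched as follows. This is a minimal numpy illustration, not the authors' code: `stft_mag`, `delta`, and `resize` are toy helpers, and nearest-neighbour rescaling to the smallest grid is an assumption (the paper does not specify the common dimension).

```python
import numpy as np

def stft_mag(x, n_fft, hop):
    """Magnitude STFT via simple framing + FFT."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T        # (freq, time)

def delta(feat, width=5):
    """Delta features: linear regression over a `width`-frame window."""
    pad = width // 2
    padded = np.pad(feat, ((0, 0), (pad, pad)), mode="edge")
    kernel = np.arange(-pad, pad + 1, dtype=float)
    kernel /= np.sum(kernel ** 2)
    return np.stack([np.convolve(row, kernel[::-1], mode="valid")
                     for row in padded])

def resize(m, shape):
    """Nearest-neighbour resampling to a common (freq, time) grid."""
    fi = np.arange(shape[0]) * m.shape[0] // shape[0]
    ti = np.arange(shape[1]) * m.shape[1] // shape[1]
    return m[np.ix_(fi, ti)]

sr = 16000
x = np.random.randn(int(3.84 * sr))                     # one 3.84 s segment
hop = int(0.010 * sr)                                   # 10 ms hop
mags = [stft_mag(x, n_fft, hop) for n_fft in (32, 128, 1024)]
target = (min(m.shape[0] for m in mags), min(m.shape[1] for m in mags))
stft_set = np.stack([resize(m, target) for m in mags], axis=-1)
delta_set = np.stack([delta(stft_set[..., c]) for c in range(3)], axis=-1)
```

Stacking the three resolutions as channels gives the 3D "STFT set" input; the delta set has the same shape by construction.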
3.1.2 Network Structure
We adopted and modified the EnvNet architecture, which extracts log-mel-like features from the raw waveform with both 1D and 2D convolutions (Figure 1):

h_wav = Conv2D(Conv1D(x)),

where x is the 1D waveform and Conv1D and Conv2D denote the corresponding convolutional operations. h_wav is the representation learned from x, which has three dimensions (feature, temporal, channel). A 2D CNN was used to learn features h_stft and h_delta from the STFT set and the delta spectrogram as:

h_stft = Conv2D(X_stft),   h_delta = Conv2D(X_delta),

where X_stft and X_delta are the 3D stacked STFT and delta spectrogram features.
Small filters were used for the 2D convolutions, each followed by batch normalization and ReLU activation. Larger filters (of sizes 128, 64 and 16) were used for the 1D convolutions, with large strides for fast feature-dimension reduction. Since the convolution and pooling operations do not compromise spatial association, we applied the same number of pooling operations over time to all three streams. This results in feature representations learned from the different streams that are synchronized in the temporal dimension and that can be merged by concatenating along time (Figure 1).
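The synchronization argument can be illustrated with a small numpy sketch. The stream shapes and pooling counts below are hypothetical placeholders, not the paper's actual layer configuration; the point is only that applying the same number of temporal poolings to every stream keeps the frames aligned, so the feature maps can be concatenated per time step.

```python
import numpy as np

def pool_time(feat, k=2):
    """Max-pool along the last (time) axis with stride k."""
    t = feat.shape[-1] // k
    return feat[..., :t * k].reshape(*feat.shape[:-1], t, k).max(axis=-1)

T = 384                                   # frames per 3.84 s segment
wav_feat   = np.random.randn(40, T)       # hypothetical waveform-stream map
stft_feat  = np.random.randn(40, T)       # hypothetical STFT-stream map
delta_feat = np.random.randn(40, T)       # hypothetical delta-stream map

streams = [wav_feat, stft_feat, delta_feat]
for _ in range(3):                        # same number of temporal poolings
    streams = [pool_time(f) for f in streams]

# Frames stay aligned across streams, so the maps can be concatenated.
merged = np.concatenate(streams, axis=0)  # (3 * 40 features, 48 frames)
```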
3.2 Temporal Attention
Different environmental sounds can have very different temporal structure, e.g., bell rings are different from water drops or sea waves. Most previous studies have ignored fine-grained temporal structure and have extracted features at the global signal level [8, 10, 3]. In other domains temporal structure is typically addressed by sequence models such as long short-term memory (LSTM) networks with temporal attention. For ESC, recent studies have proposed temporal modeling by subsampling and averaging outputs from RNN layers over time; however, this is not equivalent to weighting different parts of the signal differentially. A CNN-BLSTM model with temporal attention has also been proposed; however, the attention was calculated within the BLSTM based on features extracted from the CNN, and thus did not influence feature extraction itself.
We integrate an attention function into our multi-stream CNN (Figure 1) that is calculated from the delta spectrogram features and directly affects the CNN layers themselves. This representation provides information about dynamic changes in energy, which we assume is beneficial for extracting temporal structure. Our initial experiments also showed that attention calculated from delta features performs better than attention calculated from all three inputs. Temporal attention weights are calculated in two steps: 1. Repeat convolution and 1D pooling along the feature dimension until the feature dimension equals one (Figure 1, temporal attention block). Because the convolution and pooling operations do not compromise the temporal association of the data, the generated attention vector is temporally aligned with the inputs from all three streams. 2. Pool along time, which aligns the temporal attention with the features learned by the CNN after each pooling operation. The same attention vector is shared by all three input streams since all three branches are synchronized in time. The attention is applied to the learned features via dot-product operations along the time dimension (Figure 1):
h' = h ⊙ a,

where h is the output from the convolutional layer with shape (feature, time, channel) and a is the attention vector with shape (time). The attention is applied by multiplying the attention vector with each of the feature vectors in h along the feature and channel dimensions. h' is the same feature after applying attention and has the same shape as h.
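A minimal numpy sketch of the two steps, under the assumption that the learned conv + pool blocks of the attention branch can be stood in for by simple max-pooling (the real system learns these weights): the delta-stream features are collapsed along the feature axis until a single per-frame vector remains, normalized, and then broadcast-multiplied into a stream's (feature, time, channel) activations.

```python
import numpy as np

def collapse_feature_axis(x, k=2):
    """One pooling step along the feature axis (toy stand-in for the
    conv + 1D-pool blocks of the temporal attention branch)."""
    f = x.shape[0] // k
    return x[:f * k].reshape(f, k, x.shape[1]).max(axis=1)

def apply_temporal_attention(feat, att):
    """Broadcast a shared (time,) attention vector over a
    (feature, time, channel) activation map."""
    return feat * att[None, :, None]

F, T, C = 16, 48, 8
delta_feat = np.abs(np.random.randn(F, T))     # delta-stream input
att = delta_feat
while att.shape[0] > 1:                        # step 1: feature dim -> 1
    att = collapse_feature_axis(att)
att = att[0] / att[0].sum()                    # normalized (time,) weights

feat = np.random.randn(F, T, C)                # any stream's activations
out = apply_temporal_attention(feat, att)      # shared across all streams
```

Because the attention vector has one weight per frame, the same vector can scale all three time-synchronized streams without reshaping.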
3.3 Decision Fusion With Uncertainty
In order to handle audio signals of different lengths with a network structure that requires fixed-length inputs, we propose a late fusion strategy that computes classification outputs for each input window and then fuses the softmax outputs across windows by averaging. Rather than simply averaging the softmax probabilities over time, we further augment the training data with white-noise segments and use a uniform probability distribution over all classes as the target distribution for these segments. Enriching the training data with these maximally uncertain segments biases the system to predict high-entropy softmax outputs when the input does not contain useful information. This is critical in order to prevent the final decision from being overly influenced by noise or silence segments.
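The effect can be sketched with hypothetical per-window softmax outputs (the numbers below are illustrative, not from the paper): a high-entropy output on a noise window contributes almost nothing to the averaged clip-level decision.

```python
import numpy as np

def fuse_windows(window_probs):
    """Average per-window softmax outputs into one clip-level decision."""
    return np.mean(window_probs, axis=0)

n_classes = 10
informative = np.zeros(n_classes)
informative[3] = 1.0                       # confident window: class 3
uniform = np.full(n_classes, 1.0 / n_classes)

# Two informative windows plus one near-uniform (noise/silence) window;
# training on white-noise segments with uniform targets encourages
# exactly this high-entropy behaviour on uninformative windows.
probs = np.stack([0.9 * informative + 0.1 * uniform,
                  0.8 * informative + 0.2 * uniform,
                  uniform])
clip_probs = fuse_windows(probs)
pred = int(np.argmax(clip_probs))          # noise window barely shifts this
```

Had the noise window instead produced a confident wrong answer, it would have pulled the average toward that class; the uniform-target training is what suppresses such outputs.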
3.4 Data Augmentation
To avoid possible overfitting caused by limited training data, we adopt the between-class training approach to data augmentation and modify it as follows. We create mixed training samples

x_mix = r * x_1 + (1 - r) * x_2,

where x_1 and x_2 are fixed-length audio segments selected from two randomly chosen clips in the training data at two randomly chosen starting points in time. The parameter r is a random mixture ratio between 0 and 1 used for mixing the two segments, and x denotes the combined three input vectors (wav, STFT, delta spectrogram). The class labels used for the mixed samples are mixed in the same proportion. We use this procedure instead of the gain-based mixture (calculating the mixture ratios based on the signal amplitude) suggested in previous work for two reasons: 1. the gain-based mixture is substantially slower than our approach, and 2. the gain-based mixture does not apply to 3D features. We rerun data augmentation at each epoch of neural network training.
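A minimal numpy sketch of this mixing scheme, mixing raw waveforms only for simplicity (the full system mixes all three input representations); `mix_examples`, `seg_len`, and the random seed are illustrative choices, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_examples(x1, y1, x2, y2, seg_len, n_classes):
    """Mix two random fixed-length segments and their labels.

    x1, x2: 1D training clips; y1, y2: integer class labels.
    A single random ratio r mixes both the signals and the targets.
    """
    s1 = rng.integers(0, len(x1) - seg_len + 1)     # random start times
    s2 = rng.integers(0, len(x2) - seg_len + 1)
    r = rng.uniform(0.0, 1.0)                       # mixture ratio
    x = r * x1[s1:s1 + seg_len] + (1 - r) * x2[s2:s2 + seg_len]
    y = np.zeros(n_classes)
    y[y1] += r                                      # soft-label targets
    y[y2] += 1 - r                                  # mixed in proportion r
    return x, y

x, y = mix_examples(rng.standard_normal(8000), 2,
                    rng.standard_normal(8000), 7,
                    seg_len=4000, n_classes=10)
```

Re-running this at every epoch yields a fresh set of mixtures each time, which is what makes the augmentation cheap enough to regenerate per epoch.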
4 Experimental Results
4.1 Datasets and Training Procedure
We tested our system on three commonly used datasets:
ESC-10 and ESC-50: ESC-50 is a collection of 2,000 environmental sound recordings. The dataset consists of 5-second-long recordings organized into 50 semantic classes (40 examples per class). The data is split into 5 folds for training and testing. We use 5-fold cross-validation and report the average accuracy. ESC-10 is a subset of ESC-50 that contains 10 classes.
TUT Acoustic Scenes 2016 dataset (DCASE): This dataset consists of recordings from various acoustic scenes, all with distinct recording locations. For each recording location, a 3 to 5 minute-long audio recording was captured. The original recordings were then split into 30-second segments. The dataset comes with an official training and testing split. We report the average accuracy over the four training and testing configurations, in line with previous research.
4.2 Results and Comparison
| Model | ESC 10 | DCASE | ESC 50 |
|---|---|---|---|
| Random Forest | 0.727 | / | 0.443 |
| GoogLeNet | 0.632 | / | 0.678 |
| EnvNet & BC Training | 0.894 | / | |
| CNN mixup | 0.917 | / | 0.839 |
Table 1 compares our results against previous outcomes reported in the literature; note that we used the same feature representation and model structure for all datasets. Results show that our model achieves state-of-the-art or better performance on all three datasets (Table 1) while most previously proposed approaches show highly diverging performance on different datasets (Table 1, gray shaded rows). Also note that the SoundNet system  shows high performance on the DCASE dataset but has been pre-trained on video and audio data, whereas our network is trained purely on audio.
We further analyzed the contribution of each component in our system: the three input streams, the temporal attention, and the decision fusion mechanism. The results (Table 2) show that: 1. The three-stream network works better than any combination of two of the input streams (Table 2, first three rows). 2. Temporal attention improves performance on all three datasets, which demonstrates the effectiveness of our method. 3. Decision fusion leads to roughly a 2.5% accuracy gain across all datasets. 4. Noise augmentation for decision fusion leads to roughly a 1% performance gain on all datasets. 5. Data augmentation is necessary for all three datasets; without augmentation, the network quickly overfits to the relatively limited training data.
| Configuration | ESC 10 | DCASE | ESC 50 |
|---|---|---|---|
| Without delta spectrogram | 0.821 | 0.825 | 0.697 |
| Without raw audio | 0.792 | 0.781 | 0.745 |
| Without decision fusion | 0.915 | 0.857 | 0.812 |
| Without data augmentation | 0.815 | 0.770 | 0.712 |
5 Visualizing and Understanding Attention
Environmental sounds have different temporal structures. Sounds may be continuous (e.g., rain and sea waves), periodic (e.g., clock ticks and crackling fire), or non-periodic (e.g., dog barks and rooster crows). To better understand how temporal attention helps with recognizing different sounds, we visualized the attention weights generated for sounds with different temporal structures (Figure 2).
The visualization shows that the proposed attention is able to locate important temporal events while de-weighting the background noise (Figure 2, top row). The attention curve has a periodic shape for periodic sounds (Figure 2, middle row) and is continuous for continuous sounds (Figure 2, bottom row), regardless of changes in sound volume (Figure 2, sea waves).
6 Conclusion

We have described a multi-stream CNN with temporal attention and decision fusion for ESC. Our system was evaluated on three commonly used benchmark datasets and achieved state-of-the-art or better performance with a single network architecture. In the future we will extend this work to larger datasets such as AudioSet and incorporate mechanisms to handle overlapping sounds.
-  Karol J Piczak, “ESC: dataset for environmental sound classification,” in Proceedings of the 23rd ACM international conference on Multimedia. ACM, 2015, pp. 1015–1018.
-  Yusuf Aytar, Carl Vondrick, and Antonio Torralba, “Soundnet: Learning sound representations from unlabeled video,” in Advances in Neural Information Processing Systems, 2016, pp. 892–900.
-  Yuji Tokozume, Yoshitaka Ushiku, and Tatsuya Harada, “Learning from between-class examples for deep sound recognition,” arXiv preprint arXiv:1711.10282, 2017.
-  Dharmesh M Agrawal, Hardik B Sailor, Meet H Soni, and Hemant A Patil, “Novel TEO-based gammatone features for environmental sound classification,” in Signal Processing Conference (EUSIPCO), 2017 25th European. IEEE, 2017, pp. 1809–1813.
-  Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, and Abe Ittycheriah, “Temporal attention model for neural machine translation,” arXiv preprint arXiv:1608.02927, 2016.
-  Jia-Ching Wang, Jhing-Fa Wang, Kuok Wai He, and Cheng-Shu Hsu, “Environmental sound classification using hybrid SVM/KNN classifier and MPEG-7 audio low-level descriptor,” in Neural Networks, 2006. IJCNN’06. International Joint Conference on. IEEE, 2006, pp. 1731–1735.
-  Selina Chu, Shrikanth Narayanan, and C-C Jay Kuo, “Environmental sound recognition with time–frequency audio features,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no. 6, pp. 1142–1158, 2009.
-  Nicholas D Lane, Petko Georgiev, and Lorena Qendro, “DeepEar: robust smartphone audio sensing in unconstrained acoustic environments using deep learning,” in Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 2015, pp. 283–294.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
-  Venkatesh Boddapati, Andrej Petef, Jim Rasmusson, and Lars Lundberg, “Classifying environmental sounds using image recognition networks,” Procedia Computer Science, vol. 112, pp. 2048–2056, 2017.
-  Muhammad Huzaifah, “Comparison of time-frequency representations for environmental sound classification using convolutional neural networks,” arXiv preprint arXiv:1706.07156, 2017.
-  Weiran Wang, Chieh-Chi Kao, and Chao Wang, “A simple model for detection of rare sound events,” Proc. Interspeech 2018, pp. 1344–1348, 2018.
-  Boqing Zhu, Changjian Wang, Feng Liu, Jin Lei, Zengquan Lu, and Yuxing Peng, “Learning environmental sounds with multi-scale convolutional neural network,” arXiv preprint arXiv:1803.10219, 2018.
-  Zhichao Zhang, Shugong Xu, Shan Cao, and Shunqing Zhang, “Deep convolutional neural network with mixup for environmental sound classification,” arXiv preprint arXiv:1808.08405, 2018.
-  Shaobo Li, Yong Yao, Jie Hu, Guokai Liu, Xuemei Yao, and Jianjun Hu, “An ensemble stacked convolutional neural network model for environmental event sound recognition,” Applied Sciences, vol. 8, no. 7, pp. 1152, 2018.
-  Jinxi Guo, Ning Xu, Li-Jia Li, and Abeer Alwan, “Attention based cldnns for short-duration acoustic scene classification,” Proc. Interspeech 2017, pp. 469–473, 2017.