Following the successes of image classification, convolutional neural network (CNN) architectures have become popular for audio classification. Hershey et al. showed that large image classification CNNs trained on huge amounts of weakly labeled YouTube data learn semantically meaningful representations that form the basis of a powerful classifier. In the recent DCASE acoustic scene classification task, the top submissions are mostly CNN-based. Likewise, many of the top results for the ESC-50 corpus use various forms of CNNs.
While prior work on CNN audio classifiers has focused on accuracy for particular tasks, none that we are aware of has treated computational efficiency as a primary objective. The first contribution of this paper is a scalable architecture that, at the high end, achieves one of the best accuracies on ESC-50 and, at the low end, offers the flexibility to scale down to extremely small model sizes. The advantage of a scalable architecture is that it allows inference on platforms with varied system constraints. The efficiency of our architecture allows for low-power always-on inference on DSPs or dedicated neural network accelerators. The second contribution is the application of mixup data augmentation to audio classification; we show it is a major contributor to the high accuracy through improved generalization.
AclNet is an end-to-end (e2e) CNN architecture that takes a raw time-domain waveform as input, as opposed to the more popular technique of using spectral features such as mel filterbanks or mel-frequency cepstral coefficients (MFCC). One advantage of an e2e architecture is that the front end makes no assumptions about the spectral content: its feature representation is learned in a data-driven manner, so the features are optimized for the task at hand as long as there is sufficient training data. Another advantage of e2e is that it eliminates the implementation of spectral features, which simplifies software or hardware complexity. Although other e2e techniques have been studied, our architecture focuses on efficiency.
Several research results on CNN optimization from the image domain can be borrowed to make audio classification more efficient. Han et al. used pruning, quantization, and Huffman encoding to reduce the complexity of CNNs. Singular value decomposition has been applied to DNNs to reduce model size. AclNet draws its inspiration for efficient computation from MobileNet, whose depthwise separable convolution we employ extensively in this work. With these techniques, human-level accuracy for ESC-50 was achieved with a very small budget of parameters and million multiply-adds per second (MMACS).
2 AclNet architecture
2.1 Low-level features
The low-level features (LLF) can be viewed as a replacement for the spectral features, and the two stages of 1-D strided convolutions are equivalent to an FIR decimation filterbank. With the time-domain waveform as input, the LLF produces a 64-channel output at a fixed feature frame rate after the maxpool layer. In the example given in Table 1, the input waveform produces a 64-channel output tensor.
Although the number of parameters in the LLF is invariant to the stride values (S1, S2) and the number of intermediate channels (C1), the choice of these values greatly influences the compute complexity and accuracy. Our experiments will show the settings that give the most accurate results.
The example in Table 1 is for one sampling rate. The strides S1 and S2 and all kernel sizes scale linearly with the sampling rate to ensure the same kernel time durations and output frame rate.
| Layer | Stride | Out dim | Out chans | Kernel size |
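The decimation arithmetic behind the LLF can be sketched as follows. The stride and pool values here are illustrative assumptions (the exact Table 1 values are not reproduced in this text); only the shape bookkeeping is shown, not the learned filters.

```python
# Sketch of the LLF output-shape arithmetic: two strided 1-D convolutions
# followed by a maxpool act as an FIR decimation filterbank producing a
# 64-channel feature. Stride/pool values below are illustrative assumptions.

def llf_output_shape(num_samples, s1, s2, pool, out_channels=64):
    """Return (channels, frames) after conv1 (stride s1), conv2 (stride s2), maxpool."""
    frames = num_samples // s1   # first strided conv decimates by s1
    frames = frames // s2        # second strided conv decimates by s2
    frames = frames // pool      # maxpool decimates by pool
    return (out_channels, frames)

# Example: 1 s of 16 kHz audio with assumed strides 2 and 2 and pool 40
# yields a 64-channel feature at 100 frames per second.
print(llf_output_shape(16000, s1=2, s2=2, pool=40))  # (64, 100)
```

Because the strides and kernel sizes scale with the sampling rate, the same frame rate is obtained at a higher rate by scaling the decimation factors proportionally.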
2.2 High-level features
Continuing from the LLF output, transposing the first two dimensions results in an image-like tensor. The rest of the high-level features (HLF) can thus follow a structure similar to image classification CNNs. We experimented with a number of architectures and found that a VGG-like architecture provides good classification performance with well-understood building blocks. The architecture shown in Table 2 is a modified VGG. Besides changing the depth and channel width, the final layers of the network are also modified. The Conv12 layer is a convolution that reduces the number of channels to the number of classes, which in the case of ESC-50 is 50. Each channel is then average pooled over the remaining time-frequency patches and output through a softmax. The advantage of these final two layers is that the architecture can accept arbitrary-length inputs, without any need to modify the number of hidden units in fully connected layers. An additional benefit of this style of pooling is that it has been shown to be effective for training on weakly labeled datasets.
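The class-channel projection plus global-average-pooling head described above can be sketched with numpy. The shapes and the random projection weights are illustrative; the real Conv12 is a learned convolution, and the point here is only that the head works for any input length.

```python
import numpy as np

# Minimal sketch of the final layers: project the feature map to 50 class
# channels, global-average-pool each channel, then apply a softmax.
# The projection weights here stand in for the learned Conv12 filters.

def classifier_head(feature_map, proj):
    """feature_map: (C, H, W); proj: (num_classes, C) projection weights."""
    class_maps = np.tensordot(proj, feature_map, axes=([1], [0]))  # (50, H, W)
    logits = class_maps.mean(axis=(1, 2))   # global average pool -> (50,)
    e = np.exp(logits - logits.max())
    return e / e.sum()                      # softmax over classes

rng = np.random.default_rng(0)
feat = rng.standard_normal((256, 4, 30))    # W (time) can be any length
probs = classifier_head(feat, rng.standard_normal((50, 256)) * 0.01)
assert probs.shape == (50,) and abs(probs.sum() - 1.0) < 1e-6
```

Because the pooling averages over whatever spatial extent remains, a longer input simply produces a wider map before pooling; no fully connected layer needs resizing.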
Before the input to the Conv12 layer, we apply a dropout layer; we found a dropout probability of 0.2 to work well on this dataset.
| Layer | Stride | Out dim | Out chans | Kernel size |
2.3 Convolutional layers details
Depthwise separable convolution (DWSC): the convolution is decomposed into a depthwise convolution followed by a pointwise layer, each followed by batch normalization and ReLU activation, as in MobileNet.
The advantage of DWSC is that it uses significantly fewer parameters and operations than standard convolution (SC), but typically at a cost of some degradation in performance. We explore the tradeoffs between these two choices of convolution in our experiments.
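The parameter savings of the DWSC factorization can be made concrete with a short calculation. The layer sizes below are illustrative (not the paper's exact widths), and batch-norm parameters are ignored.

```python
# Parameter-count comparison between a standard convolution (SC) and its
# depthwise separable (DWSC) factorization, as in MobileNet.

def sc_params(c_in, c_out, k):
    return c_in * c_out * k * k        # one dense k x k convolution

def dwsc_params(c_in, c_out, k):
    depthwise = c_in * k * k           # one k x k filter per input channel
    pointwise = c_in * c_out           # 1 x 1 convolution mixing channels
    return depthwise + pointwise

# e.g. a 128 -> 256 channel layer with 3x3 kernels:
print(sc_params(128, 256, 3))    # 294912
print(dwsc_params(128, 256, 3))  # 33920  (~8.7x fewer parameters)
```

The same ratio applies to multiply-adds, since each weight is applied once per output position; this is where most of AclNet's efficiency at a given width comes from.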
2.4 Width multiplier
As in MobileNet, our architecture also has a width multiplier (WM) to control the complexity of the network. The WM linearly scales the number of output channels from Conv3 to Conv11. This parameter is an easy way to manage the capacity of the network, and our experiments will explore its accuracy impact on the ESC-50 corpus.
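A sketch of how a width multiplier scales channel counts follows. The base channel list is illustrative rather than the paper's exact Conv3–Conv11 widths, and the rounding to a multiple of 8 is a common hardware-friendly convention we assume here, not something the text specifies.

```python
# Sketch of the width multiplier (WM): it linearly scales the output
# channel counts of the mid-network layers. Base widths are illustrative.

def apply_width_multiplier(base_channels, wm, divisor=8):
    """Scale each channel count by wm, rounding to a multiple of `divisor`."""
    return [max(divisor, int(round(c * wm / divisor)) * divisor)
            for c in base_channels]

base = [64, 128, 128, 256, 256, 512]
print(apply_width_multiplier(base, 0.5))   # [32, 64, 64, 128, 128, 256]
print(apply_width_multiplier(base, 0.25))  # [16, 32, 32, 64, 64, 128]
```

Since both parameters and multiply-adds grow roughly with the product of input and output channels, halving the WM cuts the cost of those layers by about 4x.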
3 Experimental methods
We used ESC-50 to train and evaluate the models. ESC-50 contains a total of 2000 examples of environmental sounds arranged in 50 classes. We use the default 5 folds provided by the dataset for cross-validation in performance evaluation. All sound files were converted to 16 bits at two different sampling rates for two sets of experiments. We eliminated the silent sections at the beginning and end of each recording.
3.2 Data augmentation
We experimented with different input lengths when training AclNet on ESC-50 and a proprietary dataset. Empirically, we found that inputs between 1 and 2 seconds gave the best results, so for the rest of the experiments we fixed the input length within that range. In the data loader of the training process, we use the following real-time data augmentation to generate each training example.
- Choose a random segment of audio within a training file
- Center the waveform to zero mean, and normalize by the standard deviation
- Resample the waveform by a random factor chosen uniformly from a fixed range
- Crop exactly to the training input length
- Multiply the waveform by a random gain chosen uniformly from a fixed range
During test time, only the data normalization step is used, and the length of the entire wave file is input into the network.
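The augmentation steps above can be sketched as a single function. The resampling-factor and gain ranges are placeholders (the exact ranges are not restated in this text), and resampling is approximated with linear interpolation for brevity.

```python
import numpy as np

# Sketch of the real-time training augmentation pipeline. Ranges are
# illustrative assumptions; a real pipeline would use a proper resampler.

def augment(waveform, target_len, rng,
            resample_range=(0.9, 1.1), gain_range=(0.8, 1.25)):
    # 1) choose a random segment (longer than target_len to survive resampling)
    start = rng.integers(0, len(waveform) - 2 * target_len + 1)
    seg = waveform[start:start + 2 * target_len]
    # 2) center to zero mean, normalize by standard deviation
    seg = (seg - seg.mean()) / (seg.std() + 1e-8)
    # 3) resample by a random factor via linear interpolation
    factor = rng.uniform(*resample_range)
    idx = np.arange(0, len(seg) - 1, factor)
    seg = np.interp(idx, np.arange(len(seg)), seg)
    # 4) crop exactly to the training input length
    seg = seg[:target_len]
    # 5) apply a random gain
    return seg * rng.uniform(*gain_range)

rng = np.random.default_rng(0)
out = augment(np.sin(np.arange(48000) / 40.0), target_len=16000, rng=rng)
assert out.shape == (16000,)
```

At test time, per the text, only step 2 (normalization) is applied and the entire file is fed to the network.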
3.3 Mixup training
Mixup is a recent technique to improve generalization by increasing the support of the training distribution. In this technique, a neighborhood is defined around each example in the training data by constructing virtual training examples, that is, pairs of virtual samples x̃ and virtual targets ỹ. Given two training examples (x_i, y_i) and (x_j, y_j), the new virtual pair is computed as:

x̃ = λ x_i + (1 − λ) x_j
ỹ = λ y_i + (1 − λ) y_j,

where λ ∈ [0, 1] is drawn from a Beta(α, α) distribution. The hyperparameter α controls the amount that is mixed in from the second example: higher values of α make the virtual pairs less similar to the original unmixed training examples. We experimented with α values from 0.1 to 0.5.
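The construction above can be written directly. This follows the standard mixup formulation of Zhang et al.; the waveform length and class indices are illustrative.

```python
import numpy as np

# Mixup: a virtual example is a convex combination of two training
# examples and their one-hot targets, with weight lam ~ Beta(alpha, alpha).

def mixup(x1, y1, x2, y2, alpha, rng):
    lam = rng.beta(alpha, alpha)
    x_virtual = lam * x1 + (1.0 - lam) * x2
    y_virtual = lam * y1 + (1.0 - lam) * y2
    return x_virtual, y_virtual

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(16000), rng.standard_normal(16000)
y1, y2 = np.eye(50)[3], np.eye(50)[17]      # one-hot targets
xv, yv = mixup(x1, y1, x2, y2, alpha=0.2, rng=rng)
assert abs(yv.sum() - 1.0) < 1e-9           # soft target still sums to 1
```

With small α, Beta(α, α) concentrates near 0 and 1, so most virtual examples stay close to one of the originals; larger α yields more aggressive mixing.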
3.4 Learning Settings
For all experiments, we used a stochastic gradient descent optimizer with momentum, weight decay, and a fixed batch size. We trained the model using a learning rate schedule with three phases: 0.2 for the first 500 epochs, 0.04 for the next 1000, and 0.016 for the last 500, for a total of 2000 epochs per fold. Also, for the first 100 epochs we disabled the mixup procedure as a form of warm-up to improve initial convergence.
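The three-phase schedule above reduces to a simple lookup by epoch:

```python
# The three-phase learning-rate schedule described in the text:
# 0.2 for epochs 0-499, 0.04 for epochs 500-1499, 0.016 for 1500-1999.

def learning_rate(epoch):
    if epoch < 500:
        return 0.2
    if epoch < 1500:
        return 0.04
    return 0.016

print(learning_rate(0), learning_rate(500), learning_rate(1999))
# 0.2 0.04 0.016
```

In the same spirit, mixup would be skipped whenever `epoch < 100` during the warm-up period.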
4 Results and analysis
4.1 Data augmentation and mixup
Several experiments were done to assess the effectiveness of augmentation and mixup. Figure 1 shows the validation accuracy over the course of training for various combinations of augmentation as explained in Section 3. All experiments used a WM of 1.0 and SC. We see an obvious improvement with each individual augmentation, and mixup by itself is more effective than the other form of augmentation. The best result was achieved when augmentation was combined with mixup, which gave a large absolute improvement over the baseline without any augmentation. We note that mixup is conceptually similar to between-class learning, which was also shown to work well for ESC-50.
We experimented with the choice of α in mixup and found that moderate values worked well for the larger architectures; thus, for the remainder of the experiments, we default to this combined augmentation with mixup.
| Sampling rate | Conv type | LLF params (k) | LLF MMACS | HLF params (k) | HLF MMACS | Total params (k) | Total MMACS | Width multiplier | Accuracy (%) |
4.2 Low-level feature parameters
In EnvNet, analysis showed that two convolutions of kernel size 8 worked best for this dataset. Our experiments confirmed that two convolutions are optimal, but we also found that slightly reducing the kernel size of the second convolution had no impact on accuracy. Our best setting uses kernel sizes of 9 and 5 for the first two convolutions.
To determine the choice of the other LLF parameters, we performed a grid search over ranges of C1, S1, and S2. We trained AclNet in both SC and DWSC settings with a width multiplier of 1.0, and found that different (C1, S1, S2) values gave the best accuracy for SC and for DWSC. For the remainder of the experiments, we default to these best settings for SC and DWSC. The experiments showed a noticeable accuracy gap between the best and worst parameter settings for each configuration. The best result in both cases was not the highest-complexity setting. We suspect the heavier LLF settings might be overfitting, and that with more training data we could reach a different conclusion.
4.3 Complexity versus accuracy
To understand the tradeoff between complexity and accuracy, we ran three sets of experiments spanning the two input sampling rates and the two convolution types (DWSC and SC). For each set, we performed 5-fold validation with the WM swept over a range of values. Figure 2 shows accuracy versus MMACS for each setting, color-coded by set. For each setting, increasing complexity generally led to better accuracy. The exception is at the highest WM, where it is possible that we hit diminishing returns from higher capacity. In all cases, accuracy drops more steeply at the smallest WM values. Another observation is that for the same HLF settings, the higher sampling rate improves accuracy by a consistent margin.
Table 3 shows a subset of these experiments, with details of LLF, HLF, overall complexity, and accuracy. Our best accuracy was achieved with the higher sampling rate, SC, and the largest WM. At the time of this writing, this is the best single-system accuracy reported for ESC-50 (second overall behind an ensemble system). With the DWSC models, the total parameters and MMACS are significantly lower than SC at the same WM. The DWSC result at the lower sampling rate and reduced WM exceeds human accuracy while requiring only a small fraction of the parameters and MMACS. We note that human accuracy is also exceeded with SC at a WM of 0.125, a model of modest size. As a comparison of complexity, EnvNetV2, which at the time of this writing has the best single-model accuracy, uses substantially more parameters and operations than our best model.
5 Conclusions
We have presented a novel e2e CNN architecture, AclNet, for audio classification. AclNet is a scalable architecture that achieves state-of-the-art accuracy at its high-complexity end and better-than-human accuracy with only a small budget of parameters and MMACS. To achieve low complexity with high accuracy, AclNet uses depthwise separable convolution blocks. The combination of mixup and data augmentation provided a substantial boost in accuracy, making a major contribution to achieving one of the best results reported on the ESC-50 dataset.
-  Shawn Hershey, Sourish Chaudhuri, Daniel PW Ellis, Jort F Gemmeke, Aren Jansen, R Channing Moore, Manoj Plakal, Devin Platt, Rif A Saurous, Bryan Seybold, et al., “CNN architectures for large-scale audio classification,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 131–135.
-  Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen, “A multi-device dataset for urban acoustic scene classification,” Submitted to DCASE2018 Workshop, 2018.
-  Yuma Sakashita and Masaki Aono, “Acoustic scene classification by ensemble of spectrograms based on adaptive temporal divisions,” Tech. Rep., DCASE2018 Challenge, September 2018.
-  Matthias Dorfer, Bernhard Lehner, Hamid Eghbal-zadeh, Christoph Heindl, Fabian Paischer, and Gerhard Widmer, “Acoustic scene classification with fully convolutional neural networks and I-vectors,” Tech. Rep., DCASE2018 Challenge, September 2018.
-  Hossein Zeinali, Lukas Burget, and Honza Cernocky, “Convolutional neural networks and x-vector embedding for dcase2018 acoustic scene classification challenge,” Tech. Rep., DCASE2018 Challenge, September 2018.
-  Karol J Piczak, “Esc: Dataset for environmental sound classification,” in Proceedings of the 23rd ACM international conference on Multimedia. ACM, 2015, pp. 1015–1018.
-  Hardik B Sailor, Dharmesh M Agrawal, and Hemant A Patil, “Unsupervised filterbank learning using convolutional restricted boltzmann machine for environmental sound classification,” Proc. Interspeech 2017, pp. 3107–3111, 2017.
-  Yuji Tokozume, Yoshitaka Ushiku, and Tatsuya Harada, “Learning from between-class examples for deep sound recognition,” in International Conference on Learning Representations, 2018.
-  Anurag Kumar, Maksim Khadkevich, and Christian Fügen, “Knowledge transfer from weakly labeled audio using convolutional neural network for sound events and scenes,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 326–330.
-  Rishabh N Tak, Dharmesh M Agrawal, and Hemant A Patil, “Novel phase encoded mel filterbank energies for environmental sound classification,” in International Conference on Pattern Recognition and Machine Intelligence. Springer, 2017, pp. 317–325.
-  Michael Deisher and Andrzej Polonski, “Implementation of efficient, low power deep neural networks on next-generation intel client platforms,” IEEE SigPort, 2017.
-  Mircea Horea Ionica and David Gregg, “The movidius myriad architecture’s potential for scientific computing,” IEEE Micro, vol. 35, no. 1, pp. 6–14, 2015.
-  Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz, “mixup: Beyond empirical risk minimization,” arXiv preprint arXiv:1710.09412, 2017.
-  Yusuf Aytar, Carl Vondrick, and Antonio Torralba, “Soundnet: Learning sound representations from unlabeled video,” in Advances in Neural Information Processing Systems, 2016, pp. 892–900.
-  Wei Dai, Chia Dai, Shuhui Qu, Juncheng Li, and Samarjit Das, “Very deep convolutional neural networks for raw waveforms,” 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 421–425, 2017.
-  Yuji Tokozume and Tatsuya Harada, “Learning environmental sounds with end-to-end convolutional neural network,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 2721–2725.
-  Song Han, Huizi Mao, and William J Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding,” arXiv preprint arXiv:1510.00149, 2015.
-  Jian Xue, Jinyu Li, Dong Yu, Mike Seltzer, and Yifan Gong, “Singular value decomposition based low-footprint speaker adaptation and personalization for deep neural network,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 6359–6363.
-  Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.
-  Karen Simonyan and Andrew Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
-  Ankit Shah, Anurag Kumar, Alexander G. Hauptmann, and Bhiksha Raj, “A closer look at weak label learning for audio events,” CoRR, vol. abs/1804.09288, 2018.