AclNet: efficient end-to-end audio classification CNN

11/16/2018 ∙ by Jonathan J Huang, et al.

We propose an efficient end-to-end convolutional neural network architecture, AclNet, for audio classification. When trained with our data augmentation and regularization, we achieved state-of-the-art performance on the ESC-50 corpus with 85.65% accuracy. The network can be configured so that memory and compute requirements are drastically reduced, and a tradeoff analysis of accuracy and complexity is presented. The analysis shows high accuracy at significantly reduced computational complexity compared to existing solutions. For example, a configuration with only 155k parameters and 49.3 million multiply-adds per second is 81.75% accurate, exceeding human accuracy. This improved efficiency can enable always-on inference in energy-efficient platforms.




1 Introduction

Following the successes of image classification, convolutional neural network (CNN) architectures have become popular for audio classification. Hershey et al. [1] showed that large image-classification CNNs trained with huge amounts of weakly labeled YouTube data learn semantically meaningful representations, the basis of a powerful classifier. In the recent DCASE acoustic scene classification task [2], the top submissions are mostly CNN-based [3], [4], [5]. Likewise, many of the top results for the ESC-50 corpus [6] use various forms of CNNs [7], [8], [9], [10].

While prior work on CNN audio classifiers has focused on accuracy for particular tasks, none that we are aware of has treated computational efficiency as a primary objective. The first contribution of this paper is a scalable architecture that, at the high end, achieves one of the best accuracies on ESC-50 and, at the low end, offers the flexibility to scale down to extremely small model sizes. The advantage of a scalable architecture is flexibility across inference platforms with varying system constraints. The efficiency brought by our architecture allows for low-power always-on inference on DSPs or dedicated neural network accelerators [11], [12]. The second contribution is the application of mixup data augmentation [13] to audio classification; we show it is a big contributor to the high accuracy due to improved generalization.

AclNet is an end-to-end (e2e) CNN architecture that takes the raw time-domain waveform as input, as opposed to the more popular approach of using spectral features such as mel filterbanks or mel-frequency cepstral coefficients (MFCC). One advantage of an e2e architecture is that the front end makes no assumptions about the spectral content: its feature representation is learned in a data-driven manner, so the features are optimized for the task at hand as long as there is sufficient training data. Another advantage of e2e is that it eliminates the implementation of the spectral features, which simplifies software or hardware complexity. Although other e2e techniques [14], [15], [16], [8] have been studied, our architecture focuses on efficiency.

Several research results on CNN optimization from the image domain can be borrowed to make audio classification more efficient. Han et al. [17] used pruning, quantization, and Huffman encoding to reduce the complexity of CNNs. Singular value decomposition has been applied to DNNs to reduce model size [18]. AclNet draws its inspiration for efficient computation from MobileNet [19], whose depthwise separable convolution we employ extensively in this work. With these techniques, human-level accuracy on ESC-50 was achieved with only 155k parameters and 49.3 million multiply-adds per second (MMACS).

The remainder of this paper is organized as follows. Section 2 details the AclNet architecture. Section 3 describes the data augmentation and model training process. Section 4 presents our findings from the experiments, followed by the conclusion in Section 5.

2 AclNet architecture

The AclNet architecture consists of two building blocks: the low-level features (LLF) shown in Table 1 and the high-level features (HLF) shown in Table 2.

2.1 Low-level features

The LLF can be viewed as a replacement for the spectral features, and its two stages of strided 1-D convolutions are equivalent to an FIR decimation filterbank. With the time-domain waveform as input, the LLF produces an output of 64 channels at a fixed feature frame rate after the maxpool layer; Table 1 gives an example configuration.
Although the number of parameters in the LLF is invariant to the stride values S1 and S2 and the number of intermediate channels C1, the choice of these values greatly influences compute complexity and accuracy. Our experiments will show the settings that give the most accurate results.

Table 1 shows an example for one sampling rate. The strides S1 and S2 and all kernel sizes scale linearly with the sampling rate to ensure the same time duration of the kernels and the same output frame rate.

Layer     Stride  Out dim  Out chans  Kernel size
Conv1     S1      —        C1         9
Conv2     S2      —        64         5
Maxpool1  1       —        64         —
Table 1: AclNet low-level features. The input is a 1-D time-domain waveform.
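To make the decimation-filterbank view concrete, here is a minimal numerical sketch of the LLF pipeline. The channel count C1 = 8, the strides S1 = S2 = 2, and the pool size 40 below are illustrative assumptions; only the kernel sizes 9 and 5 and the 64 output channels come from Table 1.

```python
import numpy as np

def strided_conv1d(x, w, stride):
    """Valid 1-D convolution. x: (in_ch, T), w: (out_ch, in_ch, k)."""
    in_ch, T = x.shape
    out_ch, _, k = w.shape
    n = (T - k) // stride + 1
    y = np.zeros((out_ch, n))
    for i in range(n):
        seg = x[:, i * stride : i * stride + k]            # (in_ch, k)
        y[:, i] = np.tensordot(w, seg, axes=([1, 2], [0, 1]))
    return y

sr = 16000
x = np.random.randn(1, sr)                                  # 1 s of mono waveform
h1 = strided_conv1d(x, np.random.randn(8, 1, 9), stride=2)  # Conv1: kernel 9, C1=8 (assumed)
h2 = strided_conv1d(h1, np.random.randn(64, 8, 5), stride=2)  # Conv2: kernel 5, 64 channels
# Non-overlapping max pool along time completes the decimation
pool = 40
h3 = h2[:, : h2.shape[1] // pool * pool].reshape(64, -1, pool).max(axis=2)
print(h3.shape)  # (64, 99): 1 s of audio becomes ~99 frames of 64 channels
```

With these (assumed) strides and pool size, the overall hop is 160 samples, i.e. roughly a 100 Hz frame rate, which is the sense in which the LLF replaces a spectral front end.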

2.2 High-level features

Continuing from the LLF output, transposing the first two dimensions results in an image-like tensor. The rest of the HLF can thus follow a structure similar to image classification CNNs. We experimented with a number of architectures and found that a VGG-like architecture [20] provides good classification performance with well-understood building blocks. The architecture shown in Table 2 is a modified VGG. Besides changing the depth and channel width, the final layers of the network are also modified. The Conv12 layer is a convolution that reduces the number of channels to the number of classes, which for ESC-50 is 50. Each of these channels is then average-pooled over the spatial patches and output as a softmax. The advantage of these final two layers is that the architecture can accept arbitrary-length inputs, without any need to modify the number of hidden units in fully-connected layers. An additional benefit of this way of pooling is that it has been shown to be effective for training on weakly labeled datasets [21].

Before the input to the Conv12 layer, we have a dropout layer; we found a dropout probability of 0.2 to work well on this dataset.

Layer     Stride  Out dim  Out chans  Kernel size
Conv3     1       —        32         —
Maxpool2  1       —        32         —
Conv4     1       —        64         —
Conv5     1       —        64         —
Maxpool3  1       —        64         —
Conv6     1       —        128        —
Conv7     1       —        128        —
Maxpool4  1       —        128        —
Conv8     1       —        256        —
Conv9     1       —        256        —
Maxpool5  1       —        256        —
Conv10    1       —        512        —
Conv11    1       —        512        —
Maxpool6  1       —        512        —
Conv12    1       —        50         —
Avgpool1  1       —        50         —

Table 2: AclNet high-level features, with the LLF output as input.
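The benefit of the Conv12 + average-pool head can be sketched numerically. This is an illustrative toy with random weights and arbitrary spatial sizes, not the trained network: the point is that the same head produces a 50-way output regardless of the time extent of the feature map, which is how arbitrary-length inputs are handled.

```python
import numpy as np

def classifier_head(feat, w):
    """feat: (C, H, W) feature map; w: (num_classes, C) pointwise-conv weights.
    Conv12 reduces channels to classes, then global average pooling over the
    spatial patches yields one score per class for any input length."""
    logits_map = np.tensordot(w, feat, axes=([1], [0]))  # (num_classes, H, W)
    logits = logits_map.mean(axis=(1, 2))                # average pool over patches
    e = np.exp(logits - logits.max())
    return e / e.sum()                                   # softmax

w = np.random.randn(50, 512)
short = classifier_head(np.random.randn(512, 2, 3), w)   # short clip
long_ = classifier_head(np.random.randn(512, 2, 11), w)  # longer clip, same head
print(short.shape, long_.shape)  # both (50,)
```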

2.3 Convolutional layers details

All convolutional layers shown in Tables 1 and 2, except the first layer of each stage (i.e., Conv1 and Conv3), can be configured in one of two forms:

  • Standard convolution (SC): the standard building block of a convolution layer, batch normalization, and ReLU activation.

  • Depthwise separable convolution (DWSC): the convolution is decomposed into depthwise separable convolutions with pointwise layers each followed by batch normalization and ReLU activation as in MobileNet [19].

The advantage of DWSC is that it uses significantly fewer parameters and operations than SC, though typically at the cost of some degradation in accuracy. We will explore the tradeoff between these two convolution types in our experiments.
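The savings can be checked with a back-of-the-envelope parameter count (weights only, ignoring batch-norm parameters and biases; the channel counts below are illustrative). For a 3×3 kernel, DWSC needs roughly 1/9 of the SC weights once the channel counts are large:

```python
def sc_params(cin, cout, k):
    # Standard k x k convolution: one k x k filter per (in, out) channel pair
    return cin * cout * k * k

def dwsc_params(cin, cout, k):
    # Depthwise k x k (one filter per input channel) + 1x1 pointwise mixing
    return cin * k * k + cin * cout

cin, cout, k = 256, 512, 3
print(sc_params(cin, cout, k))    # 1179648
print(dwsc_params(cin, cout, k))  # 133376, roughly 1/9 of SC for 3x3 kernels
```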

2.4 Width multiplier

As in MobileNet, our architecture also has a width multiplier (WM) to control the complexity of the network. The WM linearly scales the number of output channels from Conv3 to Conv11. This parameter is an easy way to manage the capacity of the network, and our experiments will explore its accuracy impact on the ESC-50 corpus.
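Because the WM scales both the input and output channel counts of the interior layers, the weight count of those layers shrinks roughly quadratically with the WM. A small sketch (weights only; a 3×3 SC layer with illustrative channel counts):

```python
def conv_params(cin, cout, k, wm=1.0):
    # The width multiplier scales both channel counts of an interior layer
    return int(round(cin * wm)) * int(round(cout * wm)) * k * k

full = conv_params(256, 512, 3, wm=1.0)
half = conv_params(256, 512, 3, wm=0.5)
print(full, half, full / half)  # 1179648 294912 4.0 -- halving WM quarters the weights
```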

3 Experimental methods

3.1 Dataset

We used ESC-50 to train and evaluate the models. ESC-50 contains a total of 2000 examples of environmental sounds arranged in 50 classes. We use the default 5 folds provided by the dataset for cross-validation in performance evaluation. All sound files were converted to 16 bits, at 16 kHz and 44.1 kHz sampling rates, for two different sets of experiments. We eliminated the silent sections at the beginning and end of each recording.

3.2 Data augmentation

We experimented with different input lengths when training AclNet on ESC-50 and a proprietary dataset. Empirically, we found that input lengths between 1 and 2 seconds gave the best results, so we fixed a single input length in this range for the rest of the experiments. In the data loader of the training process, we use the following real-time data augmentation to generate each training example.

  1. Choose a random fixed-length segment of audio within a training file

  2. Center the waveform to zero mean, and normalize by the standard deviation

  3. Resample the waveform by a random factor chosen uniformly from a fixed range

  4. Crop to exactly the chosen input length

  5. Multiply the waveform by a random gain chosen uniformly from a fixed range

During test time, only the normalization step is used, and the entire wave file is input to the network.
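The five augmentation steps above can be sketched in pure Python. The 1.5 s segment length, the resampling range, and the gain range below are illustrative assumptions (the exact values are not specified here), and the nearest-neighbour resampler stands in for a proper polyphase resampler.

```python
import random

def augment(wave, sr, out_len_s=1.5, resample_range=(0.8, 1.25), gain_db=(-6, 6)):
    """Sketch of the real-time training augmentation; parameter values are assumed."""
    out_len = int(out_len_s * sr)
    # 1. random segment within the file
    start = random.randint(0, max(0, len(wave) - out_len))
    seg = wave[start : start + out_len]
    # 2. zero-mean, unit-variance normalization
    mean = sum(seg) / len(seg)
    seg = [s - mean for s in seg]
    std = (sum(s * s for s in seg) / len(seg)) ** 0.5 or 1.0
    seg = [s / std for s in seg]
    # 3. resample by a random factor (nearest-neighbour for brevity)
    factor = random.uniform(*resample_range)
    seg = [seg[min(int(i * factor), len(seg) - 1)] for i in range(int(len(seg) / factor))]
    # 4. crop (or zero-pad) to exactly the chosen input length
    seg = (seg + [0.0] * out_len)[:out_len]
    # 5. random gain
    gain = 10 ** (random.uniform(*gain_db) / 20)
    return [s * gain for s in seg]

clip = augment([random.gauss(0, 1) for _ in range(32000)], sr=16000)
print(len(clip))  # 24000 samples = 1.5 s at 16 kHz
```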

3.3 Mixup training

Mixup [13] is a recent technique that improves generalization by increasing the support of the training distribution. It constructs virtual training examples, i.e., pairs of virtual samples x̃ and virtual targets ỹ, as convex combinations of real examples. Given two training examples (x_i, y_i) and (x_j, y_j), the new virtual pair is computed as:

x̃ = λ·x_i + (1 − λ)·x_j
ỹ = λ·y_i + (1 − λ)·y_j

where λ is drawn from a Beta(α, α) distribution. The hyperparameter α controls how much of the second example is mixed in: higher values of α make the virtual pairs less similar to the original unmixed training examples. We experimented with α values from 0.1 to 0.5.
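A minimal sketch of the mixing step in pure Python; `random.betavariate` draws λ from Beta(α, α), and α = 0.2 is simply one value inside the 0.1-0.5 range explored:

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Return a virtual (sample, target) pair as a convex combination of two real pairs."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

# toy waveforms with one-hot targets for a 3-class problem
x, y = mixup([1.0, 0.0], [1, 0, 0], [0.0, 1.0], [0, 0, 1])
print(sum(y))  # mixed target is still a valid distribution summing to 1
```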

3.4 Learning Settings

For all experiments, we used a stochastic gradient descent optimizer with momentum and weight decay. We trained the model using a learning rate schedule with 3 phases: 0.2 for the first 500 epochs, 0.04 for the next 1000, and 0.016 for the last 500, for a total of 2000 epochs per fold. Also, for the first 100 epochs we disabled the mixup procedure as a form of warm-up to improve initial convergence.
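The three-phase schedule and mixup warm-up described above are simple to express as piecewise functions:

```python
def learning_rate(epoch):
    """Three-phase schedule: 0.2 / 0.04 / 0.016 over 2000 epochs."""
    if epoch < 500:
        return 0.2
    if epoch < 1500:
        return 0.04
    return 0.016

def use_mixup(epoch, warmup=100):
    # mixup is disabled for the first 100 epochs as warm-up
    return epoch >= warmup

print(learning_rate(0), learning_rate(600), learning_rate(1999))  # 0.2 0.04 0.016
```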

4 Results and analysis

4.1 Data augmentation and mixup

Several experiments were done to assess the effectiveness of augmentation and mixup. Figure 1 shows the validation accuracy over the course of training for various combinations of augmentation, as explained in Section 3. All experiments used a WM of 1.0 and SC. We see a clear improvement with each individual augmentation, and mixup by itself is more effective than the other form of augmentation. The best result was achieved when augmentation was combined with mixup, a substantial absolute improvement above the baseline without any augmentation. We note that mixup is conceptually similar to between-class learning, which was also shown to work well for ESC-50 [8].

We also experimented with the choice of α in mixup and found a range of values that worked well for the larger architectures; for the remainder of the experiments, we default to this combined augmentation with mixup.

Figure 1: Comparison of the effects of data augmentation and mixup on validation accuracy.

Sampling rate Conv type LLF params (k) LLF MMACS HLF params (k) HLF MMACS Total params (k) Total MMACS Width multiplier Accuracy (%)
16k DWSC 1.44 4.35 13.91 2.93 15.35 7.28 0.125 75.38
16k DWSC 1.44 4.35 153.43 31.07 154.87 35.42 0.5 80.40
16k DWSC 1.44 4.35 567.92 113.7 569.4 118.1 1.0 80.90
44.1k DWSC 1.81 17.98 13.91 2.96 15.72 20.94 0.125 75.50
44.1k DWSC 1.81 17.98 153.43 31.33 155.23 49.31 0.5 81.75
44.1k DWSC 1.81 17.98 567.92 114.6 569.73 132.59 1.0 83.10
44.1k SC 6.99 80.9 77.21 8.88 84.21 131.17 0.125 82.30
44.1k SC 6.99 80.9 1190.0 132.72 1197.0 255.01 0.5 83.95
44.1k SC 6.99 80.9 4730.0 524.67 4737.0 646.97 1.0 85.0
44.1k SC 6.99 80.9 10620 786.56 10627 867.45 1.5 85.65
Table 3: ESC-50 5-fold accuracies with AclNet at select configurations.

4.2 Low-level feature parameters

In EnvNet [16], analysis showed that two convolutions with kernel size 8 worked best for this dataset. Our experiments confirmed that two convolutions are optimal, but we also found that slightly reducing the kernel size of the second convolution had no impact on accuracy. Our best setting uses kernel sizes of 9 and 5 for the first two convolutions.

In order to determine the remaining LLF parameters, we performed a grid search over ranges of C1, S1, and S2. We trained AclNet in both the SC and DWSC settings with a width multiplier of 1.0 and found the best-performing values of (C1, S1, S2) for each. For the remainder of the experiments, we default to these best settings for SC and DWSC. The experiments showed a clear accuracy gap between the best and worst parameters for each setting. In both cases, the best result was not the highest-complexity configuration; we suspect the heavier LLF settings might be overfitting, and that with more training data we could reach a different conclusion.

4.3 Complexity versus accuracy

To understand the tradeoff between complexity and accuracy, we ran three sets of experiments: 1) 16 kHz input with DWSC, 2) 44.1 kHz input with DWSC, and 3) 44.1 kHz input with SC. For each set, we performed 5-fold validation with the WM configured at nine values. Figure 2 shows accuracy versus MMACS for each setting, color-coded by set. For each setting, increasing complexity generally led to better accuracy; the exception is at the highest WM, where we may be hitting diminishing returns of higher capacity. In all cases, accuracy drops more steeply at the smallest WM values. Another observation is that, for the same HLF settings, the 44.1 kHz sampling rate improves accuracy by up to about two percentage points over 16 kHz (cf. Table 3).

Table 3 shows a subset of these experiments, with details of the LLF, HLF, overall complexity, and accuracy. Our best accuracy of 85.65% was achieved with the 44.1 kHz sampling rate, SC, and a WM of 1.5. At the time of this writing, this is the best single-system accuracy reported for ESC-50 (second overall behind an ensemble system [7]). With the DWSC models, the total parameters and MMACS are significantly lower than SC for the same WM. The 44.1 kHz, DWSC, WM 0.5 result of 81.75%, which exceeds the human accuracy of 81.3% [6], was achieved with only 155k parameters and 49.3 MMACS. We note that human accuracy is also exceeded with SC at a WM of 0.125, a model with a modest 84.21k parameters and 131.17 MMACS. As a comparison of complexity, EnvNetV2 [8], previously the best single-model result, uses far more parameters and operations; our best model achieves higher accuracy with a fraction of the parameters and fewer operations.

Figure 2: Accuracy vs million multiply-adds per second.

5 Conclusion

We have presented a novel e2e CNN architecture, AclNet, for audio classification. AclNet is a scalable architecture that achieved state-of-the-art accuracy of 85.65% at its highest-compute configuration, and better-than-human accuracy with only 155k parameters and 49.3 MMACS. To achieve low complexity with high accuracy, AclNet uses depthwise separable convolution blocks. The combination of mixup and data augmentation gave a substantial boost in accuracy, a major contribution to achieving one of the best results reported on the ESC-50 dataset.