Audio Set classification with attention model: A probabilistic perspective

by   Qiuqiang Kong, et al.
University of Surrey

This paper investigates the classification of the Audio Set dataset. Audio Set is a large-scale multiple-instance learning (MIL) dataset of sound clips. In MIL, a bag consists of several instances, and a bag is labelled positive if one or more instances in it are positive. Audio Set is a MIL dataset because an audio clip is labelled positive for a class if at least one frame contains the corresponding class. We tackle this MIL problem with an attention model and explain the attention model from a novel probabilistic perspective. We define a probability space on each bag, in which each instance has a trainable probability measure for each class. The classification of a bag is then the expectation of the classification of the instances in the bag with respect to the learned probability measure. Experimental results show that the proposed attention model, implemented as a fully connected deep neural network, obtains an mAP of 0.327 on the Audio Set dataset, outperforming the Google baseline of 0.314 and a recurrent neural network of 0.325.




1 Introduction

Analysis of environmental sounds has been a popular topic with potential applications in public security surveillance, smart homes, smart cars and health care monitoring. Audio classification has also attracted significant research effort due to the Detection and Classification of Acoustic Scenes and Events (DCASE) challenges [1, 2]. Several audio classification tasks have been defined, including acoustic scene classification [1], sound event detection [1] and audio tagging [3, 4]. However, the datasets used in these challenges are relatively small. Recently, Google released an ontology and a human-labelled large-scale dataset for audio events, namely Audio Set [5]. Audio Set consists of an expanding ontology of 527 sound event classes and a collection of over 2 million human-labelled 10-second sound clips drawn from YouTube videos.

Audio Set is defined for tasks such as audio tagging. The objective of audio tagging is to perform multi-label classification on fixed-length audio chunks (i.e. assigning zero or more labels to each audio chunk) without predicting the precise boundaries of acoustic events. This task was first proposed in the DCASE2016 challenge. Deep neural networks (DNNs) [5] and convolutional recurrent neural networks (CRNNs) [3] have been used for predicting the occurring audio tags. Neural networks with an attention scheme, which provide the ability to localize the related audio events, were first proposed for the audio tagging task in our previous work [6]. Gated convolutional neural networks [7] have also been applied in the "Large-scale weakly supervised sound event detection for smart cars" task of the DCASE2017 challenge, where our system achieved the 1st place in the audio tagging sub-task. However, the audio tagging dataset used in the DCASE2017 challenge is only a small subset of Google Audio Set [5]: it covers 17 audio event classes, compared with the 527 classes in Google Audio Set. In this paper, we propose to use an attention model for audio tagging on Google Audio Set [5], which shows better performance than the Google baseline. This work makes two main contributions. First, we conduct and explore large-scale audio tagging on Google Audio Set [5]. Second, we explain the attention model from a probabilistic perspective. The attention scheme is similar to a feature selection process that picks out the related features while suppressing the unrelated background noise. It is achieved by a weighted sum over frames, where the attention values are automatically learned by the neural network model.

In the remainder of this paper, related work is presented in Section 2. The proposed attention method and its explanation from a probabilistic perspective are given in Section 3. Section 4 presents the experimental setup and results. Conclusions are drawn in Section 5.

2 Related works

Multiple instance learning (MIL) [8, 9] is a variation on supervised learning where each learning example is a bag of instances. In MIL, a positive bag contains at least one positive instance, while a negative bag contains no positive instances. Each audio clip in Audio Set contains several feature vectors, and an audio clip is labelled positive for a class if at least one feature vector belongs to the corresponding class.

A multiple-instance dataset consists of pairs $\{(B_n, d_n)\}_{n=1}^{N}$, where $N$ is the number of training pairs. Each bag $B_n = \{x_{n1}, \ldots, x_{nT}\}$ consists of several instances, where $x_{nt}$ is an instance in a bag and $T$ is the number of instances in each bag. We denote $d_n$ as the label of the $n$-th bag. In Audio Set classification, a bag is the collection of features from an audio clip. Each instance $x_{nt} \in \mathbb{R}^M$ is a feature vector, where $M$ is the dimension of the feature. The label of a bag is $d_n \in \{0, 1\}^K$, where $K$ is the number of audio classes and $0$ and $1$ represent the negative and positive label, respectively. For a specific class $k$, if the label of the $n$-th bag satisfies $d_{nk} = 1$, then $\exists\, x_{nt} \in B_n$ such that $x_{nt}$ is positive; otherwise, if $d_{nk} = 0$, then $\forall\, x_{nt} \in B_n$, $x_{nt}$ is negative. Assuming we have a classifier $f$ on each instance, we want to obtain a classifier $F$ on each bag. There are several ways to obtain a bag-level classifier from an instance-level classifier, described as follows.
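As a concrete illustration of the MIL labelling rule above (a sketch with toy arrays; the array names and numbers are ours, not from the dataset):

```python
import numpy as np

# Toy bag of T = 3 instances with K = 2 classes.
# instance_labels[t, k] = 1 if instance t is positive for class k.
instance_labels = np.array([[0, 1],
                            [0, 0],
                            [0, 1]])

# MIL rule: a bag is labelled positive for class k if and only if
# at least one instance in the bag is positive for class k.
bag_label = instance_labels.any(axis=0).astype(int)
print(bag_label)  # -> [0 1]
```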

2.1 Collective assumption

The collective assumption [10] states that all instances in a bag contribute equally and independently to the bag's label. Under this assumption, the bag-level classifier $F$ is obtained by averaging the instance-level predictions over the bag:

$$F(B) = \frac{1}{|B|} \sum_{x \in B} f(x). \quad (1)$$

The collective assumption is simple. However, it assumes that all instances inherit the label from their corresponding bag, which is not the same as the MIL assumption.
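As a minimal numpy sketch (function name and toy numbers are ours), the aggregation in Equation (1) is simply a mean over the instance axis:

```python
import numpy as np

def collective_pool(f_x):
    """Bag-level prediction under the collective assumption (Eq. (1)):
    average the instance-level predictions over the bag.
    f_x: array of shape (T, K) -- T instances, K classes."""
    return f_x.mean(axis=0)

# Toy bag: 3 instances, 2 classes.
f_x = np.array([[0.9, 0.1],
                [0.2, 0.1],
                [0.1, 0.1]])
bag_pred = collective_pool(f_x)  # approximately [0.4, 0.1]
```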

2.2 Maximum selection

Maximum selection [11] states that the prediction on a bag is the maximum classification value over the instances in the bag:

$$F(B) = \max_{x \in B} f(x). \quad (2)$$
Maximum selection has been used in audio tagging with convolutional neural networks (CNNs) [12] and in audio event detection using weakly labelled data [13]. Maximum selection corresponds to a global max pooling layer [12] in a convolutional neural network. It performs well in audio tagging [12] but can be inefficient in training, because only the instance with the maximum value in a bag is used: the gradient is computed only from the instance with the highest classification value.
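Equation (2) is a per-class maximum over instances; a minimal sketch (names and toy numbers are ours):

```python
import numpy as np

def max_pool(f_x):
    """Bag-level prediction under maximum selection (Eq. (2)): take,
    per class, the largest instance-level prediction in the bag.
    During training, only the arg-max instance receives gradient.
    f_x: array of shape (T, K)."""
    return f_x.max(axis=0)

f_x = np.array([[0.9, 0.1],
                [0.2, 0.3]])
bag_pred = max_pool(f_x)  # per-class maxima: [0.9, 0.3]
```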

2.3 Weighted collective assumption

The weighted collective assumption is a generalization of the collective assumption in which a different weight is allowed for each instance [9]:

$$F(B) = \frac{1}{\sum_{x \in B} w(x)} \sum_{x \in B} w(x) f(x). \quad (3)$$

The weighted collective assumption asserts that each instance contributes independently, but not necessarily equally, to the label of a bag. This is achieved by incorporating a weight function $w(\cdot)$ into the collective assumption. Equation (3) has the same form as our joint detection-classification (JDC) model [14] and our attention model [6] proposed for audio tagging and sound event detection. The difference is that the work in [14, 6] models both $f(\cdot)$ and $w(\cdot)$ with neural networks.
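Equation (3) is a normalized weighted average; a sketch with per-instance, per-class non-negative weights (names and toy numbers are ours):

```python
import numpy as np

def weighted_collective_pool(f_x, w_x):
    """Bag-level prediction under the weighted collective assumption
    (Eq. (3)): a weighted average of instance predictions with
    non-negative weights w(x), normalized by their sum over the bag.
    f_x, w_x: arrays of shape (T, K)."""
    return (w_x * f_x).sum(axis=0) / w_x.sum(axis=0)

f_x = np.array([[0.9, 0.2],
                [0.1, 0.4]])
# Put almost all weight on the first instance for class 0,
# and equal weight on both instances for class 1.
w_x = np.array([[99.0, 1.0],
                [ 1.0, 1.0]])
bag_pred = weighted_collective_pool(f_x, w_x)
# Class 0 is dominated by the attended instance (~0.892);
# class 1 reduces to the plain average (0.3).
```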

3 Attention from a probabilistic perspective

Although Equation (3) has been used in many previous works [9, 14, 6], an explanation of this equation has not been clearly presented. In this paper we explain the attention model in Equation (3) from a probabilistic perspective, which helps to guide the selection of $f(\cdot)$ and $w(\cdot)$ in Equation (3).

3.1 Measure space

Different instances in a bag should contribute differently to the classification of the bag. In MIL, a bag is labelled positive if at least one instance in the bag is positive, so the positive instances should be attended to and the negative instances should be ignored. We first assign a measure to each instance $x \in \Omega$, where $\Omega$ is a set lying in, for example, Euclidean space. To assign the measure on each instance $x$, we introduce the measure space [15] from probability theory.

Definition 1. Let $\Omega$ be a set and $\mathcal{F}$ a Borel field [15] of subsets of $\Omega$. A measure $\mu$ on $\mathcal{F}$ is a numerically valued set function with domain $\mathcal{F}$, satisfying the following axioms:

1. $\mu(E) \geq 0$ for all $E \in \mathcal{F}$;
2. If $\{E_j\}$ is a countable collection of disjoint sets in $\mathcal{F}$, then $\mu(\bigcup_j E_j) = \sum_j \mu(E_j)$.

Then we call the triple $(\Omega, \mathcal{F}, \mu)$ a measure space.

In addition, if we have:

3. $\mu(\Omega) = 1$,

then we call the triple $(\Omega, \mathcal{F}, \mu)$ a probability space.

3.2 Probability space

When classifying a bag, different instances contribute differently. We define a probability space $(B, \mathcal{F}, p_k)$ on each bag $B$ for each class $k$. As $|B| < \infty$, we may define the probability space on $B$ with $\Omega = B$ and $\mathcal{F} = 2^B$, the Borel field of the set $B$. The probability measure $p_k$ on $B$ satisfies $p_k(B) = 1$, so Definition 1, Axiom 3 is satisfied. We call $(B, \mathcal{F}, p_k)$ a probability space for the $k$-th class. For an instance $x$ in a bag, the closer $p_k(\{x\})$ is to 1, the more this instance is attended; the closer it is to 0, the less this instance is attended.

3.3 Expectation

Assume that, for the $k$-th class, the classification prediction and the probability measure on an instance $x$ are $f_k(x)$ and $p_k(\{x\})$, respectively. To obtain the classification result on the bag $B$, we take the expectation of the classification results with respect to the probability measure $p_k$:

$$F(B)_k = \mathbb{E}_{p_k}[f_k(x)] = \sum_{x \in B} f_k(x)\, p_k(\{x\}). \quad (4)$$

Figure 1: Attention model from a probabilistic perspective, where $f_k(x)$ is the classification result on each instance and $p_k(\{x\})$ is the probability measure of each instance in a given bag. The prediction is the expectation of $f_k(x)$ with respect to the probability measure $p_k$.

Here $f_k(x)$ is a random variable, and Equation (4) expresses that the instances contribute differently to the classification of the bag $B$. The probability measure controls how much an instance is attended: a large or small $p_k(\{x\})$ indicates that the instance is attended or ignored, respectively.

3.4 Modeling attention

For a dataset with instances $x \in \mathbb{R}^M$, a mapping $f: \mathbb{R}^M \rightarrow [0, 1]^K$ is used to model the presence probability of the $k$-th class for an instance $x$. On the other hand, modeling the probability measure $p_k$ directly is difficult because of the constraint that the probabilities of the instances in a bag must sum to 1:

$$\sum_{x \in B} p_k(\{x\}) = 1. \quad (5)$$

So instead of modeling $p_k$ directly, we start by modeling a measure $\mu_k$ in the measure space, because $\mu_k$ does not need to satisfy Definition 1, Axiom 3. To model $\mu_k$, we use a mapping $v: \mathbb{R}^M \rightarrow \mathbb{R}_{\geq 0}^K$, where $\mu_k(\{x\}) = v_k(x)$. Then, for each bag $B$ and each class $k$, we may define the probability measure of any instance $x$ of the $k$-th class as:

$$p_k(\{x\}) = \frac{\mu_k(\{x\})}{\mu_k(B)}, \quad (6)$$

where $\mu_k(\{x\})$ and $\mu_k(B)$ are the measures of $\{x\}$ and $B$, respectively. From Definition 1, Axiom 2, $\mu_k(B)$ can be calculated as $\mu_k(B) = \sum_{x \in B} \mu_k(\{x\})$, so the constraint in Equation (5) is satisfied. After modeling $f$ and $\mu_k$, the prediction for the $k$-th class can be obtained using Equation (4). The framework of the attention model is shown in Fig. 1.
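The normalization in Equation (6) can be sketched in a few lines of numpy (array names are ours): non-negative measure values are divided by their sum over the bag, so the result is a valid probability measure on the finite bag:

```python
import numpy as np

def probability_measure(v_x):
    """Normalize non-negative measure values over a bag (Eq. (6)).
    v_x: array of shape (T, K) with v_x[t, k] = mu_k({x_t}) >= 0.
    Returns p with p[t, k] = p_k({x_t}); each column sums to 1,
    so Axiom 3 holds on the bag."""
    mu_B = v_x.sum(axis=0, keepdims=True)  # mu_k(B), by Axiom 2
    return v_x / mu_B

v_x = np.array([[2.0, 1.0],
                [1.0, 1.0],
                [1.0, 2.0]])
p = probability_measure(v_x)  # column sums are exactly 1
```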

3.5 Mini batch balancing

The Audio Set dataset is highly unbalanced: some classes have tens of thousands of training samples, while other classes contain only hundreds of samples. We therefore propose a mini-batch balancing strategy, in which the occurrence frequencies of training samples of the different classes in a mini-batch are kept the same.
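One way to realize such a strategy is to sample each mini-batch with an equal quota per class. The sketch below is ours and uses a single-label simplification (Audio Set is multi-label, which requires a more careful per-class quota scheme):

```python
import numpy as np

def balanced_batches(labels, batch_size, rng):
    """Yield mini-batches of sample indices in which every class
    occurs equally often, regardless of its dataset frequency.
    `labels` is a list of single class ids -- a single-label
    simplification of the multi-label Audio Set setting."""
    classes = sorted(set(labels))
    pools = {c: [i for i, l in enumerate(labels) if l == c] for c in classes}
    per_class = batch_size // len(classes)
    while True:
        batch = []
        for c in classes:
            # Sample with replacement so rare classes can fill their quota.
            batch.extend(rng.choice(pools[c], size=per_class).tolist())
        yield batch

# Imbalanced toy dataset: 100 samples of class 0, only 5 of class 1.
labels = [0] * 100 + [1] * 5
batches = balanced_batches(labels, 10, np.random.default_rng(0))
batch = next(batches)  # 5 indices of class 0 and 5 of class 1
```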

4 Experiments

4.1 Dataset

We experiment on the Audio Set dataset [5]. Audio Set contains over 2 million 10-second audio clips extracted from YouTube videos, covering 527 classes of audio organized in a hierarchical structure. The original waveforms of the 2 million audio clips are not published. Instead, we use the published bottleneck feature vectors, extracted from the embedding-layer representation of a deep CNN trained on the YouTube-100M dataset [16]. The bottleneck feature vectors are extracted at one feature per second, so there are 10 features in a 10-second audio clip. The bottleneck feature vectors are then post-processed by principal component analysis (PCA) to remove correlations, and only the first 128 PCA coefficients are kept.

4.2 Model

The source code of this system is available online. We apply a simple fully connected deep neural network to verify the effectiveness of the proposed attention model. We first apply fully connected layers to the input feature vectors to extract a high-level representation. We call this mapping the embedded mapping, denoted $g: \mathbb{R}^M \rightarrow \mathbb{R}^H$, and call $h = g(x)$ an embedded instance. The embedded mapping $g$ is modeled by three fully connected layers, each with 500 hidden units, followed by ReLU [17] non-linearity and dropout [18] with rate 0.2 to reduce the risk of over-fitting. These configurations are chosen empirically. We then model the classifier $f$ and the measure $v$ on each embedded instance as:

$$f_k(h) = \sigma(W_f h + b_f)_k, \qquad v_k(h) = \phi(W_v h + b_v)_k, \quad (7)$$

where $\sigma$ is the sigmoid non-linearity $\sigma(z) = 1 / (1 + e^{-z})$, which ensures that the probability $f_k(h)$ is between 0 and 1. The non-linearity $\phi$ can be any non-negative function, and we investigate ReLU [17], sigmoid and softmax functions in our experiments.

We may then obtain $p_k(\{x\})$ for the instances in the $n$-th bag by substituting $\mu_k(\{x\}) = v_k(g(x))$ into Equation (6):

$$p_k(\{x\}) = \frac{v_k(g(x))}{\sum_{x' \in B_n} v_k(g(x'))}. \quad (8)$$

Figure 2: Model for Audio Set classification. The input space is mapped to an embedded space, followed by a classification branch and an attention (probability measure) branch. The final prediction is the expectation of the classification output with respect to the learned probability measure.

Finally, the prediction for the $k$-th class on bag $B_n$ is obtained using Equation (4).
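The two branches and the final expectation can be sketched as a numpy forward pass (a sketch with random weights, names ours; the embedded mapping $g$ is assumed already applied, and exp-then-normalize implements the softmax over the instances of a bag):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_forward(h, W_f, b_f, W_v, b_v):
    """Classification and attention branches on embedded instances.
    h: (T, H) embedded instances of one bag; W_f, W_v: (H, K); b_*: (K,).
    The attention branch uses the softmax non-linearity over the
    instances of the bag. Returns the bag prediction of shape (K,),
    i.e. the expectation of f under p as in Eq. (4)."""
    f = sigmoid(h @ W_f + b_f)            # instance predictions, in (0, 1)
    v = np.exp(h @ W_v + b_v)             # non-negative measure values
    p = v / v.sum(axis=0, keepdims=True)  # probability measure, Eq. (8)
    return (f * p).sum(axis=0)            # expectation of f under p

rng = np.random.default_rng(1)
T, H, K = 10, 8, 3                        # 10 embedded frames, 3 classes
h = rng.normal(size=(T, H))
W_f, W_v = rng.normal(size=(H, K)), rng.normal(size=(H, K))
b_f, b_v = np.zeros(K), np.zeros(K)
F_B = attention_forward(h, W_f, b_f, W_v, b_v)  # shape (K,), values in (0, 1)
```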

4.3 Experiment analysis

We evaluate using mean average precision (mAP), area under the ROC curve (AUC) and d-prime, following [5]. These metrics are computed for each of the 527 classes and averaged across the 527 classes to obtain the final mAP, AUC and d-prime. Higher mAP, AUC and d-prime indicate better performance.
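The d-prime statistic is derived from the AUC through the inverse standard normal CDF, as used in [5]; a stdlib-only sketch (function name ours):

```python
import math
from statistics import NormalDist

def d_prime(auc):
    """Class-separation statistic d' derived from the ROC AUC:
    d' = sqrt(2) * Phi^{-1}(AUC), where Phi is the standard normal
    CDF. An AUC of 0.5 (chance level) gives d' = 0."""
    return math.sqrt(2) * NormalDist().inv_cdf(auc)

# E.g. an AUC of 0.965 corresponds to a d' of about 2.56,
# consistent with the attention model rows of Tables 2 and 3.
```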

Table 1 shows the results with and without the data balancing strategy, using the collective assumption in Equation (1). The data balancing strategy is described in Section 3.5. Table 1 shows that using the balancing strategy performs better than not using it in terms of all of mAP, AUC and d-prime.

Table 2 shows the results of modeling the measure function with different non-negative functions, including ReLU, sigmoid and softmax. The softmax non-linearity performs slightly better than sigmoid and better than ReLU.

Table 3 compares different pooling strategies. Average pooling and max pooling along the time axis are described in Equations (1) and (2), respectively. The Google baseline uses a simple fully connected DNN [5]. Table 3 shows that an RNN with global average pooling performs better than the Google baseline, and that the DNN with attention achieves better performance than both the Google baseline and the RNN.

                        mAP     AUC     d-prime
w/o balancing           0.275   0.957   2.429
with balancing          0.296   0.960   2.473
Table 1: Classification results with and without the data balancing strategy.

                        mAP     AUC     d-prime
DNN ReLU attention      0.306   0.961   2.500
DNN sigmoid attention   0.326   0.964   2.547
DNN softmax attention   0.327   0.965   2.558
Table 2: Classification results with the measure modeled by ReLU, sigmoid and softmax functions.

                        mAP     AUC     d-prime
DNN max pooling         0.284   0.958   2.442
DNN avg. pooling        0.296   0.960   2.473
Google baseline         0.314   0.959   2.452
RNN avg. pooling        0.325   0.960   2.480
DNN softmax attention   0.327   0.965   2.558
Table 3: Classification results with different pooling strategies.

5 Conclusion

In this paper, an attention model for audio classification has been explained from a probabilistic perspective, in which both the classifier and the probability measure on each instance are modeled by a neural network. Applying a fully connected neural network with this attention model to Audio Set achieves an mAP of 0.327 and an AUC of 0.965, outperforming the Google baseline and a recurrent neural network. In future work, we will further explore modeling the probability measure with different non-negative functions.

6 Acknowledgement

This research is supported by EPSRC grant EP/N014111/1 “Making Sense of Sounds” and Research Scholarship from the China Scholarship Council (CSC).


  • [1] A. Mesaros, T. Heittola, and T. Virtanen, “TUT database for acoustic scene classification and sound event detection,” in EUSIPCO. IEEE, 2016, pp. 1128–1132.
  • [2] A. Mesaros, T. Heittola, A. Diment, B. Elizalde, A. Shah, E. Vincent, B. Raj, and T. Virtanen, “DCASE 2017 challenge setup: Tasks, datasets and baseline system,” in Proceedings of DCASE2017 Workshop, 2017.
  • [3] Y. Xu, Q. Kong, Q. Huang, W. Wang, and Mark D. Plumbley, “Convolutional gated recurrent neural network incorporating spatial features for audio tagging,” in IJCNN. IEEE, 2017, pp. 3461–3466.
  • [4] Y. Xu, Q. Huang, W. Wang, P. Foster, S. Sigtia, PJB Jackson, and MD Plumbley, “Unsupervised feature learning based on deep models for environmental audio tagging,” IEEE/ACM Trans. on Audio, Speech and Language Processing, vol. 25, no. 6, pp. 1230–1241, 2017.
  • [5] J. F. Gemmeke, D. P. W. Ellis, D. Freedman, A. Jansen, W. Lawrence, R. C. Moore, M Plakal, and M. Ritter, “Audio Set: An ontology and human-labeled dataset for audio events,” in ICASSP, 2017, pp. 776–780.
  • [6] Y. Xu, Q. Kong, Q. Huang, W. Wang, and Mark D. Plumbley, “Attention and localization based on a deep convolutional recurrent model for weakly supervised audio tagging,” in INTERSPEECH. IEEE, 2017, pp. 3083–3087.
  • [7] Y. Xu, Q. Kong, W. Wang, and Mark D. Plumbley, “Large-scale weakly supervised audio classification using gated convolutional neural network,” arXiv preprint arXiv:1710.00343, 2017.
  • [8] O. Maron and T. Lozano-Pérez, “A framework for multiple-instance learning,” in Advances in Neural Information Processing Systems, 1998, pp. 570–576.
  • [9] J. Foulds and E. Frank, “A review of multi-instance learning assumptions,” The Knowledge Engineering Review, vol. 25, no. 1, pp. 1–25, 2010.
  • [10] X. Xu, “Statistical learning in multiple instance problems,” Master’s thesis, University of Waikato, 2003.
  • [11] J. Amores, “Multiple instance classification: Review, taxonomy and comparative study,” Artificial Intelligence, vol. 201, pp. 81–105, 2013.
  • [12] K. Choi, G. Fazekas, and M. Sandler, “Automatic tagging using deep convolutional neural networks,” arXiv preprint arXiv:1606.00298, 2016.
  • [13] A. Kumar and B. Raj, “Audio event detection using weakly labeled data,” in Proceedings of the 2016 ACM on Multimedia Conference. ACM, 2016, pp. 1038–1047.
  • [14] Q. Kong, Y. Xu, W. Wang, and M. D. Plumbley, “A joint detection-classification model for audio tagging of weakly labelled data,” in ICASSP. IEEE, 2017, pp. 641–645.
  • [15] K. L. Chung, A Course in Probability Theory, Academic Press, 2001, pp. 16–21.
  • [16] S. Hershey, S. Chaudhuri, D. P. Ellis, J. F. Gemmeke, A. Jansen, R. C. Moore, and M. Slaney, “CNN architectures for large-scale audio classification,” in ICASSP. IEEE, 2017, pp. 131–135.
  • [17] V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning (ICML), 2010, pp. 807–814.
  • [18] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.