Unsupervised Feature Learning Based on Deep Models for Environmental Audio Tagging

07/13/2016
by Yong Xu, et al.

Environmental audio tagging aims to predict only the presence or absence of certain acoustic events in the acoustic scene of interest. In this paper we make contributions to audio tagging in two areas: acoustic modeling and feature learning. We propose a shrinking deep neural network (DNN) framework incorporating unsupervised feature learning to handle this multi-label classification task. For acoustic modeling, a large set of contextual frames of the chunk is fed into the DNN to perform multi-label classification for the expected tags, considering that only chunk-level (or utterance-level) rather than frame-level labels are available. Dropout and background-noise-aware training are also adopted to improve the generalization capability of the DNNs. For unsupervised feature learning, we propose a symmetric or asymmetric deep de-noising auto-encoder (sDAE or aDAE) to generate new data-driven features from the Mel-Filter Bank (MFB) features. The new features, which are smoothed against background noise and made more compact with contextual information, further improve the performance of the DNN baseline. Compared with the standard Gaussian Mixture Model (GMM) baseline of the DCASE 2016 audio tagging challenge, our proposed method obtains a significant equal error rate (EER) reduction, from 0.21 to 0.13, on the development set. The proposed aDAE system achieves a relative 6.7% EER reduction over the strong DNN baseline on the development set. Finally, the results also show that our approach obtains state-of-the-art performance, with an EER of 0.15 on the evaluation set of the DCASE 2016 audio tagging task, whereas the EER of the first-prize system of the challenge is 0.17.
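The equal error rate (EER) quoted throughout the abstract is the operating point at which the false-acceptance rate equals the false-rejection rate for a given tag's detection scores. A minimal sketch of a per-tag EER computation is shown below; this is an illustrative implementation, not the official DCASE 2016 evaluation code, and in the multi-label setting the per-tag EERs would typically be averaged across tags.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Approximate EER for one tag: the point where the false-acceptance
    rate (FAR, on negatives) equals the false-rejection rate (FRR, on
    positives). Scans candidate thresholds drawn from the score values."""
    thresholds = np.sort(np.unique(scores))
    best_gap, best_eer = np.inf, None
    for t in thresholds:
        pred = scores >= t                      # accept if score >= threshold
        far = np.mean(pred[labels == 0])        # negatives wrongly accepted
        frr = np.mean(~pred[labels == 1])       # positives wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap = abs(far - frr)
            best_eer = (far + frr) / 2          # EER at the crossing point
    return best_eer
```

For example, with three positive and three negative chunks where one negative outscores two positives, the FAR and FRR curves cross at 1/3, so the function returns an EER of about 0.33. A finer threshold grid (or interpolation on the ROC curve) gives a smoother estimate on larger sets.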

Related research

- Fully DNN-based Multi-label regression for audio tagging (06/24/2016): Acoustic event detection for content analysis in most cases relies on lo...
- Convolutional Gated Recurrent Neural Network Incorporating Spatial Features for Audio Tagging (02/24/2017): Environmental audio tagging is a newly proposed task to predict the pres...
- Hierarchical learning for DNN-based acoustic scene classification (07/13/2016): In this paper, we present a deep neural network (DNN)-based acoustic sce...
- Deep Active Audio Feature Learning in Resource-Constrained Environments (08/25/2023): The scarcity of labelled data makes training Deep Neural Network (DNN) m...
- Unsupervised Feature Learning for Audio Analysis (12/11/2017): Identifying acoustic events from a continuously streaming audio source i...
- GCT: Gated Contextual Transformer for Sequential Audio Tagging (10/22/2022): Audio tagging aims to assign predefined tags to audio clips to indicate ...
- Fine-Grained Classroom Activity Detection from Audio with Neural Networks (07/29/2021): Instructors are increasingly incorporating student-centered learning tec...