GCT: Gated Contextual Transformer for Sequential Audio Tagging

10/22/2022
by Yuanbo Hou et al.

Audio tagging aims to assign predefined tags to audio clips to indicate the class information of audio events. Sequential audio tagging (SAT) means detecting both the classes of audio events and the order in which they occur within the audio clip. Most existing methods for SAT are based on connectionist temporal classification (CTC). However, CTC cannot effectively capture connections between events because of its conditional independence assumption between outputs at different time steps. The contextual Transformer (cTransformer) addresses this issue by exploiting contextual information in SAT. Nevertheless, the cTransformer is still limited in how it exploits contextual information, as it uses only forward information during inference. This paper proposes a gated contextual Transformer (GCT) with forward-backward inference (FBI). In addition, a gated contextual multi-layer perceptron (GCMLP) block is proposed in GCT to structurally improve on the cTransformer. Experiments on two real-life audio datasets show that the proposed GCT with GCMLP and FBI outperforms both the CTC-based methods and the cTransformer. To promote research on SAT, the manually annotated sequential labels for the two datasets are released.
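The abstract does not detail the internals of the GCMLP block, so the exact architecture is not reproducible from this page. As a rough illustration of the general idea behind gated blocks of this kind, a minimal sigmoid-gated MLP in NumPy (all names, shapes, and weights here are hypothetical, not the paper's implementation) might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_mlp(x, w_gate, w_value):
    """Gate a linear projection of x with a learned sigmoid gate.

    x: (batch, d_in) context features; w_gate, w_value: (d_in, d_out).
    The gate decides, per output dimension, how much of the projected
    contextual information is passed through -- the core idea of gating.
    """
    gate = sigmoid(x @ w_gate)   # values in (0, 1)
    value = x @ w_value
    return gate * value          # element-wise gating

# toy usage with random weights
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))
out = gated_mlp(x,
                rng.standard_normal((8, 4)),
                rng.standard_normal((8, 4)))
print(out.shape)  # (2, 4)
```

Since the gate is bounded in (0, 1), each output is an attenuated copy of the value projection, which is what lets such a block suppress or admit contextual features selectively.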

