Masked Autoencoders with Multi-Window Attention Are Better Audio Learners

06/01/2023
by Sarthak Yadav et al.

Several recent works have adapted Masked Autoencoders (MAEs) for learning general-purpose audio representations. However, they do not address two key aspects of modelling multi-domain audio data: (i) real-world audio tasks involve a combination of local and global contexts, and (ii) real-world audio signals are complex compositions of several acoustic elements with different time-frequency characteristics. To address these concerns, this work proposes a Multi-Window Masked Autoencoder (MW-MAE) fitted with a novel Multi-Window Multi-Head Attention module that captures information at multiple local and global contexts in every decoder transformer block through attention heads with several distinct local and global window sizes. Empirical results on ten downstream audio tasks show that MW-MAEs consistently outperform standard MAEs in overall performance, learn better general-purpose audio representations, and exhibit considerably better scaling characteristics. Exploratory analyses of the learned representations reveal that MW-MAE encoders learn attention heads with more distinct entropies than those learned by MAEs, while attention heads across the different transformer blocks in MW-MAE decoders learn correlated feature representations, enabling each block to independently capture local and global information and leading to a decoupled feature hierarchy. Code for feature extraction and downstream experiments, along with pre-trained weights, can be found at https://github.com/10997NeurIPS23/10997_mwmae.
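
To make the core idea concrete, below is a minimal sketch of multi-head self-attention in which each head attends within its own local or global window. It assumes PyTorch; the names (MultiWindowAttention, window_sizes) and the specific window sizes are illustrative assumptions, not the authors' actual API, which lives in the linked repository.

```python
# Minimal sketch of multi-window multi-head attention, assuming PyTorch.
# Module and parameter names are hypothetical; see the linked repository
# for the authors' actual implementation.
import torch
import torch.nn as nn


class MultiWindowAttention(nn.Module):
    """Multi-head self-attention where each head has its own window size:
    small windows capture local context, `None` denotes a global head."""

    def __init__(self, dim, window_sizes=(4, 16, 64, None)):
        super().__init__()
        self.num_heads = len(window_sizes)
        assert dim % self.num_heads == 0
        self.head_dim = dim // self.num_heads
        self.window_sizes = window_sizes
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        B, T, D = x.shape
        qkv = self.qkv(x).reshape(B, T, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, T, head_dim)

        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5  # (B, H, T, T)

        # Per-head masking: head h may only attend to positions within its
        # window around the query token; a window of None stays fully global.
        idx = torch.arange(T, device=x.device)
        dist = (idx[None, :] - idx[:, None]).abs()  # (T, T) token distances
        for h, w in enumerate(self.window_sizes):
            if w is not None:
                attn[:, h] = attn[:, h].masked_fill(dist > w // 2, float("-inf"))

        out = attn.softmax(dim=-1) @ v  # (B, H, T, head_dim)
        out = out.transpose(1, 2).reshape(B, T, D)
        return self.proj(out)
```

Under this reading of the abstract, each MW-MAE decoder block would use such a module with several distinct window sizes, so every block mixes local and global context directly rather than building it up gradually across depth.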

Related research

BYOL for Audio: Exploring Pre-trained General-purpose Audio Representations (04/15/2022)
Pre-trained models are essential as feature extractors in modern machine...

Masked Autoencoders that Listen (07/13/2022)
This paper studies a simple extension of image-based Masked Autoencoders...

Decoupling Local and Global Representations of Time Series (02/04/2022)
Real-world time series data are often generated from several sources of...

DALG: Deep Attentive Local and Global Modeling for Image Retrieval (07/01/2022)
Deeply learned representations have achieved superior image retrieval pe...

A Volumetric Transformer for Accurate 3D Tumor Segmentation (11/26/2021)
This paper presents a Transformer architecture for volumetric medical im...

CAT: Causal Audio Transformer for Audio Classification (03/14/2023)
The attention-based Transformers have been increasingly applied to audio...

Differentiable Window for Dynamic Local Attention (06/24/2020)
We propose Differentiable Window, a new neural module and general purpos...
