BYOL for Audio: Exploring Pre-trained General-purpose Audio Representations
Pre-trained models are essential as feature extractors in modern machine learning systems in various domains. In this study, we hypothesize that representations effective for general audio tasks should provide multiple aspects of robust features of the input sound. To recognize sounds regardless of perturbations such as varying pitch or timbre, features should be robust to these perturbations. To serve the diverse needs of tasks such as emotion or music genre recognition, representations should provide multiple aspects of these robust features, such as local and global features and their statistics. To implement this principle, we propose a self-supervised learning method: Bootstrap Your Own Latent (BYOL) for Audio (BYOL-A, pronounced "viola"). BYOL-A pre-trains representations of the input sound that are invariant to audio data augmentations by minimizing the difference between a pair of augmented variants of the input, which makes the learned representations robust to perturbations of sounds. In the BYOL-A encoder, global pooling forms multi-aspect representations by combining statistics of frequency- and channel-wise, local, and global features. As a result, the learned representations should provide multi-aspect robust features of the input and serve the diverse needs of various tasks. We evaluated general audio task performance against previous state-of-the-art methods; BYOL-A showed competitive results in all tasks, with the best average result of 72.4% and the best VoxCeleb1 result of 63.8%. Ablation experiments validated the contributions of the BYOL-A components. Our code is available online.
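The core idea above (learning augmentation-invariant representations by minimizing the distance between two augmented views of one input, with pooling that concatenates feature statistics) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the noise-based `augment`, the trivial `encode`, and all array shapes are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(spec):
    # Hypothetical stand-in for BYOL-A's audio augmentations
    # (e.g. mixup, random resize crop); here just additive noise.
    return spec + 0.1 * rng.standard_normal(spec.shape)

def encode(spec):
    # Stand-in encoder. To mimic multi-aspect pooling, concatenate
    # two statistics (mean and max over the time axis) of the features.
    return np.concatenate([spec.mean(axis=0), spec.max(axis=0)])

def l2_normalize(v):
    return v / (np.linalg.norm(v) + 1e-12)

def byol_loss(z_a, z_b):
    # Normalized MSE between the two views' representations;
    # equals 2 - 2 * cosine similarity, so it lies in [0, 4].
    return float(np.sum((l2_normalize(z_a) - l2_normalize(z_b)) ** 2))

spec = rng.standard_normal((96, 64))      # (time frames, mel bins) log-mel input
v1, v2 = augment(spec), augment(spec)     # a pair of augmented variants
loss = byol_loss(encode(v1), encode(v2))  # minimized during pre-training
print(loss)
```

In the actual method, the two views pass through asymmetric online/target networks (the target being a slowly updated copy), but the invariance objective reduces to this normalized distance between paired representations.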