Self-supervised learning of audio representations using angular contrastive loss

11/10/2022
by Shanshan Wang, et al.

In self-supervised learning (SSL), various pretext tasks are designed to learn feature representations through a contrastive loss. However, previous studies have shown that this loss is less tolerant of semantically similar samples, owing to an inherent defect of instance-discrimination objectives, which may harm the quality of the learned feature embeddings used in downstream tasks. To improve the discriminative ability of feature embeddings in SSL, we propose a new loss function called Angular Contrastive Loss (ACL), a linear combination of an angular-margin loss and a contrastive loss. ACL improves contrastive learning by explicitly adding an angular margin between positive and negative augmented pairs. Experimental results show that using ACL in both supervised and self-supervised settings significantly improves performance. We validated the new loss function on the FSDnoisy18k dataset, where we achieved 73.6% accuracy in sound event classification.
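The abstract describes ACL as a linear combination of a contrastive term and an angular-margin term that pushes positive and negative pairs apart by an explicit angle. The sketch below is a minimal NumPy illustration of that idea, not the paper's implementation: it combines a standard NT-Xent contrastive loss with an ArcFace-style margin applied to the positive pair's angle, weighted by a mixing coefficient. The parameter names (`tau`, `margin`, `alpha`) and the exact margin placement are assumptions for illustration.

```python
import numpy as np

def angular_contrastive_loss(z1, z2, tau=0.1, margin=0.3, alpha=0.5):
    """Illustrative ACL-style objective (assumed formulation, not the paper's code).

    z1, z2: (N, D) L2-normalised embeddings of two augmented views;
    row i of z1 and row i of z2 form a positive pair.
    Returns alpha * angular-margin term + (1 - alpha) * NT-Xent term.
    """
    # Cosine similarity between every embedding in view 1 and view 2;
    # the diagonal holds the positive pairs.
    sim = z1 @ z2.T

    def nt_xent(logits):
        # Cross-entropy of each row against its diagonal (positive) entry.
        logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))

    # Plain contrastive (NT-Xent) term.
    contrastive = nt_xent(sim / tau)

    # Angular-margin term: replace the positive similarity cos(theta)
    # with cos(theta + margin), which shrinks the positive logit and
    # forces a larger angular gap to the negatives (ArcFace-style).
    pos_cos = np.clip(np.diag(sim), -1.0, 1.0)
    sim_m = sim.copy()
    np.fill_diagonal(sim_m, np.cos(np.arccos(pos_cos) + margin))
    angular = nt_xent(sim_m / tau)

    return alpha * angular + (1 - alpha) * contrastive
```

With `margin=0` the two terms coincide and the loss reduces to plain NT-Xent; increasing the margin makes the positive pair harder to classify, so the loss (and the gradient pressure toward angular separation) grows.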
