SPICE: Self-supervised Pitch Estimation

10/25/2019
by Beat Gfeller, et al.

We propose a model to estimate the fundamental frequency in monophonic audio, often referred to as pitch estimation. We acknowledge the fact that obtaining ground truth annotations at the required temporal and frequency resolution is a particularly daunting task. Therefore, we propose to adopt a self-supervised learning technique, which is able to estimate (relative) pitch without any form of supervision. The key observation is that a pitch shift maps to a simple translation when the audio signal is analysed through the lens of the constant-Q transform (CQT). We design a self-supervised task by feeding two shifted slices of the CQT to the same convolutional encoder, and require that the difference in the outputs is proportional to the corresponding difference in pitch. In addition, we introduce a small model head on top of the encoder, which is able to determine the confidence of the pitch estimate, so as to distinguish between voiced and unvoiced audio. Our results show that the proposed method is able to estimate pitch at a level of accuracy comparable to fully supervised models, both on clean and noisy audio samples, yet it does not require access to large labeled datasets.
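To make the relative-pitch objective concrete, the snippet below is a minimal, illustrative PyTorch sketch of the idea described in the abstract, not the authors' code: the encoder architecture, the slice size, the scaling factor sigma and the Huber loss are assumptions chosen to keep the example self-contained, and the confidence head for voiced/unvoiced detection is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Convolutional encoder mapping one CQT slice to a scalar pitch estimate (illustrative sizes)."""
    def __init__(self, n_bins=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Linear(32 * (n_bins // 4), 1)

    def forward(self, x):              # x: (batch, n_bins) CQT magnitude slice
        h = self.conv(x.unsqueeze(1))  # (batch, 32, n_bins // 4)
        return torch.sigmoid(self.head(h.flatten(1))).squeeze(1)  # pitch value in [0, 1]

def relative_pitch_loss(encoder, cqt, max_shift=12, sigma=0.05):
    """Crop the same CQT frame at two random bin offsets and require the
    difference of the encoder outputs to be proportional to the difference
    of the offsets, i.e. to the induced pitch shift (sigma is a hyperparameter)."""
    batch, total_bins = cqt.shape
    n_bins = total_bins - max_shift
    k1 = torch.randint(0, max_shift + 1, (batch,))
    k2 = torch.randint(0, max_shift + 1, (batch,))
    idx = torch.arange(n_bins)
    rows = torch.arange(batch)[:, None]
    slice1 = cqt[rows, k1[:, None] + idx]   # shifted slice 1, shape (batch, n_bins)
    slice2 = cqt[rows, k2[:, None] + idx]   # shifted slice 2, shape (batch, n_bins)
    y1, y2 = encoder(slice1), encoder(slice2)
    target = sigma * (k1 - k2).float()      # expected output difference for this shift
    return F.huber_loss(y1 - y2, target)

# Usage on a dummy batch of CQT frames (random values stand in for real audio features).
encoder = Encoder(n_bins=128)
cqt = torch.rand(8, 140)                    # 128 usable bins plus a 12-bin shift margin
loss = relative_pitch_loss(encoder, cqt)
loss.backward()

Because only the difference between the two outputs is constrained, the encoder learns relative pitch without any labels; mapping its output to absolute frequency requires a separate calibration step.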


research  09/05/2023
PESTO: Pitch Estimation with Self-supervised Transposition-equivariant Objective
In this paper, we address the problem of pitch estimation using Self Sup...

research  05/14/2019
Self-supervised Audio Spatialization with Correspondence Classifier
Spatial audio is an essential medium to audiences for 3D visual and audi...

research  08/07/2019
Self-supervised Attention Model for Weakly Labeled Audio Event Classification
We describe a novel weakly labeled Audio Event Classification approach b...

research  10/22/2020
A Framework for Contrastive and Generative Learning of Audio Representations
In this paper, we present a framework for contrastive learning for audio...

research  10/26/2022
AVES: Animal Vocalization Encoder based on Self-Supervision
The lack of annotated training data in bioacoustics hinders the use of l...

research  05/24/2019
Self-supervised audio representation learning for mobile devices
We explore self-supervised models that can be potentially deployed on mo...

research  09/03/2022
Equivariant Self-Supervision for Musical Tempo Estimation
Self-supervised methods have emerged as a promising avenue for represent...
