Time-Frequency Scattering Accurately Models Auditory Similarities Between Instrumental Playing Techniques

07/21/2020
by Vincent Lostanlen, et al.

Instrumental playing techniques such as vibratos, glissandos, and trills often denote musical expressivity, both in classical and folk contexts. However, most existing approaches to music similarity retrieval fail to describe timbre beyond the so-called “ordinary” technique, use instrument identity as a proxy for timbre quality, and do not allow for customization to the perceptual idiosyncrasies of a new subject. In this article, we ask 31 human subjects to organize 78 isolated notes into a set of timbre clusters. Analyzing their responses suggests that timbre perception operates within a more flexible taxonomy than those provided by instruments or playing techniques alone. In addition, we propose a machine listening model to recover the cluster graph of auditory similarities across instruments, mutes, and techniques. Our model relies on the joint time–frequency scattering transform to extract spectrotemporal modulations as acoustic features. Furthermore, it minimizes a triplet loss in the cluster graph by means of the large-margin nearest neighbor (LMNN) metric learning algorithm. Over a dataset of 9346 isolated notes, we report a state-of-the-art average precision at rank five (AP@5) of 99.0% ± 1. An ablation study demonstrates that removing either the joint time–frequency scattering transform or the metric learning algorithm noticeably degrades performance.
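The two-stage pipeline the abstract describes (scattering features, then LMNN metric learning and nearest-neighbor retrieval) can be sketched in a few lines of Python. The snippet below is an illustration rather than the authors' implementation: it assumes Kymatio's Scattering1D as a stand-in for the joint time–frequency scattering transform (whose exact API in Kymatio may differ), the metric-learn package for LMNN, and scikit-learn for retrieval; the data are random placeholders.

```python
# A minimal sketch of the pipeline described above, NOT the authors' released code.
# Assumptions: Kymatio's Scattering1D stands in for joint time-frequency scattering,
# metric-learn provides LMNN, and scikit-learn handles nearest-neighbor retrieval.
import numpy as np
from kymatio.numpy import Scattering1D
from metric_learn import LMNN
from sklearn.neighbors import NearestNeighbors

def scattering_features(audio, J=8, Q=12):
    """Time-averaged, log-compressed scattering coefficients for one isolated note."""
    S = Scattering1D(J=J, shape=audio.shape[-1], Q=Q)
    Sx = S(audio)                       # shape: (n_paths, n_frames)
    return np.log1p(Sx).mean(axis=-1)   # one feature vector per note

# Toy stand-ins for the real dataset: X would hold scattering features of the
# 9346 notes, y the timbre-cluster labels derived from the human study.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 8, size=200)

lmnn = LMNN()            # learns a linear map that pulls same-cluster notes together
lmnn.fit(X, y)           # and pushes differently labeled impostors beyond a margin
Z = lmnn.transform(X)    # embed the features in the learned metric

# Precision at rank 5, averaged over queries (one common reading of AP@5):
# the fraction of each query's 5 nearest neighbors sharing its cluster label.
knn = NearestNeighbors(n_neighbors=6).fit(Z)   # 6 = the query itself + 5 neighbors
_, idx = knn.kneighbors(Z)
ap5 = np.mean([(y[row[1:]] == y[row[0]]).mean() for row in idx])
print(f"AP@5 on toy data: {ap5:.3f}")
```

On real data, the feature matrix X would come from applying scattering_features to each isolated note before the metric learning step.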


