Mesostructures: Beyond Spectrogram Loss in Differentiable Time-Frequency Analysis

by Cyrus Vahidi et al.
Queen Mary University of London

Computer musicians refer to mesostructures as the intermediate levels of articulation between the microstructure of waveshapes and the macrostructure of musical forms. Examples of mesostructures include melody, arpeggios, syncopation, polyphonic grouping, and textural contrast. Despite their central role in musical expression, they have received limited attention in deep learning. Currently, autoencoders and neural audio synthesizers are trained and evaluated only at the scale of microstructure: i.e., local amplitude variations on timescales of up to roughly 100 milliseconds. In this paper, we formulate and address the problem of mesostructural audio modeling via a composition of a differentiable arpeggiator and time-frequency scattering. We empirically demonstrate that time-frequency scattering serves as a differentiable model of similarity between synthesis parameters that govern mesostructure. By exposing the sensitivity of short-time spectral distances to time alignment, we motivate the need for a time-invariant and multiscale differentiable time-frequency model of similarity at the level of both local spectra and spectrotemporal modulations.
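The time-alignment sensitivity mentioned above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (which builds on differentiable time-frequency scattering, e.g. as in Kymatio); it only shows that a frame-wise spectrogram distance heavily penalizes a 50 ms shift of an otherwise identical event, whereas a time-averaged (shift-invariant) spectral summary does not. The window and hop sizes are illustrative choices.

```python
import numpy as np

def stft_mag(x, win=256, hop=64):
    # Minimal magnitude spectrogram: sliding Hann window + rFFT per frame.
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t) * np.exp(-40 * t)  # a single decaying tone
y = np.roll(x, 400)                                # same event, shifted by 50 ms

S_x, S_y = stft_mag(x), stft_mag(y)

# Frame-wise spectrogram distance: large, because frames are misaligned in time.
spec_dist = np.linalg.norm(S_x - S_y)

# Averaging each spectrogram over time discards the alignment, loosely analogous
# to the time invariance that scattering provides at the first order.
inv_dist = np.linalg.norm(S_x.mean(axis=0) - S_y.mean(axis=0))

print(f"frame-wise distance: {spec_dist:.2f}, time-invariant distance: {inv_dist:.2f}")
```

The gap between the two distances is the core of the argument: a loss built on short-time spectra treats a mere time shift as a large error, which is why the paper argues for invariant, multiscale representations when comparing mesostructural parameters.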
