Unsupervised Learning of Audio Perception for Robotics Applications: Learning to Project Data to T-SNE/UMAP space

02/10/2020
by Prateek Verma, et al.

Audio perception is key to solving a variety of problems, ranging from acoustic scene analysis and music meta-data extraction to recommendation, synthesis, and analysis. It can also augment computers in performing tasks that humans do effortlessly in day-to-day life. This paper builds on these ideas to learn a perception of touch sounds without access to any ground-truth data. We show how classical signal processing can be leveraged to gather large amounts of data for any sound of interest with high precision. These sounds are then used, together with co-occurring images, to map audio onto a clustered space of the latent representations of those images. This approach not only lets us learn a semantic representation of the possible sounds of interest, but also associates the different modalities with the learned distinctions. The model trained to map sounds to this clustered representation achieves reasonable performance compared with expensive methods that rely on large amounts of human-annotated data. Such approaches can be used to build a state-of-the-art perceptual model for any sound of interest that can be described with a few signal processing features. Daisy-chaining high-precision sound event detectors built from signal processing with neural architectures and high-dimensional clustering of unlabelled data is a powerful idea that can be explored in a variety of ways in future work.
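The sketch below illustrates the kind of pipeline the abstract describes, under assumptions not taken from the paper: a simple energy-thresholded onset detector stands in for the signal-processing "touch sound" detector, image latents are assumed to be precomputed by some pretrained encoder, UMAP plus k-means produces the clustered image space, and a small classifier maps log-mel audio features to those cluster pseudo-labels. All function names, thresholds, and model choices here are illustrative, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code):
# 1) harvest candidate touch/impact sounds with a high-precision signal-processing detector,
# 2) project paired image embeddings to a low-dimensional UMAP space and cluster them,
# 3) train an audio model to predict the cluster assignment, i.e. map sounds into
#    the clustered latent space of the images -- no human labels required.
import numpy as np
import librosa
import umap                              # pip install umap-learn
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier


def detect_impact_segments(y, sr, rms_percentile=95.0):
    """High-precision (low-recall) impact detector: keep only onsets whose
    local RMS energy exceeds a strict percentile threshold."""
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    rms = librosa.feature.rms(y=y)[0]
    thresh = np.percentile(rms, rms_percentile)
    hop = 512  # librosa's default hop length for the RMS frames
    segments = []
    for s in onsets:
        frame = min(s // hop, len(rms) - 1)
        if rms[frame] > thresh:
            segments.append(y[s: s + sr // 2])  # half-second clip after the onset
    return segments


def log_mel_embedding(clip, sr, n_mels=64):
    """Fixed-length audio feature: time-averaged log-mel spectrum."""
    mel = librosa.feature.melspectrogram(y=clip, sr=sr, n_mels=n_mels)
    return np.log(mel + 1e-6).mean(axis=1)


def build_audio_to_cluster_model(image_embeddings, audio_clips, sr, n_clusters=10):
    """image_embeddings: (N, D) latents from any pretrained image encoder (assumed given).
    audio_clips: list of N raw waveforms, each co-occurring with one image."""
    # Project image latents to a 2-D UMAP space and cluster them; the cluster
    # indices act as pseudo-labels in place of human annotation.
    projected = umap.UMAP(n_components=2).fit_transform(image_embeddings)
    pseudo_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(projected)

    # Train a small classifier that maps audio features to the image-derived clusters.
    X = np.stack([log_mel_embedding(c, sr) for c in audio_clips])
    model = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
    model.fit(X, pseudo_labels)
    return model, pseudo_labels
```

At inference time, such a model would place a new sound directly into the clustered image space, which is what allows sounds and images to share the same learned distinctions.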
