Multisensory Learning Framework for Robot Drumming

07/23/2019
by A. Barsky et al.

Interest in sensorimotor learning is currently reaching fever pitch, thanks to the latest advances in deep learning. In this paper, we present an open-source framework for collecting large-scale, time-synchronised synthetic data from highly disparate sensory modalities, such as audio, video, and proprioception, for learning robot manipulation tasks. We demonstrate the learning of non-linear sensorimotor mappings for a humanoid drumming robot that generates novel motion sequences from desired audio data using cross-modal correspondences. We evaluate our system on the quality of its cross-modal retrieval, i.e., its ability to generate suitable motion sequences that match desired, unseen audio or video sequences.
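To make the cross-modal retrieval step concrete, the following is a minimal sketch of how a motion sequence could be retrieved for an unseen audio query, assuming audio and motion embeddings have already been projected into a shared space. The paper's actual network architecture and training procedure are not reproduced here; all function names, dimensions, and the cosine-similarity choice below are illustrative assumptions.

import numpy as np

def l2_normalize(x, axis=-1, eps=1e-9):
    # Scale vectors to unit length so cosine similarity reduces to a dot product.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def retrieve_motion(audio_query, motion_bank, k=1):
    # audio_query: (d,) embedding of the desired (possibly unseen) audio clip.
    # motion_bank: (n, d) embeddings of stored drumming motion sequences.
    # Returns indices of the k motion embeddings closest to the audio query.
    q = l2_normalize(audio_query)
    bank = l2_normalize(motion_bank)
    sims = bank @ q                 # cosine similarity to every stored motion
    return np.argsort(-sims)[:k]    # best-matching motion indices

# Toy usage: 100 stored motion sequences embedded in a 64-d shared space.
rng = np.random.default_rng(0)
motions = rng.normal(size=(100, 64))
query = rng.normal(size=64)
print(retrieve_motion(query, motions, k=3))

In this sketch, retrieval is a nearest-neighbour lookup in the shared embedding space; the quality of the result therefore depends entirely on how well the learned cross-modal correspondences align the two modalities.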

Related research

11/21/2022
TimbreCLIP: Connecting Timbre to Text and Images
We present work in progress on TimbreCLIP, an audio-text cross modal emb...

03/29/2022
On Metric Learning for Audio-Text Cross-Modal Retrieval
Audio-text retrieval aims at retrieving a target audio clip or caption f...

12/18/2017
Objects that Sound
In this paper our objectives are, first, networks that can embed audio a...

09/21/2018
Perfect match: Improved cross-modal embeddings for audio-visual synchronisation
This paper proposes a new strategy for learning powerful cross-modal emb...

01/21/2021
Learning rich touch representations through cross-modal self-supervision
The sense of touch is fundamental in several manipulation tasks, but rar...

02/11/2021
A Fractal Approach to Characterize Emotions in Audio and Visual Domain: A Study on Cross-Modal Interaction
It is already known that both auditory and visual stimulus is able to co...

01/06/2021
Multi-Stage Residual Hiding for Image-into-Audio Steganography
The widespread application of audio communication technologies has speed...