
A Bimodal Learning Approach to Assist Multi-sensory Effects Synchronization

04/28/2018
by Raphael Abreu, et al. (Cefet/RJ)

In mulsemedia applications, traditional media content (text, image, audio, video, etc.) can be related to media objects that target other human senses (e.g., smell, haptics, taste). Such applications aim at bridging the virtual and real worlds through sensors and actuators. Actuators are responsible for the execution of sensory effects (e.g., wind, heat, light), which produce sensory stimulations on the users. In these applications, sensory stimulation must happen in a timely manner with respect to the other traditional media content being presented. For example, at the moment an explosion is shown in the audiovisual content, it may be adequate to activate actuators that produce heat and light. It is common to use a declarative multimedia authoring language to relate the timestamp at which each media object is to be presented to the execution of some sensory effect. One problem in this setting is that the synchronization of media objects and sensory effects is done manually by the author(s) of the application, a process that is time-consuming and error-prone. In this paper, we present a bimodal neural network architecture to assist the synchronization task in mulsemedia applications. Our approach is based on the idea that audio and video signals can be used simultaneously to identify the timestamps at which some sensory effect should be executed. Our learning architecture combines audio and video signals to predict scene components. For evaluation purposes, we construct a dataset based on Google's AudioSet. We provide experiments to validate our bimodal architecture. Our results show that the bimodal approach produces better results when compared to several variants of unimodal architectures.
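The abstract describes the bimodal architecture only at a high level: two modality-specific branches whose outputs are combined to predict scene components (e.g., an explosion) that should trigger sensory effects. The following is a minimal PyTorch sketch of that late-fusion idea. The module names, feature dimensionalities, and concatenation-based fusion are illustrative assumptions, not the paper's actual design.

import torch
import torch.nn as nn

class BimodalSyncNet(nn.Module):
    """Hypothetical two-branch network: separate audio and video encoders
    whose features are concatenated (late fusion) before classification."""

    def __init__(self, audio_dim=128, video_dim=512, hidden_dim=256, num_classes=10):
        super().__init__()
        # Audio branch: e.g., per-clip log-mel spectrogram embeddings
        self.audio_encoder = nn.Sequential(
            nn.Linear(audio_dim, hidden_dim),
            nn.ReLU(),
        )
        # Video branch: e.g., pooled per-clip frame-level CNN features
        self.video_encoder = nn.Sequential(
            nn.Linear(video_dim, hidden_dim),
            nn.ReLU(),
        )
        # Fusion head: predicts which scene component (e.g., explosion)
        # is present in the clip, which in turn triggers a sensory effect
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, audio_feats, video_feats):
        a = self.audio_encoder(audio_feats)
        v = self.video_encoder(video_feats)
        fused = torch.cat([a, v], dim=-1)  # late fusion by concatenation
        return self.classifier(fused)

# Example: one batch of 4 clips with precomputed per-clip features
model = BimodalSyncNet()
logits = model(torch.randn(4, 128), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 10])

Concatenation is only one possible fusion strategy; whatever the paper's exact combination scheme, the sketch shows why a joint audio-video input can outperform a unimodal branch when one modality's cue is weak or ambiguous.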


Related research:

03/28/2023 - LIPSFUS: A neuromorphic dataset for audio-visual sensory fusion of lip reading
This paper presents a sensory fusion neuromorphic dataset collected with...

12/14/2018 - On Attention Modules for Audio-Visual Synchronization
With the development of media and networking technologies, multimedia ap...

12/31/2020 - Leveraging Audio Gestalt to Predict Media Memorability
Memorability determines what evanesces into emptiness, and what worms it...

09/15/2021 - A Framework for Multisensory Foresight for Embodied Agents
Predicting future sensory states is crucial for learning agents such as ...

07/03/2018 - Deep Neural Object Analysis by Interactive Auditory Exploration with a Humanoid Robot
We present a novel approach for interactive auditory object analysis wit...

05/07/2022 - Timestamp-independent Haptic-Visual Synchronization
The booming haptic data significantly improves the users' immersion durin...

07/14/2019 - Autoencoding sensory substitution
Tens of millions of people live blind, and their number is ever increasi...