Synchronizing Audio-Visual Film Stimuli in Unity (version 5.5.1f1): Game Engines as a Tool for Research

07/05/2019
by Javier Sanz, et al.

Unity is software designed specifically for the development of video games. However, thanks to its scripting capabilities and the versatility of its architecture, it can also serve as a flexible tool for stimulus presentation in research experiments. Nevertheless, it has limitations and conditions that must be taken into account to ensure optimal performance in particular experimental situations, such as an experimental design in which biometric signals are acquired in synchrony with the real-time playback of video and audio. In the present paper, we analyse how Unity (version 5.5.1f1) behaves in one such experimental design requiring the playback of audio-visual material. From the analysis of an experimental procedure in which the video was played back following the standard software specifications, we detected the following problems: desynchronization between the video and the audio; desynchronization between the time counter and the video; a delay in the execution of screenshots; and, depending on the video encoding, choppy playback in which the total playback time is preserved but Unity freezes frames and compensates with small temporal jumps in the video. Having identified these problems, we designed a compensation and verification process that makes it possible to work accurately with audio-visual material in Unity (version 5.5.1f1). We present a protocol of checks and compensations that resolves these problems and ensures the execution of experiments that are robust in terms of reliability.
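The compensation and verification protocol itself is described in the full text; purely as an illustration of the kind of check it involves, the C# sketch below (not taken from the paper; the class name, the AudioSource field and the tolerance value are hypothetical) accumulates a frame-based time counter and logs its drift against the audio playback position reported by AudioSource.time, which is one way to quantify the counter/audio desynchronization described above.

using UnityEngine;

// Hypothetical illustration, not the authors' script: compares the audio
// playback position reported by an AudioSource (assumed to carry the film
// soundtrack) with a counter accumulated from Time.deltaTime, to log the
// kind of drift between the time counter and the audio described above.
public class SyncDriftLogger : MonoBehaviour
{
    public AudioSource filmAudio;          // AudioSource playing the film soundtrack (assumption)
    public float toleranceSeconds = 0.04f; // roughly one frame at 25 fps; value is illustrative

    private float frameCounterTime;        // time accumulated frame by frame
    private bool running;

    public void StartMeasurement()
    {
        frameCounterTime = 0f;
        running = true;
        filmAudio.Play();
    }

    void Update()
    {
        if (!running || !filmAudio.isPlaying) return;

        // Accumulate the frame-based time counter the way an experiment script might.
        frameCounterTime += Time.deltaTime;

        // AudioSource.time gives the clip's playback position in seconds.
        float drift = filmAudio.time - frameCounterTime;

        if (Mathf.Abs(drift) > toleranceSeconds)
        {
            Debug.LogWarning(string.Format(
                "Drift of {0:F3} s at counter time {1:F3} s", drift, frameCounterTime));
        }
    }
}

In a real experiment this log would be compared against the intended stimulus timeline (and ideally against an external hardware reference) before deciding how much compensation to apply.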

