Object-Based Audio Rendering

08/23/2017
by Philip Jackson, et al.

Apparatus and methods are disclosed for performing object-based audio rendering on a plurality of audio objects which define a sound scene, each audio object comprising at least one audio signal and associated metadata. The apparatus comprises: a plurality of renderers each capable of rendering one or more of the audio objects to output rendered audio data; and object adapting means for adapting one or more of the plurality of audio objects for a current reproduction scenario, the object adapting means being configured to send the adapted one or more audio objects to one or more of the plurality of renderers.
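
The abstract describes the architecture only in claim language. As a purely illustrative sketch (not the patented implementation), the Python below shows one way the named pieces could fit together: an audio object carrying a signal plus metadata, renderers that turn a set of objects into output channel feeds, and an object adapter that tailors the objects to the current reproduction scenario before sending them to the renderers. All class names, the pan/priority metadata fields, and the priority-based adaptation rule are assumptions introduced for this example.

from dataclasses import dataclass, field
from typing import Dict, List

import numpy as np


@dataclass
class AudioObject:
    """One object in the sound scene: at least one audio signal plus metadata."""
    signal: np.ndarray                            # mono sample buffer
    metadata: Dict = field(default_factory=dict)  # e.g. {"pan": 0.0, "priority": 1}


class Renderer:
    """Base renderer: converts audio objects into output channel feeds."""
    def render(self, objects: List[AudioObject]) -> np.ndarray:
        raise NotImplementedError


class StereoPanner(Renderer):
    """Toy renderer: constant-power pan of each object into two channels."""
    def render(self, objects: List[AudioObject]) -> np.ndarray:
        length = max(len(o.signal) for o in objects)
        out = np.zeros((2, length))
        for o in objects:
            pan = float(o.metadata.get("pan", 0.0))   # -1 = left, +1 = right
            theta = (pan + 1.0) * np.pi / 4.0
            out[0, :len(o.signal)] += np.cos(theta) * o.signal
            out[1, :len(o.signal)] += np.sin(theta) * o.signal
        return out


class ObjectAdapter:
    """Adapts objects to the current reproduction scenario, then dispatches them."""
    def __init__(self, scenario: Dict):
        self.scenario = scenario                  # e.g. {"max_objects": 8}

    def adapt(self, objects: List[AudioObject]) -> List[AudioObject]:
        # Hypothetical adaptation rule: keep only the highest-priority objects
        # when the scenario cannot reproduce the full scene.
        limit = self.scenario.get("max_objects", len(objects))
        ranked = sorted(objects, key=lambda o: o.metadata.get("priority", 0),
                        reverse=True)
        return ranked[:limit]

    def send(self, objects: List[AudioObject],
             renderers: List[Renderer]) -> List[np.ndarray]:
        adapted = self.adapt(objects)
        return [r.render(adapted) for r in renderers]


# Example: two objects in the scene, adapted and rendered to stereo.
scene = [AudioObject(np.random.randn(48000), {"pan": -0.5, "priority": 2}),
         AudioObject(np.random.randn(48000), {"pan": 0.8, "priority": 1})]
feeds = ObjectAdapter({"max_objects": 8}).send(scene, [StereoPanner()])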

