Jointly Optimizing Sensing Pipelines for Multimodal Mixed Reality Interaction

10/13/2020
by Darshana Rathnayake, et al.

Natural human interactions for Mixed Reality applications are overwhelmingly multimodal: humans communicate intent and instructions via a combination of visual, aural and gestural cues. However, supporting low-latency and accurate comprehension of such multimodal instructions (MMI) on resource-constrained wearable devices remains an open challenge, especially as the state-of-the-art comprehension techniques for each individual modality increasingly utilize complex Deep Neural Network models. We demonstrate the possibility of overcoming the core latency-vs.-accuracy tradeoff by exploiting cross-modal dependencies, i.e., by compensating for the inferior performance of one model with the increased accuracy of a more complex model of a different modality. We present a sensor fusion architecture that performs MMI comprehension in a quasi-synchronous fashion by fusing visual, speech and gestural input. The architecture is reconfigurable and supports dynamic modification of the complexity of the data processing pipeline for each individual modality in response to contextual changes. Using a representative "classroom" context and a set of four common interaction primitives, we then demonstrate how the choices between low- and high-complexity models for each individual modality are coupled. In particular, we show that (a) a judicious combination of low- and high-complexity models across modalities can offer a dramatic 3-fold decrease in comprehension latency together with an increase of 10-15% in accuracy, and (b) the best-performing model combination is context dependent, with the performance of some model combinations being significantly more sensitive to changes in scene context or choice of interaction.
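The abstract describes the reconfigurable fusion architecture only at a high level. The Python sketch below illustrates the general idea of such a design: a controller assigns a low- or high-complexity recognizer to each modality, runs the modality pipelines in parallel, and fuses their outputs quasi-synchronously. This is a minimal sketch, not the authors' implementation; all names (ModalityConfig, run_recognizer, comprehend, reconfigure) and the reconfiguration policy are hypothetical.

```python
# Minimal sketch (not the paper's implementation): a reconfigurable controller
# that assigns a low- or high-complexity model to each modality and fuses
# their outputs quasi-synchronously. All names and policies are illustrative.

import concurrent.futures
from dataclasses import dataclass


@dataclass
class ModalityConfig:
    name: str        # e.g. "vision", "speech", "gesture"
    complexity: str  # "low" or "high"


def run_recognizer(config: ModalityConfig, frame):
    """Placeholder for a per-modality DNN inference call.

    A real pipeline would dispatch to a lightweight or a heavyweight model
    depending on config.complexity.
    """
    return {"modality": config.name, "label": None, "confidence": 0.0}


def comprehend(frame, configs):
    """Run all modality pipelines in parallel and fuse their outputs."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda c: run_recognizer(c, frame), configs))
    # Naive late fusion: pick the highest-confidence interpretation.
    return max(results, key=lambda r: r["confidence"])


def reconfigure(configs, context):
    """Toy policy: spend model complexity on vision in cluttered scenes,
    and on the other modalities otherwise (purely illustrative)."""
    for c in configs:
        if c.name == "vision":
            c.complexity = "high" if context.get("cluttered_scene") else "low"
        else:
            c.complexity = "low" if context.get("cluttered_scene") else "high"
    return configs
```

In this sketch, reconfigure() stands in for the context-driven pipeline adaptation described in the abstract, and comprehend() stands in for the quasi-synchronous fusion step; the paper's actual model-selection logic and fusion scheme may differ.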

