Guidelines for creating man-machine multimodal interfaces

01/29/2019
by João Ranhel, et al.

Understanding the details of human multimodal interaction can elucidate many aspects of the information processing a machine must perform to interact with humans. This article gives an overview of recent findings from Linguistics regarding the organization of conversation into turns, adjacency pairs, (dis)preferred responses, (self-)repairs, and so on. In addition, we describe how multiple modalities of signs interact with one another, modifying meaning. We then propose an abstract algorithm describing how a machine can implement a double-feedback system that reproduces human-like face-to-face interaction by processing various signs: verbal, prosodic, facial expressions, gestures, and so forth. Multimodal face-to-face interaction enriches the exchange of information between agents, mainly because both agents are active the whole time, emitting and interpreting signs simultaneously. This article is not about an untested new computational model. Instead, it translates findings from Linguistics into guidelines for the design of multimodal man-machine interfaces. The algorithm presented here is a description, drawn from Linguistics, of how human face-to-face interaction works. The linguistic findings reported here are first steps towards the integration of multimodal communication. Some developers working on interface design continue to build isolated models for interpreting text, grammar, gestures, and facial expressions, neglecting how these signs are interwoven. In contrast, for linguists working on state-of-the-art multimodal integration, interpreting modalities separately leads to an incomplete interpretation, if not to a misunderstanding of the information. The algorithm proposed here is intended to guide designers of man-machine interfaces who want to integrate multimodal components into face-to-face interactions that are as close as possible to those between humans.
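To make the double-feedback idea concrete, the sketch below shows, in Python, one way such a loop could be organized: the agent fuses signs from several modalities before committing to a meaning, emits continuous feedback (nods, backchannels) even while the partner holds the turn, and monitors the partner's feedback to trigger self-repair while it speaks. All names here (Sign, fuse, DialogueAgent) and the toy fusion rule are illustrative assumptions, not code from the paper.

```python
# Minimal, hypothetical sketch of the double-feedback loop described above.
# Names and the fusion heuristic are illustrative, not the paper's algorithm.

from dataclasses import dataclass
from enum import Enum, auto
from typing import List


class Modality(Enum):
    VERBAL = auto()     # words, grammar
    PROSODIC = auto()   # pitch, loudness, timing
    FACIAL = auto()     # expressions
    GESTURAL = auto()   # hand/body movements


@dataclass
class Sign:
    """One multimodal cue with a reliability weight for its channel."""
    modality: Modality
    label: str      # e.g. "great job", "flat-pitch", "frown"
    weight: float


def fuse(signs: List[Sign]) -> str:
    """Integrate all channels before committing to a meaning.

    Reading any channel alone can mislead: "great job" spoken with flat
    prosody and a frown reads as irony, not praise. This fusion rule is
    a toy stand-in for a real multimodal interpreter.
    """
    verbal = [s for s in signs if s.modality is Modality.VERBAL]
    nonverbal = [s for s in signs if s.modality is not Modality.VERBAL]
    contradicts = any(s.label in ("frown", "flat-pitch") for s in nonverbal)
    if verbal and contradicts:
        return f"ironic({verbal[0].label})"
    return verbal[0].label if verbal else "backchannel-only"


class DialogueAgent:
    """Runs two feedback loops at once, as the abstract describes:
    (1) interpret the partner's signs and give continuous feedback
        while the partner holds the turn;
    (2) monitor the partner's reactions while speaking and self-repair
        when they signal trouble (e.g. a puzzled face)."""

    def __init__(self) -> None:
        self.holds_turn = False

    def step(self, incoming: List[Sign]) -> List[str]:
        actions: List[str] = []
        meaning = fuse(incoming)

        # Loop 1: feedback to the partner while they speak.
        if not self.holds_turn:
            actions.append(f"backchannel: nod (interpreted: {meaning})")
            if meaning.startswith("ironic"):
                actions.append("facial: raise eyebrows")

        # Loop 2: self-monitoring while we speak; repair on trouble signs.
        if self.holds_turn and any(s.label == "puzzled" for s in incoming):
            actions.append("self-repair: 'I mean...' + rephrase")

        return actions


if __name__ == "__main__":
    agent = DialogueAgent()
    turn = [
        Sign(Modality.VERBAL, "great job", 0.9),
        Sign(Modality.PROSODIC, "flat-pitch", 0.7),
        Sign(Modality.FACIAL, "frown", 0.8),
    ]
    for action in agent.step(turn):
        print(action)
```

Run as a script, the demo prints the agent's reactions to an ironic utterance, a case where interpreting the verbal channel alone would misread the turn as praise; this is the point the abstract makes about interwoven modalities.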


