Natural interaction with traffic control cameras through multimodal interfaces

03/02/2021
by Marco Grazioso, et al.

Human-computer interfaces have always played a fundamental role in the usability and command interpretability of modern software systems. With the rise of Artificial Intelligence, such interfaces have begun to close the gap between the user and the system itself, evolving further into Adaptive User Interfaces (AUI). Meta interfaces are a further step towards the user: they aim to support human activities in an ambient interactive space, so that the user can control the surrounding space and interact with it. This work proposes a meta user interface that exploits the Put That There paradigm to enable fast interaction through natural language and gestures.

The application scenario is a video surveillance control room, in which the speed of actions and reactions is fundamental for urban safety and for the security of drivers and pedestrians. The interaction targets three environments. The first is the control room itself, in which the operator can arrange the monitor views associated with the on-site cameras through vocal commands and gestures, and can route the audio to the headset or to the room's speakers. The second concerns control of the video stream, allowing the operator to move back and forth to a scene showing a specific event, or to zoom a particular camera in or out. The third allows the operator to send a rescue vehicle to a particular street when needed.

Gesture data are acquired through a Microsoft Kinect 2, which captures pointing and gestures, allowing the user to interact multimodally and thus increasing the naturalness of the interaction. The related module maps the movement information to a particular instruction, supported by vocal commands that enable its execution. (cont...)
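The abstract does not give implementation details, but the core of the Put That There paradigm is deictic resolution: a spoken command containing "that" or "there" is grounded by the pointing gesture closest in time to the utterance. The sketch below illustrates that fusion step under stated assumptions; all names (`PointingEvent`, `resolve_command`, the target identifiers, the one-second skew threshold) are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class PointingEvent:
    """A pointing gesture already resolved (e.g. by a Kinect-based
    module) to a named target such as a monitor or camera."""
    timestamp: float  # seconds, same clock as the speech recognizer
    target: str

def resolve_command(utterance, utterance_time, pointing_events, max_skew=1.0):
    """Replace deictic words ('that', 'there') in a spoken command
    with the target pointed at closest in time to the utterance.
    Returns None when no gesture falls within max_skew seconds."""
    if not pointing_events:
        return None
    nearest = min(pointing_events,
                  key=lambda p: abs(p.timestamp - utterance_time))
    if abs(nearest.timestamp - utterance_time) > max_skew:
        return None  # no gesture close enough: command stays ambiguous
    tokens = [nearest.target if t in ("that", "there") else t
              for t in utterance.lower().split()]
    return " ".join(tokens)
```

For example, if the operator says "zoom in on that" while pointing at a camera feed, the deictic token is replaced by the pointed-at target, yielding a fully grounded instruction the system can execute.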

