User-driven Intelligent Interface on the Basis of Multimodal Augmented Reality and Brain-Computer Interaction for People with Functional Disabilities

04/12/2017
by S. Stirenko, et al.

An analysis of current attempts to integrate several modes and use cases of user-machine interaction is presented. A new concept of a user-driven intelligent interface is proposed on the basis of multimodal augmented reality (AR) and brain-computer interaction for various applications: disability studies, education, home care, health care, etc. Several use cases of multimodal augmentation are presented. The prospects for better human comprehension through immediate feedback over neurophysical channels by means of brain-computer interaction are outlined. It is shown that brain-computer interface (BCI) technology provides new strategies to overcome the limits of currently available user interfaces, especially for people with functional disabilities. The results of our previous studies of low-end consumer and open-source BCI devices lead us to conclude that combining machine learning (ML) and multimodal interactions (visual, sound, tactile) with BCI will profit from immediate feedback on the actual neurophysical reactions classified by ML methods. In general, BCI in combination with other modes of AR interaction can deliver much more information than either type of interaction alone. Even in their current state, combined AR-BCI interfaces could provide highly adaptable and personalized services, especially for people with functional disabilities.
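The feedback loop described above relies on classifying neurophysical reactions with ML methods. A minimal sketch of such a classification step is shown below; the band-power features, class labels, and the logistic-regression pipeline are all illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch: classifying simulated neurophysical (EEG-like) reactions
# with a simple ML pipeline, as an AR-BCI feedback loop might do.
# Feature layout and labels are assumptions for illustration only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated per-trial band-power features (e.g. alpha, beta, theta, gamma).
n_trials = 200
X = rng.normal(size=(n_trials, 4))
# Binary reaction labels (e.g. "engaged" vs. "distracted"), correlated
# with the first two features so there is something to learn.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_trials) > 0).astype(int)

# Standardize features, then fit a linear classifier on the first 150 trials.
clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X[:150], y[:150])

# Evaluate on the held-out 50 trials; the prediction could then drive
# immediate feedback in the AR interface.
accuracy = clf.score(X[150:], y[150:])
print(f"held-out accuracy: {accuracy:.2f}")
```

In a real system the features would come from an online EEG pipeline rather than simulation, and the classifier's output would adapt the AR presentation in real time.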


