First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization

05/24/2022
by   Siddharth Reddy, et al.

How can we train an assistive human-machine interface (e.g., an electromyography-based limb prosthesis) to translate a user's raw command signals into the actions of a robot or computer when there is no prior mapping, we cannot ask the user for supervision in the form of action labels or reward feedback, and we do not have prior knowledge of the tasks the user is trying to accomplish? The key idea in this paper is that, regardless of the task, when an interface is more intuitive, the user's commands are less noisy. We formalize this idea as a completely unsupervised objective for optimizing interfaces: the mutual information between the user's command signals and the induced state transitions in the environment.

To evaluate whether this mutual information score can distinguish between effective and ineffective interfaces, we conduct an observational study on 540K examples of users operating various keyboard and eye gaze interfaces for typing, controlling simulated robots, and playing video games. The results show that our mutual information scores are predictive of the ground-truth task completion metrics in a variety of domains, with an average Spearman's rank correlation of 0.43.

In addition to offline evaluation of existing interfaces, we use our unsupervised objective to learn an interface from scratch: we randomly initialize the interface, have the user attempt to perform their desired tasks using the interface, measure the mutual information score, and update the interface to maximize mutual information through reinforcement learning. We evaluate our method through a user study with 12 participants who perform a 2D cursor control task using a perturbed mouse, and an experiment with one user playing the Lunar Lander game using hand gestures. The results show that we can learn an interface from scratch, without any user supervision or prior knowledge of tasks, in under 30 minutes.

Related research

- Bootstrapping Adaptive Human-Machine Interfaces with Offline Reinforcement Learning (09/07/2023): Adaptive interfaces can help users perform sequential decision-making ta...
- LIMIT: Learning Interfaces to Maximize Information Transfer (04/17/2023): Robots can use auditory, visual, or haptic interfaces to convey informat...
- X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback (03/04/2022): We aim to help users communicate their intent to machines using flexible...
- ASHA: Assistive Teleoperation via Human-in-the-Loop Reinforcement Learning (02/05/2022): Building assistive interfaces for controlling robots through arbitrary, ...
- Improving Robot-Centric Learning from Demonstration via Personalized Embeddings (10/07/2021): Learning from demonstration (LfD) techniques seek to enable novice users...
- MI image registration using prior knowledge (05/24/2007): Subtraction of aligned images is a means to assess changes in a wide var...
- Communication in Plants: Comparison of Multiple Action Potential and Mechanosensitive Signals with Experiments (11/12/2019): Both action potentials and mechanosensitive signalling are an important ...
