LIMIT: Learning Interfaces to Maximize Information Transfer

04/17/2023
by Benjamin A. Christie et al.

Robots can use auditory, visual, or haptic interfaces to convey information to human users. The way these interfaces select signals is typically pre-defined by the designer: for instance, a haptic wristband might vibrate when the robot is moving and squeeze when the robot stops. But different people interpret the same signals in different ways, so what makes sense to one person may be confusing or unintuitive to another. In this paper we introduce a unified algorithmic formalism for learning co-adaptive interfaces from scratch. Our method does not need to know the human's task (i.e., what the human is using these signals for). Instead, our insight is that interpretable interfaces should select signals that maximize correlation between the human's actions and the information the interface is trying to convey. Applying this insight, we develop LIMIT: Learning Interfaces to Maximize Information Transfer. LIMIT optimizes a tractable, real-time proxy of information gain in continuous spaces. The first time a person works with our system the signals may appear random; but over repeated interactions the interface learns a one-to-one mapping between displayed signals and human responses. Our resulting approach is personalized to the current user and not tied to any specific interface modality. We compare LIMIT to state-of-the-art baselines across controlled simulations, an online survey, and an in-person user study with auditory, visual, and haptic interfaces. Overall, our results suggest that LIMIT learns interfaces that enable users to complete the task more quickly and efficiently, and users subjectively prefer LIMIT to the alternatives. See videos here: https://youtu.be/IvQ3TM1_2fA.
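To make the core insight concrete, here is a minimal toy sketch of selecting signals by empirical mutual information between displayed signals and observed human responses. This is not the paper's algorithm (LIMIT operates over continuous spaces with a real-time proxy of information gain); the discrete signals, the `human_response` user model, and the candidate signal pairs below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_mi(counts):
    """Mutual information (in nats) of a joint (signal, response) count table."""
    joint = counts / counts.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal over signals
    py = joint.sum(axis=0, keepdims=True)   # marginal over responses
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px * py)[nz])).sum())

# Hypothetical user model: signals 0 and 1 elicit consistent responses,
# while signal 2 is confusing and elicits random responses.
def human_response(signal):
    if signal == 2:
        return int(rng.integers(2))
    return signal

# Accumulate (signal, response) counts over repeated interactions
# (Laplace-smoothed so empty cells do not break the MI estimate).
counts = np.ones((3, 2))
for _ in range(300):
    s = int(rng.integers(3))
    counts[s, human_response(s)] += 1

# To convey one bit, score each candidate pair of signals by the MI its
# rows achieve, and keep the pair whose responses are most distinguishable.
pairs = [(0, 1), (0, 2), (1, 2)]
mi = {p: empirical_mi(counts[list(p)]) for p in pairs}
best = max(mi, key=mi.get)
```

Under this toy user model, the pair (0, 1) wins: its signals produce a near-deterministic one-to-one mapping with the human's responses, mirroring how repeated interaction lets the interface converge on signals the current user actually finds interpretable.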


