X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback

03/04/2022
by Jensen Gao, et al.

We aim to help users communicate their intent to machines using flexible, adaptive interfaces that translate arbitrary user input into desired actions. In this work, we focus on assistive typing applications in which a user cannot operate a keyboard, but can instead supply other inputs, such as webcam images that capture eye gaze or neural activity measured by a brain implant. Standard methods train a model on a fixed dataset of user inputs, then deploy a static interface that does not learn from its mistakes, in part because extracting an error signal from user behavior can be challenging. We investigate a simple idea that would enable such interfaces to improve over time, with minimal additional effort from the user: online learning from user feedback on the accuracy of the interface's actions. In the typing domain, we leverage backspaces as feedback that the interface did not perform the desired action. We propose an algorithm called x-to-text (X2T) that trains a predictive model of this feedback signal and uses the model to fine-tune any existing, default interface for translating user input into actions that select words or characters. We evaluate X2T through a small-scale online user study with 12 participants who type sentences by gazing at their desired words, a large-scale observational study on handwriting samples from 60 users, and a pilot study with one participant using an electrocorticography-based brain-computer interface. The results show that X2T learns to outperform a non-adaptive default interface, stimulates user co-adaptation to the interface, personalizes the interface to individual users, and can leverage offline data collected from the default interface to improve its initial performance and accelerate online learning.
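
To make the feedback loop described above concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not the paper's implementation: backspaces are treated as binary labels for a per-word logistic model of p(backspace | input features), and the frozen default interface's scores are re-ranked by the predicted probability that each candidate word would be accepted. The names FeedbackModel and x2t_select, the linear feature model, and the re-ranking rule are all illustrative assumptions; the paper's predictive model and fine-tuning procedure may differ.

```python
# Illustrative sketch of an X2T-style loop: learn a model of backspace
# feedback online and use it to re-rank a frozen default interface.
# All names and modeling choices here are assumptions, not the paper's code.
import numpy as np

class FeedbackModel:
    """Per-word logistic model of p(backspace | input features)."""

    def __init__(self, n_features, n_words, lr=0.05):
        self.W = np.zeros((n_words, n_features))
        self.b = np.zeros(n_words)
        self.lr = lr

    def p_error(self, x):
        # Predicted probability that each candidate word, if selected,
        # would be rejected (backspaced) by the user.
        z = self.W @ x + self.b
        return 1.0 / (1.0 + np.exp(-z))

    def update(self, x, action, backspaced):
        # One SGD step on the logistic loss. Feedback is only observed
        # for the word the interface actually selected.
        err = self.p_error(x)[action] - float(backspaced)
        self.W[action] -= self.lr * err * x
        self.b[action] -= self.lr * err

def x2t_select(default_logits, model, x):
    # Adjust the default interface's decision by down-weighting
    # candidates the feedback model expects the user to reject.
    p_accept = 1.0 - model.p_error(x)
    scores = default_logits + np.log(np.clip(p_accept, 1e-6, None))
    return int(np.argmax(scores))

# One online interaction step (toy inputs):
model = FeedbackModel(n_features=8, n_words=5)
x = np.random.randn(8)               # e.g., gaze features from a webcam
default_logits = np.random.randn(5)  # frozen default interface's scores
word = x2t_select(default_logits, model, x)
model.update(x, word, backspaced=False)  # user kept the word: no error
```

With zero-initialized weights the predicted error probability is uniform across candidates, so the re-ranking initially reproduces the default interface's choices; consistent with the abstract, offline logs collected from the default interface could be used to pre-train such a model before deployment and accelerate online learning.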

Related research

02/05/2022
ASHA: Assistive Teleoperation via Human-in-the-Loop Reinforcement Learning
Building assistive interfaces for controlling robots through arbitrary, ...

09/07/2023
Bootstrapping Adaptive Human-Machine Interfaces with Offline Reinforcement Learning
Adaptive interfaces can help users perform sequential decision-making ta...

05/23/2020
Evaluation of Non-Collocated Force Feedback Driven by Signal-Independent Noise
Individuals living with paralysis or amputation can operate robotic pros...

05/24/2022
First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization
How can we train an assistive human-machine interface (e.g., an electrom...

09/26/2022
MARLUI: Multi-Agent Reinforcement Learning for Goal-Agnostic Adaptive UIs
The goal of Adaptive UIs is to automatically change an interface so that...

06/16/2023
Learning to Assist and Communicate with Novice Drone Pilots for Expert Level Performance
Multi-task missions for unmanned aerial vehicles (UAVs) involving inspec...

02/12/2023
LipLearner: Customizable Silent Speech Interactions on Mobile Devices
Silent speech interface is a promising technology that enables private c...
