Assisted Perception: Optimizing Observations to Communicate State

08/06/2020
by Siddharth Reddy, et al.

We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairments, where users may have systematic biases that lead to suboptimal behavior: they might struggle to process observations from multiple sensors simultaneously, receive delayed observations, or overestimate distances to obstacles. While we cannot directly change the user's internal beliefs or their internal state estimation process, our insight is that we can still assist them by modifying the user's observations. Instead of showing the user their true observations, we synthesize new observations that lead to more accurate internal state estimates when processed by the user. We refer to this method as assistive state estimation (ASE): an automated assistant uses the true observations to infer the state of the world, then generates a modified observation for the user to consume (e.g., through an augmented reality interface), and optimizes the modification to induce the user's new beliefs to match the assistant's current beliefs. We evaluate ASE in a user study with 12 participants who each perform four tasks: two tasks with known user biases – bandwidth-limited image classification and a driving video game with observation delay – and two with unknown biases that our method has to learn – guided 2D navigation and a lunar lander teleoperation video game. A different assistance strategy emerges in each domain, such as quickly revealing informative pixels to speed up image classification, using a dynamics model to undo observation delay in driving, identifying nearby landmarks for navigation, and exaggerating a visual indicator of tilt in the lander game. The results show that ASE substantially improves the task performance of users with bandwidth constraints, observation delay, and other unknown biases.
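The core loop described above, in which the assistant maintains its own belief over states and selects the observation that steers the user's belief toward it, can be sketched in a minimal discrete-state form. This is an illustrative reconstruction, not the paper's implementation: the shared observation model `obs_model`, the candidate set `observations`, and the use of KL divergence as the matching objective are all assumptions of the sketch.

```python
import numpy as np

def bayes_update(belief, likelihood):
    """Posterior over discrete states: multiply the prior belief by the
    per-state observation likelihoods and renormalize."""
    post = belief * likelihood
    return post / post.sum()

def assist(assistant_belief, user_belief, obs_model, observations):
    """Pick the modified observation to show the user that makes the
    user's updated belief closest (in KL divergence) to the assistant's
    current belief. `obs_model[o]` gives P(o | s) for each state s; this
    sketch assumes the user performs an exact Bayes update on whatever
    observation it is shown."""
    best_obs, best_kl = None, np.inf
    for o in observations:
        induced = bayes_update(user_belief, obs_model[o])
        kl = np.sum(assistant_belief * np.log(assistant_belief / induced))
        if kl < best_kl:
            best_obs, best_kl = o, kl
    return best_obs
```

For example, if the assistant is fairly sure the true state is state 0 but the user's belief is uniform, the assistant would select whichever candidate observation most strongly implicates state 0 under the shared observation model. In the full method, the user's belief-update rule is not assumed to be exact Bayesian inference; it is a learned model of the (possibly biased) user.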


