
Characterizing the Temporal Dynamics of Information in Visually Guided Predictive Control Using LSTM Recurrent Neural Networks

by Kamran Binaee, et al.
Rochester Institute of Technology

Theories of visually guided action account for online control in the presence of reliable sources of visual information, and for predictive control that compensates for visuomotor delay and temporary occlusion. In this study, we characterize the temporal relationship between the information-integration window and prediction distance using computational models. Subjects were immersed in a simulated environment and attempted to catch virtual balls that were transiently "blanked" during flight. Recurrent neural networks were trained to reproduce subjects' gaze and hand movements during the blank. The models successfully predict gaze behavior to within 3 degrees and hand movements to within 8.5 cm, as far as 500 ms into the future, from integration windows as short as 27 ms. Furthermore, we quantified the contribution of each input source of information to motor output through an ablation study. The model is a proof of concept for prediction as a discrete mapping between information integrated over time and a temporally distant motor output.
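The core idea, a recurrent network that integrates a short window of sensory input and emits a motor prediction at a temporally distant target, can be sketched as follows. This is a minimal illustrative sketch, not the authors' model: the input features (ball position, gaze angles), window length, hidden size, and the untrained random weights are all assumptions chosen for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell with the four gates stacked in one weight
    matrix. Weights are random here; a real model would be trained."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(n_in + n_hidden)
        self.W = rng.normal(0.0, scale, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g          # update cell state
        h = o * np.tanh(c)         # emit hidden state
        return h, c

def predict_future(cell, W_out, window):
    """Integrate a short window of input samples (e.g. ball and gaze
    state before the blank) and read out a single motor prediction for
    a temporally distant target (e.g. gaze direction 500 ms ahead):
    a discrete mapping from integrated information to future output."""
    h = np.zeros(cell.n_hidden)
    c = np.zeros(cell.n_hidden)
    for x in window:               # integration window: T samples
        h, c = cell.step(x, h, c)
    return W_out @ h               # linear readout of the final state

# Hypothetical setup: 3 samples of 6 input features
# (e.g. ball xyz + gaze azimuth/elevation + time), 2 outputs
rng = np.random.default_rng(1)
cell = LSTMCell(n_in=6, n_hidden=16)
W_out = rng.normal(0.0, 0.1, (2, 16))
window = rng.normal(size=(3, 6))
pred = predict_future(cell, W_out, window)
print(pred.shape)
```

Varying the window length (how much information is integrated) against the readout's temporal offset is, in essence, the trade-off the study characterizes.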



