Neural Foundations of Mental Simulation: Future Prediction of Latent Representations on Dynamic Scenes

05/19/2023
by Aran Nayebi, et al.

Humans and animals have a rich and flexible understanding of the physical world, which enables them to infer the underlying dynamical trajectories of objects and events, predict plausible future states, and use these predictions to plan and anticipate the consequences of actions. However, the neural mechanisms underlying these computations are unclear. We combine a goal-driven modeling approach with dense neurophysiological data and high-throughput human behavioral readouts to directly address this question. Specifically, we construct and evaluate several classes of sensory-cognitive networks that predict the future state of rich, ethologically relevant environments, ranging from self-supervised end-to-end models with pixel-wise or object-centric objectives to models that future predict in the latent space of purely static image-based or dynamic video-based pretrained foundation models. We find strong differentiation across these model classes in their ability to predict neural and behavioral data, both within and across diverse environments. In particular, neural responses are currently best predicted by models trained to predict the future state of their environment in the latent space of pretrained foundation models optimized for dynamic scenes in a self-supervised manner. Notably, models that future predict in the latent space of video foundation models optimized to support a diverse range of sensorimotor tasks reasonably match both human behavioral error patterns and neural dynamics across all environmental scenarios we were able to test. Overall, these findings suggest that the neural mechanisms and behaviors of primate mental simulation are thus far most consistent with being optimized to future predict on dynamic, reusable visual representations that are useful for embodied AI more generally.
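
To make the latent-space future prediction setup concrete, here is a minimal PyTorch sketch. It is not the paper's actual architecture: `FrozenEncoder`, `LatentDynamics`, and all hyperparameters here are hypothetical stand-ins for a pretrained video foundation model and a learned dynamics module. The idea it illustrates is the one named in the abstract: freeze a pretrained encoder, train only a small dynamics model to predict the next latent state, and compute the loss in latent space rather than pixel space.

```python
# Minimal sketch of latent-space future prediction (hypothetical example,
# not the paper's architecture). A frozen encoder stands in for a pretrained
# foundation model; only the dynamics model is optimized.
import torch
import torch.nn as nn

class FrozenEncoder(nn.Module):
    """Stand-in for a pretrained image/video foundation model encoder."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        for p in self.parameters():          # frozen: gradients flow only
            p.requires_grad_(False)          # through the dynamics model

    def forward(self, frames):               # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        z = self.net(frames.flatten(0, 1))    # encode each frame: (B*T, D)
        return z.view(B, T, -1)               # (B, T, D)

class LatentDynamics(nn.Module):
    """Predicts the next latent state from the history of past latents."""
    def __init__(self, latent_dim=256, hidden=512):
        super().__init__()
        self.rnn = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)

    def forward(self, z_past):                # (B, T-1, D)
        h, _ = self.rnn(z_past)
        return self.head(h[:, -1])            # predicted next latent: (B, D)

encoder, dynamics = FrozenEncoder(), LatentDynamics()
opt = torch.optim.Adam(dynamics.parameters(), lr=1e-4)

frames = torch.randn(8, 11, 3, 64, 64)        # toy batch: 11-frame clips
z = encoder(frames)                           # latents for all frames
z_pred = dynamics(z[:, :-1])                  # predict the final latent
loss = nn.functional.mse_loss(z_pred, z[:, -1])  # loss in latent space
opt.zero_grad()
loss.backward()
opt.step()
print(f"latent prediction loss: {loss.item():.4f}")
```

Computing the objective in latent space rather than pixel space is the design choice at issue: the dynamics model inherits the foundation model's reusable visual representations instead of having to re-learn low-level visual structure from raw frames.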


Related research

09/17/2020 · Learning to Identify Physical Parameters from Video Using Differentiable Physics
Video representation learning has recently attracted attention in comput...

06/05/2017 · Visual Interaction Networks
From just a glance, humans can make rich predictions about the future st...

11/05/2018 · Learning Shared Dynamics with Meta-World Models
Humans have consciousness as the ability to perceive events and objects:...

03/04/2019 · VideoFlow: A Flow-Based Generative Model for Video
Generative models that can model and predict sequences of future events ...

07/31/2023 · Learning to Model the World with Language
To interact with humans in the world, agents need to understand the dive...

03/01/2019 · Video Extrapolation with an Invertible Linear Embedding
We predict future video frames from complex dynamic scenes, using an inv...

09/13/2023 · EarthPT: a foundation model for Earth Observation
We introduce EarthPT – an Earth Observation (EO) pretrained transformer....
