
Grounding Physical Concepts of Objects and Events Through Dynamic Visual Reasoning

by Zhenfang Chen, et al.

We study the problem of dynamic visual reasoning on raw videos. This is a challenging problem; current state-of-the-art models often require dense supervision on physical object properties and events from simulation, which is impractical to obtain in real life. In this paper, we present the Dynamic Concept Learner (DCL), a unified framework that grounds physical objects and events from video and language. DCL first adopts a trajectory extractor to track each object over time and to represent it as a latent, object-centric feature vector. Building upon this object-centric representation, DCL learns to approximate the dynamic interactions among objects using graph networks. DCL further incorporates a semantic parser to parse questions into semantic programs and, finally, a program executor to run the programs and answer the questions, leveraging the learned dynamics model. After training, DCL can detect and associate objects across frames, ground visual properties and physical events, understand the causal relationships between events, make future and counterfactual predictions, and leverage these extracted representations to answer queries. DCL achieves state-of-the-art performance on CLEVRER, a challenging causal video reasoning dataset, even without using ground-truth attributes and collision labels from simulation for training. We further test DCL on a newly proposed video-retrieval and event-localization dataset derived from CLEVRER, showing its strong generalization capacity.
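The dynamics component described above can be illustrated with a minimal sketch: given object-centric feature vectors, one round of graph-network message passing lets each object aggregate pairwise interaction messages and update its state. This is a hypothetical toy example, not the authors' implementation; all dimensions, weights, and function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Two-layer perceptron with ReLU, reused for edge and node updates.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Assumed setup: 4 tracked objects, 8-dim object-centric features.
n_obj, d = 4, 8
feats = rng.normal(size=(n_obj, d))

# Randomly initialized edge/node MLP weights (illustrative only;
# in practice these would be trained to predict future object states).
we1, be1 = rng.normal(size=(2 * d, d)), np.zeros(d)
we2, be2 = rng.normal(size=(d, d)), np.zeros(d)
wn1, bn1 = rng.normal(size=(2 * d, d)), np.zeros(d)
wn2, bn2 = rng.normal(size=(d, d)), np.zeros(d)

def graph_step(feats):
    """One round of message passing over a fully connected object graph."""
    n = feats.shape[0]
    messages = np.zeros_like(feats)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Edge MLP computes a pairwise interaction message.
            pair = np.concatenate([feats[i], feats[j]])
            messages[i] += mlp(pair, we1, be1, we2, be2)
    # Node MLP combines each object's feature with its aggregated message.
    upd_in = np.concatenate([feats, messages], axis=1)
    return mlp(upd_in, wn1, bn1, wn2, bn2)

next_feats = graph_step(feats)
print(next_feats.shape)  # (4, 8): one predicted feature vector per object
```

Iterating `graph_step` rolls the latent dynamics forward in time, which is what enables the future and counterfactual predictions the abstract mentions.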
