Discovering Novel Actions in an Open World with Object-Grounded Visual Commonsense Reasoning

05/26/2023 · by Sathyanarayanan N. Aakur, et al.

Learning to infer labels in an open world, i.e., in an environment where the target “labels” are unknown, is an important characteristic of autonomous agents. Foundation models pre-trained on enormous amounts of data have shown remarkable generalization through prompting, particularly in zero-shot inference. However, their performance depends on the correctness of the target label search space: in an open world, where these labels are unknown, that space can be exceptionally large, and inference may require reasoning over many combinations of elementary concepts, which severely restricts such models. To tackle this challenging problem, we propose ALGO (novel Action Learning with Grounded Object recognition), a neuro-symbolic framework that uses symbolic knowledge stored in large-scale knowledge bases to infer activities (verb-noun combinations) in egocentric videos with limited supervision, in two steps. First, a novel neuro-symbolic prompting approach uses object-centric vision-language foundation models as a noisy oracle to ground objects in the video through evidence-based reasoning. Second, driven by prior commonsense knowledge, an energy-based symbolic pattern theory framework discovers plausible activities and learns to ground knowledge-based action (verb) concepts in the video. Extensive experiments on two publicly available datasets (GTEA Gaze and GTEA Gaze Plus) demonstrate ALGO's performance on open-world activity inference and its generalization to unseen actions in an unknown search space. We also show that ALGO extends to zero-shot settings, where it is competitive with multimodal foundation models.
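To make the two steps concrete, here is a minimal Python sketch of the pipeline the abstract describes. It is illustrative only: `ground_objects`, `discover_activities`, the `oracle_score` callback, and the toy compatibility table are assumptions standing in for ALGO's neuro-symbolic prompting step and its pattern-theory energy formulation, which the paper defines formally.

```python
from itertools import product

# Minimal sketch of a two-stage open-world activity inference pipeline.
# All names are illustrative assumptions, not the paper's implementation:
# `oracle_score` stands in for an object-centric vision-language model used
# as a noisy oracle, and `compatibility` stands in for a ConceptNet-style
# commonsense relatedness term inside the energy function.

def ground_objects(frames, candidate_nouns, oracle_score, threshold=0.25):
    """Stage 1: keep nouns whose average oracle evidence across frames
    clears a threshold (evidence-based object grounding)."""
    grounded = {}
    for noun in candidate_nouns:
        evidence = sum(oracle_score(frame, noun) for frame in frames) / len(frames)
        if evidence >= threshold:
            grounded[noun] = evidence
    return grounded

def discover_activities(grounded_nouns, candidate_verbs, compatibility, top_k=5):
    """Stage 2: rank verb-noun combinations by an energy that combines
    visual evidence for the noun with commonsense verb-noun compatibility.
    Lower energy means a more plausible activity."""
    scored = []
    for verb, (noun, evidence) in product(candidate_verbs, grounded_nouns.items()):
        energy = -(evidence + compatibility(verb, noun))
        scored.append((energy, verb, noun))
    scored.sort()
    return [(verb, noun) for _, verb, noun in scored[:top_k]]

if __name__ == "__main__":
    # Toy stand-ins for the visual oracle and the commonsense knowledge base.
    frames = ["frame_0", "frame_1"]
    toy_obj = {("frame_0", "cup"): 0.9, ("frame_1", "cup"): 0.8}
    oracle = lambda frame, noun: toy_obj.get((frame, noun), 0.05)
    toy_kb = {("pour", "cup"): 0.7, ("cut", "cup"): 0.1}
    compat = lambda verb, noun: toy_kb.get((verb, noun), 0.1)

    nouns = ground_objects(frames, ["cup", "knife"], oracle)
    print(discover_activities(nouns, ["pour", "cut"], compat))
    # -> [('pour', 'cup'), ('cut', 'cup')]
```

The additive energy above is only the simplest possible stand-in; in a pattern-theory framework such as the one the abstract names, compatibility terms of this kind would typically appear as bond energies between concept generators rather than a single sum.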

Related research:

- Knowledge Guided Learning: Towards Open Domain Egocentric Action Recognition with Zero Supervision (09/16/2020). Advances in deep learning have enabled the development of models that ha...
- Exploiting Semantic Contextualization for Interpretation of Human Activity in Videos (08/11/2017). We use large-scale commonsense knowledge bases, e.g. ConceptNet, to prov...
- ESC: Exploration with Soft Commonsense Constraints for Zero-shot Object Navigation (01/30/2023). The ability to accurately locate and navigate to a specific object is a ...
- Learning Video Models from Text: Zero-Shot Anticipation for Procedural Actions (06/06/2021). Can we teach a robot to recognize and make predictions for activities th...
- ZSTAD: Zero-Shot Temporal Activity Detection (03/12/2020). An integral part of video analysis and surveillance is temporal activity...
- ECHo: Event Causality Inference via Human-centric Reasoning (05/24/2023). We introduce ECHo, a diagnostic dataset of event causality inference gro...
- Grounding Predicates through Actions (09/29/2021). Symbols representing abstract states such as "dish in dishwasher" or "cu...
