Visual Understanding and Narration: A Deeper Understanding and Explanation of Visual Scenes

05/31/2019
by   Stephanie M. Lukin, et al.

We describe the task of Visual Understanding and Narration, in which a robot (or agent) generates text for the images it collects while navigating its environment, answering open-ended questions such as "What happens, or might have happened, here?"

Related research

- Proposing Plausible Answers for Open-ended Visual Question Answering (10/20/2016)
- Look, Read and Ask: Learning to Ask Questions by Reading Text in Images (11/23/2022)
- IQA: Visual Question Answering in Interactive Environments (12/09/2017)
- YouRefIt: Embodied Reference Understanding with Language and Gesture (09/08/2021)
- Open-Ended Multi-Modal Relational Reason for Video Question Answering (12/01/2020)
- Visual Affordance and Function Understanding: A Survey (07/18/2018)
- SAMPLE-HD: Simultaneous Action and Motion Planning Learning Environment (06/01/2022)
