Goal-driven text descriptions for images

08/28/2021
by Ruotian Luo, et al.

A big part of achieving Artificial General Intelligence (AGI) is building a machine that can see and listen like humans. Much work has focused on designing models for image classification, video classification, object detection, pose estimation, speech recognition, etc., and thanks to deep learning has achieved significant progress in recent years. However, understanding the world is not enough: an AI agent also needs to know how to talk, especially how to communicate with humans. While perception (vision, for example) is common across animal species, the use of complex language is unique to humans and is one of the most important aspects of intelligence.

In this thesis, we focus on generating textual output given visual input. In Chapter 3, we focus on generating referring expressions: text descriptions of an object in an image that let a receiver infer which object is being described. We use a comprehension machine to directly guide the generated referring expressions to be more discriminative. In Chapter 4, we introduce a method that encourages discriminability in image caption generation, and show that more discriminative captioning models generate more descriptive captions. In Chapter 5, we study how training objectives and sampling methods affect a model's ability to generate diverse captions, and find that a popular captioning training strategy is detrimental to the diversity of generated captions. In Chapter 6, we propose a model that can control the length of generated captions; by changing the desired length, one can influence the style and descriptiveness of the captions. Finally, in Chapter 7, we rank and generate informative image tags according to their information utility. The proposed method better matches what humans consider the most important tags for images.
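To illustrate the kind of listener-guided objective the abstract describes, here is a minimal toy sketch of combining a speaker's caption likelihood with a comprehension-model term that rewards captions letting a listener identify the target object among distractors. All names (`discriminative_loss`, `lam`, the scalar inputs) are hypothetical illustrations, not the thesis's actual implementation.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def discriminative_loss(speaker_logprob, listener_scores, target_idx, lam=0.5):
    """Toy combined objective: the speaker's negative log-likelihood of the
    caption, plus a listener term that penalizes captions under which the
    comprehension model cannot pick out the target among distractors.

    speaker_logprob: log p(caption | target image region) from the speaker.
    listener_scores: listener's matching score between the caption and each
                     candidate object (target plus distractors).
    target_idx:      index of the true target among the candidates.
    lam:             weight trading off fluency against discriminability.
    """
    p_target = softmax(listener_scores)[target_idx]
    return -speaker_logprob - lam * math.log(p_target)

# A caption the listener can resolve (high score on the target) incurs a
# lower loss than an ambiguous one that scores all candidates equally.
clear = discriminative_loss(-2.0, [3.0, 0.0, 0.0], target_idx=0)
ambiguous = discriminative_loss(-2.0, [1.0, 1.0, 1.0], target_idx=0)
```

In practice such a listener term is differentiable when the listener's scores are produced by a neural comprehension model, so the speaker can be trained end-to-end toward discriminative output; this sketch only shows the shape of the trade-off.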

