The red one!: On learning to refer to things based on their discriminative properties

03/08/2016
by Angeliki Lazaridou, et al.

As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (cat) and a context (sofa), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, despite the lack of direct supervision at the attribute level, the model learns to assign plausible attributes to objects (sofa-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.
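The core idea of the abstract — picking out attributes that hold for the referent but not for the context — can be sketched as a simple contrast over per-attribute scores. This is only an illustrative toy, not the paper's model: the paper learns the mapping from visual representations to attributes without attribute-level supervision, whereas here the attribute names, scores, and the `discriminative_attributes` helper are all hypothetical stand-ins.

```python
# Hypothetical sketch of discriminative-attribute selection. Assumes each
# object's visual representation has already been mapped to per-attribute
# scores; the paper learns that mapping, while here the scores are hard-coded.

ATTRIBUTES = ["has_tail", "has_cushion", "has_fur", "has_legs"]

def discriminative_attributes(referent_scores, context_scores, threshold=0.5):
    """Return attributes predicted for the referent but not the context."""
    return [
        attr
        for attr, r, c in zip(ATTRIBUTES, referent_scores, context_scores)
        if r >= threshold and c < threshold
    ]

# Toy scores for a cat (referent) vs. a sofa (context).
cat = [0.9, 0.1, 0.8, 0.7]
sofa = [0.0, 0.9, 0.1, 0.8]

print(discriminative_attributes(cat, sofa))  # → ['has_tail', 'has_fur']
```

Note that the selection is asymmetric: swapping referent and context yields the sofa's distinguishing attributes instead (here, `has_cushion`), which mirrors the abstract's example of the model assigning plausible attributes like sofa-has_cushion.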


