Representation Learning for Visual-Relational Knowledge Graphs

09/07/2017
by Daniel Oñoro-Rubio, et al.

A visual-relational knowledge graph (KG) is a KG whose entities are associated with images. We propose representation learning for relation and entity prediction in visual-relational KGs as a novel machine learning problem. We introduce ImageGraph, a KG with 1,330 relation types, 14,870 entities, and 829,931 images. Visual-relational KGs lead to novel probabilistic query types treating images as first-class citizens. We approach the query answering problems by combining ideas from the areas of computer vision and embedding learning for KGs. The resulting ML models can answer queries such as "How are these two unseen images related to each other?" We also explore a novel zero-shot learning scenario where an image of an entirely new entity is linked with multiple relations to entities of an existing KG. Our experiments show that the proposed deep neural networks are able to answer the visual-relational queries efficiently and accurately.
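To make the query type "How are these two unseen images related to each other?" concrete, here is a minimal sketch of one way to combine a CNN image encoder with a relation-scoring head. This is an illustrative assumption, not the authors' architecture: the ResNet-18 backbone, the concatenation-based head, and all dimensions (`embed_dim`, 1,330 relation types as output classes) are placeholders chosen for the example.

```python
import torch
import torch.nn as nn
from torchvision import models


class VisualRelationPredictor(nn.Module):
    """Illustrative sketch: score relation types for a pair of entity images.

    A shared CNN backbone embeds each image; a small head scores all
    relation types for the (head image, tail image) pair.
    """

    def __init__(self, num_relations: int = 1330, embed_dim: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any CNN encoder would do
        backbone.fc = nn.Identity()               # keep the 512-d pooled features
        self.encoder = backbone
        self.project = nn.Linear(512, embed_dim)
        # Concatenate head/tail embeddings and score every relation type.
        self.relation_head = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, num_relations),
        )

    def forward(self, img_head: torch.Tensor, img_tail: torch.Tensor) -> torch.Tensor:
        h = self.project(self.encoder(img_head))  # (B, embed_dim)
        t = self.project(self.encoder(img_tail))  # (B, embed_dim)
        return self.relation_head(torch.cat([h, t], dim=-1))  # (B, num_relations)


if __name__ == "__main__":
    model = VisualRelationPredictor()
    imgs_h = torch.randn(2, 3, 224, 224)  # batch of unseen "head" images
    imgs_t = torch.randn(2, 3, 224, 224)  # batch of unseen "tail" images
    scores = model(imgs_h, imgs_t)
    print(scores.shape)  # torch.Size([2, 1330]): one score per relation type
```

Ranking the 1,330 output scores for an unseen image pair yields a relation prediction; the same encoder could, in principle, also support the zero-shot setting by embedding images of entirely new entities into the shared space.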
