
VisualSem: a high-quality knowledge graph for vision and language

by   Houda Alberts, et al.

We argue that the next frontier in natural language understanding (NLU) and generation (NLG) will include models that can efficiently access external structured knowledge repositories. To support the development of such models, we release the VisualSem knowledge graph (KG), whose nodes include multilingual glosses, multiple illustrative images, and visually relevant relations. We also release a neural multi-modal retrieval model that takes images or sentences as inputs and retrieves entities in the KG. This multi-modal retrieval model can be integrated into any (neural network) model pipeline, and we encourage the research community to use VisualSem for data augmentation and/or as a source of grounding, among other possible uses. VisualSem and the multi-modal retrieval model are publicly available and can be downloaded at:
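As a rough illustration of what such a retrieval model does, the sketch below retrieves the top-k KG nodes for a query embedding (produced by an image or sentence encoder) via cosine similarity against precomputed node embeddings. The function name, shapes, and toy data are assumptions for illustration, not the released model's actual API.

```python
import numpy as np

def retrieve_nodes(query_emb: np.ndarray, node_embs: np.ndarray, k: int = 3):
    """Return indices of the k KG nodes most similar to the query.

    query_emb: (d,) embedding of an input image or sentence (hypothetical).
    node_embs: (n, d) precomputed embeddings of the KG nodes (hypothetical).
    """
    q = query_emb / np.linalg.norm(query_emb)
    n = node_embs / np.linalg.norm(node_embs, axis=1, keepdims=True)
    scores = n @ q  # cosine similarity of each node to the query
    return np.argsort(-scores)[:k]

# Toy usage: 4 fake node embeddings; the query is node 2 plus small noise,
# so node 2 should rank first.
rng = np.random.default_rng(0)
nodes = rng.normal(size=(4, 8))
query = nodes[2] + 0.01 * rng.normal(size=8)
top = retrieve_nodes(query, nodes, k=1)
print(top[0])
```

In a real pipeline, the retrieved node indices would be mapped back to glosses and images and fed to a downstream NLU/NLG model for grounding or data augmentation.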


