REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory

12/10/2022
by Ziniu Hu, et al.

In this paper, we propose an end-to-end Retrieval-Augmented Visual Language Model (REVEAL) that learns to encode world knowledge into a large-scale memory and to retrieve from it to answer knowledge-intensive queries. REVEAL consists of four key components: the memory, the encoder, the retriever, and the generator. The large-scale memory encodes diverse sources of multimodal world knowledge (e.g., image-text pairs, question-answering pairs, knowledge-graph triplets) via a unified encoder. The retriever finds the most relevant knowledge entries in the memory, and the generator fuses the retrieved knowledge with the input query to produce the output. A key novelty of our approach is that the memory, encoder, retriever, and generator are all pre-trained end-to-end on a massive amount of data. Furthermore, our approach can draw on a diverse set of multimodal knowledge sources, which is shown to yield significant gains. We show that REVEAL achieves state-of-the-art results on visual question answering and image captioning.
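The retrieve-then-generate flow described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: real systems use learned transformer encoders and a trainable generator, whereas here random vectors stand in for the unified encoder and a format string stands in for the generator; all names and memory entries below are hypothetical.

```python
import numpy as np

def encode(texts, dim=8, seed=0):
    """Stand-in for the unified encoder: map each knowledge entry to a vector.
    (A real encoder would be a pre-trained multimodal transformer.)"""
    rng = np.random.default_rng(seed)
    return {t: rng.standard_normal(dim) for t in texts}

def retrieve(query_vec, memory, k=2):
    """Return the k memory entries with the highest dot-product score."""
    scored = sorted(memory.items(),
                    key=lambda kv: float(query_vec @ kv[1]),
                    reverse=True)
    return [entry for entry, _ in scored[:k]]

def generate(query, retrieved):
    """Stand-in for the generator: fuse the query with retrieved knowledge."""
    return f"answer({query} | " + "; ".join(retrieved) + ")"

# A multi-source memory: image-text pairs, QA pairs, KG triplets, etc.
memory_entries = [
    "image-text: photo of the Eiffel Tower / 'landmark in Paris'",
    "qa: Q: capital of France? A: Paris",
    "kg: (Eiffel Tower, located_in, Paris)",
]
memory = encode(memory_entries)
query = "What city is the Eiffel Tower in?"
query_vec = encode([query], seed=1)[query]

top = retrieve(query_vec, memory, k=2)
print(generate(query, top))
```

In REVEAL, the key difference from this sketch is that retrieval scores flow through the loss, so the encoder, retriever, and generator are trained jointly end-to-end rather than assembled from frozen parts.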


Related research

10/06/2022
MuRAG: Multimodal Retrieval-Augmented Generator for Open Question Answering over Images and Text
While language models store a massive amount of world knowledge implicit...

07/11/2023
Generative Pretraining in Multimodality
We present Emu, a Transformer-based multimodal foundation model, which c...

09/20/2023
Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering
Despite their competitive performance on knowledge-intensive tasks, larg...

03/09/2021
Select, Substitute, Search: A New Benchmark for Knowledge-Augmented Visual Question Answering
Multimodal IR, spanning text corpus, knowledge graph and images, called ...

10/30/2022
An Efficient Memory-Augmented Transformer for Knowledge-Intensive NLP Tasks
Access to external knowledge is essential for many natural language proc...

01/25/2023
Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute
Retrieval-augmented language models such as Fusion-in-Decoder are powerf...

12/19/2022
Query Enhanced Knowledge-Intensive Conversation via Unsupervised Joint Modeling
The quality of knowledge retrieval is crucial in knowledge-intensive con...
