Prompting Vision Language Model with Knowledge from Large Language Model for Knowledge-Based VQA

08/30/2023
by   Yang Zhou, et al.

Knowledge-based visual question answering is a challenging and widely studied task. Previous methods adopt the implicit knowledge in large language models (LLMs) to achieve excellent results, but we argue that existing methods may suffer from a biased understanding of the image and insufficient knowledge to solve the problem. In this paper, we propose PROOFREAD (PROmpting vision language model with knOwledge From laRgE lAnguage moDel), a novel, lightweight and efficient knowledge-based VQA framework that makes the vision language model and the large language model cooperate, giving full play to their respective strengths and bootstrapping each other. In detail, our proposed method uses the LLM to obtain knowledge explicitly, uses the vision language model, which can see the image, to produce the answer given that knowledge, and introduces a knowledge perceiver to filter out knowledge that is harmful to reaching the correct final answer. Experimental results on two datasets prove the effectiveness of our approach. Our method outperforms all state-of-the-art methods on the A-OKVQA dataset in two settings and also achieves relatively good performance on the OKVQA dataset.
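
The abstract describes a three-stage pipeline: an LLM produces explicit knowledge, a knowledge perceiver filters it, and a vision language model answers with the image, question, and surviving knowledge. The following is a minimal sketch of that flow; the function names (query_llm_for_knowledge, knowledge_perceiver, vlm_answer) and their bodies are hypothetical placeholders, not the authors' actual interfaces or trained modules.

```python
from typing import List


def query_llm_for_knowledge(question: str) -> List[str]:
    """Ask an LLM for explicit knowledge statements relevant to the question."""
    # Placeholder: a real system would prompt an LLM here.
    return [
        "A fire hydrant provides water for firefighting.",
        "Fire hydrants are often painted red or yellow.",
    ]


def knowledge_perceiver(question: str, knowledge: List[str]) -> List[str]:
    """Filter out knowledge statements judged harmful or irrelevant to answering."""
    # Placeholder: the paper introduces a learned module for this; a trivial
    # word-overlap heuristic stands in for it in this sketch.
    question_words = set(question.lower().split())
    return [k for k in knowledge if question_words & set(k.lower().split())]


def vlm_answer(image_path: str, question: str, knowledge: List[str]) -> str:
    """Prompt a vision language model with the image, question, and filtered knowledge."""
    prompt = f"Question: {question}\nKnowledge: {' '.join(knowledge)}\nAnswer:"
    # Placeholder: a real system would run a VLM on (image, prompt).
    return f"[VLM answer for {image_path} given prompt: {prompt!r}]"


if __name__ == "__main__":
    question = "What is the red object on the sidewalk used for?"
    knowledge = query_llm_for_knowledge(question)       # explicit knowledge from the LLM
    filtered = knowledge_perceiver(question, knowledge)  # drop harmful/irrelevant items
    print(vlm_answer("street_scene.jpg", question, filtered))
```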
