Triggering Multi-Hop Reasoning for Question Answering in Language Models using Soft Prompts and Random Walks

06/06/2023
by Kanishka Misra, et al.

Despite readily memorizing world knowledge about entities, pre-trained language models (LMs) struggle to compose two or more facts to perform multi-hop reasoning in question-answering tasks. In this work, we propose techniques that mitigate this limitation by relying on random walks over structured knowledge graphs. Specifically, we use soft prompts to guide LMs to chain together their encoded knowledge by learning to map multi-hop questions to random walk paths that lead to the answer. Applying our methods to two T5 LMs shows substantial improvements over standard tuning approaches in answering questions that require 2-hop reasoning.
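The core idea of mapping a question to a path over a knowledge graph can be illustrated with a toy example. The graph, entities, and relations below are hypothetical and not from the paper; this minimal sketch only shows how two stored facts compose into a 2-hop answer, and how a random walk samples such paths.

```python
import random

# Hypothetical toy knowledge graph: entity -> list of (relation, target) edges.
# These triples are illustrative only, not the dataset used in the paper.
KG = {
    "Paris": [("capital_of", "France")],
    "France": [("currency", "Euro"), ("continent", "Europe")],
}

def random_walk(graph, start, hops, seed=0):
    """Sample a random walk of up to `hops` edges, returning the path as an
    alternating list: [entity, relation, entity, relation, entity, ...]."""
    rng = random.Random(seed)
    path = [start]
    node = start
    for _ in range(hops):
        edges = graph.get(node, [])
        if not edges:
            break  # dead end: no outgoing facts for this entity
        relation, node = rng.choice(edges)
        path.extend([relation, node])
    return path

def answer_2hop(graph, start, rel1, rel2):
    """Compose two facts: follow rel1 from `start` to a bridge entity,
    then rel2 from the bridge entity to the answer."""
    for r1, bridge in graph.get(start, []):
        if r1 == rel1:
            for r2, end in graph.get(bridge, []):
                if r2 == rel2:
                    return end
    return None

# A 2-hop question like "What is the currency of the country whose capital
# is Paris?" corresponds to composing capital_of followed by currency:
print(answer_2hop(KG, "Paris", "capital_of", "currency"))  # → Euro
```

In the paper's setting, the LM itself plays the role of `answer_2hop`: soft prompts are trained so that the model generates such walk paths (question → intermediate entity → answer) rather than traversing an explicit graph at inference time.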

