Mitigating Language Model Hallucination with Interactive Question-Knowledge Alignment

05/23/2023
by Shuo Zhang, et al.

Despite remarkable recent advances, language models still struggle with hallucination and can generate misleading, unsupported responses. A common way to mitigate hallucination is to retrieve and incorporate supporting evidence from a knowledge base. However, user questions often do not align well with the stored knowledge, since users do not know what information is available before they ask. This misalignment can limit the language model's ability to locate and use the relevant knowledge, and may push it to hallucinate by ignoring or overriding the retrieved evidence. To address this issue, we introduce MixAlign, a framework that interacts with both the user and the knowledge base to obtain and integrate clarifications on how the user question relates to the stored information. MixAlign employs a language model to achieve automatic question-knowledge alignment and, when that is insufficient, further refines the alignment through clarifications from the human user. Experimental results demonstrate significant improvements over state-of-the-art methods, showcasing the effectiveness of MixAlign in mitigating language model hallucination.
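The abstract describes a retrieve-align-clarify loop but gives no implementation details, so the following is only a minimal sketch of that flow under stated assumptions: the function names, the word-overlap scoring used in place of an actual language model, and the ALIGNMENT_THRESHOLD cutoff are all illustrative and are not the paper's API.

```python
# Minimal sketch of the interaction loop suggested by the abstract:
# retrieve evidence, attempt automatic question-knowledge alignment,
# and fall back to asking the user for a clarification when alignment
# is uncertain. All names and heuristics below are assumptions.

from typing import Callable, List, Tuple

ALIGNMENT_THRESHOLD = 0.5  # assumed cutoff for trusting automatic alignment


def retrieve(question: str, knowledge_base: List[str], top_k: int = 3) -> List[str]:
    """Toy retriever: rank stored facts by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda fact: len(q_words & set(fact.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def align(question: str, evidence: List[str]) -> Tuple[str, float]:
    """Stand-in for LM-based question-knowledge alignment.

    Returns the best-matching evidence and a rough confidence in [0, 1].
    A real system would prompt a language model here instead.
    """
    q_words = set(question.lower().split())
    best = max(evidence, key=lambda fact: len(q_words & set(fact.lower().split())))
    overlap = len(q_words & set(best.lower().split()))
    return best, overlap / max(len(q_words), 1)


def mixalign_answer(
    question: str,
    knowledge_base: List[str],
    generate: Callable[[str, str], str],   # e.g. an LM call: (question, evidence) -> answer
    ask_user: Callable[[str, List[str]], str],  # asks the human how the question maps to the candidates
) -> str:
    """Answer a question, requesting a user clarification only when alignment is weak."""
    evidence = retrieve(question, knowledge_base)
    best, confidence = align(question, evidence)

    # If the model cannot confidently tie the question to the stored knowledge,
    # ask the user to clarify how their question relates to the candidates.
    if confidence < ALIGNMENT_THRESHOLD:
        clarification = ask_user(question, evidence)
        question = f"{question} ({clarification})"
        best, confidence = align(question, retrieve(question, knowledge_base))

    # Generate the final answer grounded in the aligned evidence.
    return generate(question, best)
```

The design point implied by the abstract is that the human is consulted only when automatic alignment fails, so the clarification step adds minimal user burden while keeping the final answer grounded in retrieved evidence.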


Related research

12/26/2022 - Improving Complex Knowledge Base Question Answering via Question-to-Action and Question-to-Question Alignment
Complex knowledge base question answering can be achieved by converting ...

08/30/2023 - Prompting Vision Language Model with Knowledge from Large Language Model for Knowledge-Based VQA
Knowledge-based visual question answering is a very challenging and wide...

05/20/2023 - Collaborative Development of NLP models
Despite substantial advancements, Natural Language Processing (NLP) mode...

05/02/2020 - Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering
Commonsense question answering (QA) requires the modeling of general bac...

11/15/2021 - Calculating Question Similarity is Enough: A New Method for KBQA Tasks
Knowledge Base Question Answering (KBQA) aims to answer natural language...

08/20/2020 - Constructing a Knowledge Graph from Unstructured Documents without External Alignment
Knowledge graphs (KGs) are relevant to many NLP tasks, but building a re...

05/29/2023 - Large Language Models are not Fair Evaluators
We uncover a systematic bias in the evaluation paradigm of adopting larg...
