Context-faithful Prompting for Large Language Models

03/20/2023
by Wenxuan Zhou, et al.

Large language models (LLMs) encode parametric knowledge about world facts and have shown remarkable performance in knowledge-driven NLP tasks. However, their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks (e.g., knowledge acquisition tasks). In this paper, we seek to assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention. We demonstrate that LLMs' faithfulness can be significantly improved using carefully designed prompting strategies. In particular, we identify opinion-based prompts and counterfactual demonstrations as the most effective methods. Opinion-based prompts reframe the context as a narrator's statement and inquire about the narrator's opinions, while counterfactual demonstrations use instances containing false facts to improve faithfulness in knowledge conflict situations. Neither technique requires additional training. We conduct experiments on three datasets of two standard NLP tasks, machine reading comprehension and relation extraction, and the results demonstrate significant improvement in faithfulness to contexts.
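To make the two prompting strategies concrete, the following is a minimal illustrative sketch, not code from the paper itself: the templates, the narrator name "Bob", and the helper functions build_opinion_prompt and build_counterfactual_demo are assumptions made here for illustration, paraphrasing the ideas described in the abstract.

    # Sketch (assumed, not the authors' exact templates): building an
    # opinion-based prompt and a counterfactual demonstration for a
    # reading-comprehension question.

    def build_opinion_prompt(context: str, question: str) -> str:
        """Reframe the context as a narrator's statement and ask for the
        narrator's opinion, so the model answers from the given context
        rather than from its parametric knowledge."""
        return (
            f'Bob said, "{context}"\n'
            f"Q: {question} in Bob's opinion?\n"
            "A:"
        )

    def build_counterfactual_demo(demo_context: str, demo_question: str,
                                  demo_answer: str) -> str:
        """Format a demonstration whose context states a false
        (counterfactual) fact together with the answer supported by that
        context, encouraging faithfulness under knowledge conflict."""
        return (
            f'Bob said, "{demo_context}"\n'
            f"Q: {demo_question} in Bob's opinion?\n"
            f"A: {demo_answer}\n"
        )

    if __name__ == "__main__":
        # A counterfactual demonstration: the context deliberately contradicts
        # the model's likely parametric knowledge.
        demo = build_counterfactual_demo(
            "The capital of France is London.",
            "What is the capital of France",
            "London",
        )
        # The test instance whose context may conflict with memorized facts.
        test = build_opinion_prompt(
            "The 2026 Winter Olympics will be held in Milan and Cortina.",
            "Where will the 2026 Winter Olympics be held",
        )
        # The concatenated prompt would then be sent to an LLM completion API.
        print(demo + "\n" + test)

Because both strategies only change how the prompt is phrased, they can be applied to any instruction-following LLM without additional training, as the abstract notes.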

