Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models

07/16/2023
by   Yuheng Huang, et al.

The recent performance leap of Large Language Models (LLMs) opens up new opportunities across numerous industrial applications and domains. However, erroneous generations, such as false predictions, misinformation, and hallucinations, have raised severe concerns about the trustworthiness of LLMs, especially in safety-, security-, and reliability-sensitive scenarios, potentially hindering real-world adoption. While uncertainty estimation has shown its potential for interpreting the prediction risks of general machine learning (ML) models, little is known about whether and to what extent it can help explore an LLM's capabilities and counteract its undesired behavior. To bridge this gap, in this paper we initiate an exploratory study on the risk assessment of LLMs through the lens of uncertainty. In particular, we experiment with twelve uncertainty estimation methods and four LLMs on four prominent natural language processing (NLP) tasks to investigate to what extent uncertainty estimation can help characterize the prediction risks of LLMs. Our findings validate the effectiveness of uncertainty estimation for revealing LLMs' uncertain or non-factual predictions. Beyond general NLP tasks, we also conduct extensive experiments with four LLMs for code generation on two datasets, and find that uncertainty estimation can potentially uncover buggy programs generated by LLMs. Insights from our study shed light on the future design and development of reliable LLMs, facilitating further research toward enhancing their trustworthiness.
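The abstract does not specify which twelve uncertainty estimation methods the study covers, but a common single-pass baseline for this kind of risk signal is token-level predictive entropy over the model's output distribution. The sketch below is a generic, hedged illustration of that idea (it is not claimed to be the paper's implementation): higher mean entropy across generated tokens suggests a less confident, potentially riskier generation.

```python
import numpy as np

def token_entropies(logits):
    """Per-token Shannon entropy (in nats) from a [seq_len, vocab_size]
    array of logits. Entropy near zero means the model concentrated its
    probability mass on one token; entropy near log(vocab_size) means it
    was maximally uncertain at that step."""
    logits = np.asarray(logits, dtype=np.float64)
    # Numerically stable softmax over the vocabulary axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=-1, keepdims=True)
    # H = -sum(p * log p); the epsilon guards against log(0).
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

# Toy logits for two generation steps: one confident, one uncertain.
confident_step = [10.0, 0.0, 0.0, 0.0]   # mass piles on token 0
uncertain_step = [1.0, 1.0, 1.0, 1.0]    # uniform over 4 tokens
ents = token_entropies([confident_step, uncertain_step])
sequence_uncertainty = ents.mean()        # one scalar risk score
```

A score like `sequence_uncertainty` can then be thresholded or ranked to flag generations (including generated programs) that deserve human review; the uniform step's entropy equals log(4), while the confident step's is close to zero.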


