Diversity Measures: Domain-Independent Proxies for Failure in Language Model Queries

08/22/2023
by Noel Ngu, et al.

Error prediction in large language models often relies on domain-specific information. In this paper, we present measures that quantify the likelihood of error in a large language model's response based on the diversity of responses sampled for a given prompt, and are therefore independent of the underlying application. We describe how three such measures - based on entropy, Gini impurity, and centroid distance - can be employed. We perform a suite of experiments on multiple datasets and temperature settings to demonstrate that these measures strongly correlate with the probability of failure. Additionally, we present empirical results demonstrating how these measures can be applied to few-shot prompting, chain-of-thought reasoning, and error detection.
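The abstract does not spell out the exact formulations, but all three measures can be read as statistics over a set of responses sampled for the same prompt. The sketch below is a minimal illustration under that assumption: entropy and Gini impurity are computed over the empirical distribution of distinct (normalized) response strings, and centroid distance over response embeddings from an unspecified sentence-embedding model. Function names such as response_entropy and centroid_distance, the string normalization, and the placeholder embeddings are illustrative choices, not definitions taken from the paper.

```python
import math
from collections import Counter
from typing import List

import numpy as np


def response_entropy(responses: List[str]) -> float:
    """Shannon entropy of the empirical distribution over distinct responses."""
    counts = Counter(r.strip().lower() for r in responses)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())


def gini_impurity(responses: List[str]) -> float:
    """Gini impurity of the empirical distribution over distinct responses."""
    counts = Counter(r.strip().lower() for r in responses)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())


def centroid_distance(embeddings: np.ndarray) -> float:
    """Mean Euclidean distance of response embeddings from their centroid.

    `embeddings` is an (n_responses, dim) array from any sentence-embedding
    model; which model to use is left open here.
    """
    centroid = embeddings.mean(axis=0)
    return float(np.linalg.norm(embeddings - centroid, axis=1).mean())


if __name__ == "__main__":
    # Hypothetical set of answers sampled for the same prompt.
    samples = ["42", "42", "forty-two", "42", "24"]
    print("entropy:", response_entropy(samples))
    print("gini impurity:", gini_impurity(samples))

    # Random placeholder embeddings, only to illustrate the centroid measure.
    rng = np.random.default_rng(0)
    print("centroid distance:", centroid_distance(rng.normal(size=(5, 8))))
```

Under this reading, higher values of any of the three measures indicate greater disagreement among sampled responses, which the paper reports as correlating with a higher probability of failure.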

