Through the Lens of Core Competency: Survey on Evaluation of Large Language Models

08/15/2023
by Ziyu Zhuang, et al.

From pre-trained language models (PLMs) to large language models (LLMs), the field of natural language processing (NLP) has witnessed steep performance gains and wide practical adoption. The evaluation of a research field guides its direction of improvement. However, LLMs are extremely hard to evaluate thoroughly, for two reasons. First, traditional NLP tasks have become inadequate because LLMs already perform so well on them. Second, existing evaluation tasks struggle to keep up with the wide range of real-world applications. To tackle these problems, existing works have proposed various benchmarks to better evaluate LLMs. To clarify the numerous evaluation tasks in both academia and industry, we investigate multiple papers concerning LLM evaluation. We summarize four core competencies of LLMs: reasoning, knowledge, reliability, and safety. For each competency, we introduce its definition, corresponding benchmarks, and metrics. Under this competency architecture, similar tasks are combined to reflect the corresponding ability, and new tasks can easily be added to the system. Finally, we give our suggestions on future directions for LLM evaluation.
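To make the extensibility claim concrete, here is a minimal sketch (not from the paper) of how a competency-based evaluation registry could be organized: each of the four core competencies groups related benchmarks and their metrics, and a new task is added by extending the mapping. All benchmark names and the `register_task` helper are illustrative placeholders, not the survey's actual benchmark list or API.

```python
from collections import defaultdict

# The four core competencies identified in the survey.
COMPETENCIES = ("reasoning", "knowledge", "reliability", "safety")

# competency -> list of (benchmark name, metric) pairs
registry: dict[str, list[tuple[str, str]]] = defaultdict(list)

def register_task(competency: str, benchmark: str, metric: str) -> None:
    """Attach a benchmark/metric pair to one of the four core competencies."""
    if competency not in COMPETENCIES:
        raise ValueError(f"Unknown competency: {competency}")
    registry[competency].append((benchmark, metric))

# Hypothetical entries for illustration only.
register_task("reasoning", "example_math_benchmark", "accuracy")
register_task("safety", "example_toxicity_benchmark", "toxicity_rate")

for competency in COMPETENCIES:
    print(competency, registry[competency])
```

Under such a scheme, similar tasks naturally cluster under the ability they measure, and adding a new benchmark does not require restructuring the taxonomy.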


Related research

02/17/2022 · A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models
With the increasing of model capacity brought by pre-trained language mo...

06/14/2023 · Towards AGI in Computer Vision: Lessons Learned from GPT and Large Language Models
The AI community has been pursuing algorithms known as artificial genera...

07/06/2023 · A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both a...

09/17/2023 · OWL: A Large Language Model for IT Operations
With the rapid development of IT operations, it has become increasingly ...

10/11/2021 · Pre-trained Language Models in Biomedical Domain: A Systematic Survey
Pre-trained language models (PLMs) have been the de facto paradigm for m...

07/28/2021 · Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
This paper surveys and organizes research works in a new paradigm in nat...

07/16/2023 · Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models
The recent performance leap of Large Language Models (LLMs) opens up new...
