FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets

07/20/2023
by Seonghyeon Ye, et al.

Evaluation of Large Language Models (LLMs) is challenging because aligning to human values requires the composition of multiple skills, and the required skill set varies depending on the instruction. Recent studies have evaluated the performance of LLMs in two ways: (1) automatic evaluation on several independent benchmarks and (2) human- or machine-based evaluation that assigns an overall score to the response. However, both settings are coarse-grained evaluations that do not account for the nature of user instructions, which require instance-wise skill composition, limiting interpretation of the true capabilities of LLMs. In this paper, we introduce FLASK (Fine-grained Language Model Evaluation based on Alignment SKill Sets), a fine-grained evaluation protocol usable for both model-based and human-based evaluation, which decomposes coarse-level scoring to an instance-wise skill set level. Specifically, we define 12 fine-grained skills needed for LLMs to follow open-ended user instructions and construct an evaluation set by allocating a set of skills to each instance. Additionally, by annotating the target domain and difficulty level of each instance, FLASK provides a holistic view, enabling a comprehensive analysis of a model's performance by skill, domain, and difficulty. Using FLASK, we compare multiple open-source and proprietary LLMs and observe highly correlated findings between model-based and human-based evaluations. FLASK enables developers to measure model performance more accurately and to identify how it can be improved by analyzing the factors that make LLMs proficient in particular skills. For practitioners, FLASK can be used to recommend suitable models for particular situations through comprehensive comparison among various LLMs. We release the evaluation data and code implementation at https://github.com/kaistAI/FLASK.
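The abstract describes the protocol only at a high level: each instance is annotated with a required skill set, a domain, and a difficulty level, and evaluator scores are then aggregated along each of those axes. Below is a minimal Python sketch of how such instance-wise annotation and aggregation might be structured. All names here (the `Instance` class, the `aggregate` helper, the specific skill labels and scores) are illustrative assumptions for exposition, not the authors' released implementation; see the linked repository for the actual code.

```python
from dataclasses import dataclass, field
from collections import defaultdict
from statistics import mean

@dataclass
class Instance:
    instruction: str
    skills: list[str]      # skills this instruction requires (a subset of the 12)
    domain: str            # annotated target domain
    difficulty: int        # annotated difficulty level, e.g. 1 (easiest) to 5
    # per-skill scores assigned by a human or model evaluator, on a 1-5 scale
    scores: dict[str, int] = field(default_factory=dict)

def aggregate(instances: list[Instance], key: str) -> dict:
    """Average the per-skill scores grouped by 'skill', 'domain', or 'difficulty'."""
    buckets = defaultdict(list)
    for inst in instances:
        for skill, score in inst.scores.items():
            if key == "skill":
                buckets[skill].append(score)
            elif key == "domain":
                buckets[inst.domain].append(score)
            elif key == "difficulty":
                buckets[inst.difficulty].append(score)
    return {k: mean(v) for k, v in buckets.items()}

# Hypothetical evaluation set: two annotated instances with evaluator scores.
evalset = [
    Instance("Prove that sqrt(2) is irrational.",
             skills=["logical_correctness", "completeness"],
             domain="math", difficulty=4,
             scores={"logical_correctness": 4, "completeness": 3}),
    Instance("Summarize the causes of World War I.",
             skills=["factuality", "conciseness"],
             domain="history", difficulty=2,
             scores={"factuality": 5, "conciseness": 4}),
]

print(aggregate(evalset, "skill"))       # per-skill view of performance
print(aggregate(evalset, "domain"))      # per-domain view
print(aggregate(evalset, "difficulty"))  # per-difficulty view
```

The key design point the paper argues for is visible even in this toy version: because scores are attached to individual skills per instance rather than to a single overall response rating, the same evaluation run can be sliced by skill, domain, or difficulty without re-annotating anything.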


