An Interpretability Evaluation Benchmark for Pre-trained Language Models

07/28/2022
by Yaozong Shen, et al.
While pre-trained language models (LMs) have brought great improvements to many NLP tasks, there is increasing attention on exploring the capabilities of LMs and interpreting their predictions. However, existing works usually focus only on a certain capability evaluated through some downstream tasks; there is a lack of datasets for directly evaluating the masked word prediction performance and the interpretability of pre-trained LMs. To fill this gap, we propose a novel evaluation benchmark that provides annotated data in both English and Chinese. It tests LMs' abilities in multiple dimensions, i.e., grammar, semantics, knowledge, reasoning and computation. In addition, it provides carefully annotated token-level rationales that satisfy sufficiency and compactness. It also contains perturbed instances for each original instance, so that rationale consistency under perturbation can be used as the metric for faithfulness, one perspective of interpretability. We conduct experiments on several widely used pre-trained LMs. The results show that they perform very poorly on the knowledge and computation dimensions, and their plausibility in all dimensions is far from satisfactory, especially when the rationale is short. In addition, the pre-trained LMs we evaluated are not robust on syntax-aware data. We will release this evaluation benchmark at <http://xyz>, and we hope it can facilitate the research progress of pre-trained LMs.
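
For concreteness, below is a minimal sketch (our own illustration, not the authors' released code) of how rationale consistency under perturbation could be scored: token-level F1 between the rationale extracted for an original instance and the one extracted for its perturbed counterpart. The function names and the token-index representation of rationales are assumptions, as is the premise that token positions are aligned between the original and perturbed sentences; the paper's exact metric may differ.

```python
# Hedged sketch of a rationale-consistency (faithfulness) score.
# Assumptions: a rationale is a set of token indices, and token positions
# are aligned between an original instance and its perturbed counterpart.

def rationale_f1(original_rationale, perturbed_rationale):
    """Token-level F1 between two rationales given as sets of token indices."""
    original = set(original_rationale)
    perturbed = set(perturbed_rationale)
    if not original and not perturbed:
        return 1.0
    overlap = len(original & perturbed)
    if overlap == 0:
        return 0.0
    precision = overlap / len(perturbed)
    recall = overlap / len(original)
    return 2 * precision * recall / (precision + recall)

def consistency_score(pairs):
    """Average F1 over (original, perturbed) rationale pairs: a faithful model
    should rely on the same evidence when a perturbation preserves the answer."""
    return sum(rationale_f1(o, p) for o, p in pairs) / len(pairs)

# Hypothetical usage with toy rationales (sets of token indices):
pairs = [({2, 3, 7}, {2, 3, 8}), ({1, 5}, {1, 5})]
print(f"consistency = {consistency_score(pairs):.3f}")
```

The averaging over instance pairs is one simple design choice; other aggregation schemes (e.g., per-dimension averages) would fit the same interface.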


