Evaluating Neuron Interpretation Methods of NLP Models

01/30/2023
by Yimin Fan, et al.

Neuron interpretation has gained traction in the field of interpretability and has provided fine-grained insights into what a model learns and how language knowledge is distributed amongst its different components. However, the lack of an evaluation benchmark and standard metrics has led to siloed progress within these various methods, with very little work comparing them and highlighting their strengths and weaknesses. One reason for this gap is the difficulty of creating ground-truth datasets: for example, many neurons within a given model may learn the same phenomenon, so there may not be a single correct answer. Moreover, a learned phenomenon may be spread across several neurons that work together, which makes surfacing these neurons to create a gold standard challenging. In this work, we propose an evaluation framework that measures the compatibility of a neuron analysis method with other methods. We hypothesize that the more compatible a method is with the majority of the methods, the more confident one can be about its performance. We systematically evaluate our proposed framework and present a comparative analysis of a large set of neuron interpretation methods. We make the evaluation framework available to the community; it enables the evaluation of any new method on 20 concepts across three pre-trained models. The code is released at https://github.com/fdalvi/neuron-comparative-analysis
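The compatibility idea can be illustrated with a minimal sketch. Assuming each interpretation method returns a ranked list of salient neurons for a given concept, a method's score can be taken as its average overlap with every other method over the top-k neurons. The function names, method names, and the choice of top-k set overlap as the compatibility measure below are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: score each method by its average top-k overlap
# with all other methods. The compatibility measure and the method names
# are assumptions for illustration only.

from itertools import combinations

def topk_overlap(ranking_a, ranking_b, k=100):
    """Fraction of the top-k neurons shared by two ranked neuron lists."""
    top_a, top_b = set(ranking_a[:k]), set(ranking_b[:k])
    return len(top_a & top_b) / k

def compatibility_scores(rankings, k=100):
    """Average pairwise top-k overlap of each method with the other methods.

    rankings: dict mapping method name -> list of neuron indices,
              ordered from most to least salient for a given concept.
    """
    scores = {name: 0.0 for name in rankings}
    for (name_a, rank_a), (name_b, rank_b) in combinations(rankings.items(), 2):
        overlap = topk_overlap(rank_a, rank_b, k)
        scores[name_a] += overlap
        scores[name_b] += overlap
    n_others = len(rankings) - 1
    return {name: s / n_others for name, s in scores.items()}

# Toy usage with three hypothetical methods' neuron rankings:
rankings = {
    "probing": [3, 17, 42, 8, 99],
    "lca": [17, 3, 8, 42, 5],
    "ig": [42, 3, 17, 1, 2],
}
print(compatibility_scores(rankings, k=3))
```

Under this reading, a method that consistently selects neurons that other methods also select receives a high score, matching the hypothesis that agreement with the majority of methods indicates a more reliable interpretation.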


