Explainability for Large Language Models: A Survey

09/02/2023
by Haiyan Zhao, et al.

Large language models (LLMs) have demonstrated impressive capabilities in natural language processing. However, their internal mechanisms remain opaque, and this lack of transparency poses unwanted risks for downstream applications. Therefore, understanding and explaining these models is crucial for elucidating their behaviors, limitations, and social impacts. In this paper, we introduce a taxonomy of explainability techniques and provide a structured overview of methods for explaining Transformer-based language models. We categorize techniques based on the training paradigms of LLMs: the traditional fine-tuning-based paradigm and the prompting-based paradigm. For each paradigm, we summarize the goals and dominant approaches for generating local explanations of individual predictions and global explanations of overall model knowledge. We also discuss metrics for evaluating generated explanations and examine how explanations can be leveraged to debug models and improve performance. Lastly, we compare the key challenges and emerging opportunities for explanation techniques in the era of LLMs against those for conventional machine learning models.
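To make the notion of a "local explanation of an individual prediction" concrete, here is a minimal sketch of one common technique the survey covers: perturbation-based (leave-one-out) token attribution. The scoring function below is a hypothetical toy stand-in for a real LLM classifier, used only to keep the example self-contained; the attribution logic itself is the standard ablation recipe.

```python
# Perturbation-based local explanation via leave-one-out token ablation.
# toy_sentiment_score is a hypothetical stand-in for a real model's
# output score; in practice it would be an LLM classifier's logit.

def toy_sentiment_score(tokens):
    """Toy 'model': positive cue words minus negative cue words."""
    positive = {"great", "good", "excellent"}
    negative = {"bad", "terrible", "poor"}
    return sum(t in positive for t in tokens) - sum(t in negative for t in tokens)

def leave_one_out_saliency(tokens, score_fn):
    """Importance of token i = score(full input) - score(input without token i).

    A large positive value means removing the token hurts the score,
    i.e. the token supports the prediction; negative means it opposes it.
    """
    base = score_fn(tokens)
    return {
        i: base - score_fn(tokens[:i] + tokens[i + 1:])
        for i in range(len(tokens))
    }

tokens = "the movie was great but the ending was terrible".split()
saliency = leave_one_out_saliency(tokens, toy_sentiment_score)
# "great" receives importance +1, "terrible" -1, neutral tokens 0.
```

The same ablation loop applies to a real model by swapping in its scoring function; gradient-based saliency methods approximate this importance without one forward pass per token.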

