Beware the Rationalization Trap! When Language Model Explainability Diverges from our Mental Models of Language

07/14/2022
by Rita Sevastjanova, et al.

Language models learn and represent language differently than humans do: they learn the form of language, not its meaning. Thus, to assess the success of language model explainability, we need to consider the impact of its divergence from a user's mental model of language. In this position paper, we argue that in order to avoid harmful rationalization and achieve a truthful understanding of language models, explanation processes must satisfy three main conditions: (1) explanations have to truthfully represent the model behavior, i.e., have high fidelity; (2) explanations must be complete, as missing information distorts the truth; and (3) explanations have to take the user's mental model into account, progressively verifying a person's knowledge and adapting to their understanding. We introduce a decision tree model to showcase potential reasons why current explanations fail to reach their objectives. We further emphasize the need for human-centered design to explain the model from multiple perspectives, progressively adapting explanations to changing user expectations.
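The paper does not fix a concrete metric for condition (1), but fidelity is commonly operationalized as the agreement rate between an interpretable surrogate explanation and the black-box model it explains. A minimal sketch of that agreement-based reading, with a hypothetical `fidelity` helper (not from the paper):

```python
def fidelity(model_preds, surrogate_preds):
    """Fraction of inputs on which an interpretable surrogate
    (e.g., a decision tree explanation) agrees with the
    black-box model's predictions. Higher means more faithful."""
    assert len(model_preds) == len(surrogate_preds)
    agree = sum(m == s for m, s in zip(model_preds, surrogate_preds))
    return agree / len(model_preds)

# Surrogate matches the model on 3 of 4 inputs:
print(fidelity([1, 0, 1, 1], [1, 0, 0, 1]))  # → 0.75
```

A low score under such a measure signals the rationalization trap the authors warn about: the explanation describes a model other than the one actually deployed.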
