To what extent do human explanations of model behavior align with actual model behavior?

12/24/2020
by Grusha Prasad, et al.

Given the increasingly prominent role NLP models play, and will play, in our lives, it is important to evaluate models on how well they align with human expectations of how they behave. Using Natural Language Inference (NLI) as a case study, we investigated the extent to which human-generated explanations of models' inference decisions align with how the models actually make those decisions. More specifically, we defined two alignment metrics that quantify how well natural language human explanations align with model sensitivity to input words, as measured by integrated gradients. We then evaluated six transformer models (the base and large versions of BERT, RoBERTa, and ELECTRA) and found that the BERT-base model had the highest alignment with human-generated explanations on both metrics. Additionally, the base versions of the models we surveyed tended to align better with human-generated explanations than their larger counterparts, suggesting that increasing the number of model parameters could result in worse alignment with human explanations. Finally, we found that a model's alignment with human explanations is not predicted by its accuracy on NLI, suggesting that accuracy and alignment are orthogonal, and that both are important ways to evaluate models.
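The measurement the abstract describes, per-token attributions from integrated gradients compared against the words a human explanation highlights, can be illustrated with a minimal sketch. The toy classifier, vocabulary, and top-k overlap score below are illustrative assumptions only; they stand in for the paper's transformer models and its two alignment metrics, which are not reproduced here.

```python
# Minimal sketch (not the paper's code): integrated-gradients attributions over
# word embeddings for a toy NLI-style classifier, plus a simple token-overlap
# score against the words a human explanation mentions. Model, vocabulary, and
# the overlap metric are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vocabulary and classifier (stand-in for a transformer NLI model).
vocab = {"<pad>": 0, "a": 1, "dog": 2, "is": 3, "sleeping": 4, "an": 5, "animal": 6}
EMB_DIM, N_CLASSES = 16, 3  # entailment / neutral / contradiction

class ToyNLI(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(len(vocab), EMB_DIM)
        self.clf = nn.Linear(EMB_DIM, N_CLASSES)

    def forward_from_embeddings(self, embs):      # embs: (seq_len, emb_dim)
        return self.clf(embs.mean(dim=0))         # logits: (n_classes,)

def integrated_gradients(model, token_ids, target_class, steps=64):
    """Approximate IG per token by interpolating embeddings from a zero baseline."""
    embs = model.emb(token_ids).detach()          # (seq_len, emb_dim)
    baseline = torch.zeros_like(embs)
    total_grads = torch.zeros_like(embs)
    for alpha in torch.linspace(0.0, 1.0, steps):
        interp = (baseline + alpha * (embs - baseline)).requires_grad_(True)
        logits = model.forward_from_embeddings(interp)
        logits[target_class].backward()
        total_grads += interp.grad
    avg_grads = total_grads / steps
    # Riemann approximation of the path integral; sum over embedding dimensions
    return ((embs - baseline) * avg_grads).sum(dim=-1)   # (seq_len,)

def alignment_score(tokens, attributions, explanation_words, k=3):
    """Fraction of the model's top-k attributed tokens mentioned in the explanation."""
    top_k = [tokens[i] for i in attributions.abs().topk(k).indices]
    return sum(t in explanation_words for t in top_k) / k

model = ToyNLI()
tokens = ["a", "dog", "is", "sleeping"]
token_ids = torch.tensor([vocab[t] for t in tokens])
attr = integrated_gradients(model, token_ids, target_class=0)
explanation = {"dog", "animal", "sleeping"}       # words a human explanation highlights
print("per-token IG:", dict(zip(tokens, attr.tolist())))
print("alignment:", alignment_score(tokens, attr, explanation))
```

In the paper itself, attributions are computed from the evaluated transformer models and the two alignment metrics are defined over the collected natural language explanations; the overlap score above only stands in for that comparison.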


