
Aligning Faithful Interpretations with their Social Attribution

by Alon Jacovi, et al.

We find that the requirement that model interpretations be faithful is vague and incomplete. Indeed, recent work refers to interpretations as unfaithful despite their adhering to the available definition. Similarly, we identify several critical failures in the notion of textual highlights as faithful interpretations, even though they adhere to the faithfulness definition. Using textual highlights as a case study, and borrowing concepts from social science, we identify that the problem is a misalignment between the causal chain of decisions (causal attribution) and the social attribution of human behavior to the interpretation. We re-formulate faithfulness as an accurate attribution of causality to the model, and introduce the concept of "aligned faithfulness": faithful causal chains that are aligned with their expected social behavior. The two steps of causal attribution and social attribution *together* complete the process of explaining behavior, making the alignment of faithful interpretations a requirement. With this formalization, we characterize the observed failures of misaligned faithful highlight interpretations, and propose an alternative causal chain to remedy the issues. Finally, we implement highlight explanations of the proposed causal format using contrastive explanations.
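To give a concrete feel for the contrastive framing, here is a toy sketch (not the authors' implementation; the tokens, scores, and function name are invented for illustration). A contrastive highlight answers "why label A *rather than* label B" by selecting the tokens whose evidence most favors the fact label over the foil label, rather than the tokens most important for the fact label alone.

```python
# Toy sketch of a contrastive highlight (illustrative only, not the
# paper's method): a token is highlighted for "why A rather than B"
# when its evidence for the fact label A exceeds its evidence for the
# foil label B. All scores below are made-up per-token evidence values.

def contrastive_highlight(tokens, fact_scores, foil_scores, k=2):
    """Return the k tokens whose evidence most favors the fact over the foil."""
    margins = [f - c for f, c in zip(fact_scores, foil_scores)]
    # Rank token indices by fact-vs-foil margin, keep the top k,
    # then restore original word order for readability.
    ranked = sorted(range(len(tokens)), key=lambda i: margins[i], reverse=True)
    return [tokens[i] for i in sorted(ranked[:k])]

tokens      = ["the", "food", "was", "cold", "but", "delicious"]
fact_scores = [0.0,   0.1,    0.0,  -0.6,   0.1,   0.9]   # evidence for "positive"
foil_scores = [0.0,   0.1,    0.0,   0.8,   0.0,  -0.5]   # evidence for "negative"

print(contrastive_highlight(tokens, fact_scores, foil_scores))
# -> ['but', 'delicious']
```

Note that a non-contrastive highlight of the same sentence might also surface "cold" (it is strongly evidential, just for the foil); the contrastive margin suppresses it, which is the alignment-with-social-attribution point the abstract gestures at.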





Code Repositories


Code for "Aligning Faithful Interpretations with their Social Attribution"
