A Sea of Words: An In-Depth Analysis of Anchors for Text Data

05/27/2022
by Gianluigi Lopardo et al.

Anchors (Ribeiro et al., 2018) is a post-hoc, rule-based interpretability method. For text data, it explains a decision by highlighting a small set of words (an anchor) such that the model being explained produces similar outputs on documents in which those words are present. In this paper, we present the first theoretical analysis of Anchors, under the assumption that the search for the best anchor is exhaustive. We leverage this analysis to gain insight into the behavior of Anchors on simple models, including elementary if-then rules and linear classifiers.
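The following minimal sketch (not the official Anchors implementation) illustrates the core idea on a toy classifier: an anchor is judged by its precision, i.e., how often the prediction stays the same when the anchor words are kept and the rest of the document is perturbed. All names here (toy_classifier, precision, the dropout perturbation) are illustrative assumptions; the actual method uses more refined replacement distributions and a guided search over candidate anchors.

```python
import random

def toy_classifier(words):
    # Toy linear-style rule: positive iff "good" occurs more often than "bad".
    return int(words.count("good") > words.count("bad"))

def precision(anchor, document, classifier, n_samples=1000, seed=0):
    """Estimate P(f(x') = f(x)) over perturbed copies x' of the document
    in which the anchor words are always kept and every other word is
    independently removed with probability 1/2 (a simplification of the
    replacement schemes used in practice)."""
    rng = random.Random(seed)
    target = classifier(document)
    hits = 0
    for _ in range(n_samples):
        perturbed = [w for w in document
                     if w in anchor or rng.random() < 0.5]
        hits += int(classifier(perturbed) == target)
    return hits / n_samples

doc = "the movie was good really good not bad".split()
print(precision({"good"}, doc, toy_classifier))   # high precision: prediction never flips
print(precision({"movie"}, doc, toy_classifier))  # lower precision: prediction often flips
```

In this toy setting, {"good"} is a high-precision anchor because keeping it guarantees the positive prediction, whereas {"movie"} is not; the exhaustive search analyzed in the paper would return the smallest such high-precision word set.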

