Towards Robust Metrics for Concept Representation Evaluation

01/25/2023
by   Mateo Espinosa Zarlenga, et al.

Recent work on interpretability has focused on concept-based explanations, where deep learning models are explained in terms of high-level units of information, referred to as concepts. Concept learning models, however, have been shown to be prone to encoding impurities in their representations, failing to fully capture meaningful features of their inputs. While concept learning lacks metrics to measure such phenomena, the field of disentanglement learning has explored the related notion of underlying factors of variation in the data, with plenty of metrics to measure the purity of such factors. In this paper, we show that such metrics are not appropriate for concept learning and propose novel metrics for evaluating the purity of concept representations in both approaches. We show the advantage of these metrics over existing ones and demonstrate their utility in evaluating the robustness of concept representations and interventions performed on them. In addition, we show their utility for benchmarking state-of-the-art methods from both families and find that, contrary to common assumptions, supervision alone may not be sufficient for pure concept representations.
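To make the notion of "impurity" concrete: a common way to probe it is to check how much a learned concept's score predicts *other* ground-truth concepts. The sketch below is a minimal, hypothetical illustration of that idea using simple correlations; it is not the paper's proposed metric, and the function name and thresholds are assumptions for illustration only.

```python
import numpy as np

def impurity_matrix(concept_scores, concept_labels):
    """Hypothetical purity probe (illustrative, not the paper's metric).

    concept_scores: (n_samples, k) learned concept activations.
    concept_labels: (n_samples, k) ground-truth concept labels.

    Entry M[i, j] is the absolute correlation between learned concept i
    and ground-truth concept j. A pure representation should have a
    strong diagonal and near-zero off-diagonal entries; large
    off-diagonal values suggest concept i leaks information about
    concept j.
    """
    k = concept_scores.shape[1]
    M = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            M[i, j] = abs(np.corrcoef(concept_scores[:, i],
                                      concept_labels[:, j])[0, 1])
    return M

# Toy example: concept 0 is learned cleanly, but the learned score for
# concept 1 partially encodes concept 0 (an impurity).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=(2000, 2)).astype(float)
scores = labels.copy()
scores[:, 1] = labels[:, 1] + 0.5 * labels[:, 0]  # leak concept 0 into 1
M = impurity_matrix(scores, labels)
```

In this toy setup, `M[1, 0]` is well above zero even though the model "predicts" concept 1 accurately, which is exactly the kind of leakage that accuracy alone, and per-factor disentanglement scores designed for unsupervised factors, can miss.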


Related research:

- On the Robustness of Interpretability Methods (06/21/2018)
- Concept-Based Explanations for Tabular Data (09/13/2022)
- Concept Embedding Models (09/19/2022)
- DuTrust: A Sentiment Analysis Dataset for Trustworthiness Evaluation (08/30/2021)
- Learning to Intervene on Concept Bottlenecks (08/25/2023)
- Linear Guardedness and its Implications (10/18/2022)
- Measuring Disentanglement: A Review of Metrics (12/16/2020)
