Calibrate to Interpret

07/07/2022
by Gregory Scafarto, et al.

Trustworthy machine learning is driving a large body of work in the ML community aimed at improving the acceptance and adoption of ML. The main aspects of trustworthy machine learning are the following: fairness, uncertainty, robustness, explainability, and formal guarantees. Each of these individual domains has gained the ML community's interest, as is visible from the number of related publications. However, few works tackle the interconnections between these fields. In this paper we show a first link between uncertainty and explainability by studying the relation between calibration and interpretation. Since the calibration of a given model changes the way it scores samples, and interpretation approaches often rely on these scores, it seems safe to assume that the confidence-calibration of a model interacts with our ability to interpret it. In this paper, we show, in the context of networks trained on image classification tasks, to what extent interpretations are sensitive to confidence-calibration. This leads us to suggest a simple practice to improve interpretation outcomes: Calibrate to Interpret.
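The core premise above is that calibration changes the class scores that interpretation methods consume. A minimal sketch of this effect, using temperature scaling (a standard post-hoc calibration technique) on illustrative logits, where the temperature value T=2 is a placeholder for one fitted on a held-out validation set:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative logits from a hypothetical classifier (not from the paper).
logits = np.array([4.0, 1.0, 0.5])

uncalibrated = softmax(logits)        # raw, typically overconfident scores
calibrated = softmax(logits / 2.0)    # temperature scaling with T=2 softens them

# Gradient- or perturbation-based interpretation methods operate on these
# confidence scores, so the two settings yield different attributions.
print(uncalibrated.max(), calibrated.max())
```

Because temperature scaling preserves the argmax, the predicted class is unchanged; only the confidence scores (and hence score-dependent interpretations) differ.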

