Agree to Disagree: When Deep Learning Models With Identical Architectures Produce Distinct Explanations

05/14/2021
by   Matthew Watson, et al.

Deep Learning of neural networks has progressively become more prominent in healthcare, with models reaching, or even surpassing, expert accuracy levels. However, these success stories are tainted by concerning reports on the lack of model transparency and bias against some medical conditions or patients' sub-groups. Explainable methods are considered the gateway to alleviating many of these concerns. In this study we demonstrate that the generated explanations are volatile to changes in model training that are orthogonal to the classification task and model structure. This raises further questions about trust in deep learning models for healthcare: chiefly, whether the models capture underlying causal links in the data or merely rely on spurious correlations that explanation methods make visible. We demonstrate that the output of explainability methods on deep neural networks can vary significantly with changes of hyper-parameters, such as the random seed or how the training set is shuffled. We introduce a measure of explanation consistency which we use to highlight the identified problems on the MIMIC-CXR dataset. We find that explanations of identical models trained with different setups have a low consistency: ≈ 33% on average. In contrast, kernel methods are robust against such orthogonal changes, with explanation consistency at ≈ 94%. We conclude that current trends in model explanation are not sufficient to mitigate the risks of deploying models in real-life healthcare applications.
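
The abstract does not spell out how the explanation consistency measure is computed. As a rough illustration only, the sketch below scores agreement between attribution maps produced by identically-architected models trained with different random seeds or data shuffles, using mean pairwise top-k feature overlap. The function names (topk_overlap, explanation_consistency), the choice of k, and the toy data are assumptions for illustration, not the authors' metric.

```python
import numpy as np

def topk_overlap(attr_a: np.ndarray, attr_b: np.ndarray, k: int = 100) -> float:
    """Fraction of the k most important features shared by two attribution maps."""
    top_a = set(np.argsort(np.abs(attr_a).ravel())[-k:])
    top_b = set(np.argsort(np.abs(attr_b).ravel())[-k:])
    return len(top_a & top_b) / k

def explanation_consistency(attrs_by_model: list, k: int = 100) -> float:
    """Mean pairwise top-k overlap across models trained with different seeds/shuffles."""
    scores = []
    for i in range(len(attrs_by_model)):
        for j in range(i + 1, len(attrs_by_model)):
            scores.append(topk_overlap(attrs_by_model[i], attrs_by_model[j], k))
    return float(np.mean(scores))

# Toy usage: three "identical" models whose saliency maps differ only by training noise.
rng = np.random.default_rng(0)
base = rng.random((224, 224))
attrs = [base + 0.5 * rng.random((224, 224)) for _ in range(3)]
print(f"explanation consistency: {explanation_consistency(attrs, k=100):.2f}")
```

In this toy setup a consistency near 1.0 would mean the models highlight the same regions regardless of training noise; low values correspond to the volatility the paper reports.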

Related research

05/06/2022
The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
Machine learning models in safety-critical settings like healthcare are ...

08/31/2022
Formalising the Robustness of Counterfactual Explanations for Neural Networks
The use of counterfactual explanations (CFXs) is an increasingly popular...

09/19/2022
The Ability of Image-Language Explainable Models to Resemble Domain Expertise
Recent advances in vision and language (V+L) models have a promising imp...

03/20/2018
Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Issues regarding explainable AI involve four components: users, laws & r...

11/22/2022
Explainability of Traditional and Deep Learning Models on Longitudinal Healthcare Records
Recent advances in deep learning have led to interest in training deep l...

09/07/2019
Explainable Deep Learning for Video Recognition Tasks: A Framework & Recommendations
The popularity of Deep Learning for real-world applications is ever-grow...

10/31/2022
SoK: Modeling Explainability in Security Monitoring for Trust, Privacy, and Interpretability
Trust, privacy, and interpretability have emerged as significant concern...
