
The Ability of Image-Language Explainable Models to Resemble Domain Expertise

by Petrus Werner, et al.

Recent advances in vision and language (V+L) models have shown promising impact in the healthcare field. However, such models struggle to explain how and why a particular decision was made. Moreover, model transparency and the involvement of domain expertise are critical success factors for machine learning models to enter the field. In this work, we study the use of the local surrogate explainability technique to overcome the problem of black-box deep learning models. We explore the feasibility of resembling domain expertise by combining local surrogates with an underlying V+L model to generate multi-modal visual and language explanations. We demonstrate that such explanations can serve as helpful feedback, guiding model training for data scientists and machine learning engineers in the field.
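The local surrogate technique the abstract refers to fits a simple, interpretable model to a black box's behavior in the neighborhood of one input (in the spirit of LIME). The sketch below is illustrative only: the toy `black_box` function, the kernel width, and the sample count are assumptions for demonstration, not the paper's actual V+L setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: its output depends mostly on
# features 0 (positively) and 2 (negatively).
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 2])))

x0 = np.array([1.0, 0.5, -1.0, 0.2])  # instance to explain

# 1. Perturb the instance to sample its local neighborhood.
Z = x0 + rng.normal(scale=0.5, size=(500, x0.size))

# 2. Query the black box on the perturbed samples.
y = black_box(Z)

# 3. Weight samples by proximity to x0 (RBF kernel, width is a choice).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.5)

# 4. Fit a weighted linear surrogate (weighted least squares with intercept).
Z_aug = np.column_stack([Z, np.ones(len(Z))])
sw = np.sqrt(w)
theta, *_ = np.linalg.lstsq(sw[:, None] * Z_aug, sw * y, rcond=None)

# theta[:4] are local feature attributions for x0.
print(theta[:4])
```

The surrogate's coefficients recover the local influence of each feature: attribution 0 comes out positive, attribution 2 negative, and the irrelevant features 1 and 3 stay near zero, which is the kind of per-decision explanation the abstract describes.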
