Image Classification with Consistent Supporting Evidence

11/13/2021
by Peiqi Wang, et al.

Adoption of machine learning models in healthcare requires end users' trust in the system. Models that provide additional supportive evidence for their predictions promise to facilitate adoption. We define consistent evidence to be both compatible with and sufficient for a model's predictions. We propose measures of model inconsistency and regularizers that promote more consistent evidence. We demonstrate our ideas in the context of edema severity grading from chest radiographs, and show empirically that consistent models provide competitive performance while supporting interpretation.
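To make the idea concrete, the sketch below illustrates one plausible way such consistency regularizers could be implemented; it is not the authors' implementation. It assumes a PyTorch setup in which the classifier sees both the full image and an evidence-masked image: compatibility is encouraged by a KL term that aligns the evidence-only prediction with the full-image prediction, and sufficiency by a cross-entropy term that requires the evidence alone to support the correct label. All names, masks, and weights here are illustrative assumptions.

```python
# Minimal sketch of consistency regularizers (assumed formulation, not the paper's code).
import torch
import torch.nn.functional as F

def consistency_losses(model, image, evidence_mask, label):
    """image: (B, C, H, W); evidence_mask: (B, 1, H, W) in [0, 1]; label: (B,) class indices."""
    logits_full = model(image)                   # prediction from the full image
    logits_evid = model(image * evidence_mask)   # prediction from the evidence region only

    # Task loss: standard cross-entropy on the full-image prediction.
    task = F.cross_entropy(logits_full, label)

    # Compatibility: the evidence-only prediction should agree with the
    # full-image prediction (KL divergence between the two distributions).
    compat = F.kl_div(F.log_softmax(logits_evid, dim=-1),
                      F.softmax(logits_full, dim=-1),
                      reduction="batchmean")

    # Sufficiency: the evidence alone should be enough to predict the label.
    suff = F.cross_entropy(logits_evid, label)

    return task, compat, suff

# Hypothetical total objective with weights lambda_c and lambda_s:
#   loss = task + lambda_c * compat + lambda_s * suff
```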
