Enhancing Human-Machine Teaming for Medical Prognosis Through Neural Ordinary Differential Equations (NODEs)

02/08/2021
by D. Fompeyrine, et al.

Machine Learning (ML) has recently been demonstrated to rival expert-level human accuracy in prediction and detection tasks across a variety of domains, including medicine. Despite these impressive findings, a key barrier to the full realization of ML's potential in medical prognosis is technology acceptance. Recent efforts to produce explainable AI (XAI) have made progress in improving the interpretability of some ML models, but these efforts suffer from limitations intrinsic to their design: they work best at identifying why a system fails, but do poorly at explaining when and why a model's prediction is correct. We posit that the acceptability of ML predictions in expert domains is limited by two key factors: the machine's horizon of prediction, which extends beyond human capability, and the inability of machine predictions to incorporate human intuition into their models. We propose the use of a novel ML architecture, Neural Ordinary Differential Equations (NODEs), to enhance human understanding and encourage acceptability. Our approach places human cognitive intuition at the center of algorithm design and offers a distribution of predictions rather than single outputs. We explain how this approach may significantly improve human-machine collaboration in prediction tasks in expert domains such as medical prognosis. We propose a model and demonstrate, by expanding a concrete example from the literature, how it advances the vision of future hybrid human-AI systems.
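To make the core idea concrete, the sketch below shows what a NODE-based prognosis model might look like. It is a minimal illustration, not the paper's actual architecture: the class and variable names (PrognosisNODE, ODEFunc, the latent dimension, the monthly time grid) are hypothetical, and a simple fixed-step Euler integrator stands in for a proper ODE solver. The point it demonstrates is the one made in the abstract: because the learned dynamics are integrated continuously through time, the model can be read out at every intermediate point, yielding a full trajectory of predictions over the horizon rather than a single output.

```python
# Minimal, illustrative NODE sketch (hypothetical names; not the paper's model).
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """Learned dynamics dh/dt = f(t, h), here a small MLP (an assumed choice)."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        return self.net(h)


class PrognosisNODE(nn.Module):
    def __init__(self, n_features: int, dim: int = 16):
        super().__init__()
        self.encode = nn.Linear(n_features, dim)  # patient features -> latent h(0)
        self.func = ODEFunc(dim)
        self.readout = nn.Linear(dim, 1)          # latent state -> risk score

    def forward(self, x: torch.Tensor, t_grid: torch.Tensor) -> torch.Tensor:
        """Integrate the dynamics with fixed-step Euler and read out a risk
        prediction at every time point in t_grid, not just the final horizon."""
        h = self.encode(x)
        preds = []
        for i in range(len(t_grid) - 1):
            dt = t_grid[i + 1] - t_grid[i]
            h = h + dt * self.func(t_grid[i], h)  # one Euler step
            preds.append(torch.sigmoid(self.readout(h)))
        return torch.stack(preds)                 # shape: (time, batch, 1)


# Usage: one patient's baseline features, risk read out monthly over a year.
model = PrognosisNODE(n_features=8)
x = torch.randn(1, 8)                 # hypothetical baseline measurements
t = torch.linspace(0.0, 12.0, 13)     # months 0..12
trajectory = model(x, t)              # 12 intermediate predictions, not one
```

In practice one would likely replace the Euler loop with an adaptive solver (e.g., odeint from the torchdiffeq library) and calibrate the readout, but the structural point stands: the intermediate states give the clinician a trajectory to compare against their own intuition about disease progression, rather than a single opaque number at a fixed horizon.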


