Towards Model-informed Precision Dosing with Expert-in-the-loop Machine Learning

06/28/2021
by   Yihuang Kang, et al.

Machine Learning (ML) and its applications have been transforming our lives, but they are also raising issues around the development of fair, accountable, transparent, and ethical Artificial Intelligence. As ML models are not yet fully comprehensible, humans clearly still need to be part of algorithmic decision-making processes. In this paper, we consider an ML framework that may accelerate model learning and improve its interpretability by incorporating human experts into the model learning loop. We propose a novel human-in-the-loop ML framework aimed at learning problems where the cost of data annotation is high and appropriate data for modeling the association between the target tasks and the input features are lacking. With an application to precision dosing, our experimental results show that the approach can learn interpretable rules from data and may potentially lower experts' workload by replacing data annotation with rule representation editing. The approach may also help remove algorithmic bias by introducing experts' feedback into the iterative model learning process.
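To make the expert-in-the-loop idea concrete, the sketch below shows a generic cycle in which a model proposes human-readable dosing rules and an expert edits the rule representation directly instead of annotating more data. All names, the rule format, and the toy records are illustrative assumptions, not the paper's actual algorithm or dataset.

```python
# Hypothetical expert-in-the-loop rule-learning cycle (illustrative only):
# 1) a learner proposes an interpretable rule from data,
# 2) a (simulated) expert edits the rule representation,
# 3) the edited rule is used for prediction in the next round.

def learn_rules(records):
    """Propose a naive per-kilogram dosing rule from (weight_kg, dose_mg)
    records. Returns rules as human-readable dicts."""
    avg = sum(dose / weight for weight, dose in records) / len(records)
    return [{"if": "weight_kg > 0", "dose_mg_per_kg": round(avg, 2)}]

def expert_edit(rules):
    """Simulated expert feedback: rather than labeling more data, the
    expert edits the rule directly, e.g. capping the dose rate at a
    (hypothetical) guideline maximum of 5.0 mg/kg."""
    edited = []
    for rule in rules:
        rule = dict(rule)
        rule["dose_mg_per_kg"] = min(rule["dose_mg_per_kg"], 5.0)
        edited.append(rule)
    return edited

def predict(rules, weight_kg):
    """Apply the (single) rule to a new patient weight."""
    return rules[0]["dose_mg_per_kg"] * weight_kg

# One iteration of the loop on toy records.
records = [(60, 330), (70, 350), (80, 480)]
rules = expert_edit(learn_rules(records))
print(rules[0]["dose_mg_per_kg"])   # learned 5.5 mg/kg, expert-capped to 5.0
print(predict(rules, 70.0))         # 350.0
```

In a full framework this cycle would iterate: the expert's edited rules constrain the next round of learning, so bias introduced by the data can be corrected at the representation level rather than by relabeling.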

