Co-creating a globally interpretable model with human input

06/23/2023
by Rahul Nair, et al.

We consider an aggregated human-AI collaboration aimed at generating a joint interpretable model. The model takes the form of Boolean decision rules, where human input is provided as logical conditions or as partial templates. This focus on the combined construction of a model offers a different perspective on joint decision making. Previous efforts have typically focused on aggregating outcomes rather than decision logic. We demonstrate the proposed approach through two examples and highlight the usefulness and challenges of the approach.
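The abstract only sketches the setup, so the following is a minimal illustrative sketch, not the authors' implementation: a disjunctive (OR-of-ANDs) Boolean rule set in which one clause is fixed by a human-supplied logical condition and the machine fills in a free threshold from data. All feature names, conditions, and thresholds here are invented for illustration.

```python
# Minimal sketch (assumed, not from the paper) of a Boolean rule set
# co-created from a human-provided clause plus a learned clause.
import numpy as np

def human_rule(X):
    # Human-provided logical condition, e.g. "age > 60 AND bmi > 30"
    # (columns and thresholds are hypothetical).
    return (X[:, 0] > 60) & (X[:, 1] > 30)

def learned_rule(X, threshold):
    # Machine-learned clause completing the partial template.
    return X[:, 2] > threshold

def rule_set_predict(X, threshold):
    # DNF rule set: predict positive if either clause fires.
    return human_rule(X) | learned_rule(X, threshold)

def fit(X, y):
    # Fit the free parameter of the learned clause by grid search
    # over observed feature values, maximizing training accuracy.
    best_t, best_acc = None, -1.0
    for t in np.unique(X[:, 2]):
        acc = np.mean(rule_set_predict(X, t) == y)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Usage on toy data: columns are [age, bmi, biomarker].
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(20, 80, 200),
                     rng.uniform(18, 40, 200),
                     rng.normal(0.0, 1.0, 200)])
y = ((X[:, 0] > 60) & (X[:, 1] > 30)) | (X[:, 2] > 0.5)
print("learned threshold:", fit(X, y))
```

Because both clauses are explicit logical conditions, the resulting model stays globally interpretable: a domain expert can read, accept, or veto each clause directly.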


