To what extent should we trust AI models when they extrapolate?

01/27/2022
by Roozbeh Yousefzadeh, et al.

Many applications affecting human lives rely on models that have come to be known under the umbrella of machine learning and artificial intelligence. These AI models are usually complicated mathematical functions that map from an input space to an output space. Stakeholders want to know the rationales behind models' decisions and functional behavior. We study this functional behavior in relation to the data used to create the models. On this topic, scholars have often assumed that models do not extrapolate, i.e., that they learn from their training samples and process new inputs by interpolation. This assumption is questionable: we show that models extrapolate frequently, and that the extent of extrapolation varies and can be socially consequential. We demonstrate that extrapolation happens for a substantial portion of many datasets, more than one would consider reasonable. How can we trust models if we do not know whether they are extrapolating? Given a model trained to recommend clinical procedures for patients, can we trust its recommendation when it considers a patient older or younger than all the samples in the training set? If the training set is mostly White, to what extent can we trust its recommendations about Black and Hispanic patients? Along which dimensions (race, gender, or age) does extrapolation happen? Even if a model is trained on people of all races, it may still extrapolate in significant ways related to race. The leading question is: to what extent can we trust AI models when they process inputs that fall outside their training set? This paper investigates several social applications of AI, showing how models extrapolate without notice. We also look at different subspaces of extrapolation for specific individuals subject to AI models and report how these extrapolations can be interpreted, not mathematically, but from a humanistic point of view.
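One common way to make the interpolation-versus-extrapolation distinction concrete is convex-hull membership: a query point is "interpolated" if it can be written as a convex combination of training samples, and "extrapolated" otherwise. The sketch below illustrates that check with a small linear program. It is an assumed formalization for illustration only, not the authors' code, and the toy clinical-style data (an age column plus one other feature) is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(point, training_points):
    """Return True if `point` is a convex combination of the rows of
    `training_points`, i.e., a model could be said to interpolate it."""
    n, _ = training_points.shape
    # Feasibility LP: find weights w >= 0 with sum(w) = 1 and X^T w = point.
    A_eq = np.vstack([training_points.T, np.ones((1, n))])
    b_eq = np.concatenate([point, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

# Hypothetical example: a patient older than everyone in the training set
# forces extrapolation along the age dimension.
rng = np.random.default_rng(0)
train = rng.uniform([20, 0], [60, 1], size=(500, 2))   # columns: (age, other feature)
print(in_convex_hull(np.array([40.0, 0.5]), train))    # inside the hull: interpolation
print(in_convex_hull(np.array([85.0, 0.5]), train))    # outside the hull: extrapolation
```

Checking each feature dimension separately (is the patient's age, or race encoding, outside the training range?) gives the kind of per-dimension extrapolation report the abstract alludes to; the joint convex-hull test above is stricter, since a point can lie within every marginal range yet still fall outside the hull.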

