Randomized Classifiers vs Human Decision-Makers: Trustworthy AI May Have to Act Randomly and Society Seems to Accept This

11/15/2021
by Gábor Erdélyi, et al.

As artificial intelligence (AI) systems are increasingly involved in decisions affecting our lives, ensuring that automated decision-making is fair and ethical has become a top priority. Intuitively, we feel that, akin to human decisions, the judgments of artificial agents should be grounded in some moral principles. Yet a decision-maker (whether human or artificial) can only make truly ethical (based on any ethical theory) and fair (according to any notion of fairness) decisions if full information on all the relevant factors on which the decision is based is available at the time of decision-making. This raises two problems: (1) In settings where we rely on AI systems that use classifiers obtained through supervised learning, some induction/generalization is unavoidable, and relevant attributes may be missing even during learning. (2) Modeling such decisions as games reveals that any pure strategy, however ethical, is inevitably susceptible to exploitation. Moreover, in many games a Nash equilibrium can only be reached with mixed strategies, i.e., to achieve mathematically optimal outcomes, decisions must be randomized. In this paper, we argue that in supervised learning settings there exist random classifiers that perform at least as well as deterministic classifiers, and that may hence be the optimal choice in many circumstances. We support our theoretical results with an empirical study indicating a positive societal attitude towards randomized artificial decision-makers, and we discuss policy and implementation issues related to the use of random classifiers that are relevant to current AI policy and standardization initiatives.
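The game-theoretic point in problem (2) can be seen in a toy game. The following Python sketch (an illustration under our own assumptions, not taken from the paper) uses matching pennies: any pure strategy is fully exploitable by a best-responding opponent, while the 50/50 mixed strategy is the Nash equilibrium and guarantees the game's value of 0.

    import random

    # Matching pennies: the row player wins (+1) when both coins match
    # and loses (-1) when they differ.
    ACTIONS = ("heads", "tails")

    def row_payoff(row, col):
        return 1 if row == col else -1

    # Any pure strategy is exploitable: a best-responding opponent
    # drives the row player's guaranteed payoff down to -1.
    for row in ACTIONS:
        worst = min(row_payoff(row, col) for col in ACTIONS)
        print(f"pure strategy {row}: guaranteed payoff {worst}")

    # The mixed strategy "heads with probability 1/2" is the Nash
    # equilibrium: whatever the opponent does, the expected payoff
    # is 0, so there is nothing left to exploit.
    def expected_payoff(p_heads, col):
        return (p_heads * row_payoff("heads", col)
                + (1 - p_heads) * row_payoff("tails", col))

    worst_mixed = min(expected_payoff(0.5, col) for col in ACTIONS)
    print(f"mixed 50/50 strategy: guaranteed expected payoff {worst_mixed}")

    # Acting on the equilibrium means randomizing the actual decision:
    print("decision:", random.choice(ACTIONS))

The analogy to classification is direct: deterministic classifiers are the pure strategies of such a decision game, and since every deterministic classifier is a degenerate randomized one, the best randomized classifier can never do worse; against a strategic opponent, it can do strictly better.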


