AI loyalty: A New Paradigm for Aligning Stakeholder Interests

03/24/2020
by Anthony Aguirre et al.

When we consult with a doctor, lawyer, or financial advisor, we generally assume that they are acting in our best interests. But what should we assume when it is an artificial intelligence (AI) system that is acting on our behalf? Early examples of AI assistants like Alexa, Siri, Google, and Cortana already serve as a key interface between consumers and information on the web, and users routinely rely upon AI-driven systems like these to take automated actions or provide information. Superficially, such systems may appear to be acting according to user interests. However, many AI systems are designed with embedded conflicts of interest, acting in ways that subtly benefit their creators (or funders) at the expense of users. To address this problem, in this paper we introduce the concept of AI loyalty. AI systems are loyal to the degree that they are designed to minimize, and make transparent, conflicts of interest, and to act in ways that prioritize the interests of users. Properly designed, such systems could have considerable functional and competitive advantages, not to mention ethical ones, relative to those that are not. Loyal AI products hold an obvious appeal for the end user and could serve to align the long-term interests of AI developers and customers. To this end, we suggest criteria for assessing whether an AI system is sufficiently transparent about conflicts of interest and acting in a manner that is loyal to the user, and we argue that AI loyalty should be considered during the technological design process alongside other important values in AI ethics such as fairness, accountability, privacy, and equity. We discuss a range of mechanisms, from pure market forces to strong regulatory frameworks, that could support the incorporation of AI loyalty into a variety of future AI systems.


Related research

07/27/2023 · Designing Fiduciary Artificial Intelligence
05/17/2021 · Designer-User Communication for XAI: An epistemological approach to discuss XAI design
09/07/2023 · Beyond XAI: Obstacles Towards Responsible AI
11/20/2017 · What's up with Privacy?: User Preferences and Privacy Concerns in Intelligent Personal Assistants
02/21/2023 · The Full Rights Dilemma for A.I. Systems of Debatable Personhood
04/11/2022 · Metaethical Perspectives on 'Benchmarking' AI Ethics
05/04/2023 · The System Model and the User Model: Exploring AI Dashboard Design
