Expose Uncertainty, Instill Distrust, Avoid Explanations: Towards Ethical Guidelines for AI

11/29/2021
by Claudio S. Pinhanez

In this position paper, I argue that the best way to help and protect humans using AI technology is to make them aware of the intrinsic limitations and problems of AI algorithms. To accomplish this, I suggest three ethical guidelines for the presentation of results, requiring AI systems to expose uncertainty, to instill distrust, and, contrary to traditional views, to avoid explanations. The paper presents a preliminary discussion of the guidelines and provides some arguments for their adoption, aiming to start a debate in the community about AI ethics in practice.
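To make the first guideline concrete, the following sketch (not taken from the paper; the labels, threshold, and helper names are hypothetical) shows one way a system could expose uncertainty when presenting a classification result, by stating its confidence and abstaining when it is low, instead of returning a bare answer.

```python
# Illustrative sketch of "exposing uncertainty" in the presentation of results.
# The model outputs, labels, and abstention threshold are placeholders.

from dataclasses import dataclass


@dataclass
class Presentation:
    message: str
    top_answers: list[tuple[str, float]]  # (label, probability)


def present_with_uncertainty(probs: dict[str, float],
                             abstain_below: float = 0.6) -> Presentation:
    """Turn raw class probabilities into a user-facing answer that
    states its own uncertainty instead of a bare label."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    best_label, best_p = ranked[0]
    if best_p < abstain_below:
        msg = (f"I am not confident enough to answer "
               f"(best guess '{best_label}' at {best_p:.0%}); "
               f"please verify with another source.")
    else:
        msg = (f"Likely '{best_label}' ({best_p:.0%} confidence); "
               f"this estimate can be wrong.")
    return Presentation(message=msg, top_answers=ranked[:3])


# Example: a low-confidence prediction triggers an explicit abstention.
print(present_with_uncertainty({"cat": 0.45, "dog": 0.40, "fox": 0.15}).message)
```

The point of the sketch is only that uncertainty is surfaced at the interface, where the user sees it; how the probabilities are calibrated is a separate concern.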


