Trustworthy AI

by Jeannette M. Wing, et al.

The promise of AI is huge. AI systems have already achieved performance good enough to operate in our streets and in our homes. However, they can be brittle and unfair. For society to reap the benefits of AI systems, society needs to be able to trust them. Inspired by decades of progress in trustworthy computing, we suggest the properties that would be desired of trustworthy AI systems. By enumerating a set of new research questions, we explore one approach, formal verification, for ensuring trust in AI. Trustworthy AI ups the ante on both trustworthy computing and formal methods.
