Never trust, always verify: a roadmap for Trustworthy AI?

06/23/2022
by Lionel Nganyewou Tidjon, et al.

Artificial Intelligence (AI) is becoming the cornerstone of many systems used in our daily lives, such as autonomous vehicles, healthcare systems, and unmanned aircraft systems. Machine Learning is a field of AI that enables systems to learn from data and make decisions on new data, based on models, to achieve a given goal. The stochastic nature of AI models makes verification and validation tasks challenging. Moreover, there are intrinsic biases in AI models, such as reproducibility bias, selection bias (e.g., race, gender, color), and reporting bias (i.e., results that do not reflect reality). Increasingly, there is also particular attention to the ethical, legal, and societal impacts of AI. AI systems are difficult to audit and certify because of their black-box nature. They also appear to be vulnerable to threats: AI systems can misbehave when given untrusted data, making them insecure and unsafe. Governments and national and international organizations have proposed several principles to overcome these challenges, but their application in practice remains limited, and diverging interpretations of the principles can bias implementations. In this paper, we examine trust in the context of AI-based systems to understand what it means for an AI system to be trustworthy, and we identify the actions that need to be undertaken to ensure that AI systems are trustworthy. To achieve this goal, we first review existing approaches proposed for ensuring the trustworthiness of AI systems, in order to identify potential conceptual gaps in understanding what trustworthy AI is. Then, we suggest a trust (resp. zero-trust) model for AI and a set of properties that should be satisfied to ensure the trustworthiness of AI systems.
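The "never trust, always verify" principle borrowed from zero-trust security can be illustrated, very loosely, as an inference pipeline that treats every input as untrusted and refuses to run the model until explicit checks pass. The sketch below is purely illustrative and not from the paper; all names (`TrustPolicy`, `verify_input`, `guarded_predict`) and the toy model are assumptions.

```python
# Illustrative sketch of "never trust, always verify" applied to ML inference:
# every input is untrusted by default and must satisfy an explicit policy
# before the model is allowed to act on it. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class TrustPolicy:
    """Assumed policy: expected feature count and admissible value range."""
    n_features: int
    lo: float
    hi: float

def verify_input(x, policy):
    """Return True only if x has the expected shape and in-range values."""
    if len(x) != policy.n_features:
        return False
    return all(policy.lo <= v <= policy.hi for v in x)

def guarded_predict(model, x, policy):
    """Reject unverified inputs instead of silently passing them to the model."""
    if not verify_input(x, policy):
        raise ValueError("untrusted input rejected")
    return model(x)

# Toy stand-in "model": classifies by the mean of the features.
toy_model = lambda x: "positive" if sum(x) / len(x) > 0.5 else "negative"

policy = TrustPolicy(n_features=3, lo=0.0, hi=1.0)
print(guarded_predict(toy_model, [0.9, 0.8, 0.7], policy))
```

Under this pattern, an out-of-range input such as `[2.0, 0.0, 0.0]` is rejected before the model ever sees it, which is the behavioral core of the zero-trust stance the paper argues for.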

Related research

- The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis (05/12/2022)
- Filling gaps in trustworthy development of AI (12/14/2021)
- AI Forensics: Did the Artificial Intelligence System Do It? Why? (05/27/2020)
- Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour (04/30/2019)
- A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized (08/18/2021)
- Relativistic Conceptions of Trustworthiness: Implications for the Trustworthy Status of National Identification Systems (12/17/2021)
- Know Your Model (KYM): Increasing Trust in AI and Machine Learning (05/31/2021)
