Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour

04/30/2019
by Andrea Aler Tubella et al.

Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains, such as criminal justice and consumer finance, which directly affect human well-being. However, if AI is to improve people's lives, then people must be able to trust AI, which means being able to understand what the system is doing and why. Although transparency is often seen as the key requirement here, in practice it may not always be possible or desirable, whereas the need to ensure that the system operates within set moral bounds remains. In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a "glass box" around the system by mapping moral values into explicit, verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems, from deep neural networks to agent-based systems. The explicit transformation of abstract moral values into concrete norms brings great benefits in terms of explainability: stakeholders know exactly how the system interprets and employs the relevant abstract moral values and can calibrate their trust accordingly. Moreover, by operating at this higher level we can check the compliance of the system with different interpretations of the same value. These advantages will have an impact on the well-being of AI system users at large, building their trust and providing them with concrete knowledge of how systems adhere to moral values.
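To make the idea concrete, the following is a minimal Python sketch of the glass-box pattern as described in the abstract: an opaque system is wrapped by explicit, checkable norms on its inputs and outputs, and compliance is assessed purely from what crosses the box's boundary. The `Norm` and `GlassBox` classes, the norm interpretations, and the toy credit-scoring model are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: monitoring an opaque system's inputs and outputs
# against explicit norms derived from abstract moral values.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class Norm:
    """A concrete, verifiable interpretation of an abstract moral value."""
    value: str                                # abstract value, e.g. "fairness"
    description: str                          # human-readable interpretation
    check: Callable[[Dict[str, Any]], bool]   # predicate over an input or output


@dataclass
class GlassBox:
    """Wraps a black-box system and verifies norms on its inputs and outputs."""
    system: Callable[[Dict[str, Any]], Dict[str, Any]]
    input_norms: List[Norm] = field(default_factory=list)
    output_norms: List[Norm] = field(default_factory=list)
    violations: List[str] = field(default_factory=list)

    def run(self, observation: Dict[str, Any]) -> Dict[str, Any]:
        # Check that the input stays inside the box before the system runs.
        for norm in self.input_norms:
            if not norm.check(observation):
                self.violations.append(f"input violates {norm.value}: {norm.description}")
        result = self.system(observation)     # the system itself remains opaque
        # Check that the output also stays inside the box.
        for norm in self.output_norms:
            if not norm.check(result):
                self.violations.append(f"output violates {norm.value}: {norm.description}")
        return result


# Illustrative usage: a hypothetical credit-scoring model constrained by one
# interpretation of "fairness" on inputs and "transparency" on outputs.
def opaque_model(obs: Dict[str, Any]) -> Dict[str, Any]:
    return {"decision": "approve" if obs["income"] > 30000 else "reject",
            "reason": "income threshold"}


box = GlassBox(
    system=opaque_model,
    input_norms=[Norm("fairness", "protected attributes are not used",
                      lambda obs: "ethnicity" not in obs)],
    output_norms=[Norm("transparency", "every decision carries a reason",
                       lambda out: bool(out.get("reason")))],
)
print(box.run({"income": 45000}))
print(box.violations)   # empty list -> behaviour stayed within the glass box
```

Because the norms only inspect what enters and leaves the system, the same wrapper could be placed around a deep neural network or an agent-based system, and swapping in a different set of norms amounts to checking the system against a different interpretation of the same value.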

