Towards Modelling and Verification of Social Explainable AI

by Damian Kurpiewski et al.

Social Explainable AI (SAI) is a new direction in artificial intelligence that emphasises decentralisation, transparency, social context, and a focus on human users. SAI research is still at an early stage. Consequently, it concentrates on delivering the intended functionalities but largely ignores the possibility of unwelcome behaviours due to malicious or erroneous activity. We propose that, in order to capture the breadth of relevant aspects, one can use models and logics of strategic ability that have been developed for multi-agent systems. Using the STV model checker, we take the first step towards the formal modelling and verification of SAI environments, in particular of their resistance to various types of attacks by compromised AI modules.




