A general approach for Explanations in terms of Middle Level Features

by Andrea Apicella, et al.

Nowadays, there is growing interest in making Machine Learning (ML) systems more understandable and trustworthy for general users. Generating explanations of ML system behaviours that are understandable to human beings is therefore a central scientific and technological issue, addressed by the rapidly growing research area of eXplainable Artificial Intelligence (XAI). It is becoming increasingly evident that new directions for producing better explanations should take into account what a good explanation is to a human user, and consequently develop XAI solutions able to provide user-centred explanations. This paper proposes developing a general XAI approach that produces explanations of an ML system's behaviour in terms of different, user-selected input features, i.e., explanations composed of input properties that the human user can select according to their background knowledge and goals. To this end, we propose a general XAI approach which is able: 1) to construct explanations in terms of input features that represent more salient and understandable input properties for a user, which we call here Middle-Level input Features (MLFs); 2) to be applied to different types of MLFs. We experimentally tested our approach on two different datasets, using three different types of MLFs. The results seem encouraging.
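To make the idea of explanations over user-selected Middle-Level Features concrete, here is a minimal, hypothetical sketch (not the paper's actual method): each MLF is a user-chosen group of raw input features (e.g., a superpixel or a pixel region), and a simple relevance score is obtained by masking each MLF and measuring the drop in the model's output. The `model`, the masks, and the masking-based scoring rule are illustrative assumptions.

```python
import numpy as np

def mlf_relevance(model, x, mlf_masks, baseline=0.0):
    """Score each Middle-Level Feature (MLF) by the drop in the model's
    output when the raw inputs belonging to that MLF are replaced by a
    baseline value. This occlusion-style scoring is an illustrative
    stand-in for a real MLF attribution method."""
    base_score = model(x)
    scores = []
    for mask in mlf_masks:  # one boolean mask per middle-level feature
        x_masked = np.where(mask, baseline, x)  # zero out that MLF
        scores.append(base_score - model(x_masked))
    return np.array(scores)

# Toy example: a linear "model" over a 6-dimensional input, with 3 MLFs
# (hand-picked groups of raw features standing in for, e.g., superpixels).
weights = np.array([1.0, 1.0, 0.0, 0.0, 2.0, 2.0])
model = lambda x: float(weights @ x)
x = np.ones(6)
mlfs = [np.array([True, True, False, False, False, False]),
        np.array([False, False, True, True, False, False]),
        np.array([False, False, False, False, True, True])]
print(mlf_relevance(model, x, mlfs))  # → [2. 0. 4.]
```

The key point the sketch illustrates is that the explanation is expressed at the granularity the user chose (three MLFs), not over the six raw input features.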


A general approach to compute the relevance of middle-level input features

This work proposes a novel general framework, in the context of eXplaina...

LEx: A Framework for Operationalising Layers of Machine Learning Explanations

Several social factors impact how people respond to AI explanations used...

Abduction-Based Explanations for Machine Learning Models

The growing range of applications of Machine Learning (ML) in a multitud...

Modelling GDPR-Compliant Explanations for Trustworthy AI

Through the General Data Protection Regulation (GDPR), the European Unio...

Culture-Based Explainable Human-Agent Deconfliction

Law codes and regulations help organise societies for centuries, and as ...

On the Diversity and Limits of Human Explanations

A growing effort in NLP aims to build datasets of human explanations. Ho...

Explanation as a process: user-centric construction of multi-level and multi-modal explanations

In the last years, XAI research has mainly been concerned with developin...