A systematic review and taxonomy of explanations in decision support and recommender systems

06/15/2020
by Ingrid Nunes et al.

With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today's increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems, a family of systems that includes recommender systems, one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel, comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy covers a variety of facets, such as explanation objective, responsiveness, content, and presentation. Moreover, we identify several challenges that remain unaddressed, for example, fine-grained issues associated with the presentation of explanations and with how explanation facilities are evaluated.
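
The abstract names four facets of the proposed taxonomy: explanation objective, responsiveness, content, and presentation. As a minimal, hypothetical sketch of how such facets could be captured when designing an explanation facility, the structure below models each facet as a field; all enum members and class names are illustrative assumptions, not the paper's actual taxonomy values.

```python
# Hypothetical sketch: one design point in the space spanned by the
# taxonomy's facets (objective, responsiveness, content, presentation).
# Enum values are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass
from enum import Enum, auto


class Objective(Enum):
    """Why the explanation is shown (illustrative values)."""
    TRANSPARENCY = auto()
    TRUST = auto()
    PERSUASION = auto()


class Responsiveness(Enum):
    """When the explanation is produced (illustrative values)."""
    STATIC = auto()     # generated together with the recommendation
    ON_DEMAND = auto()  # generated only when the user requests it


class Presentation(Enum):
    """How the explanation is rendered (illustrative values)."""
    NATURAL_LANGUAGE = auto()
    VISUALIZATION = auto()


@dataclass
class ExplanationDesign:
    """A single configuration of an explanation facility."""
    objective: Objective
    responsiveness: Responsiveness
    content: str  # e.g., the evidence or features the explanation cites
    presentation: Presentation


if __name__ == "__main__":
    design = ExplanationDesign(
        objective=Objective.TRUST,
        responsiveness=Responsiveness.ON_DEMAND,
        content="item features matching the user's past ratings",
        presentation=Presentation.NATURAL_LANGUAGE,
    )
    print(design)
```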

Related research

Visualization for Recommendation Explainability: A Survey and New Perspectives (05/19/2023)
Providing system-generated explanations for recommendations represents a...

Digital Nudging with Recommender Systems: Survey and Future Directions (11/06/2020)
Recommender systems are nowadays a pervasive part of our online user exp...

The What, the Why, and the How of Artificial Explanations in Automated Decision-Making (08/21/2018)
The increasing incorporation of Artificial Intelligence in the form of a...

Toward a Taxonomy of Trust for Probabilistic Machine Learning (12/05/2021)
Probabilistic machine learning increasingly informs critical decisions i...

A taxonomy of explanations to support Explainability-by-Design (06/09/2022)
As automated decision-making solutions are increasingly applied to all a...

Improving Accountability in Recommender Systems Research Through Reproducibility (01/31/2021)
Reproducibility is a key requirement for scientific progress. It allows ...

Evaluating Explanation Without Ground Truth in Interpretable Machine Learning (07/16/2019)
Interpretable Machine Learning (IML) has become increasingly important i...
