Explaining reputation assessments

06/15/2020
by Ingrid Nunes, et al.

Reputation is crucial to enabling human or software agents to select among alternative providers. Although several effective reputation assessment methods exist, they typically distil reputation into a numerical representation, with no accompanying explanation of the rationale behind the assessment. Such explanations would allow users or clients to make a richer assessment of providers, and to tailor selection to their preferences and current context. In this paper, we propose an approach to explain the rationale behind assessments from quantitative reputation models, by generating arguments that are combined to form explanations. Our approach adapts, extends and combines existing approaches for explaining decisions made using multi-attribute decision models in the context of reputation. We present example argument templates, and describe how to select their parameters using explanation algorithms. We evaluated our proposal by means of a user study that followed an existing protocol. Our results provide evidence that, although explanations convey only a subset of the information in trust scores, they are sufficient for users to evaluate providers recommended on the basis of their trust scores equally well. Moreover, when explanation arguments reveal implicit model information, they are less persuasive than scores.
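To make the idea concrete, here is a minimal, hypothetical sketch of the kind of pipeline the abstract describes: a quantitative reputation model (here assumed to be a weighted sum over attribute ratings) paired with an explanation algorithm that selects the most influential attribute and uses it to instantiate an argument template. The attribute names, weights, and template wording are illustrative assumptions, not the authors' actual model or templates.

```python
# Hypothetical sketch of explaining a quantitative reputation assessment.
# The weighted-sum model, attributes, and argument template below are
# illustrative assumptions, not the paper's actual method.

def trust_score(ratings, weights):
    """Weighted-sum reputation over attribute ratings in [0, 1]."""
    return sum(weights[a] * ratings[a] for a in ratings)

def explain(ratings, weights):
    """Select the attribute contributing most to the score and
    instantiate a simple argument template with it."""
    contributions = {a: weights[a] * ratings[a] for a in ratings}
    top = max(contributions, key=contributions.get)
    return (f"This provider is recommended mainly because of its "
            f"high '{top}' rating ({ratings[top]:.2f}).")

ratings = {"delivery time": 0.9, "quality": 0.7, "price": 0.6}
weights = {"delivery time": 0.5, "quality": 0.3, "price": 0.2}

print(f"trust score: {trust_score(ratings, weights):.2f}")
print(explain(ratings, weights))
```

Note how the argument exposes only one attribute of the model: this mirrors the study's finding that explanations present a subset of the trust score's information, and that revealing implicit model details (such as the weights) can make them less persuasive than the score alone.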

