Explaining Preferences with Shapley Values

05/26/2022
by Robert Hu, et al.

While preference modelling is becoming one of the pillars of machine learning, the problem of preference explanation remains challenging and underexplored. In this paper, we propose Pref-SHAP, a Shapley value-based model explanation framework for pairwise comparison data. We derive the appropriate value functions for preference models and further extend the framework to model and explain context-specific information, such as the surface type in a tennis game. To demonstrate the utility of Pref-SHAP, we apply our method to a variety of synthetic and real-world datasets and show that richer and more insightful explanations can be obtained than with the baseline.
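
As a rough illustration of the kind of attribution Pref-SHAP targets, the sketch below computes exact Shapley values for a toy pairwise preference model (a Bradley-Terry-style logistic score gap between two items). The model, the zero baseline, and the baseline-substitution value function are assumptions made for this example only; they are not the value functions derived in the paper.

```python
# Minimal sketch (not the paper's Pref-SHAP): exact Shapley attribution for a toy
# pairwise preference model. The value function below (replace absent features by
# a fixed baseline in both items) is an illustrative simplification.
from itertools import combinations
from math import factorial

import numpy as np


def preference_model(x_a, x_b, w):
    """Toy Bradley-Terry-style model: P(a preferred over b) from a linear score gap."""
    return 1.0 / (1.0 + np.exp(-(w @ (x_a - x_b))))


def shapley_values(x_a, x_b, baseline, w):
    """Exact Shapley values over the d features of the comparison."""
    d = len(x_a)
    phi = np.zeros(d)

    def value(coalition):
        # Features outside the coalition are set to the baseline in both items.
        xa, xb = baseline.copy(), baseline.copy()
        idx = list(coalition)
        xa[idx], xb[idx] = x_a[idx], x_b[idx]
        return preference_model(xa, xb, w)

    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                # Standard Shapley weight for a coalition of size k not containing i.
                weight = factorial(k) * factorial(d - k - 1) / factorial(d)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi


# Example: which features drive the predicted preference of item a over item b?
rng = np.random.default_rng(0)
w = rng.normal(size=4)
x_a, x_b = rng.normal(size=4), rng.normal(size=4)
print(shapley_values(x_a, x_b, baseline=np.zeros(4), w=w))
```

The attributions sum to the difference between the model output on the full feature set and on the baseline (the usual efficiency property), so each feature's value can be read as its contribution to the predicted preference.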

Related research

Explaining Preference-driven Schedules: the EXPRES Framework (03/16/2022)
Scheduling is the task of assigning a set of scarce resources distribute...

Explaining robust additive utility models by sequences of preference swaps (02/16/2015)
Multicriteria decision analysis aims at supporting a person facing a dec...

Explaining the Model and Feature Dependencies by Decomposition of the Shapley Value (06/19/2023)
Shapley values have become one of the go-to methods to explain complex m...

Evaluating and Aggregating Feature-based Model Explanations (05/01/2020)
A feature-based model explanation denotes how much each input feature co...

The many Shapley values for model explanation (08/22/2019)
The Shapley value has become a popular method to attribute the predictio...

Explaining reputation assessments (06/15/2020)
Reputation is crucial to enabling human or software agents to select amo...

Hybrid-MST: A Hybrid Active Sampling Strategy for Pairwise Preference Aggregation (10/20/2018)
In this paper we present a hybrid active sampling strategy for pairwise ...
