Metric Elicitation; Moving from Theory to Practice

12/07/2022
by Safinah Ali, et al.

Metric Elicitation (ME) is a framework for eliciting classification metrics that better align with implicit user preferences for a given task and context. The existing ME strategy assumes that users can most easily provide preference feedback over classifier statistics such as confusion matrices. This work examines that assumption by providing the first implementation of the ME strategy: we build a web-based ME interface and conduct a user study that elicits users' preferred metrics in a binary classification setting. We discuss the study's findings and present guidelines for future research in this direction.
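The abstract does not spell out the elicitation procedure, but the core idea of ME — recovering a hidden metric purely from pairwise preferences over classifier statistics — can be illustrated with a toy sketch. The following is an assumption-laden simulation, not the authors' implementation: it posits a hidden linear metric over (TPR, TNR), a simulated preference oracle in place of a human user, and a hypothetical quarter-circle tradeoff frontier, then recovers the metric's weight by binary search over pairwise queries.

```python
import math

def utility(stats, w):
    """Hidden linear metric over classifier statistics (TPR, TNR)."""
    tpr, tnr = stats
    return w * tpr + (1 - w) * tnr

def oracle(a, b, true_w):
    """Simulated user: prefers the confusion-matrix summary with
    higher utility under the hidden weight true_w."""
    return utility(a, true_w) >= utility(b, true_w)

def frontier(theta):
    """Hypothetical feasible (TPR, TNR) tradeoff curve (quarter circle)."""
    return (math.sin(theta), math.cos(theta))

def elicit_w(true_w, queries=30):
    """Binary-search the frontier using only pairwise preference
    queries, then invert the optimality condition to recover w."""
    lo, hi = 0.0, math.pi / 2
    eps = 1e-4
    for _ in range(queries):
        mid = (lo + hi) / 2
        # Ask: does utility increase when trading TNR for more TPR?
        if oracle(frontier(mid + eps), frontier(mid - eps), true_w):
            lo = mid
        else:
            hi = mid
    theta = (lo + hi) / 2
    t = math.tan(theta)
    return t / (1 + t)  # invert tan(theta*) = w / (1 - w)
```

Each query asks only "which of these two statistics do you prefer?", which is exactly the low-burden feedback the ME framework assumes users can give; the hidden weight is never shown to the "user", yet the search recovers it to high precision.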


Related research

- 08/19/2022, Classification Performance Metric Elicitation and its Applications: Given a learning problem with real-world tradeoffs, which cost function ...
- 06/23/2020, Fair Performance Metric Elicitation: What is a fair performance metric? We consider the choice of fairness me...
- 06/05/2018, Eliciting Binary Performance Metrics: Given a binary prediction problem, which performance metric should the c...
- 11/03/2020, Quadratic Metric Elicitation with Application to Fairness: Metric elicitation is a recent framework for eliciting performance metri...
- 02/13/2023, Evaluation of a Search Interface for Preference-Based Ranking – Measuring User Satisfaction and System Performance: Finding a product online can be a challenging task for users. Faceted se...
- 07/02/2019, A Framework for Evaluating Snippet Generation for Dataset Search: Reusing existing datasets is of considerable significance to researchers...
- 06/01/2023, The Risks of Recourse in Binary Classification: Algorithmic recourse provides explanations that help users overturn an u...
