Learning Interpretability for Visualizations using Adapted Cox Models through a User Experiment

11/18/2016
by Adrien Bibal, et al.

To be useful, visualizations need to be interpretable. This paper uses a user-based approach to combine and assess visualization quality measures so as to better model user preferences. Results show that cluster separability measures are outperformed by a neighborhood conservation measure, even though the former are usually considered intuitively representative of user motives. Moreover, combining measures, as opposed to relying on a single measure, further improves prediction performance.
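As a rough illustration of the kind of pipeline the abstract describes, the sketch below computes one cluster separability measure (silhouette score) and one neighborhood conservation measure (trustworthiness) for two candidate embeddings, then combines them in a pairwise preference model. It assumes scikit-learn; the logistic preference model is a generic stand-in for the paper's adapted Cox model, and the preference labels are synthetic placeholders rather than data from the user experiment.

# Minimal sketch, not the authors' adapted Cox model: score two candidate
# visualizations with two families of quality measures, then learn weights
# for the combined measures from (synthetic) pairwise user preferences.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE, trustworthiness
from sklearn.metrics import silhouette_score
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Two candidate 2-D visualizations of the same data set.
embeddings = {
    "pca": PCA(n_components=2).fit_transform(X),
    "tsne": TSNE(n_components=2, random_state=0).fit_transform(X),
}

def quality_measures(X_high, X_low, labels):
    # Cluster separability (silhouette on the embedding) and
    # neighborhood conservation (trustworthiness w.r.t. the original space).
    return np.array([
        silhouette_score(X_low, labels),
        trustworthiness(X_high, X_low, n_neighbors=10),
    ])

features = {name: quality_measures(X, emb, y) for name, emb in embeddings.items()}

# Pairwise preference data: each row is the difference of the measure vectors
# of a pair (A, B); the label says whether "users" preferred A over B.
# These labels are made up purely so the sketch runs end to end.
pairs = [("tsne", "pca"), ("pca", "tsne")]
prefs = [1, 0]
D = np.array([features[a] - features[b] for a, b in pairs])

# A plain logistic preference model stands in for the adapted Cox model;
# its coefficients act as the learned weights of the combined measures.
model = LogisticRegression().fit(D, prefs)
print(dict(zip(["silhouette", "trustworthiness"], model.coef_[0])))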

research
02/12/2021

Personalized Visualization Recommendation

Visualization recommendation work has focused solely on scoring visualiz...
research
09/03/2014

Analysing Fuzzy Sets Through Combining Measures of Similarity and Distance

Reasoning with fuzzy sets can be achieved through measures such as simil...
research
11/22/2020

A Bayesian Account of Measures of Interpretability in Human-AI Interaction

Existing approaches for the design of interpretable agent behavior consi...
research
04/13/2021

Model Learning with Personalized Interpretability Estimation (ML-PIE)

High-stakes applications require AI-generated models to be interpretable...
research
09/15/2020

Communicative Visualizations as a Learning Problem

Significant research has provided robust task and evaluation languages f...
research
06/14/2020

Considerations for developing predictive models of crime and new methods for measuring their accuracy

Developing spatio-temporal crime prediction models, and to a lesser exte...
research
03/02/2020

How to choose the most appropriate centrality measure?

We propose a new method to select the most appropriate network centralit...
