Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation

09/21/2021
by Yunlong Wang, et al.

Feedback can help crowdworkers improve their ideations, but current feedback methods require human assessment from facilitators or peers and do not scale to large crowds. We propose Interpretable Directed Diversity to automatically predict ideation quality and diversity scores, and to provide AI explanations (Attribution, Contrastive Attribution, and Counterfactual Suggestions) as deeper feedback on why ideations scored low and how to get higher scores. These explanations give multi-faceted feedback as users iteratively improve their ideations. We conducted think-aloud and controlled user studies to understand how the various explanations are used, and evaluated whether explanations improve ideation diversity and quality. Users appreciated that explanation feedback helped focus their efforts and provided directions for improvement; as a result, explanations improved diversity compared with no feedback or feedback with predictions only. Our approach thus opens opportunities for explainable AI to provide scalable, rich feedback for iterative crowd ideation.
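For intuition, here is a minimal sketch of the general idea: score a new ideation by its embedding distance to prior ideations (as in Directed Diversity) and attribute the score to individual words. The `embed`, `diversity_score`, and `attribution` functions below are illustrative placeholders, not the paper's actual models or explanation methods; a real system would use a trained sentence encoder and learned score predictors.

```python
# Sketch: embedding-distance diversity score with a simple
# leave-one-word-out attribution. All functions are toy stand-ins.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy character-trigram hashing embedding (stable within a run);
    a placeholder for a real sentence encoder."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def diversity_score(new_idea: str, prior_ideas: list[str]) -> float:
    """Mean cosine distance from the new ideation to prior ideations:
    higher distance means a more diverse ideation."""
    v = embed(new_idea)
    return float(np.mean([1.0 - float(v @ embed(p)) for p in prior_ideas]))

def attribution(new_idea: str, prior_ideas: list[str]) -> list[tuple[str, float]]:
    """Leave-one-word-out attribution: each word's contribution to the
    diversity score (positive values raise diversity)."""
    words = new_idea.split()
    base = diversity_score(new_idea, prior_ideas)
    return [
        (w, base - diversity_score(" ".join(words[:i] + words[i + 1:]), prior_ideas))
        for i, w in enumerate(words)
    ]

prior = ["drink more water every day", "take a short walk after lunch"]
idea = "keep a gratitude journal before bed"
print(f"diversity score: {diversity_score(idea, prior):.3f}")
for word, contrib in attribution(idea, prior):
    print(f"{word:>10s}  {contrib:+.3f}")
```

In this framing, Attribution corresponds to the per-word contributions, Contrastive Attribution would compare these contributions against a higher-scoring ideation, and Counterfactual Suggestions would propose edits predicted to raise the score.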

Related research

06/06/2022 · Improving Model Understanding and Trust with Counterfactual Explanations of Model Confidence
In this paper, we show that counterfactual explanations of confidence sc...

04/21/2022 · Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI
Counterfactual explanations are increasingly used to address interpretab...

12/28/2021 · Towards Relatable Explainable AI with the Perceptual Process
Machine learning models need to provide contrastive explanations, since ...

06/21/2019 · Generating Counterfactual and Contrastive Explanations using SHAP
With the advent of GDPR, the domain of explainable AI and model interpre...

12/30/2022 · Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Explainable AI transforms opaque decision strategies of ML models into e...

09/21/2021 · SalienTrack: providing salient information for semi-automated self-tracking feedback with model explanations
Self-tracking can improve people's awareness of their unhealthy behavior...

01/15/2021 · Directed Diversity: Leveraging Language Embedding Distances for Collective Creativity in Crowd Ideation
Crowdsourcing can collect many diverse ideas by prompting ideators indiv...
