The Role of Individual User Differences in Interpretable and Explainable Machine Learning Systems

09/14/2020
by   Lydia P. Gleaves, et al.
There is increasing interest in helping non-expert audiences interact effectively with machine learning (ML) tools and understand the complex output such systems produce. Here, we describe user experiments designed to study how individual skills and personality traits predict interpretability, explainability, and knowledge discovery from ML-generated model output. Our work relies on Fuzzy Trace Theory, a leading theory of how humans process numerical stimuli, to examine how different end users interpret the output they receive while interacting with the ML system. While our sample was small, we found that interpretability – being able to make sense of system output – and explainability – understanding how that output was generated – were distinct aspects of the user experience. Additionally, subjects were better able to interpret model output if they possessed individual traits that promote metacognitive monitoring and editing, which are associated with more detailed, verbatim processing of ML output. Finally, subjects who were more familiar with ML systems felt better supported by them and more able to discover new patterns in data; however, this did not necessarily translate into meaningful insights. Our work motivates the design of systems that explicitly take users' mental representations into account during the design process to more effectively support end-user requirements.


