The Role of Individual User Differences in Interpretable and Explainable Machine Learning Systems

by Lydia P. Gleaves et al.

There is increasing interest in helping non-expert audiences interact effectively with machine learning (ML) tools and understand the complex output such systems produce. Here, we describe user experiments designed to study how individual skills and personality traits predict interpretability, explainability, and knowledge discovery from ML-generated model output. Our work relies on Fuzzy Trace Theory, a leading theory of how humans process numerical stimuli, to examine how different end users interpret the output they receive while interacting with the ML system. While our sample was small, we found that interpretability (being able to make sense of system output) and explainability (understanding how that output was generated) were distinct aspects of user experience. Additionally, subjects were better able to interpret model output if they possessed individual traits that promote metacognitive monitoring and editing, which are associated with more detailed, verbatim processing of ML output. Finally, subjects who were more familiar with ML systems felt better supported by them and more able to discover new patterns in data; however, this did not necessarily translate into meaningful insights. Our work motivates the design of systems that explicitly take users' mental representations into account during the design process to more effectively support end-user requirements.
