Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs

04/23/2020
by Sungsoo Ray Hong, et al.

As the use of machine learning (ML) models in product development and data-driven decision-making processes has become pervasive in many domains, people's focus on building a well-performing model has increasingly shifted to understanding how their model works. While scholarly interest in model interpretability has grown rapidly in research communities like HCI, ML, and beyond, little is known about how practitioners perceive and aim to provide interpretability in the context of their existing workflows. This lack of understanding of interpretability as practiced may prevent interpretability research from addressing important needs, or lead to unrealistic solutions. To bridge this gap, we conducted 22 semi-structured interviews with industry practitioners to understand how they conceive of and design for interpretability while they plan, build, and use their models. Based on a qualitative analysis of our results, we differentiate interpretability roles, processes, goals, and strategies as they exist within organizations making heavy use of ML models. The characterization of interpretability work that emerges from our analysis suggests that model interpretability frequently involves cooperation and mental model comparison between people in different roles, often aimed at building trust not only between people and models but also between people within the organization. We present implications for design that discuss gaps between the interpretability challenges practitioners face and the approaches proposed in the literature, highlighting possible research directions that can better address real-world needs.

Related research

12/13/2018
Improving fairness in machine learning systems: What do industry practitioners need?
The potential for machine learning (ML) systems to amplify social inequi...

01/20/2019
Quantifying Interpretability and Trust in Machine Learning Systems
Decisions by Machine Learning (ML) models have become ubiquitous. Trusti...

09/24/2021
Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability
During a research project in which we developed a machine learning (ML) ...

11/19/2019
"The Human Body is a Black Box": Supporting Clinical Decision-Making with Deep Learning
Machine learning technologies are increasingly developed for use in heal...

01/24/2021
Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
To ensure accountability and mitigate harm, it is critical that diverse ...

02/11/2020
Capturing the Practices, Challenges, and Needs of Transportation Decision-Makers
Transportation decision-makers from government agencies play an importan...

06/06/2022
Understanding Machine Learning Practitioners' Data Documentation Perceptions, Needs, Challenges, and Desiderata
Data is central to the development and evaluation of machine learning (M...
