Intellige: A User-Facing Model Explainer for Narrative Explanations

by Jilei Yang, et al.

Predictive machine learning models often lack interpretability, resulting in low trust from model end users despite having high predictive performance. While many model interpretation approaches return top important features to help interpret model predictions, these top features may not be well-organized or intuitive to end users, which limits model adoption rates. In this paper, we propose Intellige, a user-facing model explainer that creates user-digestible interpretations and insights reflecting the rationale behind model predictions. Intellige builds an end-to-end pipeline from machine learning platforms to end user platforms, and provides users with an interface for implementing model interpretation approaches and for customizing narrative insights. Intellige is a platform consisting of four components: Model Importer, Model Interpreter, Narrative Generator, and Narrative Exporter. We describe these components, and then demonstrate the effectiveness of Intellige through use cases at LinkedIn. Quantitative performance analyses indicate that Intellige's narrative insights lead to lifts in adoption rates of predictive model recommendations, as well as to increases in downstream key metrics such as revenue when compared to previous approaches, while qualitative analyses indicate positive feedback from end users.
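The abstract describes Intellige as an end-to-end pipeline of four components: Model Importer, Model Interpreter, Narrative Generator, and Narrative Exporter. The following is a minimal sketch of how such a pipeline could be wired together; all class names, method signatures, and the template-based narrative are hypothetical illustrations, since the paper's actual API is not given here.

```python
# Hypothetical sketch of the four-component pipeline named in the abstract.
# None of these classes or methods come from the paper itself.

class ModelImporter:
    """Loads a trained model artifact from a machine learning platform."""
    def load(self, model_path):
        # Stand-in for deserializing a real model artifact.
        return {"name": "demo_model", "path": model_path}

class ModelInterpreter:
    """Produces top important features for a single prediction."""
    def interpret(self, model, instance):
        # Stand-in for a local feature-attribution approach: here we
        # simply rank the instance's features by absolute weight.
        ranked = sorted(instance.items(), key=lambda kv: abs(kv[1]), reverse=True)
        return ranked[:3]

class NarrativeGenerator:
    """Turns top features into a user-digestible sentence via a template."""
    def generate(self, top_features):
        parts = [f"{name} (weight {value:+.2f})" for name, value in top_features]
        return "This recommendation is driven mainly by: " + ", ".join(parts) + "."

class NarrativeExporter:
    """Delivers the narrative insight to an end-user platform."""
    def export(self, narrative):
        return {"channel": "end_user_platform", "payload": narrative}

def run_pipeline(model_path, instance):
    """Chain the four components end to end, as the abstract describes."""
    model = ModelImporter().load(model_path)
    top = ModelInterpreter().interpret(model, instance)
    narrative = NarrativeGenerator().generate(top)
    return NarrativeExporter().export(narrative)

result = run_pipeline(
    "models/demo",
    {"recent_activity": 0.8, "company_size": -0.1, "seniority": 0.5},
)
print(result["payload"])
```

The key design point the abstract emphasizes is the separation between interpretation (top features) and narration (templated, user-customizable text), which is what makes the output digestible to non-expert end users rather than a raw feature-importance list.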
