1 Introduction
In this section we present the importance of model interpretability and the state of the art in this domain. Then we show how our methodology helps to better understand complex predictive models, a.k.a. black boxes.
Predictive modeling has a large number of applications in almost every area of human activity, from medicine and marketing to logistics, banking and many others. Due to the increasing amount of collected data, models become more sophisticated and complex.
It is believed that there is a trade-off between the interpretability and accuracy of a model (see, e.g., Johansson et al. (2011)). It comes from the observation that the most flexible models usually have higher accuracy, but in turn they are also more complex. Complexity here means a large number of parameters that affect the final prediction, a number large enough to make the model incomprehensible to an ordinary human being.
Interpretability may be introduced naturally in the modeling framework, see an example in Figure 1. In many areas the interpretability of a model is very important, see for example Lundberg and Lee (2017), Murad and Tarr, Puri (2017). The reason is that interpretability allows one to confront the model structure with domain knowledge, which may bring multiple benefits such as:

Domain validation. Very flexible models may be overfitted to the training data and may pick up biases that result from the manner in which the data was collected (sample bias) or from surrogate variables (variable bias). Validation of the model structure helps to identify these biases. In Figure 1 this feature is marked as C.

Model improvement. Identification of subsets of observations in which a model has lower performance allows us to correct the model in these subsets and leads to further improvements of the model. In Figure 1 this feature is marked as D.

Trust. If the model is used to assist people in activities such as selection of proper therapy, understanding key factors that drive model predictions is very important. See more examples in Ribeiro et al. (2016).

Hidden debt. Sculley et al. (2015) argue that lack of interpretability leads to hidden debt in machine learning models. Despite initial high performance, the real model performance may deteriorate quickly. Model explainers help to control this debt.

New insights. It is hard to increase knowledge about a domain on the basis of black boxes. They may be useful, but they do not lead to any new knowledge about a given discipline. Understanding the model structure may lead to interesting new discoveries.
In this paper we present a consistent general framework for local and global model interpretability. This framework covers the best-known approaches to model explanation, such as Partial Dependence Plots (Greenwell, 2017), Accumulated Local Effects Plots (Apley, 2017), Merging Path Plots (Sitko and Biecek, 2017), Break Down Plots (Biecek, 2018), Permutational Variable Importance Plots (Fisher et al., 2018) and Ceteris Paribus Plots.
All these explainers are extended in a way that allows us to compare different models against each other on the same scale. Model comparison is very important, since model building often yields a collection of competing models. Comparing these models and exploring the structures learned by flexible models gives new insights that may be used to construct better features for new models (assisted training with surrogate models). A lot of effort was also put into the graphical side of the explainers. Solutions such as Visualizations for Convolutional Networks (Zeiler and Fergus, 2014) or conditional visualization for statistical models (O’Connell et al., 2017) show that a well-prepared visualization boosts actionability. Also, on purpose, we have not included approaches that do not fit into our grammar of model exploration, such as Individual Conditional Expectation Plots (Goldstein et al., 2015; Apley, 2017). Nevertheless, they are still available, for example in the ICEbox package.
The presented methodology is available as the open source package DALEX for R. The R language (R Core Team, 2017) is one of the most popular software systems for statistical and machine learning modeling. The current implementation of DALEX supports models generated with the most popular frameworks for classification or regression, such as caret (from Jed Wing et al., 2016), mlr (Bischl et al., 2016), Random Forest (Liaw and Wiener, 2002), Gradient Boosting Machine (Ridgeway, 2017) and Generalized Linear Models (Dobson, 1990). It can also be easily extended to other frameworks and other techniques for model exploration. The DALEX package is available under the GPL-3 license on CRAN (https://cran.r-project.org/package=DALEX) and on GitHub (https://github.com/pbiecek/DALEX), along with technical documentation (https://pbiecek.github.io/DALEX) and extended documentation (https://pbiecek.github.io/DALEX_docs). Example explainers presented in this paper were recorded with the archivist package (Biecek and Kosinski, 2017). Each explainer is an R object, which can be downloaded directly to the R console with hooks added to every section. To save space, we present in this paper only the graphical representation of explainers. The tabular representation is available through the attached hooks.
2 Architecture
Figure 2 presents the general architecture of the DALEX package. The presented methodology is model-agnostic and works for any predictive model that returns a numeric score, which includes both classification and regression models.
To achieve a truly model-agnostic solution, explainers cannot be based on model parameters or model structure. The only assumption here is that we can call the predict function for any selected data points. This function, together with the validation dataset, is wrapped around the model; the wrapper serves as a unified interface to the model.
Methods for better understanding of the global structure of a model (a.k.a. model explainers) and of the local structure of a model (a.k.a. prediction explainers) are implemented in separate functions. We call these functions explainers, since they are designed to explain a single feature of a model. As a result, they return numerical summaries in a tabular format. Results from each explainer may be summarized with the generic plot function, which works with any number of models and overlays all models in a single chart for cross-examination.
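The wrapper idea can be sketched as follows. This is a minimal example, assuming the apartments and apartmentsTest datasets shipped with the DALEX package and the explain() function described above; the object names are illustrative:

```r
library("DALEX")
library("randomForest")

# fit two competing models on the same training data
model_lm <- lm(m2.price ~ construction.year + surface + floor +
                 no.rooms + district, data = apartments)
model_rf <- randomForest(m2.price ~ construction.year + surface + floor +
                           no.rooms + district, data = apartments)

# wrap each model together with a validation dataset and the true response;
# the resulting explainer is the unified interface consumed by all explainers
explainer_lm <- explain(model_lm, data = apartmentsTest,
                        y = apartmentsTest$m2.price, label = "lm")
explainer_rf <- explain(model_rf, data = apartmentsTest,
                        y = apartmentsTest$m2.price, label = "randomForest")
```

Both explainers expose the same interface, so every function presented in the following sections accepts them interchangeably, regardless of the underlying model class.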
3 Model understanding
In this section we present explainers that increase understanding of the global structure of a model. The primary goal of these explainers is to answer the following questions: How good is the model? Which variables are the most important? How are the variables linked with the model response?
3.1 Explainers for model performance
Model performance is often summarized with a single number, such as precision, recall, F1, average loss or accuracy. Such an approach is handy in model selection: it is easy to construct a ranking of models and choose the best one on the basis of a single statistic. However, more descriptive statistics are better when it comes to understanding a model.
The descriptive statistic most often used for classification models is the ROC (Receiver Operating Characteristic) curve. It has many implementations; in R, the most widely used one is the ROCR package (Sing et al., 2005). ROC plots also have extensions for regression models; find an overview of regression ROC curves in Hernández-Orallo (2013).
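For reference, a basic ROC curve in the ROCR package is built from a vector of classifier scores and true binary labels; the toy scores below are illustrative only:

```r
library("ROCR")

# toy classifier scores and the corresponding true binary labels
scores <- c(0.1, 0.4, 0.35, 0.8)
labels <- c(0, 0, 1, 1)

pred <- prediction(scores, labels)
perf <- performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf)                                # ROC curve
performance(pred, "auc")@y.values[[1]]    # scalar AUC summary
```

The same prediction object can be reused with other measures (precision, recall, lift) by changing the arguments of performance().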
Figure 3: Both plots compare distributions of residuals for two models. The left plot shows 1 − Empirical Cumulative Distribution Function for absolute values of residuals, while the right plot shows boxplots of absolute values of residuals. Red dots in the right plot stand for the root mean square loss.
The DALEX package offers a selection of tools for the exploration of model residuals. Figure 3 presents example explainers for model performance (access this explainer with archivist::aread("pbiecek/DALEX_arepo/b4eb1")) created with the model_performance() function. Here the distribution of absolute residuals is compared between two models. The average mean square loss is equal for both models, yet we can see that the random forest model has more small residuals and only a small fraction of large residuals: 10% of residuals in the random forest model are larger than the largest residual in the linear model.
More diagnostic plots are available through the auditor package (Gosiewska and Biecek, 2018), which is closely integrated with DALEX.
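A minimal usage sketch for model_performance(), assuming two explainer objects, explainer_lm and explainer_rf, created with DALEX's explain() function for a linear model and a random forest (the geom argument reflects the DALEX API at the time of writing):

```r
# residual-based performance summaries; the mp_* objects also
# carry a tabular representation of the residuals
mp_lm <- model_performance(explainer_lm)
mp_rf <- model_performance(explainer_rf)

# overlay both models in one chart: by default the reversed ECDF
# of |residuals|, or boxplots of |residuals| with geom = "boxplot"
plot(mp_lm, mp_rf)
plot(mp_lm, mp_rf, geom = "boxplot")
```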
3.2 Explainers for conditional effect of a single variable
The DALEX package offers a selection of tools for better understanding of a model's conditional response as a function of a single variable. The current implementation covers:

Partial Dependence Plots (Greenwell, 2017), as implemented in the pdp package,

Accumulated Local Effects Plots (Apley, 2017), as implemented in the ALEPlot package,

Merging Path Plots (Sitko and Biecek, 2017), as implemented in the factorMerger package.
The first two methods were designed to deal with continuous variables, while the third one is designed for categorical variables.
Examples for these explainers (access them with archivist::aread("pbiecek/DALEX_arepo/3b150") and archivist::aread("pbiecek/DALEX_arepo/6cbf4")) created with the function variable_response() are presented in Figure 4. On the basis of these explainers it is easy to see that the random forest model learns the nonlinear relation between price and construction year. The linear model is unable to handle such a relation without some prior feature engineering. For the categorical variable we can see that both models divide the district variable into three groups of values: downtown (largest responses), three districts close to downtown (middle responses) and all remaining districts.
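A usage sketch for variable_response(), assuming explainer objects explainer_lm and explainer_rf created with DALEX's explain() for models fitted to the apartments data; the type values ("pdp", "ale", "factor") follow the DALEX API named above:

```r
# conditional response to a continuous variable (partial dependence);
# type = "ale" would request accumulated local effects instead
sv_rf <- variable_response(explainer_rf, variable = "construction.year",
                           type = "pdp")
sv_lm <- variable_response(explainer_lm, variable = "construction.year",
                           type = "pdp")
plot(sv_rf, sv_lm)   # both models overlaid on the same scale

# merging path plot for a categorical variable
svd_rf <- variable_response(explainer_rf, variable = "district",
                            type = "factor")
plot(svd_rf)
```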
3.3 Explainers for variable importance
The DALEX package offers a model-agnostic procedure to calculate variable importance. It is based on the permutational approach introduced initially for random forests (Breiman, 2001) and then extended to other models by Fisher et al. (2018).
An example for these explainers (access this explainer with archivist::aread("pbiecek/DALEX_arepo/9378c")) created with the function variable_importance() is presented in Figure 5. The initial performance of both models is similar, and for that reason the intervals are left-aligned. For both models the district and surface variables are the most important. The largest difference between the models is the effect of construction year: for the linear model the length of the corresponding interval is almost 0, while for the random forest model it is far from 0. This observation is consistent with the variable effects presented in Figure 4.
The usual practice in variable importance charts is to present only the length of the interval, which corresponds to the loss in performance after the selected variable is shuffled; bars on such plots are anchored at 0. In the DALEX package we propose to present not only the drop in model performance but also the initial model performance. In this way one can compare variables between models with different initial performance.
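The permutational procedure described above can be sketched in a few lines of base R. This is not the DALEX implementation itself, only an illustration of the underlying idea (shuffle one variable, measure the increase in loss); the function and variable names are illustrative:

```r
# root mean square loss used as the performance measure
loss_rmse <- function(observed, predicted) sqrt(mean((observed - predicted)^2))

# shuffle one variable at a time and record the resulting increase in loss
permutation_importance <- function(model, data, y, loss = loss_rmse) {
  baseline <- loss(y, predict(model, data))
  drops <- sapply(names(data), function(v) {
    perturbed <- data
    perturbed[[v]] <- sample(perturbed[[v]])  # break the link with y
    loss(y, predict(model, perturbed)) - baseline
  })
  sort(drops, decreasing = TRUE)
}

# example on a linear model fitted to the built-in mtcars data
set.seed(1)
m <- lm(mpg ~ wt + hp + qsec, data = mtcars)
permutation_importance(m, mtcars[c("wt", "hp", "qsec")], mtcars$mpg)
```

In practice the shuffling is repeated several times and the drops are averaged; DALEX's variable_importance() additionally reports the baseline performance so that intervals from different models can be compared.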
4 Prediction understanding
In this section we present explainers that increase understanding of a prediction for a single observation. The primary goal of these explainers is to answer the following questions: How stable is the prediction? Which variables influence the prediction? How to attribute effects of particular variables to a single model prediction?
4.1 Explainers for robustness of predictions
Ceteris Paribus Plots show how the model response changes as a function of a single variable. These plots resemble the Partial Dependence Plots presented in Section 3.2; the only difference between them is that Ceteris Paribus Plots are focused on a single observation.
CP Plots have many applications. The derivative of the profile is related to local variable importance (as measured in LIME); the profile may be used to verify constraints related to a variable (such as a monotonic relation) or to assess variable contributions.
An example for this explainer (access it with archivist::aread("pbiecek/DALEX_arepo/c8989")) created with the ceterisParibus package (https://github.com/pbiecek/ceterisParibus) is presented in Figure 6. We can read from it that the variable surface has the largest effect on the model predictions and that it lowers the model prediction for large apartments. We can also read that small changes in the variable construction year will not affect model predictions.
Figure 6: Ceteris Paribus Plots are explainers for a single observation. The left plot shows how the model response (predicted y on the OY axis) fluctuates for a single observation when a single variable is changed while all other variables remain constant (ceteris paribus principle). The right plot shows the effects of all variables in the same coordinate system; values on the OX axis are normalized through a quantile transformation.
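The core computation behind a Ceteris Paribus profile can be sketched in base R: replicate a single observation along a grid of values of one variable, keep all other variables fixed, and collect the model predictions. This is an illustration of the idea rather than the ceterisParibus package implementation; names are illustrative:

```r
# one row replicated along a grid of values of a single variable
ceteris_paribus_profile <- function(model, observation, variable, grid) {
  profile <- observation[rep(1, length(grid)), , drop = FALSE]
  profile[[variable]] <- grid   # vary one variable, ceteris paribus
  data.frame(x = grid, y_hat = predict(model, newdata = profile))
}

# example on a linear model fitted to the built-in mtcars data
m <- lm(mpg ~ wt + hp, data = mtcars)
obs <- mtcars[1, c("wt", "hp")]
grid <- seq(min(mtcars$wt), max(mtcars$wt), length.out = 50)
prof <- ceteris_paribus_profile(m, obs, "wt", grid)
plot(prof$x, prof$y_hat, type = "l", xlab = "wt", ylab = "predicted mpg")
```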
4.2 Explainers for variable attribution
The best-known approaches to explaining a single prediction are the LIME method (Ribeiro et al., 2016), which works best for local explanations, and Shapley values (Štrumbelj and Kononenko, 2010, 2014; Lundberg and Lee, 2017), which work best for variable attribution. Break Down Plots are fast approximations of Shapley values. The methodology behind this method and a comparison of the three methods are presented in Staniak and Biecek (2018).
An example of BDP explainers (access this explainer with archivist::aread("pbiecek/DALEX_arepo/72b47")) created with the function prediction_breakdown() is presented in Figure 7. As one can read from the graph, in both models the largest increase in the prediction is due to the variable district = Srodmiescie (downtown). A large surface lowers the prediction in the random forest model, while the number of rooms has a larger impact in the random forest model than in the linear model.
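A usage sketch for prediction_breakdown(), assuming explainer objects explainer_lm and explainer_rf created with DALEX's explain() for models fitted to the apartments data, and the apartmentsTest dataset shipped with DALEX:

```r
# variable attributions for the prediction of a single apartment
new_apartment <- apartmentsTest[1, ]
pb_lm <- prediction_breakdown(explainer_lm, observation = new_apartment)
pb_rf <- prediction_breakdown(explainer_rf, observation = new_apartment)

# waterfall-style Break Down charts, one per model
plot(pb_lm)
plot(pb_rf)
```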
5 Summary
Thinking about data modeling is currently dominated by feature engineering and model training. Kaggle competitions turn data modeling into a process that returns a single model with the highest accuracy. Tasks of that type may be easily automated. Such thinking about modeling is popular due to a lack of tools for model validation and richer domain verification.
In this article we have introduced a consistent methodology and a set of tools for model-agnostic explanations. The presented global explainers for model understanding and local explainers for prediction understanding are based on the uniform grammar introduced in Figure 2. Every explainer is constructed in a way that allows for a numerical summary, a visual summary and a comparison of multiple models.
The methodology is developed in a way that is easy to extend. Broad technical documentation with rich training materials is available at https://pbiecek.github.io/DALEX_docs. The code is properly maintained and tested with tools for continuous integration.
6 Acknowledgments
The work was partially supported by the European Union Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 691152 (project RENOIR) and by NCN Opus grant 2016/21/B/ST6/02176.
References
 Apley (2017) Dan Apley. ALEPlot: Accumulated Local Effects (ALE) Plots and Partial Dependence (PD) Plots, 2017. URL https://CRAN.R-project.org/package=ALEPlot. R package version 1.0.
 Biecek (2018) Przemyslaw Biecek. breakDown: Break Down Plots, 2018. URL https://pbiecek.github.io/breakDown/. R package version 0.1.4.
 Biecek and Kosinski (2017) Przemyslaw Biecek and Marcin Kosinski. archivist: An R Package for Managing, Recording and Restoring Data Analysis Results. Journal of Statistical Software, 82(11):1–28, 2017. doi: 10.18637/jss.v082.i11.
 Bischl et al. (2016) Bernd Bischl, Michel Lang, Lars Kotthoff, Julia Schiffner, Jakob Richter, Erich Studerus, Giuseppe Casalicchio, and Zachary M. Jones. mlr: Machine Learning in R. Journal of Machine Learning Research, 17(170):1–5, 2016. URL http://jmlr.org/papers/v17/15-066.html.
 Breiman (2001) Leo Breiman. Random forests. Machine Learning, 45(1):5–32, Oct 2001. ISSN 1573-0565. doi: 10.1023/A:1010933404324. URL https://doi.org/10.1023/A:1010933404324.
 Dobson (1990) A. J. Dobson. An Introduction to Generalized Linear Models. Chapman and Hall, London, 1990.
 Fisher et al. (2018) Aaron Fisher, Cynthia Rudin, and Francesca Dominici. Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the ”Rashomon” Perspective. 2018. URL http://arxiv.org/abs/1801.01489.
 from Jed Wing et al. (2016) Max Kuhn. Contributions from Jed Wing, Steve Weston, Andre Williams, Chris Keefer, Allan Engelhardt, Tony Cooper, Zachary Mayer, Brenton Kenkel, the R Core Team, Michael Benesty, Reynald Lescarbeau, Andrew Ziem, Luca Scrucca, Yuan Tang, and Can Candan. caret: Classification and Regression Training, 2016. URL https://CRAN.R-project.org/package=caret. R package version 6.0-64.
 Goldstein et al. (2015) Alex Goldstein, Adam Kapelner, Justin Bleich, and Emil Pitkin. Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation. Journal of Computational and Graphical Statistics, 24(1):44–65, 2015.
 Gosiewska and Biecek (2018) Alicja Gosiewska and Przemyslaw Biecek. auditor: Model Audit - Verification, Validation, and Residuals Diagnostic, 2018. URL https://CRAN.R-project.org/package=auditor. R package version 0.2.1.
 Greenwell (2017) Brandon M. Greenwell. pdp: An R package for Constructing Partial Dependence Plots. The R Journal, 9(1):421–436, 2017. URL https://journal.r-project.org/archive/2017/RJ-2017-016/index.html.
 Hernández-Orallo (2013) José Hernández-Orallo. ROC curves for regression. Pattern Recognition, 46(12):3395–3411, Dec 2013. ISSN 0031-3203. doi: 10.1016/j.patcog.2013.06.014.
 Johansson et al. (2011) Ulf Johansson, Cecilia Sönströd, Ulf Norinder, and Henrik Boström. Trade-off between accuracy and interpretability for predictive in silico modeling. Future Medicinal Chemistry, 3(6):647–663, Apr 2011. ISSN 1756-8919, 1756-8927. doi: 10.4155/fmc.11.23.
 Liaw and Wiener (2002) Andy Liaw and Matthew Wiener. Classification and regression by randomForest. R News, 2(3):18–22, 2002. URL http://CRAN.R-project.org/doc/Rnews/.
 Lundberg and Lee (2017) Scott M Lundberg and SuIn Lee. A unified approach to interpreting model predictions. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4765–4774. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7062aunifiedapproachtointerpretingmodelpredictions.pdf.

 Murad and Tarr Havi Murad and Garth Tarr. Visualizing and interpreting interactions in logistic regression models. Biometric Bulletin, pages 14–16.
 O’Connell et al. (2017) Mark O’Connell, Catherine B. Hurley, and Katarina Domijan. Conditional visualization for statistical models: An introduction to the condvis package in R. Journal of Statistical Software, 81(5), 2017. ISSN 1548-7660. doi: 10.18637/jss.v081.i05. URL http://www.jstatsoft.org/v81/i05/.
 Puri (2017) Nikaash Puri. MAGIX: Model Agnostic Globally Interpretable Explanations. 2017. URL http://export.arxiv.org/pdf/1706.07160.
 R Core Team (2017) R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2017. URL https://www.R-project.org/.

 Ribeiro et al. (2016) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. Pages 1135–1144. ACM Press, 2016. ISBN 978-1-4503-4232-2. doi: 10.1145/2939672.2939778. URL http://dl.acm.org/citation.cfm?doid=2939672.2939778.
 Ridgeway (2017) Greg Ridgeway. gbm: Generalized Boosted Regression Models, 2017. URL https://CRAN.R-project.org/package=gbm. R package version 2.1.3.
 Sculley et al. (2015) D. Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, JeanFrançois Crespo, and Dan Dennison. Hidden technical debt in machine learning systems. 2015. URL https://papers.nips.cc/paper/5656hiddentechnicaldebtinmachinelearningsystems.pdf.
 Sing et al. (2005) T. Sing, O. Sander, N. Beerenwinkel, and T. Lengauer. ROCR: visualizing classifier performance in R. Bioinformatics, 21(20):7881, 2005. URL http://rocr.bioinf.mpi-sb.mpg.de.
 Sitko and Biecek (2017) Agnieszka Sitko and Przemyslaw Biecek. The Merging Path Plot: adaptive fusing of k-groups with likelihood-based model selection, 2017. URL https://arxiv.org/abs/1709.04412.
 Staniak and Biecek (2018) Mateusz Staniak and Przemyslaw Biecek. Explanations of model predictions with live and breakDown packages. ArXiv eprints, Apr 2018. URL https://arxiv.org/abs/1804.01955.

 Štrumbelj and Kononenko (2010) Erik Štrumbelj and Igor Kononenko. An efficient explanation of individual classifications using game theory. Journal of Machine Learning Research, 11:1–18, March 2010. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1756006.1756007.
 Štrumbelj and Kononenko (2014) Erik Štrumbelj and Igor Kononenko. Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems, 41(3):647–665, 2014. ISSN 0219-1377, 0219-3116. doi: 10.1007/s10115-013-0679-x. URL http://link.springer.com/10.1007/s10115-013-0679-x.
 Zeiler and Fergus (2014) Matthew D. Zeiler and Rob Fergus. Visualizing and Understanding Convolutional Networks, pages 818–833. Springer International Publishing, 2014. ISBN 978-3-319-10590-1.