Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

10/22/2019, by Alejandro Barredo Arrieta, et al.

In recent years, Artificial Intelligence (AI) has gained notable momentum and may deliver on the best of expectations across many application sectors. For this to occur, the entire community must overcome the barrier of explainability, an inherent problem of the sub-symbolic techniques underlying the latest AI systems (e.g. ensembles or Deep Neural Networks) that was not present in the previous hype of AI. Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospect toward what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at Deep Learning methods, for which a second taxonomy is built. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material that stimulates future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias due to its lack of interpretability.


1 Introduction

Artificial Intelligence (AI) lies at the core of many activity sectors that have embraced new information technologies russell2016artificial. While the roots of AI trace back several decades, there is a clear consensus on the paramount importance that intelligent machines endowed with learning, reasoning and adaptation capabilities have nowadays. It is by virtue of these capabilities that AI methods are achieving unprecedented levels of performance when learning to solve increasingly complex computational tasks, making them pivotal for the future development of human society west2018future. The sophistication of AI-powered systems has lately increased to such an extent that almost no human intervention is required for their design and deployment. When decisions derived from such systems ultimately affect humans’ lives (as in e.g. medicine, law or defense), there is an emerging need for understanding how such decisions are furnished by AI methods goodman2017Fair.

While the very first AI systems were easily interpretable, the last years have witnessed the rise of opaque decision systems such as Deep Neural Networks (DNNs). The empirical success of Deep Learning (DL) models such as DNNs stems from a combination of efficient learning algorithms and their huge parametric space. The latter space comprises hundreds of layers and millions of parameters, which has led DNNs to be considered complex black-box models Castelvecchi16. The opposite of black-box-ness is transparency, i.e., the search for a direct understanding of the mechanism by which a model works Lipton18.

As black-box Machine Learning (ML) models are increasingly being employed to make important predictions in critical contexts, the demand for transparency is increasing from the various stakeholders in AI Preece18Stakeholders. The danger lies in creating and using decisions that are not justifiable or legitimate, or that simply do not allow for detailed explanations of their behaviour gunning2017explainable. Explanations supporting the output of a model are crucial, e.g., in precision medicine, where experts require far more information from the model than a simple binary prediction for supporting their diagnosis 1907.07374. Other examples include autonomous vehicles in transportation, security, and finance, among others.

In general, humans are reluctant to adopt techniques that are not directly interpretable, tractable and trustworthy Zhu18, given the increasing demand for ethical AI goodman2017Fair. It is customary to think that, by focusing solely on performance, systems will become increasingly opaque. This is true in the sense that there is a trade-off between the performance of a model and its transparency Dosilovic18. However, an improvement in the understanding of a system can lead to the correction of its deficiencies. When developing a ML model, the consideration of interpretability as an additional design driver can improve its implementability for three reasons:

  • Interpretability helps ensure impartiality in decision-making, i.e. to detect, and consequently correct, bias in the training dataset.

  • Interpretability facilitates the provision of robustness by highlighting potential adversarial perturbations that could change the prediction.

  • Interpretability can act as an insurance that only meaningful variables influence the output, i.e., guaranteeing that an underlying truthful causality exists in the model reasoning.

All of this means that, in order to be considered practical, the interpretation of the system should provide either an understanding of the model mechanisms and predictions, a visualization of the model’s discrimination rules, or hints on what could perturb the model Hall2018.

In order to avoid limiting the effectiveness of the current generation of AI systems, eXplainable AI (XAI) gunning2017explainable proposes creating a suite of ML techniques that 1) produce more explainable models while maintaining a high level of learning performance (e.g., prediction accuracy), and 2) enable humans to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners. XAI also draws insights from the Social Sciences Miller19 and considers the psychology of explanation.

Figure 1: Evolution of the total number of publications whose title, abstract and/or keywords refer to the field of XAI in recent years. Data retrieved from the Scopus® database (October 14th, 2019) by submitting the queries indicated in the legend. It is interesting to note the latent need for interpretable AI models over time (which conforms to intuition, as interpretability is a requirement in many scenarios), yet it was not until 2017 that interest in techniques to explain AI models permeated throughout the research community.

Figure 1 displays the rising trend of contributions on XAI and related concepts. This literature outbreak shares its rationale with the research agendas of national governments and agencies. Although some recent surveys 1907.07374; Gilpin18; Dosilovic18; adadi2018peeking; biran2017explanation; Darpa2019; Guidotti19 summarize the upsurge of activity in XAI across sectors and disciplines, this overview aims to build a complete unified framework of categories and concepts that allows for scrutiny and understanding of the field of XAI methods. Furthermore, we pose intriguing thoughts around the explainability of AI models in data fusion contexts with regards to data privacy and model confidentiality. This, along with other research opportunities and challenges identified throughout our study, serves as the pull factor toward Responsible Artificial Intelligence, a term by which we refer to a series of AI principles to be necessarily met when deploying AI in real applications. As we will later show in detail, model explainability is among the most crucial aspects to be ensured within this methodological framework. All in all, the novel contributions of this overview can be summarized as follows:

  1. Grounded on a first elaboration of concepts and terms used in XAI-related research, we propose a novel definition of explainability that places audience (Figure 2) as a key aspect to be considered when explaining a ML model. We also elaborate on the diverse purposes sought when using XAI techniques, from trustworthiness to privacy awareness, which round up the claimed importance of purpose and targeted audience in model explainability.

  2. We define and examine the different levels of transparency that a ML model can feature by itself, as well as the diverse approaches to post-hoc explainability, namely, the explanation of ML models that are not transparent by design.

  3. We thoroughly analyze the literature on XAI and related concepts published to date, covering approximately 400 contributions arranged into two different taxonomies. The first taxonomy addresses the explainability of ML models using the previously made distinction between transparency and post-hoc explainability, including models that are transparent by themselves, Deep and non-Deep (i.e., shallow) learning models. The second taxonomy deals with XAI methods suited for the explanation of Deep Learning models, using classification criteria closely linked to this family of ML methods (e.g. layerwise explanations, representation vectors, attention).

  4. We enumerate a series of challenges of XAI that still remain insufficiently addressed to date. Specifically, we identify research needs around the concepts and metrics to evaluate the explainability of ML models, and outline research directions toward making Deep Learning models more understandable. We further augment the scope of our prospects toward the implications of XAI techniques in regards to confidentiality, robustness in adversarial settings, data diversity, and other areas intersecting with explainability.

  5. After the previous prospective discussion, we arrive at the concept of Responsible Artificial Intelligence, a manifold concept that imposes the systematic adoption of several AI principles for AI models to be of practical use. In addition to explainability, the guidelines behind Responsible AI establish that fairness, accountability and privacy should also be considered when implementing AI models in real environments.

  6. Since Responsible AI blends together model explainability and privacy/security by design, we call for a profound reflection around the benefits and risks of XAI techniques in scenarios dealing with sensitive information and/or confidential ML models. As we will later show, the regulatory push toward data privacy, quality, integrity and governance demands more efforts to assess the role of XAI in this arena. In this regard, we provide an insight on the implications of XAI in terms of privacy and security under different data fusion paradigms.

The remainder of this overview is structured as follows: first, Section 2 and the subsections therein open a discussion on the terminology and concepts revolving around explainability and interpretability in AI, ending with the aforementioned novel definition of interpretability (Subsections 2.1 and 2.2) and a general criterion to categorize and analyze ML models from the XAI perspective. Sections 3 and 4 proceed by reviewing recent findings on XAI for ML models (on transparent models and post-hoc techniques, respectively) that comprise the main division in the aforementioned taxonomy. We also include a review of hybrid approaches between the two to attain XAI. Benefits and caveats of the synergies among these families of methods are discussed in the section devoted to challenges, where we present a prospect of general challenges and some consequences to be cautious about. The subsequent section elaborates on the concept of Responsible Artificial Intelligence, and the concluding section closes the survey with an outlook aimed at engaging the community around this vibrant research area, which has the potential to impact society, in particular those sectors that have progressively embraced ML as a core technology of their activity.

2 Explainability: What, Why, What For and How?

Before proceeding with our literature study, it is convenient to first establish a common point of understanding on what the term explainability stands for in the context of AI and, more specifically, ML. This is indeed the purpose of this section, namely, to pause at the numerous definitions that have been proposed in regards to this concept (what?), to argue why explainability is an important issue in AI and ML (why? what for?) and to introduce the general classification of XAI approaches that will drive the literature study thereafter (how?).

2.1 Terminology Clarification

One of the issues that hinders the establishment of common grounds is the interchangeable misuse of interpretability and explainability in the literature. There are notable differences among these concepts. To begin with, interpretability is a passive characteristic of a model, referring to the level at which a given model makes sense to a human observer. This feature is also expressed as transparency. By contrast, explainability can be viewed as an active characteristic of a model, denoting any action or procedure taken by a model with the intent of clarifying or detailing its internal functions.

To summarize the most commonly used nomenclature, in this section we clarify the distinction and similarities among terms often used in the ethical AI and XAI communities.

  • Understandability (or equivalently, intelligibility) denotes the characteristic of a model to make a human understand its function – how the model works – without any need for explaining its internal structure or the algorithmic means by which the model processes data internally Montavon18.

  • Comprehensibility: when conceived for ML models, comprehensibility refers to the ability of a learning algorithm to represent its learned knowledge in a human understandable fashion Fernandez19; gleicher2016framework; craven1996extracting. This notion of model comprehensibility stems from the postulates of Michalski michalski1983theory, which stated that “the results of computer induction should be symbolic descriptions of given entities, semantically and structurally similar to those a human expert might produce observing the same entities. Components of these descriptions should be comprehensible as single ‘chunks’ of information, directly interpretable in natural language, and should relate quantitative and qualitative concepts in an integrated fashion”. Given its difficult quantification, comprehensibility is normally tied to the evaluation of the model complexity Guidotti19.

  • Interpretability: the ability to explain or to provide meaning in terms understandable to a human.

  • Explainability: explainability is associated with the notion of explanation as an interface between humans and a decision maker that is, at the same time, both an accurate proxy of the decision maker and comprehensible to humans Guidotti19.

  • Transparency: a model is considered to be transparent if by itself it is understandable. Since a model can feature different degrees of understandability, transparent models in Section 3 are divided into three categories: simulatable models, decomposable models and algorithmically transparent models Lipton18.

2.2 What?

Although it might be considered beyond the scope of this paper, it is worth noting the discussion held around general theories of explanation in the realm of philosophy diez2013Explanations. Many proposals have been made in this regard, suggesting the need for a general, unified theory that approximates the structure and intent of an explanation. However, no such general theory has withstood criticism when presented. For the time being, the most agreed-upon thought blends together different approaches to explanation drawn from diverse knowledge disciplines. A similar problem is found when addressing interpretability in AI. It appears from the literature that there is not yet a common point of understanding on what interpretability or explainability are. However, many contributions claim the achievement of interpretable models and techniques that empower explainability.

To shed some light on this lack of consensus, it might be interesting to place the reference starting point at the definition of the term Explainable Artificial Intelligence (XAI) given by D. Gunning in gunning2017explainable:

“XAI will create a suite of machine learning techniques that enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”

This definition brings together two concepts (understanding and trust) that need to be addressed in advance. However, it fails to consider other purposes motivating the need for interpretable AI models, such as causality, transferability, informativeness, fairness and confidence Lipton18; WhatDoesExplainableAImean; TowardsInterpretability; MakingInterpretable. We will later delve into these topics, mentioning them here as a supporting example of the incompleteness of the above definition.

As exemplified by the definition above, a thorough, complete definition of explainability in AI remains elusive. A broader reformulation of this definition (e.g. “An explainable Artificial Intelligence is one that produces explanations about its functioning”) would fail to fully characterize the term in question, leaving aside important aspects such as its purpose. To work toward a complete definition, a definition of explanation is first required.

As extracted from the Cambridge Dictionary of English Language, an explanation is “the details or reasons that someone gives to make something clear or easy to understand” walter2008cambridge. In the context of an ML model, this can be rephrased as: “the details or reasons a model gives to make its functioning clear or easy to understand”. It is at this point where opinions start to diverge. Inherently stemming from the previous definitions, two ambiguities can be pointed out. First, the details or the reasons used to explain are completely dependent on the audience to which they are presented. Second, whether the explanation has left the concept clear or easy to understand also depends completely on the audience. Therefore, the definition must be rephrased to reflect explicitly the dependence of the explainability of the model on the audience. To this end, a reworked definition could read as:

Given a certain audience, explainability refers to the details and reasons a model gives to make its functioning clear or easy to understand.

Since explaining, as with arguing, may involve weighing, comparing or convincing an audience with logic-based formalizations of (counter) arguments Besnard08, explainability might lead us into the realm of cognitive psychology and the psychology of explanations gunning2017explainable, since measuring whether something has been understood or put clearly is hard to gauge objectively. However, measuring to what extent the internals of a model can be explained could be tackled objectively. Any means to reduce the complexity of the model or to simplify its outputs should be considered an XAI approach. How big this leap is in terms of complexity or simplicity will correspond to how explainable the resulting model is. An underlying problem that remains unsolved is that the interpretability gain provided by such XAI approaches may not be straightforward to quantify: for instance, a model simplification can be evaluated based on the reduction of the number of architectural elements or number of parameters of the model itself (as often done, for instance, for DNNs). On the contrary, the use of visualization methods or natural language for the same purpose does not favor a clear quantification of the improvements gained in terms of interpretability. The derivation of general metrics to assess the quality of XAI approaches remains an open challenge that should be under the spotlight of the field in forthcoming years. We will further discuss this research direction in the section devoted to challenges.

Explainability is linked to post-hoc explainability since it covers the techniques used to convert a non-interpretable model into an explainable one. In the remainder of this manuscript, explainability will be considered as the main design objective, since it represents a broader concept. A model can be explained, but the interpretability of the model is something that comes from the design of the model itself. Bearing these observations in mind, explainable AI can be defined as follows:

Given an audience, an explainable Artificial Intelligence is one that produces details or reasons to make its functioning clear or easy to understand.

This definition, posed here as a first contribution of the present overview, implicitly assumes that the ease of understanding and clarity targeted by XAI techniques for the model at hand reverts to different application purposes, such as better trustworthiness of the model’s output for its audience.

2.3 Why?

As stated in the introduction, explainability is one of the main barriers AI faces nowadays in regards to its practical implementation. The inability to explain or to fully understand the reasons by which state-of-the-art ML algorithms perform as well as they do is a problem that finds its roots in two different causes, which are conceptually illustrated in Figure 2.

Without a doubt, the first cause is the gap between the research community and business sectors, impeding the full penetration of the newest ML models in sectors that have traditionally lagged behind in the digital transformation of their processes, such as banking, finance, security and health, among many others. In general, this issue occurs in strictly regulated sectors that show some reluctance to implement techniques that may put their assets at risk.

The second axis is that of knowledge. AI has helped research across the world with the task of inferring relations that were far beyond the human cognitive reach. Every field dealing with huge amounts of reliable data has largely benefited from the adoption of AI and ML techniques. However, we are entering an era in which results and performance metrics are the only interest shown in research studies. Although for certain disciplines this might be a fair stance, science and society are far from being concerned only with performance. The search for understanding is what opens the door to further model improvement and practical utility.

Figure 2: Diagram showing the different purposes of explainability in ML models sought by different audience profiles. Two goals prevail across them: the need for model understanding and regulatory compliance. Image partly inspired by the one presented in ibm2019, used with permission from IBM.

The following section develops these ideas further by analyzing the goals motivating the search for explainable AI models.

2.4 What for?

The research activity around XAI has so far exposed different goals to be drawn from the achievement of an explainable model. Almost none of the reviewed papers completely agree on the goals required to describe what an explainable model should fulfil. However, all these different goals might help discriminate the purpose for which a given exercise of ML explainability is performed. Unfortunately, scarce contributions have attempted to define such goals from a conceptual perspective Lipton18; Gilpin18; WhatDoesExplainableAImean; WhatDoWeNeed. We now synthesize and enumerate definitions for these XAI goals, so as to settle a first classification criterion for the full suite of papers covered in this review:

  • Trustworthiness: several authors agree upon the search for trustworthiness as the primary aim of an explainable AI model kim2015Trust; ribeiro2016trust. However, declaring a model as explainable as per its capabilities of inducing trust might not be fully compliant with the requirement of model explainability. Trustworthiness might be considered as the confidence of whether a model will act as intended when facing a given problem. Although it should most certainly be a property of any explainable model, it does not imply that every trustworthy model can be considered explainable on its own, nor is trustworthiness a property easy to quantify. Trust might be far from being the only purpose of an explainable model since the relation among the two, if agreed upon, is not reciprocal. Part of the reviewed papers mention the concept of trust when stating their purpose for achieving explainability. However, as seen in Table 1, they do not amount to a large share of the recent contributions related to XAI.

    XAI Goal References
    Trustworthiness Lipton18; Dosilovic18; WhatDoesExplainableAImean; ribeiro2016trust; fox2017explainable; lane2005explainable; murdoch2019interpretable; ExplanationsExpectations; chander2018working
    Causality murdoch2019interpretable; tickle1998truth; louizos2017causal; goudet2018learning; athey2015machine; Lopez-Paz17; barabas2017interventions
    Transferability Lipton18; caruana2015Transferability; craven1996extracting; MakingInterpretable; theodorou2017designing; WhatDoWeNeed; ribeiro2016trust; chander2018working; tickle1998truth; louizos2017causal; samek2017explainable; wadsworth2018achieving; yuan2019adversarial; letham2015interpretable; harbers2010design; aung2007comparing; weller2017challenges; freitas2014comprehensible; schetinin2007confident; martens2011performance; InterpretableDeepICU; barakat2008; ExplainYourselfAgents; ExplainableAgencyAgents; DeepTaylor; LearningHowTo; ras2018explanation; bach2016controlling; MedicinePrecision; UsingPerceptualAgents; olden2002illuminating; krause2016interacting; interpretingHeatMapSVM; tan2014unsupervised; krening2017learning; LIME; LayerWise; etchells2006orthogonal; PlanExplicabilityAgents; santoro2017simple; peng2002use; ustun2007visualisation; zhang2019interpreting; wu2018beyond; hinton2015distilling; frosst2017distilling; augasta2012reverse; zhou2003extracting; tan2016tree; fong2017interpretable
    Informativeness Lipton18; caruana2015Transferability; craven1996extracting; TowardsInterpretability; MakingInterpretable; theodorou2017designing; WhatDoWeNeed; ribeiro2016trust; lane2005explainable; murdoch2019interpretable; chander2018working; tickle1998truth; athey2015machine; samek2017explainable; letham2015interpretable; harbers2010design; aung2007comparing; weller2017challenges; freitas2014comprehensible; schetinin2007confident; martens2011performance; InterpretableDeepICU; barakat2008; ExplainYourselfAgents; ExplainableAgencyAgents; DeepTaylor; bach2016controlling; MedicinePrecision; UsingPerceptualAgents; olden2002illuminating; interpretingHeatMapSVM; tan2014unsupervised; krening2017learning; LIME; LayerWise; etchells2006orthogonal; PlanExplicabilityAgents; santoro2017simple; peng2002use; ustun2007visualisation; zhang2019interpreting; wu2018beyond; UsersAtChargeOfDesing; goebel2018explainable; belle2017logic; edwards2017slave; ExplainableAgencyAgents; lou2013accurate; xu2015show; huysmans2011Informativeness; barakat2007rule; Chaves2005; martens2007comprehensible; LearningDeepFeatures; krishnan1999extracting; fu2004; green2018fair; chouldechova2017fair; kim2018fairness; haasdonk2005feature; FeatureContributionMethod; welling2016forest; fung2005; zhang2005; linsley2018global; zhou2008low; burrell2016machine; shrikumar2016not; ImprovingInterpretability; ridgeway1998interpretable; InterpretableCNN; seo2017interpretable; larsen2000interpreting; interpretingNeuroSVM; xu2018interpreting; Intrees; domingos1998knowledge; DistillAndCompare; StatisticsForCriminalBehavior; MakingTEInterpretable; AtributeInteractions; MIRIAMAgents; QuantifyingInterpretability; nunez2002rule; nunez2006rule; kearns2017preventing; akyol2016price; erhan2010understanding; zhang2015sensitivity; quinlan1987simplifying; SingleTreeApproximation; intepretationSVM; TreeView; VisualizingUnderstanding; UnderstandingDeep; wagner2019interpretable; kanehira2019learning; apley2016visualizing; staniak2018explanations; zeiler2010deconvolutional; springenberg2014striving; kim2017interpretability; polino2018model; murdoch2017automatic; craven1994using; arbatli1997rule; johansson2009evolving; lei2016rationalizing; radford2017learning; selvaraju2016grad; shwartz2017opening; yosinski2015understanding
    Confidence Lipton18; theodorou2017designing; murdoch2019interpretable; samek2017explainable; yuan2019adversarial; schetinin2007confident; LearningHowTo; LayerWise; belle2017logic; edwards2017slave; LearningDeepFeatures; zhou2008low; xu2018interpreting; domingos1998knowledge; pope2019explainability
    Fairness Lipton18; WhatDoesExplainableAImean; theodorou2017designing; murdoch2019interpretable; wadsworth2018achieving; green2018fair; chouldechova2017fair; kim2018fairness; DistillAndCompare; StatisticsForCriminalBehavior; kearns2017preventing; gajane2017formalizing; dwork2018group; barocas-hardt-narayanan19
    Accessibility craven1996extracting; MakingInterpretable; WhatDoWeNeed; ribeiro2016trust; chander2018working; harbers2010design; freitas2014comprehensible; martens2011performance; ras2018explanation; krause2016interacting; interpretingHeatMapSVM; tan2014unsupervised; krening2017learning; LIME; PlanExplicabilityAgents; santoro2017simple; peng2002use; UsersAtChargeOfDesing; barakat2007rule; Chaves2005; FeatureContributionMethod; fung2005; linsley2018global; zhou2008low; ImprovingInterpretability; ridgeway1998interpretable; InterpretableCNN; seo2017interpretable; larsen2000interpreting; MIRIAMAgents; akyol2016price
    Interactivity chander2018working; harbers2010design; ExplainableAgencyAgents; UsingPerceptualAgents; krause2016interacting; PlanExplicabilityAgents; UsersAtChargeOfDesing; MIRIAMAgents
    Privacy awareness edwards2017slave
    Table 1: Goals pursued in the reviewed literature toward reaching explainability.
  • Causality: another common goal for explainability is that of finding causality among data variables. Several authors argue that explainable models might ease the task of finding relationships that, should they occur, could be tested further for a stronger causal link between the involved variables wang1999Causality; rani2006Causality. The inference of causal relationships from observational data is a field that has been broadly studied over time pearl2009causality. As widely acknowledged by the community working on this topic, causality requires a wide frame of prior knowledge to prove that observed effects are causal. A ML model only discovers correlations among the data it learns from, and therefore might not suffice for unveiling a cause-effect relationship. However, since causation involves correlation, an explainable ML model could validate the results provided by causality inference techniques, or provide a first intuition of possible causal relationships within the available data. Again, Table 1 reveals that causality is not among the most important goals if we attend to the number of papers that state it explicitly as their goal.

  • Transferability: models are always bounded by constraints that should allow for their seamless transferability. This is the main reason why a training-testing approach is used when dealing with ML problems kuhn2013appliedTransferability; james2013Transferability. Explainability is also an advocate for transferability, since it may ease the task of elucidating the boundaries that might affect a model, allowing for a better understanding and implementation. Similarly, the mere understanding of the inner relations taking place within a model facilitates the ability of a user to reuse this knowledge in another problem. There are cases in which the lack of a proper understanding of the model might drive the user toward incorrect assumptions and fatal consequences caruana2015Transferability; szegedy2013Transferability. Transferability should also fall among the resulting properties of an explainable model, but again, not every transferable model should be considered explainable. As observed in Table 1, understanding a model better in order to reuse it or improve its performance is the second most frequent reason for pursuing model explainability.

  • Informativeness: ML models are used with the ultimate intention of supporting decision making huysmans2011Informativeness. However, it should not be forgotten that the problem being solved by the model is not equal to that being faced by its human counterpart. Hence, a great deal of information is needed in order to be able to relate the user’s decision to the solution given by the model, and to avoid falling into misconception pitfalls. For this purpose, explainable ML models should give information about the problem being tackled. Most of the reasons found among the reviewed papers relate to extracting information about the inner relations of a model. Almost all rule extraction techniques substantiate their approach on the search for a simpler understanding of what the model internally does, stating that this knowledge (information) can be expressed in simpler proxies that are considered to explain the antecedent model. This is the most frequent argument found among the reviewed papers to back up what they expect from reaching explainable models.

  • Confidence: as a generalization of robustness and stability, confidence should always be assessed on a model in which reliability is expected. The methods to maintain confidence under control are different depending on the model. As stated in ruppert1987Stability; basu2018Stability; yu2013stability, stability is a must-have when drawing interpretations from a certain model. Trustworthy interpretations should not be produced by models that are not stable. Hence, an explainable model should contain information about the confidence of its working regime.

  • Fairness: from a social standpoint, explainability can be considered as the capacity to reach and guarantee fairness in ML models. In a certain literature strand, an explainable ML model suggests a clear visualization of the relations affecting a result, allowing for a fairness or ethical analysis of the model at hand goodman2017Fair; chouldechova2017fair. Likewise, a related objective of XAI is highlighting bias in the data a model was exposed to Burns18; Bennetot19. The support of algorithms and models is growing fast in fields that involve human lives, hence explainability should be considered as a bridge to avoid the unfair or unethical use of algorithms’ outputs.

  • Accessibility: a minor subset of the reviewed contributions argues for explainability as the property that allows end users to get more involved in the process of improving and developing a certain ML model chander2018working; UsersAtChargeOfDesing. It seems clear that explainable models will ease the burden felt by non-technical or non-expert users when having to deal with algorithms that seem incomprehensible at first sight. This concept is expressed as the third most considered goal among the surveyed literature.

  • Interactivity: some contributions harbers2010design; ExplainableAgencyAgents include the ability of a model to be interactive with the user as one of the goals targeted by an explainable ML model. Once again, this goal is related to fields in which the end users are of great importance, and their ability to tweak and interact with the models is what ensures success.

  • Privacy awareness: almost forgotten in the reviewed literature, one of the byproducts enabled by explainability in ML models is its ability to assess privacy. ML models may have complex representations of their learned patterns. Not being able to understand what has been captured by the model Castelvecchi16 and stored in its internal representation may entail a privacy breach. Conversely, the ability of non-authorized third parties to explain the inner relations of a trained model may also compromise the differential privacy of the data origin. Due to their criticality in sectors where XAI is foreseen to play a crucial role, confidentiality and privacy issues will be covered further in later subsections on adversarial robustness and on privacy and data fusion, respectively.

This subsection has reviewed the goals encountered among the broad scope of the reviewed papers. All these goals are clearly under the surface of the concept of explainability introduced before in this section. To round up this prior analysis on the concept of explainability, the last subsection deals with different strategies followed by the community to address explainability in ML models.

2.5 How?

The literature makes a clear distinction between models that are interpretable by design and those that can be explained by means of external XAI techniques. This duality could also be regarded as the difference between interpretable models and model interpretability techniques; a more widely accepted classification is that of transparent models and post-hoc explainability. This same duality also appears in Guidotti19, in which the distinction its authors make is between methods to solve the transparent box design problem and methods to explain the black-box problem. The present work further extends the distinction among transparent models by including the different levels of transparency considered.

Within transparency, three levels are contemplated: algorithmic transparency, decomposability and simulatability (the alternative term simulability is also used in the literature to refer to the capacity of a system or process to be simulated; however, we note that this term does not appear in current English dictionaries). Among post-hoc techniques, we may distinguish text explanations, visualizations, local explanations, explanations by example, explanations by simplification and feature relevance. In this context, there is a broader distinction proposed by WhatDoesExplainableAImean discerning between 1) opaque systems, where the mappings from input to output are invisible to the user; 2) interpretable systems, in which users can mathematically analyze the mappings; and 3) comprehensible systems, in which the models should output symbols or rules along with their specific output to aid in the understanding process of the rationale behind the mappings being made. This last classification criterion could be considered to be included within the one proposed earlier, hence this paper will follow the more specific one.

2.5.1 Levels of Transparency in Machine Learning Models

Transparent models convey some degree of interpretability by themselves. Models belonging to this category can also be approached in terms of the domain in which they are interpretable, namely, algorithmic transparency, decomposability and simulatability. As we elaborate next in connection to Figure 3, each of these classes contains its predecessors, e.g. a simulatable model is at the same time a model that is decomposable and algorithmically transparent:

  • Simulatability denotes the ability of a model to be simulated or thought about strictly by a human, hence complexity takes a dominant place in this class. This being said, simple but extensive (i.e., with an excessively large amount of rules) rule-based systems fall out of this characteristic, whereas a single-perceptron neural network falls within. This aspect aligns with the claim that sparse linear models are more interpretable than dense ones tibshirani1996Simulatability, and that an interpretable model is one that can be easily presented to a human by means of text and visualizations ribeiro2016trust. Again, endowing a decomposable model with simulatability requires that the model be self-contained enough for a human to think and reason about it as a whole.

  • Decomposability stands for the ability to explain each of the parts of a model (input, parameter and calculation). It can be considered as intelligibility as stated in lou2012Decomposability. This characteristic might empower the ability to understand, interpret or explain the behavior of a model. However, as occurs with algorithmic transparency, not every model can fulfill this property. Decomposability requires every input to be readily interpretable (e.g. cumbersome features will not fit the premise). The added constraint for an algorithmically transparent model to become decomposable is that every part of the model must be understandable by a human without the need for additional tools.

  • Algorithmic Transparency can be seen in different ways. It deals with the ability of the user to understand the process followed by the model to produce any given output from its input data. Put differently, a linear model is deemed transparent because its error surface can be understood and reasoned about, allowing the user to understand how the model will act in every situation it may face james2013Transferability. Conversely, this is not possible with deep architectures, as the loss landscape might be opaque kawaguchi2016Transparency; AlgorithmicTransparency since it cannot be fully observed and the solution has to be approximated through heuristic optimization (e.g. through stochastic gradient descent). The main constraint for algorithmically transparent models is that the model has to be fully explorable by means of mathematical analysis and methods.

2.5.2 Post-hoc Explainability techniques for Machine Learning Models

Post-hoc explainability targets models that are not readily interpretable by design by resorting to diverse means to enhance their interpretability, such as text explanations, visual explanations, local explanations, explanations by example, explanations by simplification and feature relevance explanation techniques. Each of these techniques covers one of the most common ways humans explain systems and processes by themselves.

Figure 3: Conceptual diagram exemplifying the different levels of transparency characterizing a ML model M_φ, with φ denoting the parameter set of the model at hand: (a) simulatability; (b) decomposability; (c) algorithmic transparency.

Going further, actual techniques, or better put, groups of techniques, are specified to ease the future work of any researcher intending to look up a specific technique that suits their needs. Beyond this, the classification also includes the type of data to which the techniques have been applied. Note that many techniques might be suitable for many different types of data, although the categorization only considers the type used by the authors who proposed each technique. Overall, post-hoc explainability techniques are divided first by the intention of the author (explanation technique, e.g. explanation by simplification), then by the method utilized (actual technique, e.g. sensitivity analysis), and finally by the type of data to which it was applied (e.g. images).

  • Text explanations deal with the problem of bringing explainability to a model by learning to generate text explanations that help explain the results of the model Bennetot19. Text explanations also include every method generating symbols that represent the functioning of the model. These symbols may portray the rationale of the algorithm by means of a semantic mapping from model to symbols.

  • Visual explanation techniques for post-hoc explainability aim at visualizing the model’s behavior. Many of the visualization methods existing in the literature come along with dimensionality reduction techniques that allow for a simple, human-interpretable visualization. Visualizations may be coupled with other techniques to improve their understanding, and are considered the most suitable way to introduce complex interactions among the variables involved in the model to users not acquainted with ML modeling.

  • Local explanations tackle explainability by segmenting the solution space and giving explanations to less complex solution subspaces that are relevant for the whole model. These explanations can be formed by means of techniques with the differentiating property that they only explain part of the whole system’s functioning.

  • Explanations by example consider the extraction of data examples that relate to the result generated by a certain model, enabling a better understanding of the model itself. Similarly to how humans behave when attempting to explain a given process, explanations by example are mainly centered on extracting representative examples that grasp the inner relationships and correlations found by the model being analyzed.

  • Explanations by simplification collectively denote those techniques in which a whole new system is rebuilt based on the trained model to be explained. This new, simplified model usually attempts to optimize its resemblance to the functioning of its antecedent, while reducing its complexity and keeping a similar performance score. An interesting byproduct of this family of post-hoc techniques is that the simplified model is, in general, easier to implement due to its reduced complexity with respect to the model it represents.

  • Finally, feature relevance explanation methods for post-hoc explainability clarify the inner functioning of a model by computing a relevance score for its managed variables. These scores quantify the influence (sensitivity) each feature has on the output of the model. A comparison of the scores among different variables unveils the importance granted by the model to each of such variables when producing its output. Feature relevance methods can be thought of as an indirect method to explain a model. A minimal sketch of the simplification and feature relevance approaches is given right after this list.
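To make the last two categories more tangible, the following minimal Python sketch is given purely as an illustration: the scikit-learn dataset, the random forest acting as the black box, and all hyperparameters are assumptions of ours, not drawn from the reviewed works. It builds a shallow surrogate tree that mimics an opaque model (explanation by simplification) and ranks inputs by permutation importance (feature relevance):

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Opaque model to be explained post-hoc
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # Explanation by simplification: a shallow surrogate tree mimics the black box's predictions
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_tr, black_box.predict(X_tr))
    fidelity = accuracy_score(black_box.predict(X_te), surrogate.predict(X_te))
    print(f"Surrogate fidelity to the black box: {fidelity:.2f}")

    # Feature relevance: permutation importance scores computed on held-out data
    imp = permutation_importance(black_box, X_te, y_te, n_repeats=10, random_state=0)
    top = np.argsort(imp.importances_mean)[::-1][:5]
    print("Most relevant feature indices:", top)

The fidelity score measures how closely the simplified model resembles its antecedent, which is precisely the design objective stated above for simplification-based techniques, while the permutation scores play the role of the relevance values discussed in the last item.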

Figure 4: Conceptual diagram showing the different post-hoc explainability approaches available for a ML model M_φ.

The above classification (portrayed graphically in Figure 4) will be used when reviewing model-specific and model-agnostic XAI techniques in the following sections (Table 2). For each ML model, a distinction of the contributions to each of these categories is presented in order to give an overall picture of the field’s trends.

  • Linear/Logistic Regression. Simulatability: predictors are human readable and interactions among them are kept to a minimum. Decomposability: variables are still readable, but the number of interactions and predictors involved in them has grown so as to force decomposition. Algorithmic transparency: variables and interactions are too complex to be analyzed without mathematical tools. Post-hoc analysis: not needed.

  • Decision Trees. Simulatability: a human can simulate and obtain the prediction of a decision tree on his/her own, without requiring any mathematical background. Decomposability: the model comprises rules that do not alter data whatsoever, and preserves their readability. Algorithmic transparency: human-readable rules that explain the knowledge learned from data and allow for a direct understanding of the prediction process. Post-hoc analysis: not needed.

  • K-Nearest Neighbors. Simulatability: the complexity of the model (number of variables, their understandability and the similarity measure under use) matches human naive capabilities for simulation. Decomposability: the amount of variables is too high and/or the similarity measure is too complex to be able to simulate the model completely, but the similarity measure and the set of variables can be decomposed and analyzed separately. Algorithmic transparency: the similarity measure cannot be decomposed and/or the number of variables is so high that the user has to rely on mathematical and statistical tools to analyze the model. Post-hoc analysis: not needed.

  • Rule Based Learners. Simulatability: variables included in rules are readable, and the size of the rule set is manageable by a human user without external help. Decomposability: the size of the rule set becomes too large to be analyzed without decomposing it into small rule chunks. Algorithmic transparency: rules have become so complicated (and the rule set size has grown so much) that mathematical tools are needed for inspecting the model behaviour. Post-hoc analysis: not needed.

  • General Additive Models. Simulatability: variables and the interaction among them as per the smooth functions involved in the model must be constrained within human capabilities for understanding. Decomposability: interactions become too complex to be simulated, so decomposition techniques are required for analyzing the model. Algorithmic transparency: due to their complexity, variables and interactions cannot be analyzed without the application of mathematical and statistical tools. Post-hoc analysis: not needed.

  • Bayesian Models. Simulatability: statistical relationships modeled among variables and the variables themselves should be directly understandable by the target audience. Decomposability: statistical relationships involve so many variables that they must be decomposed into marginals so as to ease their analysis. Algorithmic transparency: statistical relationships cannot be interpreted even if already decomposed, and predictors are so complex that the model can only be analyzed with mathematical tools. Post-hoc analysis: not needed.

  • Tree Ensembles. Post-hoc analysis needed: usually model simplification or feature relevance techniques.

  • Support Vector Machines. Post-hoc analysis needed: usually model simplification or local explanation techniques.

  • Multi-layer Neural Network. Post-hoc analysis needed: usually model simplification, feature relevance or visualization techniques.

  • Convolutional Neural Network. Post-hoc analysis needed: usually feature relevance or visualization techniques.

  • Recurrent Neural Network. Post-hoc analysis needed: usually feature relevance techniques.

Table 2: Overall picture of the classification of ML models attending to their level of explainability.

3 Transparent Machine Learning Models

The previous section introduced the concept of transparent models. A model is considered to be transparent if by itself it is understandable. The models surveyed in this section are a suite of transparent models that can fall in one or all of the levels of model transparency described previously (namely, simulatability, decomposability and algorithmic transparency). In what follows we provide reasons for this statement, with graphical support given in Figure 5.

Figure 5: Graphical illustration of the levels of transparency of different ML models considered in this overview: (a) Linear regression; (b) Decision trees; (c) K-Nearest Neighbors; (d) Rule-based Learners; (e) Generalized Additive Models; (f) Bayesian Models.

3.1 Linear/Logistic Regression

Logistic Regression (LR) is a classification model used to predict a dependent variable (category) that is dichotomous (binary); when the dependent variable is continuous, its counterpart is linear regression. This model assumes a linear dependence between the predictors and the predicted variable, impeding a flexible fit to the data. It is precisely this stiffness of the model that keeps it under the umbrella of transparent methods. However, as stated in Section 2, explainability is linked to a certain audience, which makes a model fall under both categories depending on who is to interpret it. This way, logistic and linear regression, although clearly meeting the characteristics of transparent models (algorithmic transparency, decomposability and simulatability), may also demand post-hoc explainability techniques (mainly, visualization), particularly when the model is to be explained to non-expert audiences.
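For reference, the interpretability of this model family stems from the linearity of the log-odds with respect to the predictors:

    P(y = 1 \mid \mathbf{x}) = \sigma(\beta_0 + \boldsymbol{\beta}^\top \mathbf{x}), \qquad \log \frac{P(y = 1 \mid \mathbf{x})}{1 - P(y = 1 \mid \mathbf{x})} = \beta_0 + \beta_1 x_1 + \ldots + \beta_p x_p,

so that each coefficient β_j can be read as the change in log-odds (and e^{β_j} as the odds ratio) associated with a unit increase in x_j while keeping the remaining predictors fixed; the caveats on interpreting such ratios are discussed at the end of this subsection.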

This model has been applied within the Social Sciences for quite a long time, which has pushed researchers to create ways of explaining its results to non-expert users. Most authors agree on the different techniques used to analyze and express the soundness of LR PurposeLR; InteractionLR; AppliedLR; IntroLR, including the overall model evaluation, statistical tests of individual predictors, goodness-of-fit statistics and validation of the predicted probabilities. The overall model evaluation shows the improvement of the applied model over a baseline, showing whether it in fact improves upon a model with no predictors. The statistical significance of single predictors is shown by calculating the Wald chi-square statistic. The goodness-of-fit statistics show the quality of the model’s fit to the data and how significant this is, which can be assessed by resorting to different techniques, e.g. the so-called Hosmer-Lemeshow (H-L) statistic. The validation of predicted probabilities involves testing whether the output of the model corresponds to what is shown by the data. These techniques provide mathematical ways of representing the fitness of the model and its behavior.
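A minimal sketch of this analysis in Python, using statsmodels on synthetic data (the predictors age and dose and every coefficient value are hypothetical, chosen only to make the listing self-contained), could look as follows:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    age = rng.normal(50, 10, n)                      # hypothetical predictor
    dose = rng.normal(1.0, 0.3, n)                   # hypothetical predictor
    logits = -5 + 0.08 * age + 1.2 * dose            # assumed data-generating model, for illustration only
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

    X = sm.add_constant(np.column_stack([age, dose]))
    res = sm.Logit(y, X).fit(disp=0)

    # Statistical significance of individual predictors (Wald tests) and fit summary
    print(res.summary(xname=["const", "age", "dose"]))
    # Overall model evaluation: likelihood-ratio test against the intercept-only model
    print("LR test p-value:", res.llr_pvalue)
    # Odds ratios per predictor (see the caveats on their interpretation below)
    print("Odds ratios:", np.exp(res.params))

The goodness-of-fit checks and the validation of predicted probabilities mentioned above (e.g. the H-L statistic) would require additional computations not shown in this sketch.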

Other techniques from disciplines beyond Statistics can also be adopted for explaining these regression models. Visualization techniques are very powerful when presenting statistical conclusions to users not well versed in statistics. For instance, the work in NaturalFrecuencies shows that when probabilities were used to communicate the results, users were able to estimate the outcomes correctly in only 10% of the cases, as opposed to 46% of the cases when natural frequencies were used. Although logistic regression is among the simplest classification models in supervised learning, there are concepts that must be taken care of.

In this line of reasoning, the authors of CannotDo unveil some concerns with the interpretations derived from LR. They first mention how dangerous it might be to interpret log odds ratios and odds ratios as substantive effects, since they also reflect unobserved heterogeneity. Linked to this first concern, CannotDo also states that a comparison of these ratios across models with different variables might be problematic, since the unobserved heterogeneity is likely to vary, thereby invalidating the comparison. Finally, they also mention that comparing these odds across different samples, groups and time points is risky, since the variation of the heterogeneity is not known across samples, groups and time points. This last paper serves the purpose of visualizing the problems that a model’s interpretation might entail, even when its construction is as simple as that of LR.

It is also interesting to note that, for a model such as logistic or linear regression to maintain decomposability and simulatability, its size must be limited and the variables used must be understandable by their users. As stated in Section 2, if the inputs to the model are highly engineered features that are complex or difficult to understand, the model at hand will be far from decomposable. Similarly, if the model is so large that a human cannot think of it as a whole, its simulatability will be called into question.

3.2 Decision Trees

Decision trees are another example of a model that can easily fulfill every constraint for transparency. Decision trees are hierarchical structures for decision making used to support regression and classification problems quinlan1987simplifying; laurent1976constructing. In the simplest of their flavors, decision trees are simulatable models. However, their properties can render them decomposable or algorithmically transparent.

Decision trees have always lingered in between the different categories of transparent models. Their utilization has been closely linked to decision making contexts, which is why their complexity and understandability have always been considered a paramount matter. A proof of this relevance can be found in the upsurge of contributions to the literature dealing with decision tree simplification and generation quinlan1987simplifying; laurent1976constructing; utgoff1989incremental; quinlan1986induction. As noted above, although capable of fitting every category of transparent models, the individual characteristics of decision trees can push them toward the category of algorithmically transparent models. A simulatable decision tree is one that is manageable by a human user: its size is somewhat small, and the amount of features and their meaning are easily understandable. An increase in size transforms the model into a decomposable one, since its size impedes its full evaluation (simulation) by a human. Finally, further increasing its size and using complex feature relations will make the model only algorithmically transparent, losing the previous characteristics.
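As a simple illustration of a simulatable decision tree, the following Python sketch (scikit-learn on the Iris dataset, chosen here purely as an assumption for illustration) prints the handful of if-then paths that a human can follow and reason about:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

    # A depth-limited tree yields a small set of human-readable decision rules
    print(export_text(tree, feature_names=list(data.feature_names)))

Increasing the depth or the number of engineered features would progressively push the same model from simulatability toward mere algorithmic transparency, as discussed above.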

Decision trees have long been used in decision support contexts due to their off-the-shelf transparency. Many applications of these models fall outside the fields of computation and AI (even information technologies), meaning that experts from other fields usually feel comfortable interpreting their outputs rokach2008data; rovnyak1994decision; nefeslioglu2010assessment. However, their poor generalization properties in comparison with other models make this family less interesting for application to scenarios where predictive performance is a design driver of utmost importance. Tree ensembles aim at overcoming such poor performance by aggregating the predictions performed by trees learned on different subsets of training data. Unfortunately, the combination of decision trees loses every transparency property, calling for the adoption of post-hoc explainability techniques such as the ones reviewed later in the manuscript.

3.3 K-Nearest Neighbors

Another method that falls within transparent models is that of K-Nearest Neighbors (KNN), which deals with classification problems in a methodologically simple way: it predicts the class of a test sample by voting the classes of its K nearest neighbors (where the neighborhood relation is induced by a measure of distance between samples). When used in the context of regression problems, the voting is replaced by an aggregation (e.g. average) of the target values associated with the nearest neighbors.

In terms of model explainability, it is important to observe that predictions generated by KNN models rely on the notion of distance and similarity between examples, which can be tailored depending on the specific problem being tackled. Interestingly, this prediction approach resembles experience-based human decision making, which bases its decisions on the outcomes of similar past cases. This is the rationale behind the wide adoption of KNN in contexts in which model interpretability is a requirement KNNimandoust2013application; KNNli2004application; KNNguo2004knn; KNNjiang2012improved. Furthermore, aside from being simple to explain, the ability to inspect the reasons why a new sample has been classified into a group, and to examine how these predictions evolve when the number of neighbors K is increased or decreased, empowers the interaction between users and the model.

One must keep in mind that, as mentioned before, the class of transparency KNN belongs to depends on the features, the number of neighbors and the distance function used to measure the similarity between data instances. A very high K impedes a full simulation of the model by a human user. Similarly, the use of complex features and/or distance functions hinders the decomposability of the model, restricting its interpretability solely to the transparency of its algorithmic operations.
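As a minimal sketch of this kind of inspection (assuming scikit-learn; the dataset and K are arbitrary choices), the neighbors behind a single prediction can be retrieved and shown to the user:

```python
# Minimal sketch (assumes scikit-learn): the neighbors that cast the votes for
# a KNN prediction double as an example-based explanation.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

x_query = X[:1]                                   # one test instance
print("predicted class:", knn.predict(x_query)[0])
dist, idx = knn.kneighbors(x_query)               # distances and indices of the K neighbors
for d, i in zip(dist[0], idx[0]):
    print(f"neighbor {i}: class={y[i]}, distance={d:.3f}")
```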

3.4 Rule-based Learning

Rule-based learning refers to every model that generates rules to characterize the data it is intended to learn from. Rules can take the form of simple conditional if-then statements or more complex combinations of simple rules. Closely connected to this general family of models, fuzzy rule-based systems are designed for a broader scope of action, allowing for the definition of verbally formulated rules over imprecise domains. Fuzzy systems improve upon classical rule systems along two axes relevant for this paper. First, they yield more understandable models, since they operate in linguistic terms. Second, they perform better than classical rule systems in contexts with certain degrees of uncertainty. Rule-based learners are clearly transparent models that have often been used to explain complex models by generating rules that explain their predictions nunez2002rule; nunez2006rule; RuleExtractionInThere; ProductionRulesFromTrees.

Rule learning approaches have been extensively used for knowledge representation in expert systems RULElangley1995applications. However, a central problem with rule generation approaches is the coverage (amount) and the specificity (length) of the generated rules. This problem relates directly to the intention behind their use in the first place. When building a rule database, a typical design goal sought by the user is to be able to analyze and understand the model. Increasing the number of rules in a model will improve its performance at the expense of compromising its interpretability. Similarly, the specificity of the rules also plays against interpretability, since a rule with a high number of antecedents and/or consequents may become difficult to interpret (a toy illustration is sketched below). Following this same line of reasoning, these two features of a rule-based learner play along with the classes of transparent models presented in Section 2: the greater the coverage or the specificity, the closer the model is to being merely algorithmically transparent. Sometimes, the reason for transitioning from classical rules to fuzzy rules is to relax the constraints on rule size, since a greater range can be covered with less stress on interpretability.
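The toy example below (with purely hypothetical thresholds, not taken from any surveyed system) contrasts a short rule with a more specific one, merely to illustrate how longer antecedents burden interpretation:

```python
# Toy illustration (hypothetical thresholds): rule specificity vs. readability.
def short_rule(petal_length):
    # One antecedent: broad coverage, trivially interpretable.
    return "setosa" if petal_length < 2.5 else "not setosa"

def specific_rule(petal_length, petal_width, sepal_length):
    # Three antecedents: narrower coverage, higher specificity, harder to read.
    if petal_length >= 2.5 and petal_width >= 1.8 and sepal_length >= 6.0:
        return "virginica"
    return "other"

print(short_rule(1.4))                 # -> 'setosa'
print(specific_rule(5.5, 2.0, 6.3))    # -> 'virginica'
```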

Rule-based learners are strong models in terms of interpretability across fields. Their natural and seamless relation to human reasoning makes them very suitable for understanding and explaining other models. If a certain threshold of coverage is reached, a rule wrapper can be considered to contain enough information about a model to explain its behavior to a non-expert user, without forfeiting the possibility of using the generated rules as a standalone prediction model.

3.5 General Additive Models

In statistics, a Generalized Additive Model (GAM) is a linear model in which the value of the variable to be predicted is given by the aggregation of a number of unknown smooth functions defined on the predictor variables. The purpose of such a model is to infer the smooth functions whose aggregate composition approximates the predicted variable. This structure is easily interpretable, since it allows the user to verify the importance of each variable, namely, how it affects (through its corresponding function) the predicted output.
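In its baseline form, this additive structure can be written as follows (standard GAM notation; the smooth functions f_j are learned from data, and a link function may later relate the aggregation to the output, as discussed below):

```latex
\mathbb{E}\left[\, y \mid x_1,\dots,x_p \right] \;=\; \beta_0 + f_1(x_1) + f_2(x_2) + \cdots + f_p(x_p)
```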

As with every other transparent model, the literature is replete with case studies where GAMs are in use, especially in fields related to risk assessment. Compared to other models, they are understandable enough to make users feel confident in using them for practical applications in finance Bankruptcy; BankLoanLoss; FianceScienceTechnology, environmental studies RelationshipsEnviromental, geology GeositeAssesment, healthcare caruana2015Transferability, biology SpeciesDistribution; ButterflyTranscent and energy ElectricityLoad. Most of these contributions use visualization methods to further ease the interpretation of the model. GAMs may also be considered simulatable and decomposable models if the properties mentioned in their definitions are fulfilled, but to an extent that depends on eventual modifications to the baseline GAM model, such as the introduction of link functions to relate the aggregation to the predicted output, or the consideration of interactions between predictors.

All in all, applications of GAMs like the ones exemplified above share one common factor: understandability. The main driver for conducting these studies with GAMs is to understand the underlying relationships that build up the cases under scrutiny. In those cases the research goal is not accuracy for its own sake, but rather the need to understand the problem behind the data and the relationships among the variables involved. This is why GAMs have been accepted in certain communities as their de facto modeling choice, despite their acknowledged performance gap with respect to more complex counterparts.

3.6 Bayesian Models

A Bayesian model usually takes the form of a probabilistic directed acyclic graphical model whose links represent the conditional dependencies between a set of variables. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Similar to GAMs, these models also convey a clear representation of the relationships between features and the target, which in this case are given explicitly by the connections linking variables to each other.
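As a minimal sketch of this disease and symptom reading (all probabilities below are hypothetical and chosen only for illustration), the explicit conditional dependency makes the inference directly traceable:

```python
# Minimal sketch (all probabilities are hypothetical): a two-node network
# Disease -> Symptom queried with Bayes' rule.
p_d = 0.01                 # P(Disease = true), prior
p_s_given_d = 0.90         # P(Symptom = true | Disease = true)
p_s_given_not_d = 0.05     # P(Symptom = true | Disease = false)

# Marginalize over Disease, then apply Bayes' rule.
p_s = p_s_given_d * p_d + p_s_given_not_d * (1 - p_d)
p_d_given_s = p_s_given_d * p_d / p_s

print(f"P(disease | symptom) = {p_d_given_s:.3f}")   # ~0.154 with these numbers
```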

Once again, Bayesian models fall below the ceiling of transparent models: they can be categorized as simulatable, decomposable and algorithmically transparent. However, it is worth noting that under certain circumstances (overly complex or cumbersome variables), a model may lose the first two properties. Bayesian models have been shown to lead to great insights in assorted applications such as cognitive modeling BayesianCognitive; BayesianPsychiatric, fishery RelationshipsEnviromental; BayesianStock, gaming BayesianRTS, climate BayesianClimate, econometrics BayesianEconometrics or robotics BayesianRobot. Furthermore, they have also been utilized to explain other models, such as averaging tree ensembles BayesianTree.

4 Post-hoc Explainability Techniques for Machine Learning Models: Taxonomy, Shallow Models and Deep Learning

When ML models do not meet any of the criteria imposed to declare them transparent, a separate method must be devised and applied to the model to explain its decisions. This is the purpose of post-hoc explainability techniques (also referred to as post-modeling explainability), which aim at communicating understandable information about how an already developed model produces its predictions for any given input. In this section we categorize and review different algorithmic approaches for post-hoc explainability, discriminating between 1) those that are designed for application to ML models of any kind; and 2) those that are designed for a specific ML model and thus cannot be directly extrapolated to any other learner. We now elaborate on the trends identified around post-hoc explainability for different ML models, which are illustrated in Figure 6 in the form of hierarchical bibliographic categories and summarized next:

  • Model-agnostic techniques for post-hoc explainability (Subsection 4.1), which can be applied seamlessly to any ML model disregarding its inner processing or internal representations.

  • Post-hoc explainability techniques that are tailored or specifically designed to explain certain ML models. We divide our literature analysis into two main branches: contributions dealing with post-hoc explainability of shallow ML models, which collectively refers to all ML models that do not hinge on layered structures of neural processing units (Subsection 4.2); and techniques devised for deep learning models, which correspondingly denote the family of neural networks and related variants, such as convolutional neural networks, recurrent neural networks (Subsection 4.3) and hybrid schemes encompassing deep neural networks and transparent models. For each model we perform a thorough review of the latest post-hoc methods proposed by the research community, along with an identification of the trends followed by such contributions.

  • We end our literature analysis with Subsection 4.4, where we present a second taxonomy that complements the more general one in Figure 6 by classifying contributions dealing with the post-hoc explanation of Deep Learning models. To this end we focus on particular aspects related to this family of black-box ML methods, and expose how they link to the classification criteria used in the first taxonomy.

[Figure 6 taxonomy tree, rendered graphically in the original: XAI in ML branches into Transparent Models (Logistic/Linear Regression, Decision Trees, K-Nearest Neighbors, Rule-based Learners, General Additive Models, Bayesian Models) and Post-hoc Explainability. The post-hoc branch splits into Model-Agnostic techniques (explanation by simplification, feature relevance, local explanations, visual explanations) and Model-Specific techniques for Support Vector Machines, Multi-Layer Neural Networks, Convolutional Neural Networks and Recurrent Neural Networks, each with sub-branches such as explanation by simplification, feature relevance, local explanations, explanations by example, text explanations, visual explanations and architecture modification, annotated with the corresponding references.]

Figure 6: Taxonomy of the reviewed literature and trends identified for explainability techniques related to different ML models. References boxed in blue, green and red correspond to XAI techniques using image, text or tabular data, respectively. In order to build this taxonomy, the literature has been analyzed in depth to discriminate whether a post-hoc technique can be seamlessly applied to any ML model, even if it, e.g., explicitly mentions Deep Learning in its title and/or abstract.

4.1 Model-agnostic Techniques for Post-hoc Explainability

Model-agnostic techniques for post-hoc explainability are designed to be plugged into any model with the intent of extracting some information from its prediction procedure. Sometimes, simplification techniques are used to generate proxies that mimic their antecedents, so as to obtain something tractable and of reduced complexity. Other times, the intent is to extract knowledge directly from the models or simply to visualize them so as to ease the interpretation of their behavior. Following the taxonomy introduced in Section 2, model-agnostic techniques may rely on model simplification, feature relevance estimation and visualization techniques:

  • Explanation by simplification. This is arguably the broadest technique under the category of model-agnostic post-hoc methods. Local explanations are also present within this category, since sometimes the simplified models are only representative of certain regions of the original model. Almost all techniques taking this path for model simplification are based on rule extraction. Among the best-known contributions to this approach we encounter the technique of Local Interpretable Model-Agnostic Explanations (LIME) ribeiro2016trust and all its variations ModelAgnosticMusic; NothingElseMatters. LIME builds locally linear models around the predictions of an opaque model to explain it (a minimal usage sketch is given after this list). These contributions fall under explanations by simplification as well as under local explanations. Besides LIME and related flavors, another approach to rule extraction is G-REX GREX. Although it was not originally intended for extracting rules from opaque models, the generic proposition of G-REX has been extended to also account for model explainability purposes RuleExtractionInThere; AccVsComp. In line with rule extraction methods, the work in InterpretableTwoLevel presents a novel approach to learn rules in CNF (Conjunctive Normal Form) or DNF (Disjunctive Normal Form) to bridge from a complex model to a human-interpretable model. Another contribution along the same branch is that in InterpretabilityViaModelExtraction, where the authors formulate model simplification as a model extraction process by approximating the complex model with a transparent one. Simplification is approached from a different perspective in DistillAndCompare, where an approach to distill and audit black-box models is presented. In it, two main ideas are exposed: a method for model distillation and comparison to audit black-box risk scoring models; and a statistical test to check whether the auditing data is missing key features the model was trained with. The popularity of model simplification is evident, given that it temporally coincides with the most recent literature on XAI, including techniques such as LIME or G-REX. This symptomatically reveals that this post-hoc explainability approach is envisaged to continue playing a central role in XAI.

  • Feature relevance explanation techniques aim to describe the functioning of an opaque model by ranking or measuring the influence, relevance or importance each feature has on the prediction output by the model to be explained. An amalgam of propositions is found within this category, each resorting to a different algorithmic approach with the same targeted goal. One fruitful contribution along this path is that of lundberg2017unified, called SHAP (SHapley Additive exPlanations). Its authors presented a method to calculate an additive feature importance score for each particular prediction with a set of desirable properties (local accuracy, missingness and consistency) that its antecedents lacked. Other approaches to tackle the contribution of each feature to predictions are coalitional Game Theory EfficientExplanation and local gradients ExplainingClassifications. Similarly, by means of local gradients, the authors in IndividualClassificationDecisions test the changes needed in each feature to produce a change in the output of the model. In ExploringByRandomization the authors analyze the relations and dependencies found in the model by grouping features that, combined, bring insights about the data. The work in AlgorithmicTransparency presents a broad variety of measures to quantify the degree of influence of inputs on the outputs of systems. Their QII (Quantitative Input Influence) measures account for correlated inputs while measuring influence. In contrast, in SensitivityAnalysis the authors build upon the existing SA (Sensitivity Analysis) to construct a Global SA that extends the applicability of the existing methods. In RealTimeImageSaliency a real-time image saliency method is proposed, which is applicable to differentiable image classifiers. The study in AtributeInteractions presents the so-called Automatic STRucture IDentification method (ASTRID) to inspect which attributes are exploited by a classifier to generate a prediction. This method finds the largest subset of features such that a classifier trained with this subset cannot be distinguished, in terms of accuracy, from a classifier built on the original feature set. In ViaInfluence the authors use influence functions to trace a model's prediction back to the training data, by only requiring an oracle version of the model with access to gradients and Hessian-vector products. Compared to those attempting explanations by simplification, a similar amount of publications were found tackling explainability by means of feature relevance techniques. Many of the contributions date from 2017 and some from 2018, implying that, as with model simplification techniques, feature relevance has also become a vibrant subject of study in the current XAI landscape.

  • Visual explanation techniques are a vehicle to achieve model-agnostic explanations. Representative works in this area can be found in SensitivityAnalysis, which presents a portfolio of visualization techniques to help in the explanation of a black-box ML model built upon the set of extended techniques mentioned earlier (Global SA). Another set of visualization techniques is presented in UsingSensitivityAndVisualization. The authors present three novel SA methods (data based SA, Monte-Carlo SA, cluster-based SA) and one novel input importance measure (Average Absolute Deviation). Finally, VisualizingStatisticalLearning presents ICE (Individual Conditional Expectation) plots as a tool for visualizing the model estimated by any supervised learning algorithm. Visual explanations are less common in the field of model-agnostic techniques for post-hoc explainability. Since the design of these methods must ensure that they can be seamlessly applied to any ML model regardless of its inner structure, creating visualizations from just the inputs and outputs of an opaque model is a complex task. This is why almost all visualization methods falling in this category work along with feature relevance techniques, which provide the information that is eventually displayed to the end user.
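The following minimal sketch shows the LIME usage referred to in the first item of the list above (assuming the `lime` and `scikit-learn` packages; argument names may differ across versions, and the dataset and model are placeholders):

```python
# Minimal sketch (assumes the `lime` and scikit-learn packages): a local,
# simplified explanation of one prediction of an opaque model.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
# Perturb the instance, fit a locally weighted linear model, report top features.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=3
)
print(explanation.as_list())
```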

Several trends emerge from our literature analysis. To begin with, rule extraction techniques prevail in model-agnostic contributions under the umbrella of post-hoc explainability. This could have been intuitively expected if we bear in mind the wide use of rule-based learning as explainability wrappers anticipated in Section 3.4, and the complexity imposed by not being able to get into the model itself. Similarly, another large group of contributions deals with feature relevance. Lately these techniques are gathering much attention from the community when dealing with DL models, with hybrid approaches that utilize particular aspects of this class of models and therefore compromise the independence of the feature relevance method from the model being explained. Finally, visualization techniques propose interesting ways of visualizing the output of feature relevance techniques to ease the task of model interpretation. By contrast, visualization techniques for other aspects of the trained model (e.g. its structure, operations, etc.) are tightly linked to the specific model to be explained.

4.2 Post-hoc Explainability in Shallow ML Models

Shallow ML covers a diversity of supervised learning models. Within these models, there are strictly interpretable (transparent) approaches (e.g. KNN and Decision Trees, already discussed in Section 3). However, other shallow ML models rely on more sophisticated learning algorithms that require additional layers of explanation. Given their prominence and notable performance in predictive tasks, this section concentrates on two popular shallow ML models (tree ensembles and Support Vector Machines, SVMs) that require the adoption of post-hoc explainability techniques for explaining their decisions.

4.2.1 Tree Ensembles and Random Forests

Tree ensembles are arguably among the most accurate ML models in use nowadays. Their advent came as an efficient means to improve the generalization capability of single decision trees, which are usually prone to overfitting. To circumvent this issue, tree ensembles combine different trees to obtain an aggregated prediction/regression. While effective against overfitting, the combination of models makes the interpretation of the overall ensemble more complex than that of each of its constituent tree learners, forcing the user to draw from post-hoc explainability techniques. For tree ensembles, the techniques found in the literature are explanation by simplification and feature relevance; we next examine recent advances in both.

To begin with, many contributions have been presented to simplify tree ensembles while maintaining part of the accuracy accounted for by the added complexity. The authors of domingos1998knowledge pose the idea of training a single, albeit less complex, model from a set of random samples from the data (ideally following the real data distribution) labeled by the ensemble model. Another approach to simplification is that in Intrees, in which the authors create a Simplified Tree Ensemble Learner (STEL). Likewise, MakingTEInterpretable presents the usage of two models (simple and complex), the former in charge of interpretation and the latter of prediction, by means of Expectation-Maximization and Kullback-Leibler divergence. As opposed to what was seen in model-agnostic techniques, not many techniques have been proposed to address explainability in tree ensembles by means of model simplification. It follows that either the proposed techniques are good enough, or model-agnostic techniques already cover the scope of simplification.

Beyond simplification procedures, feature relevance techniques are also used in the field of tree ensembles. Breiman CostComplexityPrunning was the first to analyze variable importance within Random Forests. His method is based on measuring the MDA (Mean Decrease Accuracy) or MIE (Mean Increase Error) of the forest when a certain variable is randomly permuted in the out-of-bag samples. Following this contribution, auret2012interpretation shows, in a real setting, how the usage of variable importance reflects the underlying relationships of a complex system modeled by a Random Forest. Finally, as a crosswise technique among post-hoc explainability approaches, FeatureTweaking proposes a framework that issues recommendations that, if followed, would convert an example from one class to another. This idea attempts to disentangle variable importance in a more descriptive way. In the article, the authors show how these methods can be used to produce recommendations to improve malicious online ads so as to make them rank higher in paying rates.
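A minimal sketch in the spirit of Breiman's permutation-based importance is given below (assuming scikit-learn; note that `permutation_importance` permutes features on held-out data rather than on out-of-bag samples, so it is a close relative rather than an exact reproduction of MDA):

```python
# Minimal sketch (assumes scikit-learn): permutation-based feature relevance
# for a Random Forest, in the spirit of Mean Decrease Accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Accuracy drop when each feature is randomly permuted on held-out data.
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: mean importance = {result.importances_mean[i]:.4f}")
```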

Similar to the trend shown in model-agnostic techniques, for tree ensembles simplification and feature relevance again seem to be the most used schemes. However, contrary to what was observed before, most papers date back to 2017.

4.2.2 Support Vector Machines

Another shallow ML model with a historical presence in the literature is the SVM. SVM models are more complex than tree ensembles, with a much more opaque structure. Many implementations of post-hoc explainability techniques have been proposed to relate what is mathematically described internally in these models to what different authors consider explanations about the problem at hand. Technically, an SVM constructs a hyper-plane or set of hyper-planes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks such as outlier detection. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance (the so-called functional margin) to the nearest training-data point of any class, since in general, the larger the margin, the lower the generalization error of the classifier. SVMs are among the most used ML models due to their excellent prediction and generalization capabilities. From the techniques stated in Section 2, post-hoc explainability applied to SVMs covers explanation by simplification, local explanations, visualizations and explanations by example.

Among explanations by simplification, four classes of simplifications are made, differentiated by how deep they go into the inner structure of the algorithm. First, some authors propose techniques to build rule-based models only from the support vectors of a trained model. This is the approach of barakat2007rule, which proposes a method that extracts rules directly from the support vectors of a trained SVM using a modified sequential covering algorithm. In barakat2008 the same authors propose eclectic rule extraction, still considering only the support vectors of a trained model. The work in Chaves2005 generates fuzzy rules instead of classical propositional rules. Here, the authors argue that long antecedents reduce comprehensibility, hence a fuzzy approach allows for a more linguistically understandable result. The second class of simplifications can be exemplified by fu2004, which proposed adding the SVM's hyper-plane, along with the support vectors, to the components in charge of creating the rules. This method relies on the creation of hyper-rectangles from the intersections between the support vectors and the hyper-plane. In a third approach to model simplification, another group of authors considered adding the actual training data as a component for building the rules. In nunez2002rule; nunez2006; nunez2002B the authors proposed a clustering method to group prototype vectors for each class; combining them with the support vectors allowed defining ellipsoids and hyper-rectangles in the input space. Similarly, in zhang2005 the authors proposed the so-called Hyper-rectangle Rule Extraction, an algorithm based on SVC (Support Vector Clustering) to find prototype vectors for each class and then define small hyper-rectangles around them. In fung2005, the authors formulate the rule extraction problem as a multi-constrained optimization to create a set of non-overlapping rules, where each rule conveys a non-empty hyper-cube sharing an edge with the hyper-plane. In a similar study conducted in chen2007, extracting rules for gene expression data, the authors presented a novel technique as a component of a multi-kernel SVM. This multi-kernel method consists of feature selection, prediction modeling and rule extraction. Finally, the study in intepretationSVM makes use of a growing SVC to give an interpretation to SVM decisions in terms of linear rules that define the space in Voronoi sections derived from the extracted prototypes.

Leaving rule extraction aside, the literature has also contemplated other techniques to contribute to the interpretation of SVMs. Three of them (visualization techniques) are clearly oriented toward explaining SVM models used in concrete applications. For instance, ustun2007visualisation presents an innovative approach to visualize trained SVMs by extracting the information content from the kernel matrix. The study centers on Support Vector Regression models, and shows the ability of the algorithm to visualize which of the input variables are actually related to the associated output data. In interpretingHeatMapSVM a visual approach combines the output of the SVM with heatmaps to guide the modification of compounds in late stages of drug discovery. Colors are assigned to atoms based on the weights of a trained linear SVM, which allows for a much more comprehensive way of debugging the process. In interpretingNeuroSVM the authors argue that many of the existing studies for interpreting SVMs only account for the weight vectors, leaving the margin aside. In their study they show how this margin is important, and they create a statistic that explicitly accounts for the SVM margin. The authors show that this statistic is specific enough to explain the multivariate patterns found in neuroimaging.
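As a minimal sketch of the weight-based reading mentioned above (assuming scikit-learn; the dataset and preprocessing are arbitrary choices), the coefficients and support vectors of a linear SVM can be inspected directly:

```python
# Minimal sketch (assumes scikit-learn): weights and support vectors of a
# linear SVM as a direct post-hoc reading of the trained model.
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)             # put features on a common scale

svm = SVC(kernel="linear").fit(X, y)
weights = svm.coef_[0]                            # one weight per input feature
top = abs(weights).argsort()[::-1][:5]
print("most influential features (by |weight|):", top)
print("support vectors per class:", svm.n_support_)
```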

Noteworthy is also the intersection between SVMs and Bayesian systems, the latter being adopted as a post-hoc technique to explain decisions made by the SVM model. This is the case of probabilisticSVM and bayesianForSVM, which are studies where SVMs are interpreted as MAP (Maximum A Posteriori) solutions to inference problems with Gaussian Process priors. This framework makes tuning the hyper-parameters comprehensible and gives the capability of predicting class probabilities instead of the classical binary classification of SVMs. Interpretability of SVM models becomes even more involved when dealing with non-CPD (Conditional Positive Definite) kernels that are usually harder to interpret due to missing geometrical and theoretical understanding. The work in haasdonk2005feature revolves around this issue with a geometrical interpretation of indefinite kernel SVMs, showing that these do not classify by hyper-plane margin optimization. Instead, they minimize the distance between convex hulls in pseudo-Euclidean spaces.

A difference can be appreciated between the post-hoc techniques applied to other models and those devised for SVMs. For previous models, model simplification in a broad sense was the prominent method for post-hoc explainability. For SVMs, local explanations have started to gain some weight among the propositions. However, simplification-based methods are, on average, much older than local explanations.

As a final remark, none of the reviewed methods treating SVM explainability are dated beyond 2017, which might be due to the progressive proliferation of DL models in almost all disciplines. Another plausible reason is that these models are already understood, so it is hard to improve upon what has already been done.

4.3 Explainability in Deep Learning

Post-hoc local explanations and feature relevance techniques are increasingly the most adopted methods for explaining DNNs. This section reviews explainability studies proposed for the most used DL models, namely multi-layer neural networks, Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN).

4.3.1 Multi-layer Neural Networks

From their inception, multi-layer neural networks (also known as multi-layer perceptrons) have been warmly welcomed by the academic community due to their remarkable ability to infer complex relations among variables. However, as stated in the introduction, developers and engineers in charge of deploying these models in real-life production settings find in their questionable explainability a common reason for reluctance, which is why neural networks have always been considered black-box models. The fact that explainability is often a must for the model to be of practical value has forced the community to generate multiple explainability techniques for multi-layer neural networks, including model simplification approaches, feature relevance estimators, text explanations, local explanations and model visualizations.

Several model simplification techniques have been proposed for neural networks with a single hidden layer; however, very few works have been presented for neural networks with multiple hidden layers. One of these few works is the DeepRED algorithm zilke2016deepred, which extends the decompositional approach to rule extraction (splitting at neuron level) presented in CRED to multi-layer neural networks by adding more decision trees and rules.

Some other works use model simplification as a post-hoc explainability approach. For instance, InterpretableDeepICU presents a simple distillation method called Interpretable Mimic Learning to extract an interpretable model by means of gradient boosting trees. In the same direction, the authors in TreeView propose a hierarchical partitioning of the feature space that reveals the iterative rejection of unlikely class labels until the final association is predicted. In addition, several works address the distillation of knowledge from an ensemble of models into a single model hinton2015distilling; bucilua2006model; Traore19.

Given that the simplification of multi-layer neural networks becomes more complex as the number of layers increases, explaining these models through feature relevance methods has become progressively more popular. One of the representative works in this area is DeepTaylor, which presents a method to decompose the network classification decision into contributions of its input elements. The authors consider each neuron as an object that can be decomposed and expanded, then aggregate and back-propagate these decompositions through the network, resulting in a deep Taylor decomposition. In the same direction, the authors in shrikumar2016not proposed DeepLIFT, an approach for computing importance scores in a multi-layer neural network. Their method compares the activation of a neuron to a reference activation and assigns the score according to the difference.

On the other hand, some works try to verify the theoretical soundness of current explainability methods. For example, the authors in Axiomatic bring up a fundamental problem of most feature relevance techniques designed for multi-layer networks. They showed that two axioms that such techniques ought to fulfill, namely sensitivity and implementation invariance, are violated in practice by most approaches. Following these axioms, the authors of Axiomatic created integrated gradients, a new feature relevance method proven to meet them. Similarly, the authors in LearningHowTo analyzed the correctness of current feature relevance explanation approaches designed for Deep Neural Networks, e.g., DeConvNet, Guided BackProp and LRP, on simple linear neural networks. Their analysis showed that these methods do not produce the theoretically correct explanation, and they presented two new explanation methods, PatternNet and PatternAttribution, that are more theoretically sound for both simple and deep neural networks.
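A minimal sketch of integrated gradients is given below (assuming PyTorch; the model, baseline and number of steps are placeholder choices), approximating the path integral of gradients with a Riemann sum:

```python
# Minimal sketch (assumes PyTorch): integrated gradients between a baseline
# and the input, averaged over interpolation points along the straight path.
import torch

def integrated_gradients(model, x, target_class, baseline=None, steps=50):
    if baseline is None:
        baseline = torch.zeros_like(x)            # common (but not mandatory) choice
    # Interpolation points between the baseline and the input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    interpolated = baseline + alphas * (x - baseline)     # shape (steps, *x.shape)
    interpolated.requires_grad_(True)
    outputs = model(interpolated)[:, target_class].sum()
    grads = torch.autograd.grad(outputs, interpolated)[0]
    avg_grads = grads.mean(dim=0)
    return (x - baseline) * avg_grads                     # attribution per input feature

# Hypothetical usage with a toy linear classifier (placeholder model and input):
model = torch.nn.Sequential(torch.nn.Linear(4, 3))
x = torch.randn(4)
print(integrated_gradients(model, x, target_class=0))
```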

4.3.2 Convolutional Neural Networks

Currently, CNNs constitute the state-of-the-art models in all fundamental computer vision tasks, from image classification and object detection to instance segmentation. Typically, these models are built as a sequence of convolutional and pooling layers that automatically learn increasingly higher-level features. At the end of the sequence, one or multiple fully connected layers map the output feature maps into scores. This structure entails extremely complex internal relations that are very difficult to explain. Fortunately, the road to explainability for CNNs is easier than for other types of models, as human cognitive skills favor the understanding of visual data.

Existing works that aim at understanding what CNNs learn can be divided into two broad categories: 1) those that try to understand the decision process by mapping back the output in the input space to see which parts of the input were discriminative for the output; and 2) those that try to delve inside the network and interpret how the intermediate layers see the external world, not necessarily related to any specific input, but in general.

One of the seminal works in the first category was AdaptiveDeconv. When an input image runs feed-forward through a CNN, each layer outputs a number of feature maps with strong and soft activations. The authors in AdaptiveDeconv used Deconvnet, a network designed previously by the same authors zeiler2010deconvolutional that, when fed with a feature map from a selected layer, reconstructs the maximum activations. These reconstructions can give an idea about the parts of the image that produced that effect. To visualize these strongest activations in the input image, the same authors used the occlusion sensitivity method to generate a saliency map VisualizingUnderstanding, which consists of iteratively forwarding the same image through the network while occluding a different region each time (a minimal sketch of this procedure is given below).
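The sketch below implements the occlusion procedure described above in a minimal form (assuming PyTorch; the patch size, stride and fill value are arbitrary choices):

```python
# Minimal sketch (assumes PyTorch and a CNN `model` taking (1, C, H, W) inputs):
# occlusion sensitivity. The class score is recomputed while a patch slides
# over the image; large drops mark regions the prediction depends on.
import torch

def occlusion_map(model, image, target_class, patch=16, stride=16, fill=0.0):
    model.eval()
    _, H, W = image.shape                         # image is (C, H, W)
    heat = torch.zeros((H + stride - 1) // stride, (W + stride - 1) // stride)
    with torch.no_grad():
        base = model(image.unsqueeze(0))[0, target_class]
        for i, top in enumerate(range(0, H, stride)):
            for j, left in enumerate(range(0, W, stride)):
                occluded = image.clone()
                occluded[:, top:top + patch, left:left + patch] = fill
                score = model(occluded.unsqueeze(0))[0, target_class]
                heat[i, j] = base - score         # drop in confidence at this location
    return heat
# Usage (hypothetical): heat = occlusion_map(cnn, img_tensor, target_class=123)
```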

To improve the quality of the mapping onto the input space, several subsequent papers proposed simplifying both the CNN architecture and the visualization method. In particular, LearningDeepFeatures included a global average pooling layer between the last convolutional layer of the CNN and the fully-connected layer that predicts the object class. With this simple architectural modification of the CNN, the authors built a class activation map that helps identify the image regions that were particularly important for a specific object class by projecting back the weights of the output layer onto the convolutional feature maps. Later, in springenberg2014striving, the authors showed that max-pooling layers can be replaced by convolutional layers with increased stride without loss in accuracy on several image recognition benchmarks, and obtained a cleaner visualization than Deconvnet by using a guided backpropagation method.

To increase the interpretability of classical CNNs, the authors in InterpretableCNN used a loss for each filter in high-level convolutional layers to force each filter to learn very specific object components. The obtained activation patterns are much more interpretable due to their exclusiveness with respect to the different labels to be predicted. The authors in LayerWise proposed visualizing the contribution to the prediction of each single pixel of the input image in the form of a heatmap. They used the Layer-wise Relevance Propagation (LRP) technique, which relies on a Taylor series close to the prediction point rather than partial derivatives at the prediction point itself. To further improve the quality of the visualization, attribution methods such as heatmaps, saliency maps or class activation methods (GradCAM selvaraju2017grad) are used (see Figure 7). In particular, the authors in selvaraju2017grad proposed Gradient-weighted Class Activation Mapping (Grad-CAM), which uses the gradients of any target concept flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept (a hedged sketch is given below).
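The following minimal sketch follows the basic Grad-CAM recipe (assuming PyTorch and torchvision; the backbone, target layer and input are placeholders, and hook APIs may differ slightly across versions):

```python
# Minimal sketch (assumes PyTorch/torchvision; placeholder backbone and input):
# Grad-CAM via forward/backward hooks on the last convolutional block.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

layer = model.layer4                      # assumed target: last conv block of ResNet-18
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)           # placeholder image
score = model(x)[0].max()                 # score of the top predicted class
score.backward()

# Channel weights = global-average-pooled gradients; weighted sum + ReLU.
w = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
print(cam.shape)                          # coarse localization map, upsampled to input size
```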

(a) Heatmap Burns18 (b) Attribution Olah17 (c) Grad-CAM selvaraju2017grad
Figure 7: Examples of rendering for different XAI visualization techniques on images.

In addition to the aforementioned feature relevance and visual explanation methods, some works propose generating text explanations of the visual content of the image. For example, the authors in xu2015show combined a CNN feature extractor with an RNN attention model to automatically learn to describe the content of images. In the same line, xiao2015application presented a three-level attention model to perform a fine-grained classification task. The general model is a pipeline that integrates three types of attention: the object-level attention model proposes candidate image regions or patches from the input image, the part-level attention model filters out patches that are not relevant to a certain object, and the last attention model localizes discriminative patches. For the task of video captioning, the authors in ImprovingInterpretability use a CNN model combined with a bi-directional LSTM model as an encoder to extract video features, and then feed these features to an LSTM decoder to generate textual descriptions.

One of the seminal works in the second category is UnderstandingDeep. In order to analyze the visual information contained inside the CNN, the authors proposed a general framework that reconstructs an image from the CNN internal representations and showed that several layers retain photographically accurate information about the image, with different degrees of geometric and photometric invariance. To visualize the notion of a class captured by a CNN, the same authors created an image that maximizes the class score, based on computing the gradient of the class score with respect to the input image InsideConv. In the same direction, the authors in SynthesizingPreferredInputs introduced a Deep Generator Network (DGN) that generates the most representative image for a given output neuron in a CNN.

For quantifying the interpretability of the latent representations of CNNs, the authors in QuantifyingInterpretability used a different approach called network dissection. They run a large number of images through a CNN and then analyze the top activated images by considering each unit as a concept detector to further evaluate each unit for semantic segmentation. This paper also examines the effects of classical training techniques on the interpretability of the learned model.

Although many of the techniques examined above utilize local explanations to achieve an overall explanation of a CNN model, others explicitly focus on building global explanations based on locally found prototypes. In adebayo2018local; adebayo2018sanity, the authors empirically showed how local explanations in deep networks are strongly dominated by their lower level features. They demonstrated that deep architectures provide strong priors that prevent the altering of how these low-level representations are captured. All in all, visualization mixed with feature relevance methods are arguably the most adopted approach to explainability in CNNs.

Instead of using one single interpretability technique, the framework proposed in Olah18 combines several methods to provide much more information about the network. For example, combining feature visualization (what is a neuron looking for?) with attribution (how does it affect the output?) allows exploring how the network decides between labels. This visual interpretability interface displays different blocks such as feature visualization and attribution depending on the visualization goal. This interface can be thought of as a union of individual elements that belong to layers (input, hidden, output), atoms (a neuron, channel, spatial or neuron group), content (activations – the amount a neuron fires, attribution – which classes a spatial position most contributes to, which tends to be more meaningful in later layers), and presentation (information visualization, feature visualization). Figure 8 shows some examples. Attribution methods normally rely on pixel association, displaying what part of an input example is responsible for the network activating in a particular way Olah17.

(a) Neuron (b) Channel (c) Layer
Figure 8: Feature visualization at different levels of a certain network Olah17.
(a) Original image (b) Explaining electric guitar (c) Explaining acoustic guitar
Figure 9: Examples of explanation when using LIME on images LIME.

A much simpler approach than all the previously cited methods was proposed in the LIME framework LIME, described in Subsection 4.1. LIME perturbs the input and observes how the predictions change. In image classification, LIME creates a set of perturbed instances by dividing the input image into interpretable components (contiguous superpixels), and runs each perturbed instance through the model to get a probability. A simple, locally weighted linear model is then learned on this dataset. At the end of the process, LIME presents the superpixels with the highest positive weights as an explanation (see Figure 9).

A completely different explainability approach was proposed for adversarial detection. To understand model failures in detecting adversarial examples, the authors in papernot2018deep apply the k-nearest neighbors algorithm to the representations of the data learned by each layer of the CNN. A test input image is considered adversarial if its representations are far from the representations of the training images.

4.3.3 Recurrent Neural Networks

As occurs with CNNs in the visual domain, RNNs have lately been used extensively for predictive problems defined over inherently sequential data, with a notable presence in natural language processing and time series analysis. These types of data exhibit long-term dependencies that are difficult for an ML model to capture. RNNs are able to retrieve such time-dependent relationships by formulating the retention of knowledge in the neuron as another parametric characteristic that can be learned from data.

Few contributions have been made for explaining RNN models. These studies can be divided into two groups: 1) explainability by understanding what a RNN model has learned (mainly via feature relevance methods); and 2) explainability by modifying RNN architectures to provide insights about the decisions they make (local explanations).

In the first group, the authors in ExplainingRNN extend the usage of LRP to RNNs. They propose a specific propagation rule that works with multiplicative connections such as those in LSTM (Long Short-Term Memory) units and GRUs (Gated Recurrent Units). The authors in VisualizingUnderstandingRNN propose a visualization technique based on finite-horizon n-grams that discriminates interpretable cells within LSTM and GRU networks. Following the premise of not altering the architecture, DistillingRNN extends the interpretable mimic learning distillation method used for CNN models to LSTM networks, so that interpretable features are learned by fitting Gradient Boosting Trees to the trained LSTM network under focus.

Aside from the approaches that do not change the inner workings of the RNN, RETAIN presents the REverse Time AttentIoN (RETAIN) model, which detects influential past patterns by means of a two-level neural attention model. To create an interpretable RNN, the authors in InterpretableRNN propose an RNN based on SISTA (Sequential Iterative Soft-Thresholding Algorithm) that models a sequence of correlated observations with a sequence of sparse latent vectors, making its weights interpretable as the parameters of a principled statistical model. Finally, MarkovRNN constructs a combination of an HMM (Hidden Markov Model) and an RNN, so that the overall approach harnesses the interpretability of the HMM and the accuracy of the RNN.

4.3.4 Hybrid Transparent and Black-box Methods

The use of background knowledge in the form of logical statements or constraints in Knowledge Bases (KBs) has been shown not only to improve explainability but also performance with respect to purely data-driven approaches Donadello17; donadello2018semantic; dAvilaGarcez19NeSy. A positive side effect is that this hybrid approach provides robustness to the learning system when errors are present in the training data labels. Other approaches have shown the ability to jointly learn and reason with both symbolic and sub-symbolic representations and inference. The interesting aspect is that this blend allows for expressive probabilistic-logical reasoning in an end-to-end fashion manhaeve2018deepproblog. A successful use case is on dietary recommendations, where explanations are extracted from the reasoning behind (non-deep but KB-based) models Donadello19.

Future data fusion approaches may thus consider endowing DL models with explainability by externalizing other domain information sources. Deep formulations of classical ML models have been proposed, e.g. Deep Kalman Filters (DKFs) Krishnan15, Deep Variational Bayes Filters (DVBFs) Karl16, Structural Variational Autoencoders (SVAE) Johnson16, or conditional random fields as RNNs Zheng15. These approaches provide deep models with the interpretability inherent to probabilistic graphical models. For instance, SVAE combines probabilistic graphical models in the embedding space with neural networks to enhance the interpretability of DKFs. A particular example of a classical ML model enhanced with its DL counterpart is Deep k-Nearest Neighbors (DkNN) papernot2018deep, where the neighbors constitute human-interpretable explanations of predictions. The intuition is based on rationalizing a DNN prediction with evidence. This evidence consists of a characterization of confidence termed credibility that spans the hierarchy of representations within a DNN, and must be supported by the training data papernot2018deep.
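A minimal sketch of this example-based reading is given below (assuming PyTorch and scikit-learn; the encoder and data are random placeholders standing in for a trained DNN layer and its training set):

```python
# Minimal sketch (placeholder encoder and data): explaining a prediction by
# retrieving the nearest training examples in a hidden-representation space,
# in the spirit of DkNN-style evidence.
import torch
from sklearn.neighbors import NearestNeighbors

torch.manual_seed(0)
encoder = torch.nn.Sequential(torch.nn.Linear(10, 8), torch.nn.ReLU())  # stands in for a DNN layer
X_train = torch.randn(100, 10)
x_test = torch.randn(1, 10)

with torch.no_grad():
    train_repr = encoder(X_train).numpy()
    test_repr = encoder(x_test).numpy()

nn_index = NearestNeighbors(n_neighbors=5).fit(train_repr)
dist, idx = nn_index.kneighbors(test_repr)
# The retrieved indices point to training cases serving as example-based evidence.
print("nearest training examples:", idx[0], "distances:", dist[0])
```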

Figure 10: Pictorial representation of a hybrid model. A neural network considered as a black-box can be explained by associating it to a more interpretable model such as a Decision Tree Narodytska18, a (fuzzy) rule-based system Fernandez19 or KNN papernot2018deep.

A different perspective on hybrid XAI models consists of enriching the knowledge of black-box models with that of transparent ones, as proposed in WhatDoesExplainableAImean and further refined in Bennetot19. In particular, this can be done by constraining the neural network with a semantic KB and bias-prone concepts Bennetot19.

Other examples of hybrid symbolic and sub-symbolic methods, where a knowledge-base tool or graph perspective enhances the neural (e.g., language petroni2019language) model, can be found in Bollacker19; Shang19. In reinforcement learning, very few examples of symbolic (graphical Zolotas19 or relational santoro2017simple; garnelo2016towards) hybrid models exist, while in recommendation systems, for instance, explainable autoencoders have been proposed Bellini18. A symbolic visualization method for a specific transformer architecture (applied to music) pictorially shows how soft-max attention works huang2018music. By visualizing self-reference, i.e., the last layer of attention weights, arcs show which notes in the past are informing the future and how attention skips over less relevant sections. Transformers can also help explain image captions visually cornia2019smart.

Another hybrid approach consists of mapping an uninterpretable black-box system to a white-box twin that is more interpretable. For example, an opaque neural network can be combined with a transparent Case Based Reasoning (CBR) system Aamodt94; Caruana99. In Keane19, the DNN and the CBR (in this case a kNN) are paired in order to improve interpretability while keeping the same accuracy. The explanation by example consists of analyzing the feature weights of the DNN which are then used in the CBR, in order to retrieve nearest-neighbor cases to explain the DNN’s prediction.

4.4 Alternative Taxonomy of Post-hoc Explainability Techniques for Deep Learning

DL models are the family where most research has been concentrated in recent times, and they have become central to most of the recent literature on XAI. While the division between model-agnostic and model-specific methods is the most common distinction made, the community has not relied only on this criterion to classify XAI methods. For instance, some model-agnostic methods such as SHAP lundberg2017unified are widely used to explain DL models. That is why several XAI methods can be categorized in different taxonomy branches depending on the angle from which the method is viewed. An example is LIME, which can also be used over CNNs despite not being exclusively designed to deal with images. Searching within the alternative DL taxonomy shows that LIME can explicitly be used for Explaining Deep Network Processing, as a kind of Linear Proxy Model. Another type of classification is indeed proposed in Gilpin18, with a segmentation based on three categories. The first category groups methods explaining the processing of data by the network, thus answering the question “why does this particular input lead to this particular output?”. The second one concerns methods explaining the representation of data inside the network, i.e., answering the question “what information does the network contain?”. The third approach concerns models specifically designed to simplify the interpretation of their own behavior. Such a multiplicity of classification possibilities leads to different ways of constructing XAI taxonomies.