I. Introduction
Predictive modelling is increasingly deployed in high-stakes environments, e.g., in the criminal justice system [11], loan approval [33], recruiting [9] and medicine [28]. Due to legal regulations [30, 10] and ethical considerations, ML methods must not only perform robustly in such environments but also be able to justify their recommendations in a human-intelligible fashion. This development has given rise to the field of interpretable machine learning (IML), which studies methods that provide insight into the relevance of features for model performance, referred to as feature importance.
Prominent feature importance techniques include permutation feature importance (PFI) [5, 12] and conditional feature importance (CFI) [25, 12, 18]. PFI is based on replacing the feature of interest with a perturbed version sampled from the marginal distribution, while CFI perturbs the feature such that the conditional distribution with respect to the set of remaining features is preserved. The sampling strategy defines the method's reference point and therefore affects the method's implicit notion of relevance. While PFI quantifies the overall reliance of the model on the feature of interest, CFI quantifies its unique contribution given all remaining features.
While both PFI and CFI are useful, they fail to answer more nuanced questions of feature importance. For instance, a stakeholder may be interested in the importance of a feature relative to a subset of features. Also, the user may want to know how important a feature is relative to variables that had not been available at training time.
We suggest relative feature importance (RFI) as a generalization of PFI and CFI that moves beyond the dichotomy between PFI, which breaks all dependencies of the feature of interest with the remaining features, and CFI, which preserves all of them. In contrast to PFI and CFI, RFI is based on a perturbation that is restricted to preserve the relationships with a set of variables that can be chosen arbitrarily. We show that RFI is (1) semantically meaningful and (2) practically useful.
We demonstrate the semantic meaning of RFI in Section IV. In particular, we derive general interpretation rules that link nonzero RFI to (1) the conditional dependence of the feature of interest with the target and the nonconditioned features given the conditioned variables in the data and (2) the conditional dependence of the input to the feature of interest with the model's prediction given fixed inputs to the remaining features (Theorem 1). Furthermore, we show that a nonzero difference between RFI^G_j and RFI^{G∪B}_j, with B being an arbitrary set disjoint from G, implies the conditional dependence X_j ⊥̸ X_B | X_G (Theorem 2).
In Section V, we provide an implementation of RFI estimation that is based on recent results from the related knockoff research field [7, 23]. Furthermore, we translate the testing framework developed for conditional feature importance [31] to RFI. We support our theoretical analysis and findings by various simulation studies in Section VI. In particular, we show that RFI can expose the indirect contribution of variables that are not directly used by the model but provide information via dependent variables (Section VI-A). Similarly, we show how RFI can be used to assess feature importance with respect to variables not included at training time (Section VI-B).

I-A. Contributions and Related Work
While conditioning on subsets of variables has been suggested before [25, 12], the implications of this generalized variant of CFI have not yet been rigorously analyzed. Some IML methods perturb or hide subsets of features; e.g., in the context of multiple regression, relative importance analysis is a model-specific technique that averages over the importances of models trained on feature subsets [6, 16]. Model-agnostic, local approximations to the respective feature effect that avoid retraining and instead perturb subsets of features have also been proposed [26, 17]. A very recent global, model-agnostic feature importance proposal called SAGE quantifies feature importance by perturbing multiple features [8].
While the aforementioned approaches are all based on removing several features to provide more nuanced insight into the model, our proposal only modifies the feature of interest. Our approach is model-agnostic and global, while most aforementioned approaches are model-specific or local. The exception is the global, model-agnostic SAGE [8]; however, the approaches are not only computationally but also semantically different. E.g., our method assigns an importance of zero to features that are not used by the model (a proof of this property is given in Lemma 2), which is not the case for SAGE. While our approach aims to provide nuanced insights into variable importance relative to a specific set, SAGE aims to quantify the overall importance of variables for the model.
Feature importance relative to variables that have not been included in the training set has not been studied before. The indirect influence of variables that the model does not computationally rely on but statistically depends on has been studied, e.g., in [1].
II. Background and Notation
II-A. Notation
We denote the target variable, i.e., the variable the model predicts, as Y and the feature variables by X = (X_1, …, X_n). We refer to the variables as features to emphasize that they were used in model training. Their observations are denoted by y and x. We use D for the index set of all features included in model training and j for the index of our feature of interest, X_j. The index set of the remaining variables is denoted as R := D ∖ {j} (rest, remainder). The index set of features relative to which the importance of X_j is considered is denoted as G. As G can refer to any index set of variables, we denote its intersection with R as G ∩ R and its complement as G ∖ R. We denote the index set of conditioning variables that were not made available to the model during training as G ∖ D.
In case we add new elements B to the conditioning set G, we denote the extended set as G ∪ B. The set B may include variables within and outside R; the respective components are denoted as B ∩ R and B ∖ R. The remainder of R without G and B is denoted as R ∖ (G ∪ B).
We denote the perturbed variable of interest relative to G as X̃^G_j. We refer to the original and the perturbed probability distribution as the observational and the interventional distribution, respectively. The inspected model is denoted as f, its prediction as ŷ = f(x). Independence of X and Y conditional on Z is denoted as X ⊥ Y | Z, the respective conditional dependence as X ⊥̸ Y | Z.

II-B. Feature Importance
Performance-based feature importance methods assess the relevance of a feature of interest X_j by assessing the impact of a perturbation of X_j on the model's performance. Local feature importance methods focus on the importance of features for specific data points, whereas global feature importance methods assess the impact over the whole domain. In the following, we focus on global methods.
Global feature importance is computed according to the following general schema:

FI_j = R̃_j − R,

where we denote the original risk of the model and the risk after perturbing X_j as R and R̃_j, respectively. For estimation, the true risk is replaced with the empirical risk.
Feature importance methods furthermore differ in how they perturb X_j and whether they rely on retraining the model. While some methods retrain the model after the perturbation (e.g., LOCO [15]), others evaluate the impact of the perturbation on the same original model (e.g., [5, 25]). In this work, we focus on methods that avoid retraining.
For methods that avoid retraining, we observe a dichotomy between two general perturbation approaches: resampling that preserves the marginal distribution and resampling that preserves the conditional distribution. Marginal resampling was originally proposed to compute perturbed versions of X_j by permuting its observations within the sample [5]. The respective sample breaks the dependence between X_j and the remaining variables while preserving the marginal distribution P(X_j). More recently, Model Reliance was proposed [12], which takes the expectation over all possible permutations. Resampling from the marginal distribution has been criticized for introducing bias, in particular because it overestimates the importance of correlated variables [25], resulting in incorrect feature rankings [27].
It also leads to extrapolation under dependent features [14, 18], i.e., conclusions about the model are drawn using unrealistic data points on which the model was not trained. CFI, on the other hand, samples from the conditional distribution P(X_j | X_R) [25, 29, 2, 7, 12, 14, 18]. A large variety of model-specific methods exist [13, 32]. Conditional variants quantify the importance of a feature given the information that all remaining features contain about it [20], thereby avoiding evaluation of the model on unrealistic data points [18].
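The marginal resampling behind PFI can be sketched in a few lines. The following minimal illustration (our own toy model and data, not from the paper) permutes the column of interest and reports the resulting increase in risk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x2 is a noisy copy of x1; the target depends on x1 only.
n = 5000
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)
X = np.column_stack([x1, x2])

def model(X):
    # Fixed "fitted" linear predictor that relies on x1 only.
    return 2.0 * X[:, 0]

def mse(y, pred):
    return np.mean((y - pred) ** 2)

def pfi(model, X, y, j, rng):
    """Marginal permutation feature importance of feature j:
    risk after permuting column j minus the original risk."""
    X_pert = X.copy()
    X_pert[:, j] = rng.permutation(X_pert[:, j])
    return mse(y, model(X_pert)) - mse(y, model(X))

print(pfi(model, X, y, 0, rng))  # large: the model relies on x1
print(pfi(model, X, y, 1, rng))  # exactly 0: x2 is not used by the model
```

Note that PFI assigns x2 an importance of exactly zero even though x2 is highly informative about the target, while CFI would, conversely, shrink the importance of x1 because x2 carries almost the same information.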
III. Relative Feature Importance
Relative feature importance is a general framework that assesses feature importance relative to an arbitrary variable set G. The framework subsumes PFI and CFI as two extreme special cases.
In PFI, X_j is replaced with a perturbed version that preserves the marginal distribution P(X_j) while breaking the dependencies with the target and all other features. In CFI, a perturbed version of X_j is used that preserves the conditional distribution P(X_j | X_R), thereby only breaking the conditional dependence between X_j and the target given all remaining features. As our analysis in Section IV establishes, the replacement strategies of PFI and CFI define extreme reference points: CFI quantifies the contribution relative to all remaining features X_R, whereas PFI regards a feature in isolation.
We go beyond the PFI versus CFI dichotomy. We argue that it is (1) meaningful (Section IV) and (2) practically useful (Section VI) to replace X_j with perturbed versions that preserve the conditional distribution with respect to an arbitrary set G with j ∉ G. G can be a subset of R, but can also include variables not available at training time, such that G ⊄ D. We term the resulting method relative feature importance (RFI):
Definition 1 (Relative Feature Importance – RFI)
We define relative feature importance with respect to a feature set G with j ∉ G and a fixed model f as

RFI^G_j := R̃^G_j − R,

where R̃^G_j is the risk w.r.t. the replacement variable X̃^G_j and R refers to the original risk. The replacement variable X̃^G_j has to satisfy

P(X̃^G_j | X_G) = P(X_j | X_G)

and

X̃^G_j ⊥ (Y, X_R) | X_G.
In the following section, we discuss the semantic meaning of RFI. The estimation of RFI is discussed in Section V.
IV. Interpreting Relative Feature Importance
IML techniques aim to provide insight into the model and, possibly, into the underlying data-generating mechanism. However, IML techniques themselves are subject to interpretation. The characterization of an IML method by its mathematical definition is computationally precise, but offers limited guidance for users who want to draw conclusions about the underlying model and data. In this section, we provide a (non-comprehensive) list of interpretation rules for RFI that characterize the method by how it behaves in its context. This context includes both the model and the underlying data-generating mechanism. More specifically, we link RFI to (conditional) independence in the underlying data as well as to whether the model's prediction f(x_j, x_R) is constant in the argument x_j for fixed x_R. While RFI can be used for the quantification of feature importance, we focus our analysis on relevance as a binary property and characterize relative feature relevance (RFI^G_j ≠ 0). We show that the implicit notion of relevance of RFI is defined by the choice of G. By modifying the conditioning set beyond the PFI versus CFI dichotomy, we are able to gain insight into more nuanced aspects of the model and the data-generating mechanism. The main results are given in Theorem 1 and Theorem 2. Furthermore, we highlight limitations for the interpretation stemming from the choice of the loss function and the model fit, which are, in our opinion, underrepresented in the current discussion. We structure our analysis by taking the user's perspective and asking "What can we infer from relative feature relevance?".
IV-A. Implications of Relative Feature Relevance
In the following, we analyze the implications of RFI without further assumptions about model and data. We thereby distinguish between two levels of explanation: relative feature relevance provides insight into both the model and the data.
Theorem 1
If RFI^G_j ≠ 0, then

X_j ⊥̸ (Y, X_{R∖G}) | X_G

in the underlying distribution (data level), and

ŷ ⊥̸ X̃^G_j | X_R

w.r.t. the interventional distribution (model level).
We prove Theorem 1 in two steps. First, we assess the implications of the respective independence for the underlying data set (Lemma 1). Then, we assess the implications of the respective independence for the model (Lemma 2). The contrapositions yield Theorem 1.
Lemma 1
If X_j ⊥ (Y, X_{R∖G}) | X_G for any G with j ∉ G, then RFI^G_j = 0.
We base the proof of Lemma 1 on the insight that (because the model is fixed) an equivalence in distribution implies an equivalence in risk (Proposition 1). Therefore, conditions under which the interventional distribution coincides with the observational distribution are sufficient for RFI^G_j = 0.
Proposition 1
If the observational and the interventional distribution coincide, then the risks with and without perturbation are equal: R̃^G_j = R.

Proof:
Given that P(X̃^G_j, X_R, Y) = P(X_j, X_R, Y), we can write

R̃^G_j = E[L(Y, f(X̃^G_j, X_R))] = E[L(Y, f(X_j, X_R))] = R.
We show next that the conditional independence X_j ⊥ (Y, X_{R∖G}) | X_G is a sufficient condition for the identity of both distributions.

Proof:
The interventional distribution factorizes as P(X̃^G_j, X_R, Y) = P(X̃^G_j | X_G) P(X_R, Y), with P(X̃^G_j | X_G) = P(X_j | X_G). If X_j ⊥ (Y, X_{R∖G}) | X_G, then P(X_j | X_G) = P(X_j | X_R, Y), and hence P(X̃^G_j, X_R, Y) = P(X_j | X_R, Y) P(X_R, Y) = P(X_j, X_R, Y), i.e., both distributions coincide. Together with Proposition 1, Lemma 1 follows.
So far, we have assessed implications for the underlying data-generating mechanism. Next, we assess implications for the inspected model f.
Lemma 2
If ŷ ⊥ X̃^G_j | X_R w.r.t. the interventional distribution, then RFI^G_j = 0 for any G.
Proof:
If the prediction for an observation is independent of the value x_j w.r.t. the interventional distribution, the prediction is unaffected when replacing x_j with any value x̃_j that has nonzero probability. Consequently, any sample from P(X̃^G_j | X_G) yields the same prediction.
Furthermore, values with nonzero probability under the observational distribution also have nonzero probability under the interventional distribution. The interventional distribution can be rewritten as P(X̃^G_j, X_R) = P(X_j | X_G) P(X_R). Similarly, the observational distribution can be factorized into P(X_j, X_R) = P(X_j | X_R) P(X_R). As the support of P(X_j | X_R) is contained in the support of P(X_j | X_G) (which can be derived from, e.g., the law of total probability), it follows that the support of the observational distribution is contained in the support of the interventional distribution. Consequently, the prediction for any value x_j with positive probability is identical given unchanged x_R.
As the conditional distributions of X_j and X̃^G_j overlap and the distribution of X_R is unaffected, the prediction is identical with and without perturbation. Therefore, R̃^G_j = R and RFI^G_j = 0.
To summarize, we have shown that independence on the data level and on the model level respectively imply RFI^G_j = 0; the contrapositions prove Theorem 1.
Theorem 1 shows that nonzero RFI^G_j implies dependencies between sets of variables on the model level as well as on the data level. Which dependencies are relevant for RFI^G_j can be controlled with the conditioning set G. Consequently, the conditioning set determines the method's implicit definition of relevance.
On the data level, if X_j ⊥ (Y, X_{R∖G}) | X_G holds, RFI^G_j is zero irrespective of any other dependencies that may hold, e.g., with X_G (Lemma 1). Nonzero RFI, i.e., a difference in performance between the interventional and the observational distribution, can only be caused by dependencies that have been destroyed in the interventional distribution; the dependencies with X_G and via X_G are preserved by the replacement and can therefore not be responsible for nonzero RFI^G_j. Similarly, on the model level, ŷ ⊥ X̃^G_j | X_R over the interventional distribution yields zero RFI (Lemma 2). The behavior of the model outside the domain in which it is evaluated is irrelevant for RFI^G_j; what domain the model is evaluated over depends on the choice of G.
Because we can control RFI's implicit definition of relevance with G, RFI allows more nuanced insights into model and data than PFI or CFI alone. In Theorem 1, we aim to make the implicit definition of relevance explicit. On the data level, nonzero RFI implies the dependence of X_j with the tuple (Y, X_{R∖G}) given X_G. In order to understand this dependence, using the graphoid axioms contraction and weak union [21], the equivalent formulation below can be adduced:

X_j ⊥̸ Y | X_{R∖G}, X_G  or  X_j ⊥̸ X_{R∖G} | X_G.
At least one of the two conditional dependencies has to hold for nonzero RFI^G_j. The first dependence can be rephrased as: X_j is informative of Y, even if we already know X_{R∖G} and X_G. It is more difficult to make sense of the second dependence. Under dependent features (X_j ⊥̸ X_{R∖G} | X_G), the joint distribution of X_j with X_{R∖G} is not preserved under the perturbation. In the interventional distribution, observations that are improbable or impossible w.r.t. the observational distribution can be possible and probable (and vice versa). Consequently, in the interventional distribution, the feature distribution differs from the observational feature distribution. Even if X_j ⊥ Y | X_R holds, the model may perform suboptimally due to this distribution shift and cause nonzero RFI. (Let, e.g., X_j and X_{R∖G} be perfectly correlated and independent of Y. Then adding X_j does not alter the prediction performance unless the dependence between the variables is broken; also see [14] for a discussion in the context of PFI.) If the conditioning set is a superset of R (G ⊇ R), such that the set of remaining variables R ∖ G is empty, the second dependence cannot hold. Therefore, nonzero RFI must be attributed to the first dependence for G ⊇ R.
On the model level, nonzero RFI implies that the model's prediction is conditionally dependent on X̃^G_j given that the remaining features X_R are fixed. E.g., for a linear model that has coefficient zero for all terms involving x_j, this dependence would not be fulfilled, and RFI^G_j would be zero (Lemma 2). The model is evaluated over the interventional distribution, which varies depending on G. If G contains a nearly perfect correlate of X_j, X_j can be reconstructed well. In contrast, if G = ∅, for every possible x_R the model is evaluated over the whole marginal distribution of X_j. Although choosing a smaller set G leads to extrapolation under dependent features, it allows more insight into the model's mechanism. For interpretation purposes like safety, this can be highly desirable.
In the preceding paragraphs, we have highlighted the importance of the conditioning set for the method's implicit notion of relevance and illustrated the results from Theorem 1. We have argued that the conditioning set controls which potential dependencies can be responsible for nonzero RFI^G_j. These insights lead to a further, interesting application of RFI. By assessing the difference in RFI when modifying the conditioning set G by adding new elements B, we are able to assess the role of the dependencies with variables in B relative to a baseline G. While for RFI^G_j only dependencies with X_G and via X_G are preserved, for RFI^{G∪B}_j also dependencies with X_B and via X_B are maintained. If the difference RFI^G_j − RFI^{G∪B}_j is nonzero, this change has to be due to dependencies involving X_B, but not X_G alone. We substantiate this claim with Theorem 2: in order for RFI^G_j − RFI^{G∪B}_j to be nonzero, the dependence X_j ⊥̸ X_B | X_G has to hold.
Theorem 2
If the difference RFI^G_j − RFI^{G∪B}_j ≠ 0, then X_j ⊥̸ X_B | X_G.

Proof:
If X_j ⊥ X_B | X_G, then P(X_j | X_G) = P(X_j | X_{G∪B}), so the replacement variables X̃^G_j and X̃^{G∪B}_j coincide in distribution. Hence, R̃^G_j = R̃^{G∪B}_j and the difference of the RFIs is zero. The contraposition yields the claim.
While nonzero RFI^G_j as well as a nonzero difference RFI^G_j − RFI^{G∪B}_j have clear implications, interpreting zero RFI or a zero difference is difficult. For example, we may be tempted to interpret RFI^G_j = 0 as conditional independence in the data. However, the general principle that absence of evidence is no evidence of absence also applies in the context of RFI. A dependence in the data may not be captured by the model when it has a poor fit and does not rely on the respective variable. Similarly, although the model may be optimal, a dependence in higher moments may simply not be modeled by f or captured by the loss L. As all aforementioned causes of nonzero RFI are potentially sufficient, but not necessary, it is unclear which of the causes nonzero RFI can be attributed to. Furthermore, the related problem of conditional independence testing is provably hard [24]. The theoretical insights that we derive in this section (Theorems 1 and 2) are applied and illustrated in a simulation study in Section VI.
V. Estimation and Testing
Estimating and sampling from the conditional distribution P(X_j | X_G) is in general difficult, especially in high-dimensional continuous settings. Various approaches for replacing X_j with samples from its conditional distribution exist, e.g., knockoff approaches [2, 7, 23], imputation and weighting [12], or permutation within decision tree leaves [19]. We used Model-X knockoffs [7] in this work, but note that the RFI approach is agnostic to its algorithmic implementation. Using (standard) empirical risk estimates, our RFI estimate is

R̂FI^G_j = (1/n) Σ_{i=1}^n [ L(y^(i), f(x̃_j^(i), x_R^(i))) − L(y^(i), f(x_j^(i), x_R^(i))) ],
where x̃_j^(i) is a sample from P(X_j | X_G). We can then test for nonzero RFI^G_j using procedures for conditional independence tests, e.g., [31], thereby quantifying the uncertainty coming from empirical risk estimation. Because of the central limit theorem, the empirical risk converges in distribution to a Gaussian with an increasing number of observations. Therefore, one-sided, paired t-tests can be used to derive tests and confidence intervals [31]. The test procedures proposed in [31] are agnostic to the conditioning set G used for the perturbation. For smaller samples, Fisher's exact test may be used. The t-test and Fisher's exact test ignore uncertainty and bias of the estimation procedures, i.e., the ML model and the knockoff sampler are treated as "fixed". E.g., misspecified, suboptimal models may not capture dependencies, or dependencies may be in higher moments that are not captured by the loss. Consequently, without further assumptions, the framework does not provide a test for conditional independence in the data.
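To make the estimation procedure concrete, the following sketch implements the empirical RFI estimate together with the paired t-statistic on per-instance loss differences. For simplicity, a linear-Gaussian regression stands in for the knockoff sampler of P(X_j | X_G) (an assumption of this sketch, not of the paper); all function and variable names are our own:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_conditional(xj, XG, rng):
    """Linear-Gaussian stand-in for a knockoff sampler: regress x_j on
    x_G and resample with Gaussian residual noise."""
    A = np.column_stack([np.ones(len(xj)), XG])
    beta, *_ = np.linalg.lstsq(A, xj, rcond=None)
    resid = xj - A @ beta
    return A @ beta + rng.normal(0.0, resid.std(), size=len(xj))

def rfi(model, X, y, j, G, rng):
    """Empirical RFI of feature j w.r.t. conditioning set G, together with
    the paired t-statistic on per-instance squared-loss differences."""
    X_pert = X.copy()
    X_pert[:, j] = sample_conditional(X[:, j], X[:, list(G)], rng)
    diffs = (y - model(X_pert)) ** 2 - (y - model(X)) ** 2
    t_stat = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(len(diffs)))
    return diffs.mean(), t_stat

# Toy example: x2 is a close proxy of x1; the model relies on x1 only.
n = 5000
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)
X = np.column_stack([x1, x2])

def model(X):
    return 2.0 * X[:, 0]

print(rfi(model, X, y, 0, [], rng))   # G = empty set: PFI-like, large
print(rfi(model, X, y, 0, [1], rng))  # relative to the proxy x2: small
```

The t-statistic can then be compared against the appropriate t-quantile for a one-sided test of nonzero RFI, in the spirit of [31].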
The popular testing procedures for knockoffs proposed by [7] provide FDR control over all features, but do not test the significance of the importance of individual features.
VI. Simulation Studies
In the following, we demonstrate the usefulness of RFI in two simulation studies. In the first example, we use RFI to expose the indirect influence of variables that are not computationally used by the model. In the second example, we assess feature importance relative to a confounder that was unavailable at training time. In both examples, we represent the underlying data-generating mechanism that gives rise to the dependencies in the data with a causal directed acyclic graph (DAG). The code for the examples is available online at https://github.com/gcskoenig/icpr2020rfi.
VI-A. Indirect Influence
A prominent application of interpretable machine learning is auditing models regarding their reliance on protected attributes like age or sex. A reliance on such attributes may result in unfair discrimination and requires further inspection. With approaches like fairness through unawareness [3], the model does not rely on protected attributes directly. However, by implicitly reconstructing the sensitive attributes from seemingly harmless correlates, the model can indirectly make use of the protected attribute, resulting in potentially harmful, unfair discrimination [3].
PFI and CFI cannot expose such indirect influence. As Lemma 2 proves, RFI^G_j is zero for any conditioning set G if the model does not (directly) use the feature of interest for the prediction. Furthermore, from PFI and CFI alone, we cannot infer whether the importance of a variable can be attributed to its dependence with an indirect influence. Using RFI^G_j with the index p of the protected attribute contained in G, we preserve the dependencies with X_p and can thereby restrict the attribution of importance to contributions stemming from dependencies not involving X_p (Theorem 1, Lemma 1). The difference to RFI with a conditioning set that excludes p (e.g., PFI, where G = ∅) exposes the indirect influence.
Not every indirect influence of a sensitive attribute is considered undesirable. Certain correlates of the protected attribute may indeed be valid criteria for a decision (e.g., [4]). Importance stemming from dependencies with X_p via such resolving variables would be considered acceptable. We can assess the indirect influence beyond contributions stemming from dependence via the resolving variables X_S by comparing RFI^{S∪{p}}_j to the baseline RFI^S_j. In this baseline, contributions via X_S are preserved and therefore irrelevant for RFI. Consequently, the difference RFI^S_j − RFI^{S∪{p}}_j only quantifies indirect influence that is not resolved by X_S.
We demonstrate the usefulness of RFI for exposing indirect influence in a simulation study. The dataset is a sample drawn from the distribution induced by the structural causal model (SCM) depicted in Figure 2. All relationships are additive linear with Gaussian noise terms. An ordinary least squares linear regression model was fit to predict the target from the features; we trained Model-X knockoffs [7] on the training data and evaluated RFI on held-out test data. In order to quantify the direct influence of the features, we compute PFI. As we can see in Figure 3, two of the variables are considered irrelevant. In order to expose their indirect influence, we additionally compute RFI of the remaining features relative to each of these variables. For both variables, we observe a drop in importance. Consequently, both have an indirect influence on the target (Theorem 2).
Furthermore, we are interested in whether the indirect influence can be resolved by other variables. We therefore compute RFI with conditioning sets that additionally contain candidate resolving variables. For the first variable, no change in importance can be observed; this is due to the conditional independence implied by the graph (as faithfulness and the causal Markov condition hold, d-separation in the graph and (conditional) independence coincide [22]; we can therefore read the independence structures off Figures 2 and 4) (Theorem 2). The indirect influence is resolved. For the second variable, however, the importance decreases further; its influence is therefore not resolved. This is in alignment with the dependence implied by the graph (Figure 2).
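The indirect-influence logic of this subsection can be illustrated with a minimal hypothetical example (variable names and coefficients are our own, not those of Figure 2): a protected attribute P influences the target only via a proxy X, and the model reads the proxy only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Hypothetical SCM: protected attribute P influences the target only
# via the proxy X, which is the sole input of the model.
P = rng.normal(size=n)
X = P + 0.5 * rng.normal(size=n)   # proxy, correlated with P
Y = X + rng.normal(size=n)

def model(x):
    # The model uses the proxy only: "fairness through unawareness".
    return x

def mse(y, pred):
    return np.mean((y - pred) ** 2)

risk = mse(Y, model(X))

# PFI of X: break all dependencies via marginal permutation.
pfi_x = mse(Y, model(rng.permutation(X))) - risk

# RFI of X relative to P: preserve the dependence with P by resampling
# X from a linear-Gaussian estimate of P(X | P).
beta = np.cov(X, P)[0, 1] / np.var(P)
resid_std = (X - beta * P).std()
X_tilde = beta * P + rng.normal(0.0, resid_std, size=n)
rfi_x_p = mse(Y, model(X_tilde)) - risk

# The drop from PFI to RFI relative to P quantifies the indirect
# influence of the protected attribute on the prediction.
indirect = pfi_x - rfi_x_p
print(pfi_x, rfi_x_p, indirect)
```

Here the PFI of P itself is exactly zero (the model never reads P), whereas the drop from pfi_x to rfi_x_p reveals that a large share of the proxy's importance is mediated by the protected attribute.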
VI-B. Variables Outside Training Set
When designing a model, a practitioner may have decided to exclude a variable from the feature set, e.g., because it was then considered irrelevant, belongs to a different modality, or would have required further preprocessing. Furthermore, when auditing a machine learning model, variables may be accessible that had not been available for the training of the model.
In this example, we demonstrate that variables outside the training set can be included in the conditioning set G for RFI. Consequently, the importance of the features relative to variables outside the training set, as well as the indirect influence of such variables, can be assessed. More specifically, we simulate a hypothetical situation where the influence of a previously unknown confounder C shall be evaluated. This variable is available for the model audit. In particular, we wonder whether the features X_1, X_2 and X_3 are only or partly important due to a dependence via C.
The dataset was sampled from a structural causal model (SCM) depicted in Figure 4. Assuming faithfulness and the causal Markov condition, this DAG implies the following (conditional) (in)dependencies: X_1 is independent of C, X_2 is independent of Y conditional on C, and X_3 is dependent on C. Note that the dependence between X_3 and Y is due to the common cause C as well as due to a direct effect of X_3 on Y. All relationships are additive linear with Gaussian noise. We fit an ordinary least squares linear regression model on X_1, X_2 and X_3 to predict Y; C was not available for model training. We trained Model-X knockoffs [7] on training data and sampled the replacement variables on held-out test data.
When computing RFI relative to C for each feature, the different relationships with C become apparent. The respective results are depicted in Figure 5. For X_1, the feature importance relative to C remains unchanged, as the variables are pairwise independent (Theorem 2). For X_2, which is dependent with Y only via C, the importance completely vanishes (Lemma 1). For X_3, the feature importance decreases but remains nonzero, as X_3 is dependent with Y both directly and via C.
Consequently, using RFI, we can (1) identify variables that are important due to a variable unavailable at training time and (2) distinguish variables that depend on Y only via C from those that do not. With PFI (G = ∅) or CFI (G = R), such a distinction is in general not possible.
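A minimal version of this confounder audit can be sketched as follows (a hypothetical linear-Gaussian SCM with our own coefficients, mirroring the qualitative structure described above; the conditional sampler is a linear-Gaussian stand-in for knockoffs):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Hypothetical SCM with a confounder C outside the training set:
# x1 is independent of C, x2 is associated with the target only via C,
# x3 affects the target directly and shares the common cause C with it.
C = rng.normal(size=n)
x1 = rng.normal(size=n)
x2 = C + 0.5 * rng.normal(size=n)
x3 = C + 0.5 * rng.normal(size=n)
Y = x1 + C + x3 + rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

# OLS fit on x1, x2, x3 only; C is unavailable at training time.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, Y, rcond=None)

def model(X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

def mse(y, pred):
    return np.mean((y - pred) ** 2)

risk = mse(Y, model(X))

def importance(j, conditional):
    """Risk increase when column j is replaced. With conditional=True,
    resample from a linear-Gaussian estimate of P(x_j | C) (RFI relative
    to C); otherwise permute marginally (PFI)."""
    X_pert = X.copy()
    if conditional:
        b = np.cov(X[:, j], C)[0, 1] / np.var(C)
        s = (X[:, j] - b * C).std()
        X_pert[:, j] = b * C + rng.normal(0.0, s, size=n)
    else:
        X_pert[:, j] = rng.permutation(X[:, j])
    return mse(Y, model(X_pert)) - risk

for j, name in enumerate(["x1", "x2", "x3"]):
    print(name, importance(j, False), importance(j, True))
```

Qualitatively, the pattern described above appears: conditioning on C leaves x1's importance unchanged, makes x2's vanish, and reduces x3's without eliminating it.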
VII. Discussion
We proposed relative feature importance (RFI), a general conditional feature importance framework that allows conditioning on arbitrary sets of other variables, including variables outside the training set. We underpin the method with theoretical results that allow insight into both the model and the underlying data. In a simulation study, we demonstrated the usefulness of the method for exposing indirect influence.
Relative feature importance requires sampling from (unknown) conditional distributions. For continuous variables and in high-dimensional settings, this task is challenging and an open area of research [7, 23]. Uncertainty stemming from inaccurate sampling may affect the interpretation. The quality of the insight into the underlying data strongly depends on the training and evaluation of the model. Dependencies in higher moments are usually neither modeled nor captured by standard loss functions and can therefore not be detected. Especially the interpretation of zero RFI requires careful assessment of the model specification. Further research is needed to assess the assumptions necessary for the interpretation of RFI. These challenges are not unique to RFI, but apply more generally in the field of interpretable machine learning [20].
References
 [1] (2018) Auditing black-box models for indirect influence. Knowledge and Information Systems 54 (1), pp. 95–122. Note: arXiv: 1602.07043. Cited by: §I-A.
 [2] (2015) Controlling the false discovery rate via knockoffs. The Annals of Statistics 43 (5), pp. 2055–2085. Cited by: §II-B, §V.
 [3] (2019) Fairness and machine learning. fairmlbook.org. Note: http://www.fairmlbook.org Cited by: §VI-A.
 [4] (2016) Will precision medicine move us beyond race?. The New England Journal of Medicine 374 (21), pp. 2003. Cited by: §VI-A.
 [5] (2001) Random forests. Machine Learning 45 (1), pp. 5–32. Cited by: §I, §II-B.
 [6] (1993) Dominance analysis: a new approach to the problem of relative importance of predictors in multiple regression. Psychological Bulletin 114 (3), pp. 542. Cited by: §I-A.
 [7] (2018) Panning for gold: ‘model-X’ knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society, Series B 80 (3), pp. 551–577. Note: arXiv: 1610.02351. Cited by: §I, §II-B, §V, §VI-A, §VI-B, §VII.
 [8] (2020) Understanding global feature contributions through additive importance measures. arXiv preprint arXiv:2004.00668. Cited by: §I-A.
 [9] (2018) Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Cited by: §I.
 [10] (2018) A guide to the California Consumer Privacy Act of 2018. SSRN Electronic Journal, pp. 1–17. Cited by: §I.
 [11] (2018) The accuracy, fairness, and limits of predicting recidivism. Science Advances 4 (1), pp. 1–6. Cited by: §I.
 [12] (2019) All models are wrong, but many are useful: learning a variable’s importance by studying an entire class of prediction models simultaneously. Journal of Machine Learning Research 20 (177), pp. 1–81. Cited by: §I, §I-A, §II-B, §V.
 [13] (2009) Variable importance assessment in regression: linear regression versus random forest. The American Statistician 63 (4), pp. 308–319. Cited by: §II-B.
 [14] (2019) Please stop permuting features: an explanation and alternatives. arXiv preprint arXiv:1905.03151, pp. 1–15. Cited by: §II-B, footnote 2.
 [15] (2018) Distribution-free predictive inference for regression. Journal of the American Statistical Association 113 (523), pp. 1094–1111. Cited by: §II-B.

 [16] (2001) Analysis of regression in game theory approach. Applied Stochastic Models in Business and Industry 17 (4), pp. 319–330. Cited by: §I-A.
 [17] (2017) A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, pp. 4766–4775. Note: arXiv: 1705.07874. Cited by: §I-A.
 [18] (2020) Model-agnostic feature importance and effects with dependent features – a conditional subgroup approach. arXiv preprint arXiv:2006.04628. Cited by: §I, §II-B.
 [19] (2020) Model-agnostic feature importance and effects with dependent features – a conditional subgroup approach. arXiv preprint arXiv:2006.04628. Cited by: §V.
 [20] (2020) Pitfalls to avoid when interpreting machine learning models. arXiv preprint arXiv:2007.04131. Cited by: §II-B, §VII.
 [21] (1985) Graphoids: a graph-based logic for reasoning about relevance relations. University of California (Los Angeles), Computer Science Department. Cited by: §IV-A.
 [22] (2009) Causality. Cambridge University Press. Cited by: footnote 4.
 [23] (2019) Deep knockoffs. Journal of the American Statistical Association, pp. 1–12. Cited by: §I, §V, §VII.
 [24] (2018) The hardness of conditional independence testing and the generalised covariance measure. arXiv preprint arXiv:1804.07203. Cited by: §IV-A.
 [25] (2008) Conditional variable importance for random forests. BMC Bioinformatics 9, pp. 1–11. Cited by: §I, §I-A, §II-B.
 [26] (2014) Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems 41 (3), pp. 647–665. Cited by: §I-A.
 [27] (2011) Classification with correlated features: unreliability of feature ranking and solutions. Bioinformatics 27 (14), pp. 1986–1994. Cited by: §II-B.
 [28] (2019) High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine 25. Cited by: §I.
 [29] (2009) Feature selection with ensembles, artificial variables, and redundancy elimination. Journal of Machine Learning Research 10 (Jul), pp. 1341–1366. Cited by: §II-B.
 [30] (2017) The EU General Data Protection Regulation (GDPR): a practical guide. Cham: Springer International Publishing. Cited by: §I.

 [31] (2019) Testing conditional independence in supervised learning algorithms. arXiv preprint arXiv:1901.09917. Cited by: §I, §V.
 [32] (2015) Variable importance analysis: a comprehensive review. Reliability Engineering & System Safety 142, pp. 399–432. Cited by: §II-B.
 [33] (2017) A boosted decision tree approach using Bayesian hyperparameter optimization for credit scoring. Expert Systems with Applications 78, pp. 225–241. Cited by: §I.