Causal Discovery from Incomplete Data using An Encoder and Reinforcement Learning

06/09/2020 ∙ by Xiaoshui Huang, et al. ∙ The University of Sydney ∙ UNSW ∙ New South Wales Government

Discovering causal structure among a set of variables is a fundamental problem in many domains. However, state-of-the-art methods seldom consider the possibility that the observational data has missing values (incomplete data), which is ubiquitous in many real-world situations. Missing values can significantly impair performance and even make causal discovery algorithms fail. In this paper, we propose an approach to discover causal structures from incomplete data by using a novel encoder and reinforcement learning (RL). The encoder is designed for missing data imputation as well as feature extraction. In particular, it learns to encode the currently available information (with missing values) into a robust feature representation which is then used to determine where to search for the best graph. The encoder is integrated into an RL framework that can be optimized using the actor-critic algorithm. Our method takes the incomplete observational data as input and generates a causal structure graph. Experimental results on synthetic and real data demonstrate that our method can robustly generate causal structures from incomplete data. Compared with the direct combination of data imputation and causal discovery methods, our method performs generally better and can even obtain a performance gain of as much as 43.2%.


1 Introduction

Causal discovery from natural phenomena is a fundamental problem across many domains, such as biology Sachs et al. (2005), economics Pearl (2009) and genetics Peters et al. (2017). Although randomized control experiments are the gold standard for inferring causal relationships Hernan and Robins (2010), they are often impossible or prohibitively expensive in fields such as patient treatment Hernán and Robins (2020) and genetics Peters et al. (2017). Rather than designing randomized experiments, causal discovery algorithms attempt to infer causality automatically from passive observational data.

Typically, the causal relationships among a set of variables can be represented as a directed acyclic graph (DAG). To estimate the graph from observational data, there are mainly two classes of approaches in the literature Triantafillou and Tsamardinos (2016): constraint-based and score-based. Constraint-based approaches use conditional independence tests to check the existence of a causal relationship between each pair of variables and try to find a graph $G$ that entails all the corresponding d-separations Pearl (2010). In contrast, score-based approaches try to find a graph $G$ that maximizes the likelihood of the data given $G$. Equivalently, they attempt to search for the "optimal" causal graph in the directed acyclic graph space (denoted as $\mathbb{D}$) in a combinatorial optimization manner by minimizing a score function $S$:

$\min_{G \in \mathbb{D}} S(G)$    (1)

The problem in Eq. (1) is NP-hard because the number of DAGs increases super-exponentially in the number of graph nodes Chickering et al. (2004). Many methods have been proposed to solve this problem by designing various score functions Peters et al. (2014); Huang et al. (2018). Recently, Zheng et al. (2018) recast this combinatorial optimization problem into a continuous gradient-based optimization one, which allows for nonlinear causal relationships parameterized by neural networks. Furthermore, Zhu et al. (2020) proposed to use a reinforcement learning agent to automatically determine the search direction in this continuous optimization problem to find the DAG with the best score.

However, existing DAG searching methods, whether based on combinatorial or continuous optimization, seldom consider settings with incomplete data, i.e., where the observational data used for causal discovery has missing values. Such settings are ubiquitous in many real-world situations, and missing values can significantly impair the final learning performance or even make the algorithms fail. For example, according to our evaluation, the algorithm in Zhu et al. (2020) does not work in such a setting. To deal with the challenge of data with missing values, we propose in this paper an approach that is able to learn causal relationships from incomplete data.

Consider the information processing procedure in the human brain: we can still extract useful information even when the observational data has missing values. For example, if a patient is elderly and does not present with a good level of fitness, we might assume they are not doing any exercise. To imitate this information processing procedure on incomplete data, we propose in this paper an encoder that uses neural networks to encode the non-missing information into a feature representation. The derived representation is then integrated into a reinforcement learning (RL) framework for searching for the best causal graph. According to Zhu et al. (2020), the RL agent works well because its stochastic policy can be updated properly using the rewards; hence, the RL agent can automatically determine where to search.

The contributions of this paper are as follows:

  • An RL-based approach for learning causal graphs from incomplete data.

  • An ad-hoc encoder for extracting features from incomplete observational data for the sake of causal discovery. As a result, the whole RL framework can be optimized in an end-to-end manner while using incomplete data.

2 Related Work

Most of the existing DAG learning algorithms can be divided into two categories: score-based and constraint-based approaches. Score-based algorithms search the space of all possible structures to maximize a decomposable score using greedy, local, or other search strategies. A typical example is the Greedy Equivalence Search (GES) algorithm Chickering (2002); Nandy et al. (2018). Popular scoring functions include the Bayesian Dirichlet score de Campos and Ji (2010), the Akaike Information Criterion Sakamoto et al. (1986), the Bayesian Information Criterion (BIC) Schwarz et al. (1978) and the Minimum Description Length (MDL) score Chickering (2002).

In contrast, constraint-based methods exploit the property of Bayesian networks that edges encode conditional dependencies. Typical examples include the well-known PC algorithm Spirtes et al. (2000) and the FCI algorithm Zhang (2008). In addition, there is a suite of hybrid algorithms that combine score-based and constraint-based methods. A prominent example is the Max-Min Hill-Climbing (MMHC) algorithm Tsamardinos et al. (2006). The main idea is to find the skeleton using a constraint-based method and then orient the edges with a score-based search Kalisch and Bühlmann (2014).

As discussed in the previous section, since the number of DAGs increases super-exponentially in the number of graph nodes, causal discovery, as a combinatorial search problem, is NP-hard. As a result, traditional algorithms have mainly focused on small graphs and discrete variables. Recently, Zheng et al. (2018) proposed a new continuous optimization approach called NOTEARS, which transforms the discrete search procedure into an equality constraint. This pioneering work enables modern neural networks to be applied to DAG learning. For example, following NOTEARS, DAG-GNN Yu et al. (2019) uses a graph neural network for DAG learning, and GraN-DAG Lachapelle et al. (2019) extends the approach to nonlinear relationships between variables using neural networks.

However, these methods all assume that the observational data has no missing values. This assumption is often unrealistic: observational data in many applications has missing values Josse et al. (2018). For instance, patient records often contain missing values because clinicians may regard completing every field of a form as unrelated to the treatment of the illness and thus leave forms incomplete. This paper aims to relax this assumption and discover causal relationships from incomplete observational data.

Figure 1: The proposed reinforcement learning framework for causal discovery from incomplete data. The Actor is an encoder-decoder neural network; the encoder consists of an imputation network (ImNet) and a feature extraction network (FeatNet). The Critic uses a value network (VNet) to compute a value for the encoded features and a reward function to compute a reward for the decoded graphs. Both the reward and the value are combined into a loss for updating the Actor.

3 Approach

Our approach falls into the framework of continuous optimization based causal discovery using reinforcement learning. In this section, we first introduce the problem formulation and notation. Details of the proposed actor, which generates candidate causal graphs from observational data with missing values, are then described. Finally, we explain how the critic evaluates the generated graphs and thus provides rewards for training the whole neural network. The overall framework of our causal discovery approach is illustrated in Figure 1.

3.1 Problem Formulation

We are given an observational dataset $X \in \mathbb{R}^{m \times d}$, where each individual unit (row) has $d$-dimensional attributes. Each attribute is associated with a node in a $d$-node DAG $G$, and each row of $X$ is a sample of this DAG; there are $m$ samples in total. To infer causal relationships among these attributes, we want to find an adjacency matrix $A \in \mathbb{R}^{d \times d}$. Each value of the adjacency matrix describes the causality between two attributes. For example, $A_{ij} = w$ means attribute $i$ has an effect on attribute $j$ with weight $w$.

Since the samples usually have missing values on the attributes, the observational data is incomplete. Missing values are toxic to existing DAG learning methods: they make existing optimization-based and neural network methods fail.

To cope with this challenge, we propose a reinforcement learning framework to discover the causal structure from observational data with missing values. The overall framework is illustrated in Figure 1. In particular, given a sample of incomplete observational data, an encoder-decoder network (the Actor) generates a robust feature and a graph. The feature is then fed into a value network to compute a value, and the graph into a reward function to compute a reward. The value and reward are combined into a loss that optimizes the whole network.

3.2 Reinforcement Learning: Actor

In the RL-based causal discovery framework, the objective of the actor is to generate candidate causal graphs from given observational data. In this paper, our proposed actor is an encoder-decoder neural network that is able to handle missing values in the data.

Encoder The encoder aims to extract robust features from the incomplete data. The proposed encoder neural network consists of two parts: the imputation network (ImNet) and the feature extraction network (FeatNet). The objective of ImNet is to impute complete data from the incomplete input, while FeatNet extracts features from the imputed data.

Encoder: ImNet The imputation network contains three fully-connected layers. The number of neurons in each layer equals the attribute dimension $d$, and the output has the same dimension as the input data. The first two layers use relu as the activation function and the final layer uses the sigmoid function.

Given the observational data with missing values, $X$, we define a mask matrix $M$ where $M_{ij} = 1$ if the element $X_{ij}$ is observed and $M_{ij} = 0$ otherwise. Inside the ImNet, we first concatenate $X$ with $M$, and then generate the initial output $\bar{X}$ using the fully-connected layers. The main idea is to let the neural network automatically learn a feature from both $X$ and $M$, and then convert the feature into a matrix of imputed data. Through this process, we obtain an initial output $\bar{X}$ with the same dimension as $X$. Formally,

$\bar{X} = \mathrm{ImNet}([X, M])$    (2)

After the initial output is generated, the final complete data $\hat{X}$ is calculated by

$\hat{X} = M \odot X + (1 - M) \odot \bar{X}$    (3)

That is, the final complete data is obtained by taking the partial observation $M \odot X$ and replacing each missing value with the corresponding value of $\bar{X}$.
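As a minimal sketch of Eqs. (2)-(3), assuming the three-layer structure described above (class and variable names are our own, not from the paper):

```python
import torch
import torch.nn as nn

class ImNet(nn.Module):
    """Imputation network sketch: three fully-connected layers (relu, relu, sigmoid)."""
    def __init__(self, d):
        super().__init__()
        # Input is the concatenation of the data X and the mask M (2*d features);
        # each layer has d neurons, matching the attribute dimension.
        self.net = nn.Sequential(
            nn.Linear(2 * d, d), nn.ReLU(),
            nn.Linear(d, d), nn.ReLU(),
            nn.Linear(d, d), nn.Sigmoid(),
        )

    def forward(self, x, m):
        # Eq. (2): initial output from the concatenated data and mask.
        x_bar = self.net(torch.cat([x, m], dim=-1))
        # Eq. (3): keep observed entries, fill missing ones with x_bar.
        return m * x + (1.0 - m) * x_bar

# Usage: m is 1 where observed, 0 where missing (missing entries of x set to 0).
d = 12
imnet = ImNet(d)
x = torch.rand(64, d)
m = (torch.rand(64, d) > 0.2).float()
x_hat = imnet(x * m, m)
```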

In practice, we initialize the ImNet with pre-trained parameters for the sake of faster training. The pre-trained parameters are learned by a generative adversarial network architecture in which our ImNet acts as the generator. We use the same discriminator as in Yoon et al. (2018). On one hand, the goal of the generator is to accurately impute missing data; on the other hand, the discriminator tries to distinguish between imputed and observed elements in the input data. They are trained iteratively in an adversarial scheme.

Encoder: FeatNet In our encoder-decoder neural network for generating causal graphs, we adopt the Transformer structure Vaswani et al. (2017) to extract robust features from the imputed data. Our empirical results indicate that the self-attention network is capable of extracting robust features from the imputed observational data. Overall, the input of the feature extraction network (FeatNet) is the imputed data matrix $\hat{X}$ output by ImNet, and its output is denoted as $F \in \mathbb{R}^{d \times h}$, where $h$ is the feature dimension of each attribute and $F_i$ is the feature of attribute node $i$.
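A sketch of such a FeatNet under stated assumptions: we treat each of the $d$ attributes as a token, and the embedding size, head count, layer count, and batch aggregation are illustrative choices, since the paper does not specify them:

```python
import torch
import torch.nn as nn

class FeatNet(nn.Module):
    """Self-attention feature extractor over the d attribute nodes (a sketch)."""
    def __init__(self, h=64, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Linear(1, h)  # lift each scalar attribute to h dims
        layer = nn.TransformerEncoderLayer(d_model=h, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=nlayers)

    def forward(self, x_hat):
        # x_hat: (batch, d) imputed samples -> one token per attribute node
        tokens = self.embed(x_hat.unsqueeze(-1))     # (batch, d, h)
        feats = self.encoder(tokens)                 # (batch, d, h)
        # The paper reports F in R^{d x h}; averaging over the batch of
        # samples is one plausible aggregation (our assumption).
        return feats.mean(dim=0)                     # (d, h)
```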

Decoder The decoder aims to generate graphs from the features. Inspired by Zhu et al. (2020), we use a single-layer network as our decoder. The reason is that a single layer already has sufficient capacity to build the graph Zhu et al. (2020); moreover, a deep graph network is difficult to train and its outputs may become indistinguishable Li et al. (2018). Formally, the decoder used in this paper is

$g_{ij} = u^{\top} \tanh(W_1 F_i + W_2 F_j)$    (4)

where $W_1, W_2 \in \mathbb{R}^{k \times h}$ and $u \in \mathbb{R}^{k}$ are neuron matrices that need to be trained, $k$ is the dimension of neurons in the decoder, and $h$ is the dimension of each feature from the encoder. Then, we input $g_{ij}$ into a logistic sigmoid function to get the probability $p_{ij} = \sigma(g_{ij})$ of an edge emitting from node $i$ to node $j$. Finally, to generate a binary adjacency matrix $A$, we sample each entry according to a Bernoulli distribution with probability $p_{ij}$.
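A sketch of this decoder, assuming the single-layer form of Zhu et al. (2020) as reconstructed in Eq. (4); dimensions and names are illustrative:

```python
import torch
import torch.nn as nn

class GraphDecoder(nn.Module):
    """Single-layer decoder: edge probabilities from pairs of node features (a sketch)."""
    def __init__(self, h, k=64):
        super().__init__()
        # W1, W2 in R^{k x h} and u in R^k are the trainable parameters of Eq. (4).
        self.w1 = nn.Linear(h, k, bias=False)
        self.w2 = nn.Linear(h, k, bias=False)
        self.u = nn.Linear(k, 1, bias=False)

    def forward(self, feats):
        # feats: (d, h) node features -> pairwise scores g[i, j] via broadcasting
        g = self.u(torch.tanh(self.w1(feats)[:, None, :] +
                              self.w2(feats)[None, :, :])).squeeze(-1)
        p = torch.sigmoid(g)       # probability of an edge i -> j
        adj = torch.bernoulli(p)   # sampled binary adjacency matrix; gradients
        adj.fill_diagonal_(0)      # flow via log-probs in the RL loss, not here
        return adj, p
```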

3.3 Reinforcement Learning: Critic

The job of the critic is to evaluate actions generated by the actor so that the actor can update its policy based on the evaluation score Konda and Tsitsiklis (2000). In this paper, the Critic has two objectives: generating a value score for the encoded features (Value) and calculating a reward score for the decoded graphs (Reward).

VNet We use a neural network to calculate a value score for the encoded features. This value score is used, together with the reward, to compute the loss in the following section; that is, the update of the Actor needs to consider both the feature quality and the graph quality. In this paper, the value network (VNet) consists of two fully-connected layers. The input is the feature from the encoder and the output is a scalar that evaluates the value of the encoded features:

$v = f_2(\sigma(f_1(F)))$    (5)

where $f_1, f_2$ are the fully-connected layers, $\sigma$ is the non-linear activation function and $F$ is the feature from the encoder neural network.
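A minimal VNet sketch matching Eq. (5); the hidden width is an illustrative assumption:

```python
import torch
import torch.nn as nn

class VNet(nn.Module):
    """Eq. (5) as a sketch: two fully-connected layers (sizes illustrative)."""
    def __init__(self, d, h, hidden=64):
        super().__init__()
        self.f1 = nn.Linear(d * h, hidden)
        self.f2 = nn.Linear(hidden, 1)

    def forward(self, feats):
        # feats: (d, h) encoded features -> scalar value v = f2(sigma(f1(F)))
        return self.f2(torch.relu(self.f1(feats.reshape(-1))))
```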

Reward function Given a candidate graph generated by the decoder neural network, the reward function evaluates how well the graph fits the observational data. In this paper, we follow Lachapelle et al. (2019); Zheng et al. (2018); Zhu et al. (2020) and use the following score function:

$S(G) = \mathrm{LS}(G) + \lambda\, e(G)$    (6)

where $\mathrm{LS}(G)$ is the least-squares loss that maximizes the likelihood for a Gaussian model, as used in Zheng et al. (2018), and $e(G)$ is the number of edges in $G$, weighted by a penalty coefficient $\lambda$.

Acyclicity constraint: To make sure the generated graph is a DAG, we follow Zheng et al. (2018) and introduce the following acyclicity constraint on the adjacency matrix $A$:

$h(A) = \operatorname{tr}(e^{A \circ A}) - d = 0$    (7)

where $\operatorname{tr}(\cdot)$ calculates the trace, $e^{(\cdot)}$ is the matrix exponential and $\circ$ is the Hadamard product.
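The constraint in Eq. (7) is easy to check numerically; a small sketch using SciPy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(adj: np.ndarray) -> float:
    """Eq. (7): h(A) = tr(exp(A o A)) - d, which is zero iff the graph is acyclic."""
    d = adj.shape[0]
    return np.trace(expm(adj * adj)) - d

# A cycle 0 -> 1 -> 0 gives h(A) > 0; an acyclic graph gives h(A) = 0.
cyclic = np.array([[0., 1.], [1., 0.]])
acyclic = np.array([[0., 1.], [0., 0.]])
print(acyclicity(cyclic), acyclicity(acyclic))  # > 0, ~0
```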

Rewards The final reward is calculated by incorporating the above score function and the acyclicity constraint. Formally, it is defined as

$r = -\big[\, S(G) + \lambda_1 \mathbb{I}(G \notin \mathrm{DAGs}) + \lambda_2\, h(A) \,\big]$    (8)

where $\mathbb{I}$ is the indicator function, and $\lambda_1$ and $\lambda_2$ are two penalty coefficients.
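A sketch combining Eqs. (6)-(8) into one reward computation; the linear least-squares fit and the penalty coefficients lam, lam1, lam2 are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm

def reward(x_hat, adj, lam=0.1, lam1=1.0, lam2=1.0):
    """Eq. (8) as a sketch: negative score plus acyclicity penalties.

    x_hat: (m, d) imputed data; adj: (d, d) binary adjacency matrix.
    """
    m, d = x_hat.shape
    ls = 0.0
    for i in range(d):
        parents = np.flatnonzero(adj[:, i])
        if parents.size:  # regress x_i on its parents in the candidate graph
            coef, *_ = np.linalg.lstsq(x_hat[:, parents], x_hat[:, i], rcond=None)
            resid = x_hat[:, i] - x_hat[:, parents] @ coef
        else:
            resid = x_hat[:, i]
        ls += np.mean(resid ** 2)
    score = ls + lam * adj.sum()             # Eq. (6): LS loss + edge penalty
    h = np.trace(expm(adj * adj)) - d        # Eq. (7): acyclicity measure
    not_dag = float(h > 1e-8)                # indicator I(G not a DAG)
    return -(score + lam1 * not_dag + lam2 * h)
```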

Loss

To better optimize the actor, we use the reward, discounted by the value from VNet, as the final training signal. This discounted reward serves as the loss function for optimizing the actor:

$\mathcal{L} = -(r - v)\, \log p(A \mid F) - \lambda_H H(A)$    (9)

where $H(A)$ is the regularized entropy of the graph generated from the decoder's Bernoulli distribution and $\lambda_H$ is its weight.
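Since the source does not spell out Eq. (9), the following sketch assumes the standard actor-critic form with the VNet value as a baseline; lam_e is an illustrative entropy weight:

```python
import torch

def actor_loss(reward_val, value, log_prob_adj, entropy, lam_e=0.01):
    """Eq. (9) as a sketch: policy-gradient loss with a value baseline.

    reward_val: scalar reward from Eq. (8) (treated as a constant);
    value: VNet output for the encoded features;
    log_prob_adj: sum of Bernoulli log-probabilities of the sampled edges;
    entropy: entropy of the decoder's Bernoulli distribution (regularizer).
    """
    advantage = reward_val - value.detach()   # critic value as baseline
    policy_loss = -advantage * log_prob_adj   # pushes up high-reward graphs
    value_loss = (reward_val - value) ** 2    # trains VNet toward the reward
    return policy_loss + value_loss - lam_e * entropy
```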

Proposition 1.

Minimizing the loss $\mathcal{L}$ by backward optimization of the neural network is equivalent to optimizing the actor in reinforcement learning. The best actor is achieved iff

$\nabla_{\theta} \mathcal{L} = 0$    (10)

Proof.

The original actor-critic reinforcement learning uses the score to compute a gradient and uses that gradient to update the policy in an approximate gradient direction Konda and Tsitsiklis (2000). The actor is optimal when this gradient vanishes. This paper utilizes the computational graph of the neural network to calculate the gradient automatically and uses back-propagation to optimize the actor. Therefore, iff $\nabla_{\theta} \mathcal{L} = 0$, we obtain the best actor. ∎

4 Synthetic Incomplete Data Generation

To systematically demonstrate the effectiveness of the proposed method, we use the following model to generate synthetic datasets. Consider a causal graph $G$ with $d$ nodes, where each node $i$ is associated with a variable $x_i$; we generate the value of $x_i$ by the following structural equation:

$x_i = f_i(x_{\mathrm{pa}(i)}) + \epsilon_i, \quad i = 1, \dots, d$    (11)

where $f_i$ is a linear or nonlinear function and the noises $\epsilon_i$ are mutually independent, generated from Gaussian or non-Gaussian distributions. $x_{\mathrm{pa}(i)}$ is a vector containing the elements $x_j$ such that there is an edge from $j$ to $i$ in the DAG $G$. The generated data is a matrix $X$ consisting of a number of row vectors $x$; each row vector is generated by the same mechanism, Eq. (11).

Specifically, the generative model in Eq. (11) belongs to the standard linear-Gaussian model family Peters et al. (2017) if all $f_i$ are linear and all $\epsilon_i$ are Gaussian. When the functions $f_i$ are linear but the noises $\epsilon_i$ are non-Gaussian, we obtain the linear non-Gaussian models described in Hoyer et al. (2009); Shimizu et al. (2006). In this paper, all variables are scalars; extending to more complex cases with a properly defined generation function is straightforward.

To simulate missing values in the observational data, inspired by Mayer et al. (2020), we randomly remove attribute values from each sample with a missing probability of 20%. The goal of causal discovery in this paper is, given the data vectors with missing values, to infer as much as possible about the generating mechanism; in particular, we seek to infer the generating graph $G$. A minimal data-generation sketch is given below.
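A sketch of this generation process for the linear-Gaussian case; the sample size, edge probability, and weight range are illustrative assumptions:

```python
import numpy as np

def generate_data(m=1000, d=12, edge_prob=0.3, missing_prob=0.2, seed=0):
    """Linear-Gaussian SEM of Eq. (11) plus random (MCAR) missingness."""
    rng = np.random.default_rng(seed)
    # Strictly upper-triangular weights guarantee the graph is a DAG.
    w = rng.uniform(0.5, 2.0, (d, d)) * (rng.random((d, d)) < edge_prob)
    w = np.triu(w, k=1)
    x = np.zeros((m, d))
    for i in range(d):  # generate nodes in topological order
        x[:, i] = x @ w[:, i] + rng.normal(size=m)  # x_i = sum_j w_ji x_j + eps_i
    mask = rng.random((m, d)) >= missing_prob       # 1 = observed, 0 = missing
    x_obs = np.where(mask, x, np.nan)               # remove ~20% of the values
    return x_obs, mask.astype(float), w
```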

5 Experiments

5.1 Baselines and Evaluation Metrics

Most existing causal discovery methods fail completely in settings with missing values. A straightforward way to deal with this problem is to impute the missing values first and then run a state-of-the-art causal discovery method. Several experiments were conducted to compare our method with this straightforward strategy. We selected GAIN as the imputation method since it has shown robust performance for missing data imputation Yoon et al. (2018). Overall, the baseline methods used for experimental comparison are:

  • GAIN Yoon et al. (2018) + NOTEARS Zheng et al. (2018). Since NOTEARS is widely used and obtains robust causal discovery results, we compare with it on incomplete data. NOTEARS uses the default parameters of the released code. Its input is the completed data, with imputed values filled in at the positions of missing values.

  • GAIN Yoon et al. (2018) + RL+BIC2 Zhu et al. (2020). Since RL+BIC2 is the current state-of-the-art causal discovery method, we also directly combine data imputation with RL+BIC2 and compare with it on incomplete data. The parameters used are the default values.

In the proposed approach, we use pre-trained parameters to initialize the ImNet as discussed in Section 3.2. The pre-trained parameters were generated from a generative adversarial architecture. Then, the input is the incomplete observational data, and the whole neural network is optimized end-to-end using the final loss from the Critic. We used a batch size of 64, and the hidden dimension of ImNet is the same as the data dimension.

Three metrics were adopted to evaluate the estimated graphs:

  • False Discovery Rate (FDR): the expected ratio of falsely discovered positive hypotheses to all those discovered. In the context of estimating the structure of a DAG, a discovered edge is a true positive if the same edge exists in the ground-truth DAG, and a false positive if it does not. The FDR is thus the expected proportion of falsely discovered connections among all those discovered Li and Wang (2009). For FDR, the lower the better.

  • True Positive Rate (TPR): the expected proportion of correctly discovered edges among all edges of the ground-truth graph. For TPR, the higher the better.

  • Structural Hamming Distance (SHD): the number of edges that do not coincide between two partially directed acyclic graphs Tsamardinos et al. (2006). A partially directed acyclic graph Peters and Bühlmann (2015) is a graph with no directed cycle; specifically, there is no pair $(i, j)$ such that there are directed paths both from $i$ to $j$ and from $j$ to $i$. In this paper, we use partially directed acyclic graphs. Since the SHD accounts for both false negatives and false positives, a lower SHD indicates a better estimate of the causal graph. A simplified sketch of these metrics is given after this list.
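A simplified sketch of the three metrics for binary adjacency matrices; note that this SHD compares undirected skeletons, a simplification of the paper's PDAG-based SHD:

```python
import numpy as np

def graph_metrics(est: np.ndarray, true: np.ndarray):
    """FDR, TPR, and a skeleton-based SHD for binary adjacency matrices."""
    tp = np.sum((est == 1) & (true == 1))   # correctly discovered edges
    fp = np.sum((est == 1) & (true == 0))   # falsely discovered edges
    fdr = fp / max(est.sum(), 1)            # false positives / all discovered
    tpr = tp / max(true.sum(), 1)           # true positives / all true edges
    # Skeleton-level SHD: count mismatched undirected adjacencies once each.
    skel_est = ((est + est.T) > 0).astype(int)
    skel_true = ((true + true.T) > 0).astype(int)
    shd = int(np.sum(np.triu(skel_est != skel_true, k=1)))
    return fdr, tpr, shd
```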

5.2 Linear Models with Gaussian and Non-Gaussian Noises

Using the data generation model of Section 4 and following the settings of Zhu et al. (2020), we generated datasets to test performance on linear models with Gaussian and non-Gaussian noise. In particular, we randomly remove values from each sample in the datasets to simulate the missing-value problem. For Gaussian noise, we use a standard Gaussian distribution. For non-Gaussian noise, we follow ICA-LiNGAM Shimizu et al. (2006): we first generate samples from a Gaussian distribution and then apply a power nonlinearity to make them non-Gaussian. We generate the samples as datasets and apply a random permutation.

12 nodes: First, we conduct comparison experiments on a graph with 12 nodes. In this experiment, the input of each epoch is a randomly selected batch of samples from the datasets generated before, and the total number of epochs is 20000. Similar to NOTEARS Zheng et al. (2018) and RL-BIC2 Zhu et al. (2020), we use a threshold to prune the estimated edges.

Table 1 shows the comparison results on linear non-Gaussian and linear-Gaussian data models with missing values. Our method obtains better performance than NOTEARS and RL+BIC2. The better results mean that integrating our novel encoder into the reinforcement learning framework performs better than a direct combination with an imputation method. The reason is that the imputation sub-network and the feature sub-network are optimized together, learning a better feature for the incomplete data; this better feature is then input into the decoder to find a better graph. The low SHD demonstrates that our method can robustly generate causal graphs from incomplete data on linear models.

                Linear non-Gaussian      Linear-Gaussian
                FDR     TPR     SHD      FDR     TPR     SHD
GAIN+NOTEARS    0.486   0.514   31       0.5     0.459   33
GAIN+RL+BIC2    0.137   0.862   8        0.25    0.89    15
Ours            0.156   0.931   7        0.232   0.90    14
Table 1: Experimental results on incomplete data from linear non-Gaussian and linear-Gaussian models with 12 graph nodes.

30 nodes: Second, we conduct comparison experiments on a graph with 30 nodes, where the adjacency matrix is generated from a Bernoulli distribution. According to Zheng et al. (2018); Yu et al. (2019); Zhu et al. (2020), this edge probability choice corresponds to the fact that large graphs usually have low edge degrees in practice. In this experiment, the input of each epoch is a randomly selected batch of samples from the datasets generated before, and the total number of epochs is 40000. Following Zhu et al. (2020), we add to each edge a common bias term.

Table 2 shows the comparison results. Directly combining imputation and causal discovery does not work well when the number of graph nodes increases: GAIN+RL+BIC2 only obtains 0.807 in TPR and 79 in SHD, and GAIN+NOTEARS obtains 89 in SHD, which is worse than RL+BIC2 for large causal graph discovery. In contrast, our method performs much better than both direct combination strategies, improving TPR by at least 16.7% and SHD by 32. Compared with NOTEARS, we improve TPR by 64.1% and SHD by 42, a substantial performance gain.

                FDR     TPR     SHD
GAIN+NOTEARS    0.648   0.333   89
GAIN+RL+BIC2    0.523   0.807   79
Ours            0.371   0.974   47
Table 2: Experimental results on incomplete data from a Bernoulli distribution with 30 graph nodes.

5.3 Nonlinear Models with Quadratic Functions and Gaussian Process

Nonlinear models with quadratic functions. Furthermore, we evaluated the performance of our approach on data generated from nonlinear models with quadratic functions. We used the same data generation process as Zhu et al. (2020) and generated samples, each with 10 attributes. To simulate missing data, we randomly remove values from each sample with a missing probability of 20%. In this situation, identifiability of the true causal graph is guaranteed Peters et al. (2014).

Experimental results are listed in Table 3. Compared with NOTEARS and RL+BIC2, our approach obtains better performance, with a gain of 31% in TPR and 6 in SHD. Given the significantly lower SHD, we argue that our approach can robustly generate causal graphs from incomplete data of nonlinear models with quadratic functions.

                Quadratic function       Gaussian process
                FDR     TPR     SHD      FDR      TPR     SHD
GAIN+NOTEARS    0.625   0.261   23       0.35     0.295   31
GAIN+RL+BIC2    0.33    0.34    19       0.071    0.295   31
Ours            0.25    0.65    13       0.0857   0.727   12
Table 3: Experimental results on incomplete data from nonlinear models.

Nonlinear models with Gaussian processes. We also compared our approach with the baselines on nonlinear models with Gaussian processes. Using the simulation process in Zhu et al. (2020), we then generated missing values by randomly removing values from each sample with a missing probability of 20%.

As we can see from Table 3, our method achieves a performance gain of 43.2% in TPR and improves SHD by 19. In addition, we can conclude from the results that, with direct combination with data imputation, state-of-the-art causal discovery methods (NOTEARS and RL+BIC2) all perform poorly on nonlinear models when there are missing values. The reason is that their data imputation is a stand-alone stage, which may not be capable of imputing data suitable for causal graph discovery. Our method combines both data imputation and feature extraction into one encoder, and the encoder is integrated into a reinforcement learning framework to search for the best DAG fitting the incomplete data. The experiments demonstrate that we obtain much better performance than the direct combination of imputation and DAG learning methods. Moreover, the low SHD demonstrates that our method can robustly generate causal graphs from incomplete data based on nonlinear models.

5.4 Real Data

We also validate our proposed approach on a real-world dataset designed for the reconstruction of causal graphs from physiologically relevant primary single cells Sachs et al. (2005). Data were collected after a series of inhibitory interventions and stimulatory cues. Specifically, the researchers used a fixed stimulation process to stop cell reactions, and then recorded and analyzed the effects of each condition on the intracellular signaling networks of human primary naive cells. This dataset is well accepted by the biological community and widely utilized in many domains Zhu et al. (2020). In our experiment, we consider the general perturbation that activates cells and induces proliferation and cytokine production. The observational data contains 853 samples and 11 attributes. According to Sachs et al. (2005), the ground-truth causal graph has 17 edges, evaluated manually with high accuracy. To simulate missing values, we randomly removed values from each sample with a probability of 20%.

We ran the same graph pruning process as in the above experiments during training for both RL+BIC2 and our approach. The experimental results are listed in Table 4. As we can see from the table, our proposed approach obtains significantly better performance than its competitors. Noticeably, the TPR results of all methods are very low on this real dataset; a possible reason is that the ground-truth graph is very sparse, with only 17 edges. Since the SHD considers both false negatives and false positives, the lower SHD of our result demonstrates the advantage of our approach in handling real incomplete data and further indicates its ability to robustly generate causal graphs from incomplete real data.

                FDR     TPR     SHD
GAIN+NOTEARS    0.692   0.353   18
GAIN+RL+BIC2    0.8     0.118   18
Ours            0.417   0.411   13
Table 4: Experimental results on the real dataset with missing values.

6 Conclusion

In this paper, we proposed a causal discovery approach for learning causal graphs from observational data with missing values. In particular, we developed an ad-hoc encoder network for extracting a robust feature representation of the incomplete data, and then integrated the learned representation into a reinforcement learning framework to search for the best causal graph. The proposed approach uses deep neural networks to imitate the human brain's information processing of incomplete data. Experiments on both synthetic and real datasets demonstrate the effectiveness of our approach, which obtains as much as a 43.2% performance gain compared with existing state-of-the-art approaches.

References

  • D. M. Chickering, D. Heckerman, and C. Meek (2004) Large-sample learning of Bayesian networks is NP-hard. Journal of Machine Learning Research 5 (Oct), pp. 1287–1330.
  • D. M. Chickering (2002) Optimal structure identification with greedy search. Journal of Machine Learning Research 3 (Nov), pp. 507–554.
  • C. P. de Campos and Q. Ji (2010) Properties of Bayesian Dirichlet scores to learn Bayesian network structures. In Twenty-Fourth AAAI Conference on Artificial Intelligence.
  • M. Hernán and J. Robins (2020) Causal Inference: What If. Chapman & Hall/CRC, Boca Raton.
  • M. A. Hernan and J. M. Robins (2010) Causal Inference. CRC Press, Boca Raton, FL.
  • P. O. Hoyer, D. Janzing, J. M. Mooij, J. Peters, and B. Schölkopf (2009) Nonlinear causal discovery with additive noise models. In Advances in Neural Information Processing Systems, pp. 689–696.
  • B. Huang, K. Zhang, Y. Lin, B. Schölkopf, and C. Glymour (2018) Generalized score functions for causal discovery. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1551–1560.
  • J. Josse, J. P. Reiter, et al. (2018) Introduction to the special section on missing data. Statistical Science 33 (2), pp. 139–141.
  • M. Kalisch and P. Bühlmann (2014) Causal structure learning and inference: a selective review. Quality Technology & Quantitative Management 11 (1), pp. 3–21.
  • V. R. Konda and J. N. Tsitsiklis (2000) Actor-critic algorithms. In Advances in Neural Information Processing Systems, pp. 1008–1014.
  • S. Lachapelle, P. Brouillard, T. Deleu, and S. Lacoste-Julien (2019) Gradient-based neural DAG learning. arXiv preprint arXiv:1906.02226.
  • J. Li and Z. J. Wang (2009) Controlling the false discovery rate of the association/causality structure learned with the PC algorithm. Journal of Machine Learning Research 10 (Feb), pp. 475–514.
  • Q. Li, Z. Han, and X. Wu (2018) Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI Conference on Artificial Intelligence.
  • I. Mayer, J. Josse, F. Raimundo, and J. Vert (2020) MissDeepCausal: causal inference from incomplete data using deep latent variable models. arXiv preprint arXiv:2002.10837.
  • P. Nandy, A. Hauser, M. H. Maathuis, et al. (2018) High-dimensional consistency in score-based and hybrid structure learning. The Annals of Statistics 46 (6A), pp. 3151–3183.
  • J. Pearl (2009) Causality. Cambridge University Press.
  • J. Pearl (2010) An introduction to causal inference. The International Journal of Biostatistics 6 (2).
  • J. Peters and P. Bühlmann (2015) Structural intervention distance for evaluating causal graphs. Neural Computation 27 (3), pp. 771–799.
  • J. Peters, D. Janzing, and B. Schölkopf (2017) Elements of Causal Inference: Foundations and Learning Algorithms. MIT Press.
  • J. Peters, J. M. Mooij, D. Janzing, and B. Schölkopf (2014) Causal discovery with continuous additive noise models. The Journal of Machine Learning Research 15 (1), pp. 2009–2053.
  • K. Sachs, O. Perez, D. Pe'er, D. A. Lauffenburger, and G. P. Nolan (2005) Causal protein-signaling networks derived from multiparameter single-cell data. Science 308 (5721), pp. 523–529.
  • Y. Sakamoto, M. Ishiguro, and G. Kitagawa (1986) Akaike Information Criterion Statistics. D. Reidel, Dordrecht, The Netherlands.
  • G. Schwarz et al. (1978) Estimating the dimension of a model. The Annals of Statistics 6 (2), pp. 461–464.
  • S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. Kerminen (2006) A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research 7 (Oct), pp. 2003–2030.
  • P. Spirtes, C. N. Glymour, R. Scheines, and D. Heckerman (2000) Causation, Prediction, and Search. MIT Press.
  • S. Triantafillou and I. Tsamardinos (2016) Score-based vs constraint-based causal learning in the presence of confounders. In CFA@UAI, pp. 59–67.
  • I. Tsamardinos, L. E. Brown, and C. F. Aliferis (2006) The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning 65 (1), pp. 31–78.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
  • J. Yoon, J. Jordon, and M. Van Der Schaar (2018) GAIN: missing data imputation using generative adversarial nets. arXiv preprint arXiv:1806.02920.
  • Y. Yu, J. Chen, T. Gao, and M. Yu (2019) DAG-GNN: DAG structure learning with graph neural networks. arXiv preprint arXiv:1904.10098.
  • J. Zhang (2008) On the completeness of orientation rules for causal discovery in the presence of latent confounders and selection bias. Artificial Intelligence 172 (16-17), pp. 1873–1896.
  • X. Zheng, B. Aragam, P. K. Ravikumar, and E. P. Xing (2018) DAGs with NO TEARS: continuous optimization for structure learning. In Advances in Neural Information Processing Systems, pp. 9472–9483.
  • S. Zhu, I. Ng, and Z. Chen (2020) Causal discovery with reinforcement learning. In International Conference on Learning Representations.