Causal Discovery in Physical Systems from Videos

Causal discovery is at the core of human cognition. It enables us to reason about the environment and make counterfactual predictions about unseen scenarios that can vastly differ from our previous experiences. We consider the task of causal discovery from videos in an end-to-end fashion without supervision on the ground-truth graph structure. In particular, our goal is to discover the structural dependencies among environmental and object variables: inferring the type and strength of interactions that have a causal effect on the behavior of the dynamical system. Our model consists of (a) a perception module that extracts a semantically meaningful and temporally consistent keypoint representation from images, (b) an inference module for determining the graph distribution induced by the detected keypoints, and (c) a dynamics module that can predict the future by conditioning on the inferred graph. We assume access to different configurations and environmental conditions, i.e., data from unknown interventions on the underlying system; thus, we can hope to discover the correct underlying causal graph without explicit interventions. We evaluate our method in a planar multi-body interaction environment and in scenarios involving fabrics of different shapes, like shirts and pants. Experiments demonstrate that our model can correctly identify the interactions from a short sequence of images and make long-term future predictions. The causal structure assumed by the model also allows it to make counterfactual predictions and extrapolate to systems with unseen interaction graphs or graphs of different sizes.


1 Introduction

Causal understanding of the world around us is part of the bedrock of intelligence. This ability enables counterfactual reasoning, which often distinguishes algorithmic models from intelligent behavior in humans. The ability to discover latent causal mechanisms from data thus poses an important technical question towards building intelligent and interactive systems Spirtes et al. (2000); Peters et al. (2017); Glymour et al. (2019). For instance, Figure 1 shows an example of a multi-body system. While the images may convey the identity and position of the balls, the structural causal mechanism is latent. Each pair of balls may be connected by an edge (say, a spring or a rigid rod) or left unconnected. Further, each edge may have a set of hidden confounders, like the rest length of a spring or a rigid rod, that causally affect the physical interaction behavior. The underlying causal structure and governing functional mechanism may not be apparent if observations, such as images, are implicit measurements of the ground-truth variables Zhang et al. (2017). Furthermore, they can also vary across different configurations and scenarios within a domain. Hence, we need few-shot causal discovery algorithms that operate purely on image data.

Figure 1: Causal discovery in physical systems from videos. The left figure shows balls, connected by invisible physical relations (shown in grey), moving around. Hidden confounding variables like edge type and edge parameters have a causal effect on the behavior of the underlying system. We humans can observe balls, infer the existence and variables on the edges between the balls, and predict the future. Similarly, in the cloth environment shown on the right, we can find a reduced-order representation by placing temporally consistent keypoints on the images and determine the causal relationships between them to reflect the topology of the cloth.

In the special case where the entities are all disconnected and the only interactions are of collision type, a number of recent models employ an object-centric formulation to directly predict the future from images Watters et al. (2017); Janner et al. (2019); Kipf et al. (2018); Santoro et al. (2017). In such cases, given these solutions, explicit model discovery may not even be necessary. However, these associative models crumble in the face of more complex stationary underlying generative structures, such as different types of latent edges and edge mechanisms Gong et al. (2017). Moreover, they are insufficient to capture novel generative structures and make counterfactual predictions at test time.

In this work, we aim to discover the structural causal model (SCM) to predict the future and reason over counterfactuals. To recover an SCM only from images, we need to first learn a compact state representation, then infer a causal graph among these variables as well as identify hidden confounders, and finally learn the functional mechanism of the dynamics. This is a particularly challenging task in that we only have images and do not have explicit knowledge of the node variables. Furthermore, we assume access to neither the ground-truth causal graph, nor the hidden confounders, nor the dynamics that characterize the effect of the physical interactions. To tackle this end-to-end causal discovery problem in an unsupervised manner, we learn from datasets that contain episodes generated from different causal graphs but with a shared dynamics model.

Summary of results. The main contributions of this work lie in the one-shot discovery of unseen causal mechanisms in new environments from partially observed visual data in a continuous state space. This entails jointly performing model class estimation and parameter inference, and thereby building a predictive model for new latent structures at test time in a meta-learning framework.

The proposed Visual Causal Discovery Network (V-CDN) consists of three modules for visual perception, structure inference, and dynamics prediction (Figure 2). Specifically, we train a perception module that extracts unsupervised keypoints from the images to enable node discovery, building upon Kulkarni et al. (2019). The inference module then takes the predicted keypoints and infers the exogenous variables that govern the interactions between each pair of keypoints using graph neural networks. Conditioned on the inferred graph, the dynamics module learns to predict the future movements of the keypoints. We consider a variety of configurations and scenarios, which gives us different combinations of variables. Thus, we can hope to discover the correct underlying causal graph without explicit interventions.

Experiments show that our proposed model is robust to input noise and works well on multi-body interactions with varying degrees of complexity. Notably, our method can facilitate counterfactual predictions and extrapolate to cases with a variable number of objects and scenarios where the underlying interaction graphs are never seen before. Experiments in a fabric environment also demonstrate the generalization ability of our method, where the same model can handle fabrics of different types and shapes, accurately identifying the dependency structure and modeling the underlying dynamics even when state variables are a reduced-order keypoint-based representation of the original system.

2 Visual Causal Discovery in Physical Systems: V-CDN

In this section, we present the details of our model, which extracts structured representations from videos, discovers the causal relationships, infers the hidden confounding variables on the directed edges, and then predicts the future. Our model learns directly from raw videos and recovers the underlying causal graph without any ground-truth supervision.

Problem formulation.

We consider a dataset of trajectories observed from a latent generative dynamical system, where each datapoint is generated with unknown interventions on both the underlying causal graph structure and the parameters affecting the mechanism. The generative process of each episode follows a causal summary graph Peters et al. (2017), $\mathcal{G} = \langle \mathcal{V}, \mathcal{E} \rangle$, where $\mathcal{V}$ contains the subcomponents underlying the system at different time steps and $\mathcal{E}$, which we assume is invariant over time, denotes the causal relationships between the constituting components. Specifically, for each directed edge in $\mathcal{E}$, there are both discrete and continuous hidden confounders denoting the type and the parameters of the relationship, which determine the computation of the underlying structural causal model (SCM) Pearl (2009) and affect the behavior of the dynamical system. We further assume that the dynamical system contains no instantaneous edges or edges that go back in time. Note that the causal summary graph may contain cycles, but when unrolled over time, the derived full time graph is a directed acyclic graph (DAG), as shown in Figure 2.

In this work, we consider the case where we only have access to data in the form of image sequences, $\{I^n_t\}$, without any knowledge of the ground-truth causal model or the intervention being applied, where $I^n_t \in \mathbb{R}^{H \times W \times C}$ is the image observed at time $t$ of episode $n$. The goal is to perform one-shot recovery of the causal summary graph from a short sequence of images and simultaneously learn a shared dynamics model that operates on the identified graph to make counterfactual predictions into the future. This is a particularly challenging task, and our method serves as a first step towards tackling this problem in an end-to-end fashion using an unsupervised intermediate keypoint representation.

Figure 2: Model overview. Visual Causal Discovery Network (V-CDN) consists of three components: (a) a perception module to process the images and extract unsupervised keypoints as the state representation, (b) an inference module that observes the movements of the keypoints and determines the existence of the causal relations as well as the associated hidden confounders, and (c) a dynamics module that predicts the future by conditioning on the current state and the inferred causal summary graph.

Overview of Visual Causal Discovery Network (V-CDN).

We aim to find a temporally consistent (and possibly reduced-order) keypoint-based representation of the images using a perception module trained in an unsupervised way,

$V_t = f_\theta(I_t),$   (1)

where the function $f_\theta$, parameterized by $\theta$, takes raw images as input and outputs a set of keypoints in 2-D coordinates, $V_t = \{v^i_t \in \mathbb{R}^2\}_{i=1}^{N}$, that reflect the constituting components of the system. Then, we use an inference module, $g_\phi$, parameterized by $\phi$, that takes the sequence of detected keypoints as input and predicts the edge set, $\mathcal{E}$,

$\mathcal{E} = g_\phi(V_{1:T}),$   (2)

where $\mathcal{E} = \{e_{ij}\}$. Each $e_{ij}$ includes $d_{ij}$ and $c_{ij}$, denoting the latent discrete and continuous confounders associated with the directed edge from $v^j$ to $v^i$ in a given episode. The keypoints $\mathcal{V}$ and the edge set $\mathcal{E}$ together constitute our discovered causal summary graph $\mathcal{G}$, conditioned on which a dynamics module, $h_\psi$, parameterized by $\psi$, aims to predict the state of the keypoints at time $t+1$,

$\hat{V}_{t+1} = h_\psi(V_{1:t}, \mathcal{G}).$   (3)

By iteratively applying $h_\psi$, we are able to make long-term future predictions.

The perception module, $f_\theta$, the inference module, $g_\phi$, and the dynamics module, $h_\psi$, are shared among all episodes in the dataset, which consists of various causal graphs with different discrete and continuous hidden confounders. This sharing enables one-shot adaptation to an unseen graph at test time and counterfactual prediction by intervening on the identified graph and rolling out into the future using the dynamics module.
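To make the composition of the three modules concrete, the sketch below shows how the perception, inference, and dynamics modules could be chained for a rollout; the module handles, tensor shapes, and the rollout loop are illustrative assumptions rather than the exact released implementation.

```python
# Minimal sketch of the V-CDN composition; f_theta, g_phi, and h_psi are stand-ins
# for the perception, inference, and dynamics modules described above.
import torch

def rollout(f_theta, g_phi, h_psi, images, horizon):
    """images: [T, C, H, W] frames from one episode; returns predicted keypoints."""
    # (a) perception: detect N keypoints per frame -> [T, N, 2]
    keypoints = torch.stack([f_theta(img.unsqueeze(0)).squeeze(0) for img in images])

    # (b) inference: one-shot estimate of the causal summary graph from the window
    edge_type, edge_param = g_phi(keypoints)             # e.g. [N, N, K], [N, N, D]

    # (c) dynamics: condition on the inferred graph and roll out into the future
    history, predictions = keypoints, []
    for _ in range(horizon):
        next_kp = h_psi(history, edge_type, edge_param)  # [N, 2]
        predictions.append(next_kp)
        history = torch.cat([history, next_kp.unsqueeze(0)], dim=0)
    return torch.stack(predictions)
```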

To train the system, we take an unsupervised keypoint detection algorithm Kulkarni et al. (2019) as our perception module and train it on the image set, $\{I^n_t\}$, to extract temporally consistent keypoints. The inference module and the dynamics module are trained together by minimizing the following objective:

$\min_{\phi, \psi} \; \mathcal{L}_{\text{pred}}\big(\hat{V}, V\big) + \lambda\, R(\mathcal{E}),$   (4)

where $\mathcal{L}_{\text{pred}}$ measures the error of the predicted keypoints and $R(\mathcal{E})$ is a regularizer imposed on the identified graph, e.g., to encourage sparsity.

2.1 Unsupervised keypoint detection from videos

The perception module’s task is to transform the images into a keypoint representation in an unsupervised way. In this work, we leverage the technique developed in Kulkarni et al. (2019). In particular, we use a reconstruction loss over the pixels to encourage the keypoints to disperse over the foreground of the image. During training, the module takes in a source image $I_{\text{src}}$ and a target image $I_{\text{tgt}}$ sampled from the same episode, and passes them through a feature extractor $\Phi$ and a keypoint detector $\Psi$. The method then uses an operation called transport to construct a new feature map, $\hat{\Phi}(I_{\text{src}}, I_{\text{tgt}})$, using a set of local features indicated by the detected keypoints. A refiner network takes in the feature map and generates the reconstruction, $\hat{I}_{\text{tgt}}$. The module optimizes the parameters of the feature extractor, keypoint detector, and refiner by minimizing a pixel-wise loss, $\mathcal{L}_{\text{rec}} = \| \hat{I}_{\text{tgt}} - I_{\text{tgt}} \|_2^2$, using stochastic gradient descent.

By combining the keypoint-based bottleneck layer and the downstream reconstruction task, the model extracts temporally consistent keypoints spread over the foreground of the images. We denote the detected keypoints at time $t$ as $V_t = \{v^i_t\}_{i=1}^{N}$, where $v^i_t \in \mathbb{R}^2$.
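As a rough sketch of this transporter-style objective (the feature extractor, keypoint detector, heatmap renderer, and refiner below are placeholder callables standing in for the networks of Kulkarni et al. (2019), not their exact architectures):

```python
import torch.nn.functional as F

def transport(phi_src, phi_tgt, heat_src, heat_tgt):
    # Suppress source features around both keypoint sets, then paste the target
    # features at the target keypoints (the "transport" operation).
    return (1 - heat_src) * (1 - heat_tgt) * phi_src + heat_tgt * phi_tgt

def reconstruction_loss(feature_net, keypoint_net, heatmap, refiner, img_src, img_tgt):
    phi_src, phi_tgt = feature_net(img_src), feature_net(img_tgt)  # [B, C, H', W'] features
    heat_src = heatmap(keypoint_net(img_src))                      # [B, 1, H', W'] keypoint masks
    heat_tgt = heatmap(keypoint_net(img_tgt))
    recon = refiner(transport(phi_src, phi_tgt, heat_src, heat_tgt))
    return F.mse_loss(recon, img_tgt)                              # pixel-wise loss
```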

2.2 Graph neural networks as the spatial encoder

We use graph neural networks as a building block to model the interactions between different keypoints and to generate object- and relation-centric embeddings. Both the inference and the dynamics modules use graph neural networks as a submodule to capture the underlying inductive bias.

Specifically, for a set of $N$ keypoints, we construct a directed graph $\mathcal{G}_t = \langle V_t, E_t \rangle$, where the vertices $V_t = \{v^i_t\}$ represent the information on the keypoints and the edges $E_t = \{a_{ij}\}$ represent the directed relations pointing from $v^j_t$ to $v^i_t$, with $a_{ij}$ denoting the associated edge attributes.

We employ a graph neural network with a structure similar to the Interaction Networks (IN) Battaglia et al. (2016) as our spatial encoder, denoted as $\mathrm{GN}$, to generate the embeddings for the objects and the relations: $\{h^i_t, h^{ij}_t\} = \mathrm{GN}(V_t, E_t)$.
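A minimal sketch of such an Interaction-Network-style spatial encoder over a fully connected keypoint graph is shown below; the layer sizes and embedding dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SpatialEncoder(nn.Module):
    """Interaction-Network-style GNN: edge messages from keypoint pairs,
    node embeddings from aggregated incoming messages."""
    def __init__(self, node_dim=2, edge_attr_dim=1, hidden=64):
        super().__init__()
        self.relation_enc = nn.Sequential(
            nn.Linear(2 * node_dim + edge_attr_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.object_enc = nn.Sequential(
            nn.Linear(node_dim + hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def forward(self, v, e_attr):
        # v: [B, N, node_dim] keypoints, e_attr: [B, N, N, edge_attr_dim] edge attributes
        B, N, _ = v.shape
        recv = v.unsqueeze(2).expand(B, N, N, -1)      # receiver i
        send = v.unsqueeze(1).expand(B, N, N, -1)      # sender j
        h_edge = self.relation_enc(torch.cat([recv, send, e_attr], dim=-1))  # [B, N, N, H]
        agg = h_edge.sum(dim=2)                        # aggregate over senders j -> [B, N, H]
        h_node = self.object_enc(torch.cat([v, agg], dim=-1))                # [B, N, H]
        return h_node, h_edge
```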

2.3 Inferring the directed edge set of the Causal Summary Graph

After we obtain the keypoints from the images, we use an inference module to discover the edge set of the causal summary graph and to infer the parameters associated with the directed edges. The inference module takes the detected keypoints over a small time window within the same episode as input and outputs a posterior distribution over the structure of the graph. More specifically, we denote the keypoint sequence as $V_{1:T} = \{V_1, \dots, V_T\}$. Our goal is to predict the distribution of the edge set conditioned on the keypoint sequence, $q_\phi(\mathcal{E} \mid V_{1:T})$, using the parameterized inference function $g_\phi$.

To achieve our goal, we first use a graph neural network, as discussed in Section 2.2, to propagate information spatially within each frame, which gives us node and edge embeddings for each keypoint at each frame. We then aggregate the embeddings over the temporal dimension for each node and edge using a 1-D convolutional neural network. Another graph neural network takes in the temporal aggregations and predicts a discrete distribution over the edge types, where the first edge type denotes “null edge”. Conditioned on a sample from the discrete distribution, the model then predicts the continuous edge parameters. The edge type and edge parameters together constitute the causal summary graph, which determines the existence and the actual mechanism of the interactions between different constituting components.

In particular, we first propagate the information spatially by feeding the keypoints through a graph neural network, $\mathrm{GN}^{\text{enc}}$, which gives us node and edge embeddings at each time step,

$\{h^i_t, h^{ij}_t\} = \mathrm{GN}^{\text{enc}}(V_t, E_t), \quad t = 1, \dots, T,$   (5)

where the edge set, $E_t$, contains an edge between every pair of keypoints, with the edge attributes set to zero. We then aggregate the information over the temporal dimension for each node and edge using 1-D convolutional neural networks (CNNs):

$\bar{h}^i = \mathrm{CNN}_v\big(h^i_{1:T}\big), \quad \bar{h}^{ij} = \mathrm{CNN}_e\big(h^{ij}_{1:T}\big),$   (6)

which allows our model to handle input sequences of variable lengths.
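A small sketch of this temporal aggregation, assuming three convolutional blocks followed by max-pooling over time (the channel sizes are illustrative):

```python
import torch.nn as nn

class TemporalAggregator(nn.Module):
    """1-D CNN over time followed by max-pooling, so sequences of different
    lengths map to a fixed-dimensional embedding per node or edge."""
    def __init__(self, in_dim=64, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU())

    def forward(self, h_seq):
        # h_seq: [B, T, D] per-frame embeddings for one node or edge
        x = self.conv(h_seq.transpose(1, 2))   # [B, D, T] -> [B, hidden, T]
        return x.max(dim=-1).values            # [B, hidden], independent of T
```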

Taking in the aggregated node and edge embeddings, we use another graph neural network, $\mathrm{GN}^{\text{type}}$, that only makes predictions over the edges, to predict the categorical distribution over the edge types:

$q_\phi(d_{ij} \mid V_{1:T}) = \mathrm{softmax}\big(\mathrm{GN}^{\text{type}}(\{\bar{h}^i\}, \{\bar{h}^{ij}\})_{ij}\big),$   (7)

where $d_{ij} \in \{1, \dots, K\}$ denotes the type of the edge from $v^j$ to $v^i$ and $K$ is the number of edge types. The output represents the probabilistic distribution over the type of each edge. When an edge is classified as the first type, i.e., $d_{ij} = 1$, which we denote as “null edge”, it is removed from subsequent computation and no information passes through it. Sampling from this discrete distribution is straightforward, but we cannot backpropagate gradients through the sampling operation. Instead, we employ the Gumbel-Softmax technique Jang et al. (2016); Maddison et al. (2016), a continuous approximation of the discrete distribution, to obtain (biased) gradients, which makes end-to-end training possible.

Conditioned on the inferred edge type $d_{ij}$, we would like to predict the continuous parameters of each remaining edge. For this purpose, we construct another edge set, $E' = \{(i, j) : d_{ij} \neq 1\}$, that excludes the null edges, and use a new graph neural network, $\mathrm{GN}^{\text{param}}$, to predict the continuous parameters:

$c_{ij} = \mathrm{GN}^{\text{param}}\big(\{\bar{h}^i\}, \{\bar{h}^{ij}\}, d_{ij}\big)_{ij}, \quad (i, j) \in E'.$   (8)

We denote the resulting edge set as $\mathcal{E} = \{e_{ij}\}$, where $e_{ij} = (d_{ij}, c_{ij})$, indicating the topology of the causal summary graph together with the type and the continuous parameters of each edge effect. The inferred causal summary graph is then represented as $\mathcal{G} = \langle \mathcal{V}, \mathcal{E} \rangle$.
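The sketch below illustrates the edge-typing step: a Gumbel-Softmax sample over the edge-type logits, masking of the null edges, and prediction of the continuous parameters for the remaining edges. The tensor shapes, temperature, and the `param_head` network are assumptions.

```python
import torch
import torch.nn.functional as F

def infer_edges(type_logits, param_head, edge_emb, tau=0.5):
    """type_logits: [B, N, N, K] edge-type logits; edge_emb: [B, N, N, H] aggregated
    edge embeddings; param_head: a small network mapping [.., H + K] -> [.., D]."""
    # Differentiable (straight-through) sample of a one-hot type per directed edge.
    d = F.gumbel_softmax(type_logits, tau=tau, hard=True, dim=-1)     # [B, N, N, K]

    # Type 0 is the "null edge": mask it so no information flows through it later.
    active = 1.0 - d[..., :1]                                         # [B, N, N, 1]

    # Continuous confounders, predicted only for the remaining (non-null) edges.
    c = param_head(torch.cat([edge_emb, d], dim=-1)) * active         # [B, N, N, D]
    return d, c, active
```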

2.4 Future prediction using the forward dynamics module

The dynamics module, $h_\psi$, predicts the future movements of the keypoints by conditioning on the current state and the inferred causal graph, $\hat{V}_{t+1} = h_\psi(V_{1:t}, \mathcal{G})$, where we instantiate $h_\psi$ as a graph recurrent network.

Since we are directly operating on the predicted keypoints from the perception module, the detected keypoints contain noise and introduce uncertainty about the actual locations. Hence, in practice, we represent the keypoint positions at future steps using a multivariate Gaussian distribution, where we predict both the mean and the covariance matrix of the next state for each keypoint.
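One simple way to realize such a probabilistic prediction head, assuming a per-keypoint diagonal covariance (the paper’s parameterization may differ), is:

```python
import torch
import torch.nn as nn

class GaussianKeypointHead(nn.Module):
    """Maps a per-keypoint hidden state to the mean and (diagonal) covariance
    of the next 2-D keypoint position."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mean = nn.Linear(hidden, 2)
        self.log_var = nn.Linear(hidden, 2)

    def forward(self, h):
        return self.mean(h), self.log_var(h)

def gaussian_nll(mean, log_var, target):
    # Negative log-likelihood of the observed next keypoints under the prediction.
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).sum(dim=-1).mean()
```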

2.5 Optimizing the model

The perception module is trained independently using the reconstruction loss, $\mathcal{L}_{\text{rec}}$. To train the inference module and the dynamics module jointly, we instantiate the objective function shown in Equation 4 by making an analogy to the ELBO objective Kingma and Welling (2013):

$\mathcal{J}(\phi, \psi) = \mathbb{E}_{q_\phi(\mathcal{E} \mid V_{1:T})}\Big[\textstyle\sum_t \log p_\psi(V_{t+1} \mid V_{1:t}, \mathcal{E})\Big] - D_{\mathrm{KL}}\big(q_\phi(\mathcal{E} \mid V_{1:T}) \,\|\, p(\mathcal{E})\big).$   (9)

For the prior, $p(\mathcal{E})$, we assume that each edge is independent and use a factorized distribution over the edge types, $p(\mathcal{E}) = \prod_{i \neq j} p(d_{ij})$. The inference module and the dynamics module are then trained end-to-end using stochastic gradient descent to maximize the objective $\mathcal{J}$.
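A sketch of how the two terms of this objective could be combined in practice, assuming a Gaussian likelihood over the future keypoints and a uniform categorical prior over the edge types (both the uniform prior and the weighting term are assumptions):

```python
import math
import torch.nn.functional as F

def training_objective(pred_mean, pred_log_var, target_kp, type_logits, num_types, beta=1.0):
    """Negative ELBO sketch: Gaussian NLL of the future keypoints plus the KL between
    the inferred edge-type posterior and a factorized uniform prior."""
    nll = 0.5 * (pred_log_var + (target_kp - pred_mean) ** 2 / pred_log_var.exp()).sum(-1).mean()

    q = F.softmax(type_logits, dim=-1)          # posterior over edge types, [B, N, N, K]
    log_q = F.log_softmax(type_logits, dim=-1)
    kl = (q * (log_q + math.log(num_types))).sum(-1).mean()   # KL(q || uniform)

    return nll + beta * kl   # minimizing this corresponds to maximizing the ELBO
```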

3 Experiments

The goal of our experimental evaluation is to answer the following questions: (1) Can the model perform one-shot discovery of the causal summary graph and identify the hidden confounders, including both the discrete and continuous variables? (2) How well can the model extrapolate to graphs of different sizes that are not seen during training? (3) How well can the learned model facilitate counterfactual prediction via intervening on the identified summary graph?

Environment.

We study our model in two environments: one includes masses, connected by invisible physical constraints, moving around in a 2-D plane; the other contains fabrics of various shapes, to which we apply forces to deform and move them over time (Figure 3).


  • Multi-Body Interaction. There are balls of different colors moving around. At the beginning of each episode, we sample the invisible physical relations between each pair of balls independently, giving us the ground-truth causal summary graph that is fixed throughout the episode. For each pair of balls, there is a one-third probability each that they are unconnected, linked by a rigid rod, or linked by a spring. We also sample the continuous parameters for each existing edge and fix them within the episode, e.g., the length of the rigid relation or the rest length of the spring.

  • Fabric Manipulation. We set up fabrics of three different types: a shirt, pants, and a towel, where we also vary the shape of the fabrics like the length of the pant leg or the height and width of the towel (Figure 5). We also apply forces on the contour of the fabric to deform and move it around. Our goal is to produce one single model that can handle fabrics of different types and shapes, instead of training separate models for each one of them.

Results on unsupervised keypoint detection.

Figure 3: Unsupervised keypoint detection. The first row shows the input images, and the second row shows an overlay between the predicted keypoints and the image. The perception module assigns keypoints over the foreground of the images and consistently tracks the objects over time across different frames.

We employ the same architecture and training procedure described in Kulkarni et al. (2019) to train our perception module, $f_\theta$. Figure 3 shows some qualitative results. Our perception module spreads the keypoints over the foreground of the image and consistently tracks the objects. Please refer to our supplementary materials for video illustrations.

Discovering the Causal Summary Graph and the hidden confounders.

Figure 4: Results on discovering the Causal Summary Graph. Shown in (a) and (b), the accuracy of edge-type classification increases as the inference module observes more frames, which also effectively decreases the uncertainty, calculated as the entropy of the predicted distribution. As exhibited in (c) and (d), there is a strong correlation between the inferred continuous variable and the ground truth hidden confounder.
Figure 5: Qualitative results on predicting the Causal Summary Graph and the future. Our inference module observes a short sequence of images and performs one-shot discovery of the causal summary graph, which recovers the ground truth graph in the Multi-Body environment and captures the underlying connectivity structures in the Cloth environment. The unfilled circles in the right four columns indicate the model’s prediction into the future. We overlay the predicted future keypoints with the truth future for comparison.

The inference module, $g_\phi$, takes in a short sequence of the detected keypoints, aims to discover whether there is a causal relation, i.e., a physical connection, between each pair of keypoints, and identifies the hidden confounders like the edge type and the edge parameters. The dynamics module, $h_\psi$, then conditions on the predicted graph for future prediction. The optimization procedure does not require any supervision on the attributes associated with the edges, which allows us to infer the hidden confounders in an unsupervised way.

In the Multi-Body environment, the perception module accurately tracks the locations of the balls, which allows us to perform a systematic evaluation of the model’s performance by comparing its prediction with the ground-truth causal summary graph used to generate the episode. Because we are working in an unsupervised regime, where the predicted edge type is in a discrete latent space distinguishing between null edge, spring, and rigid relation, we need to find a global one-to-one mapping between the predicted types, $d_{ij}$, and the ground truth. We pick the mapping that gives us the highest accuracy, with the constraint that the first type, through which no information passes in the subsequent dynamics prediction, always corresponds to the null edge. After the mapping, we evaluate the model’s ability to predict the continuous confounder, $c_{ij}$, by computing its correlation with the ground-truth physical parameters, like the rest length of a spring connection.
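Concretely, this global mapping can be found by enumerating the label permutations that keep the null type fixed and selecting the one with the highest classification accuracy; a small sketch (assuming three edge types) follows.

```python
import itertools
import numpy as np

def best_type_mapping(pred_types, true_types, num_types=3):
    """pred_types, true_types: integer numpy arrays over edges.  The predicted type 0
    (the edge with no message passing) is pinned to the null edge; the remaining
    labels are permuted to maximize classification accuracy."""
    best_acc, best_map = -1.0, None
    for perm in itertools.permutations(range(1, num_types)):
        mapping = np.array([0] + list(perm))          # predicted label -> ground-truth label
        acc = (mapping[pred_types] == true_types).mean()
        if acc > best_acc:
            best_acc, best_map = acc, mapping
    return best_map, best_acc
```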

The results are shown in Figure 4. As the model observes more frames, the classification accuracy increases and the uncertainty decreases, which matches our intuition that as we obtain more observations from the environment, we get a better estimate of the exogenous variables that govern the behavior of the system. We also compare with a baseline that is the same as our method except that it does not have the inference module. Our model significantly outperforms the baseline, indicating the importance of correctly modeling the causal mechanism (Figure 6 (d)). Figure 5 shows some qualitative results, where we include side-by-side comparisons between the identified causal summary graph and the ground truth.

For the cloth environment, the keypoints on the fabrics act as a reduced-order representation of the original system, where we do not know the ground truth causal summary graph. As shown in Figure 5, the same inference module produces different causal graphs for different types of fabrics that reflect the underlying connectivity patterns, which illustrates the model’s ability to recognize the underlying dependency structure.

Extrapolation to unseen causal graphs of different sizes.

Figure 6: Results on extrapolating to unseen graphs of different sizes. Our inference module and dynamics module are trained only in environments containing a fixed number of masses. Thanks to the inductive bias captured by the graph neural networks in our model, it automatically generalizes to scenarios with different numbers of masses than seen during training. The blue bars in the figures show the performance on a test set from the same distribution we trained on, and the orange bars illustrate the results on extrapolation. Surprisingly, the model performs better in environments with 3 and 4 balls, even though it has never seen them before.

To evaluate our model’s performance on extrapolation, we also create another 4 test sets in the Multi-Body environment, containing 3, 4, 6, and 7 masses, respectively, for which we need to train separate perception modules to reflect the number of moving components. However, the inference module and the dynamics module do not require retraining; instead, they directly generalize to systems with different numbers of balls. As shown in Figure 6, the blue bar shows the performance on the test set that has the same number of balls as the training set, while the other bars illustrate the model’s ability to perform extrapolation. Interestingly, for environments with fewer balls, e.g., 3 or 4 balls, the performance is even better, even though the model is not directly trained on these scenarios.

Counterfactual prediction and extrapolation on parameter change.

Figure 7: Results on counterfactual prediction. We make counterfactual predictions by intervening on the identified causal summary graph and evaluate the performance by comparing the predicted future with that of the original simulator undergoing the same intervention. The modeling of the causal mechanism allows the model to extrapolate to parameter ranges outside the training distribution.

In our experiment, we make counterfactual predictions by intervening on the estimated hidden confounders and evaluate how well the model predicts the future by making the same intervention on the ground truth simulator. The estimated confounders are in the latent space, which requires a mapping function to get the corresponding parameters in the original simulator. We use the same mapping as described in Section 3, and train a simple linear regressor for transforming the continuous variable. Figure 7 shows the performance on counterfactual predictions, which illustrates our model’s ability to answer “what if” questions and extrapolate to parameter ranges that are outside the training distribution.
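Operationally, a counterfactual query amounts to editing the inferred confounders and re-running the learned dynamics; the sketch below illustrates this with an intervention on a single edge’s continuous parameter (the function signatures are illustrative).

```python
import torch

def counterfactual_rollout(h_psi, keypoints, edge_type, edge_param,
                           edge_index, new_param, horizon):
    """Intervene on one inferred edge's continuous confounder, then roll out the
    learned dynamics; the same intervention is applied to the ground-truth
    simulator for evaluation."""
    i, j = edge_index
    edited = edge_param.clone()
    edited[i, j] = new_param                       # do(c_ij := new_param)

    history, preds = keypoints, []
    for _ in range(horizon):
        nxt = h_psi(history, edge_type, edited)    # [N, 2]
        preds.append(nxt)
        history = torch.cat([history, nxt.unsqueeze(0)], dim=0)
    return torch.stack(preds)
```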

4 Related Work

Causal Discovery.

Methods for causal inference from observations can broadly be categorized into three classes. Constraint-based methods (such as PC and FCI) rely on conditional independence tests, posed as constraint satisfaction, to recover Markov-equivalent graphs Entner and Hoyer (2010); Spirtes et al. (2000); Colombo et al. (2011). Score-based methods (such as GES) assign a score to each DAG and search in this score space Chickering (2002); Zheng et al. (2018). The third class of methods exploits asymmetries, or causal footprints, to uniquely identify a DAG Shimizu (2014); Kalainathan et al. (2018); Goudet et al. (2017); Zhang and Hyvärinen (2009). Further, causal discovery from a combination of observational and interventional data has been studied in the literature Hyttinen et al. (2013); Ghassami et al. (2018); Kocaoglu et al. (2017); Wang et al. (2017); Shanmugam et al. (2015); Peters et al. (2016); Rothenhäusler et al. (2015). Many of these approaches either assume full knowledge of the intervention, make strong assumptions about the model class, or have scalability limitations.

Relational Neural Models.

Several works have attempted to model multi-body dynamics with graphs Santoro et al. (2017); Battaglia et al. (2018, 2016) and attention Goyal et al. (2019); Vaswani et al. (2017). However, these methods assume the latent generative causal graph is stationary, resulting in poor generalization to variations in either the graph structure or its functional parameters. A few recent works Alet et al. (2019); Kipf et al. (2018) have tried to infer the relationships between different entities in the system, but they do not work from image data and do not discover the causal structure.

Dynamics from Videos.

Video modeling and prediction have received much attention recently Ye et al. (2019); Hsieh et al. (2018); Kumar et al. (2019); Yi et al. (2020). The idea of learned latent-space embeddings for unsupervised loss computation has also enjoyed recent success in prediction Watter et al. (2015); Hafner et al. (2019); Li et al. (2019b, 2020a); Hafner et al. (2020). However, the latent space may not be interpretable, and the overall model may not generalize. In contrast, keypoints (or particles) provide succinct and generalizable representations across a variety of use cases: particle representations Macklin et al. (2014); Mrowca et al. (2018); Li et al. (2019a); Ummenhofer et al. (2020); Sanchez-Gonzalez et al. (2020); Li et al. (2020b), deformable object modeling Jakab et al. (2018); Suwajanakorn et al. (2018), and instance-independent class templates Manuelli et al. (2019). However, providing domain-specific labeled data can be tedious, hence unsupervised keypoint learning methods using reconstruction or view consistency as the loss have broader appeal Dundar et al. (2020); Kulkarni et al. (2019).

This paper builds on ideas from unsupervised visual representation learning and leverages it for visual causal discovery wherein the underlying model components use relational modeling to output a Causal Summary Graph, which has not been achieved in prior work for complex video datasets.

5 Conclusion

Our method extracts a structured keypoint-based representation from videos, understands the causal relationships between the different constituting components, and makes predictions into the future. The model assumes access to neither the ground-truth causal graph, nor the hidden confounders, nor the dynamics that describe the effects of the physical interactions; instead, we learn to discover the dependency structures and model the causal mechanisms end-to-end from images in an unsupervised way, which we hope can facilitate future studies of more generalizable visual reasoning systems.

Broader Impact

Causal reasoning is the process of identifying causality: the relationship between a cause and its effect, which is at the core of human intelligence. Learning directly from observations without modeling the underlying causal structure can lead to the emergence of incorrect associations between the input and the output. The learned model can overfit to the biases associated with the dataset, limiting its ability to generalize outside the training distribution and often leading to catastrophic outcomes when deployed in the real world.

Discovering the causal relationships typically requires learning from data collected in randomized controlled trials or A/B tests where the experimenter controls certain variables of interest. However, carrying out the intervention or randomized trials may be impossible or at least impractical or unethical in many situations.

This work aims at discovering the causal structure and modeling the underlying causal mechanism from visual inputs, where we have access to data from different configurations and scenarios under unknown interventions on both the structure of the causal graph and its parameters. The ability to accurately capture the dependency structures and identify the hidden confounders is of vital importance for helping the learned models generalize. As we discussed in our experiments, causal modeling improved generalization both to data outside the training distribution and towards counterfactual prediction.

While we are excited about these results, it is important to acknowledge that this is a particularly challenging task, and our method serves as an initial step towards the broader goal of building physically grounded visual intelligence. We mainly focused on the modeling of the dynamical system, while some aspects of the causal graph, such as sophisticated dependencies and practical issues arising from sampling rates, are not touched upon. Nonetheless, we hope to draw people’s attention to this grand challenge and inspire future research on generalizable, physically grounded reasoning from visual inputs without domain-specific feature engineering.

References

  • F. Alet, E. Weng, T. Lozano-Pérez, and L. P. Kaelbling (2019) Neural relational inference with fast modular meta-learning. In Advances in Neural Information Processing Systems, pp. 11804–11815. Cited by: §4.
  • P. Battaglia, R. Pascanu, M. Lai, D. J. Rezende, et al. (2016) Interaction networks for learning about objects, relations and physics. In Advances in neural information processing systems, Cited by: §A.2, §2.2, §4.
  • P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. (2018) Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261. Cited by: §4.
  • D. M. Chickering (2002) Optimal structure identification with greedy search. Journal of Machine Learning Research 3 (Nov), pp. 507–554. Cited by: §4.
  • D. Colombo, M. H. Maathuis, M. Kalisch, and T. S. Richardson (2011) Learning high-dimensional dags with latent and selection variables. In UAI, pp. 850. Cited by: §4.
  • A. Dundar, K. J. Shih, A. Garg, R. Pottorf, A. Tao, and B. Catanzaro (2020) Unsupervised disentanglement of pose, appearance and background from images and videos. arXiv preprint arXiv:2001.09518. Cited by: §4.
  • D. Entner and P. O. Hoyer (2010) On causal discovery from time series data using fci. Probabilistic graphical models, pp. 121–128. Cited by: §4.
  • A. Ghassami, S. Salehkaleybar, N. Kiyavash, and E. Bareinboim (2018) Budgeted experiment design for causal structure learning. In International Conference on Machine Learning, pp. 1724–1733. Cited by: §4.
  • C. Glymour, K. Zhang, and P. Spirtes (2019) Review of causal discovery methods based on graphical models. Frontiers in Genetics 10. Cited by: §1.
  • M. Gong, K. Zhang, B. Schölkopf, C. Glymour, and D. Tao (2017) Causal discovery from temporally aggregated time series. In Uncertainty in Artificial Intelligence: Proceedings of the Conference on Uncertainty in Artificial Intelligence, Vol. 2017. Cited by: §1.
  • O. Goudet, D. Kalainathan, P. Caillou, I. Guyon, D. Lopez-Paz, and M. Sebag (2017) Causal generative neural networks. arXiv preprint arXiv:1711.08936. Cited by: §4.
  • A. Goyal, A. Lamb, J. Hoffmann, S. Sodhani, S. Levine, Y. Bengio, and B. Schölkopf (2019) Recurrent independent mechanisms. arXiv preprint arXiv:1909.10893. Cited by: §4.
  • D. Hafner, T. Lillicrap, J. Ba, and M. Norouzi (2020) Dream to control: learning behaviors by latent imagination. In International Conference on Learning Representations, Cited by: §4.
  • D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson (2019) Learning latent dynamics for planning from pixels. In International Conference on Machine Learning, Cited by: §4.
  • J. Hsieh, B. Liu, D. Huang, L. F. Fei-Fei, and J. C. Niebles (2018) Learning to decompose and disentangle representations for video prediction. In Advances in Neural Information Processing Systems, pp. 517–526. Cited by: §4.
  • A. Hyttinen, F. Eberhardt, and P. O. Hoyer (2013) Experiment selection for causal discovery. The Journal of Machine Learning Research 14 (1), pp. 3041–3071. Cited by: §4.
  • T. Jakab, A. Gupta, H. Bilen, and A. Vedaldi (2018) Unsupervised learning of object landmarks through conditional image generation. In Advances in Neural Information Processing Systems, pp. 4016–4027. Cited by: §A.1, §4.
  • E. Jang, S. Gu, and B. Poole (2016) Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144. Cited by: §2.3.
  • M. Janner, S. Levine, W. T. Freeman, J. B. Tenenbaum, C. Finn, and J. Wu (2019) Reasoning about physical interactions with object-centric models. In International Conference on Learning Representations, Cited by: §1.
  • D. Kalainathan, O. Goudet, I. Guyon, D. Lopez-Paz, and M. Sebag (2018) Sam: structural agnostic model, causal discovery and penalized adversarial learning. arXiv preprint arXiv:1803.04929. Cited by: §4.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §C.1.
  • D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §2.5.
  • T. Kipf, E. Fetaya, K. Wang, M. Welling, and R. Zemel (2018) Neural relational inference for interacting systems. In International Conference on Machine Learning, pp. 2688–2697. Cited by: §1, §4.
  • M. Kocaoglu, K. Shanmugam, and E. Bareinboim (2017) Experimental design for learning causal graphs with latent variables. In Advances in Neural Information Processing Systems, pp. 7018–7028. Cited by: §4.
  • T. D. Kulkarni, A. Gupta, C. Ionescu, S. Borgeaud, M. Reynolds, A. Zisserman, and V. Mnih (2019) Unsupervised learning of object keypoints for perception and control. In Advances in Neural Information Processing Systems, pp. 10723–10733. Cited by: §A.1, §C.1, §1, §2, §2.1, §3, §4.
  • M. Kumar, M. Babaeizadeh, D. Erhan, C. Finn, S. Levine, L. Dinh, and D. Kingma (2019) VideoFlow: a conditional flow-based model for stochastic video generation. arXiv preprint arXiv:1903.01434. Cited by: §4.
  • Y. Li, H. He, J. Wu, D. Katabi, and A. Torralba (2020a) Learning compositional koopman operators for model-based control. In International Conference on Learning Representations, Cited by: §4.
  • Y. Li, T. Lin, K. Yi, D. Bear, D. L.K. Yamins, J. Wu, J. B. Tenenbaum, and A. Torralba (2020b) Visual grounding of learned physical models. In International Conference on Machine Learning, Cited by: §4.
  • Y. Li, J. Wu, R. Tedrake, J. B. Tenenbaum, and A. Torralba (2019a) Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. In ICLR, Cited by: §4.
  • Y. Li, J. Wu, J. Zhu, J. B. Tenenbaum, A. Torralba, and R. Tedrake (2019b) Propagation networks for model-based control under partial observation. In ICRA, Cited by: §A.2, §4.
  • M. Macklin, M. Müller, N. Chentanez, and T. Kim (2014) Unified particle physics for real-time applications. ACM Transactions on Graphics (TOG) 33 (4), pp. 153. Cited by: §B.2, §4.
  • C. J. Maddison, A. Mnih, and Y. W. Teh (2016) The concrete distribution: a continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712. Cited by: §2.3.
  • L. Manuelli, W. Gao, P. Florence, and R. Tedrake (2019) KPAM: keypoint affordances for category-level robotic manipulation. arXiv preprint arXiv:1903.06684. Cited by: §4.
  • D. Mrowca, C. Zhuang, E. Wang, N. Haber, L. F. Fei-Fei, J. Tenenbaum, and D. L. Yamins (2018) Flexible neural representation for physics prediction. In Advances in Neural Information Processing Systems, pp. 8799–8810. Cited by: §4.
  • A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035. Cited by: Appendix C.
  • J. Pearl (2009) Causality. Cambridge university press. Cited by: §2.
  • J. Peters, P. Bühlmann, and N. Meinshausen (2016) Causal inference by using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 78 (5), pp. 947–1012. Cited by: §4.
  • J. Peters, D. Janzing, and B. Schölkopf (2017) Elements of causal inference: foundations and learning algorithms. MIT press. Cited by: §1, §2.
  • D. Rothenhäusler, C. Heinze, J. Peters, and N. Meinshausen (2015) BACKSHIFT: learning causal cyclic graphs from unknown shift interventions. In Advances in Neural Information Processing Systems, pp. 1513–1521. Cited by: §4.
  • A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. W. Battaglia (2020) Learning to simulate complex physics with graph networks. In International Conference on Machine Learning, Cited by: §4.
  • A. Sanchez-Gonzalez, N. Heess, J. T. Springenberg, J. Merel, M. Riedmiller, R. Hadsell, and P. Battaglia (2018) Graph networks as learnable physics engines for inference and control. arXiv preprint arXiv:1806.01242. Cited by: §A.2.
  • A. Santoro, D. Raposo, D. G. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap (2017) A simple neural network module for relational reasoning. In Advances in neural information processing systems, pp. 4967–4976. Cited by: §1, §4.
  • K. Shanmugam, M. Kocaoglu, A. G. Dimakis, and S. Vishwanath (2015) Learning causal graphs with small interventions. In Advances in Neural Information Processing Systems, pp. 3195–3203. Cited by: §4.
  • S. Shimizu (2014) LiNGAM: non-gaussian methods for estimating causal structures. Behaviormetrika 41 (1), pp. 65–98. Cited by: §4.
  • P. Spirtes, C. N. Glymour, R. Scheines, and D. Heckerman (2000) Causation, prediction, and search. MIT press. Cited by: §1, §4.
  • S. Suwajanakorn, N. Snavely, J. J. Tompson, and M. Norouzi (2018) Discovery of latent 3d keypoints via end-to-end geometric reasoning. In Advances in Neural Information Processing Systems, pp. 2059–2070. Cited by: §A.1, §4.
  • B. Ummenhofer, L. Prantl, N. Thuerey, and V. Koltun (2020) Lagrangian fluid simulation with continuous convolutions. In International Conference on Learning Representations, Cited by: §4.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §4.
  • Y. Wang, L. Solus, K. Yang, and C. Uhler (2017) Permutation-based causal inference algorithms with interventions. In Advances in Neural Information Processing Systems, pp. 5822–5831. Cited by: §4.
  • M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller (2015) Embed to control: a locally linear latent dynamics model for control from raw images. In Advances in neural information processing systems, pp. 2746–2754. Cited by: §4.
  • N. Watters, D. Zoran, T. Weber, P. Battaglia, R. Pascanu, and A. Tacchetti (2017) Visual interaction networks: learning a physics simulator from video. In Advances in neural information processing systems, pp. 4539–4547. Cited by: §1.
  • Y. Ye, M. Singh, A. Gupta, and S. Tulsiani (2019) Compositional video prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 10353–10362. Cited by: §4.
  • K. Yi, C. Gan, Y. Li, P. Kohli, J. Wu, A. Torralba, and J. B. Tenenbaum (2020) {clevrer}: collision events for video representation and reasoning. In International Conference on Learning Representations, Cited by: §4.
  • K. Zhang and A. Hyvärinen (2009) On the identifiability of the post-nonlinear causal model. In 25th Conference on Uncertainty in Artificial Intelligence (UAI 2009), pp. 647–655. Cited by: §4.
  • K. Zhang, M. Gong, J. Ramsey, K. Batmanghelich, P. Spirtes, and C. Glymour (2017) Causal discovery in the presence of measurement error: identifiability conditions. arXiv preprint arXiv:1706.03768. Cited by: §1.
  • Y. Zhang, Y. Guo, Y. Jin, Y. Luo, Z. He, and H. Lee (2018) Unsupervised discovery of object landmarks as structural representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2694–2703. Cited by: §A.1.
  • X. Zheng, B. Aragam, P. K. Ravikumar, and E. P. Xing (2018) DAGs with no tears: continuous optimization for structure learning. In Advances in Neural Information Processing Systems, pp. 9472–9483. Cited by: §4.

Appendix A Model details

A.1 Unsupervised keypoint detection from videos

The perception module maps the input images into a set of keypoints in an unsupervised way. Any unsupervised keypoint detection method that can track a component consistently over time should suit our use case, and there have been many recently proposed methods that can serve this purpose Suwajanakorn et al. (2018); Jakab et al. (2018); Zhang et al. (2018). In this work, we use the technique developed in Kulkarni et al. (2019).

As described in Section 2.1 of the main paper, we use a reconstruction loss over the pixels to encourage the keypoints to spread over the foreground of the image. During training, the module takes in a source image $I_{\text{src}}$ and a target image $I_{\text{tgt}}$ sampled from the same episode, and passes them through a feature extractor $\Phi$ and a keypoint detector $\Psi$. The model then uses an operation called transport to construct a new feature map using the set of local features indicated by the detected keypoints:

$\hat{\Phi}(I_{\text{src}}, I_{\text{tgt}}) = \big(1 - \mathcal{H}_{\Psi(I_{\text{src}})}\big) \cdot \big(1 - \mathcal{H}_{\Psi(I_{\text{tgt}})}\big) \cdot \Phi(I_{\text{src}}) + \mathcal{H}_{\Psi(I_{\text{tgt}})} \cdot \Phi(I_{\text{tgt}}),$   (10)

where $\mathcal{H}_{\Psi(\cdot)}$ is a heatmap image containing fixed-variance isotropic Gaussians around each of the $N$ points specified by $\Psi(\cdot)$ (Figure 8). The model then passes the feature map through a refiner network to get the reconstruction, $\hat{I}_{\text{tgt}}$. We optimize the parameters of the feature extractor, keypoint detector, and refiner by minimizing a pixel-wise loss, $\mathcal{L}_{\text{rec}} = \| \hat{I}_{\text{tgt}} - I_{\text{tgt}} \|_2^2$, using stochastic gradient descent.
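A sketch of rendering such fixed-variance isotropic Gaussian heatmaps from normalized keypoint coordinates (the grid resolution, coordinate convention, and variance are assumptions):

```python
import torch

def gaussian_heatmaps(keypoints, height, width, sigma=0.1):
    """keypoints: [B, N, 2] in normalized [-1, 1] coordinates; returns a [B, 1, H, W]
    map with a fixed-variance isotropic Gaussian around each keypoint."""
    ys = torch.linspace(-1.0, 1.0, height).view(1, 1, height, 1)
    xs = torch.linspace(-1.0, 1.0, width).view(1, 1, 1, width)
    kx = keypoints[..., 0].view(*keypoints.shape[:2], 1, 1)
    ky = keypoints[..., 1].view(*keypoints.shape[:2], 1, 1)
    dist2 = (xs - kx) ** 2 + (ys - ky) ** 2              # [B, N, H, W]
    maps = torch.exp(-dist2 / (2 * sigma ** 2))
    return maps.sum(dim=1, keepdim=True).clamp(max=1.0)  # combine the N keypoint maps
```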

A.2 Graph neural networks as the spatial encoder

Graph neural networks act as a building block in our model to capture the interactions between different keypoints and to generate object- and relation-centric embeddings. Here, we describe the specific formulation of the graph neural network used in our inference and dynamics modules.

For a set of $N$ keypoints, we construct a directed graph $\mathcal{G}_t = \langle V_t, E_t \rangle$, where the vertices $V_t = \{v^i_t\}$ represent the information on the keypoints and the edges $E_t = \{a_{ij}\}$ represent the directed relations pointing from $v^j_t$ to $v^i_t$, with $a_{ij}$ being the associated edge attributes.

Our graph neural network employs a structure similar to the Interaction Networks (IN) Battaglia et al. (2016) to generate the embeddings for the objects and the relations:

$h^{ij}_t = f^{\text{enc}}_R\big(v^i_t, v^j_t, a_{ij}\big),$   (11)
$h^i_t = f^{\text{enc}}_O\big(v^i_t, \textstyle\sum_{j \in \mathcal{N}_i} h^{ij}_t\big),$   (12)

where $f^{\text{enc}}_O$ and $f^{\text{enc}}_R$ are the object and relation encoders, respectively, and $\mathcal{N}_i$ denotes the set of edges that point to object $i$. $h^i_t$ and $h^{ij}_t$ are the derived object and relation embeddings, respectively. In practice, we usually propagate the node and edge information over the graph multiple times to improve the expressiveness of the model Sanchez-Gonzalez et al. (2018); Li et al. (2019b).

The graph neural network, denoted as $\mathrm{GN}$, aggregates the spatial information spanned by the keypoints, passes the information along the edges, and outputs embeddings for the nodes and edges, i.e., $\{h^i_t, h^{ij}_t\} = \mathrm{GN}(V_t, E_t)$. Please see the main paper for how we instantiate $\mathrm{GN}$ as a submodule in the inference and the dynamics modules.

Appendix B Environment details

B.1 Multi-Body Interaction

We use the Pymunk simulator to generate fixed-length episodes, among which a subset is reserved for testing and the remainder goes to the training set. At the beginning of each episode, we randomly assign the balls to different positions. For each pair of balls, there is a one-third probability each that they are connected by nothing, a rigid rod, or a spring. The stiffness of the spring relation is fixed, and we randomly sample its rest length within a predefined range. For the rigid relation, we allow the two connected balls to move freely within a small fixed window along their opposing direction, e.g., if the rigid relation has a given length, the distance between the two balls can vary within a small band around that length. This treatment forces the model to infer the length of the rigid relation instead of naively exploiting the distance between the two balls.
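For illustration, the per-episode graph sampling could look like the sketch below; the parameter ranges are placeholder constants, not the values used in our simulator.

```python
import numpy as np

# Hypothetical sketch of sampling a ground-truth causal summary graph per episode;
# REST_LENGTH_RANGE and ROD_LENGTH_RANGE are placeholders, not the actual values.
REST_LENGTH_RANGE = (0.5, 1.5)
ROD_LENGTH_RANGE = (0.5, 1.5)

def sample_summary_graph(num_balls, rng=np.random):
    edges = {}
    for i in range(num_balls):
        for j in range(i + 1, num_balls):
            edge_type = rng.choice(["none", "rigid", "spring"])   # one-third each
            if edge_type == "spring":
                edges[(i, j)] = ("spring", rng.uniform(*REST_LENGTH_RANGE))
            elif edge_type == "rigid":
                edges[(i, j)] = ("rigid", rng.uniform(*ROD_LENGTH_RANGE))
    return edges   # fixed for the whole episode
```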

B.2 Fabric Manipulation

We generate fixed-length episodes and, similar to the Multi-Body environment, reserve a subset of them for testing while using the remainder for training our model. As shown in Figure 8, we build fabrics of three different shapes: a shirt, pants, and a towel, where we also vary the shape of the fabrics, like the length of the pant legs or the height and width of the towel. To deform the fabrics and move them around, we apply forces on the contour of the fabric and employ the NVIDIA FleX simulator to simulate the motion Macklin et al. (2014).

Appendix C Implementation details

Our implementation is based on PyTorch Paszke et al. (2019), and each instance of the model is trained using one NVIDIA TITAN Xp graphics card.

C.1 Unsupervised keypoint detection

We employ a similar encoder-decoder structure as described in Kulkarni et al. (2019). Both the keypoint detector, $\Psi$, and the feature extractor, $\Phi$, have blocks of convolutional layers that reduce the height and width of the image to a quarter of their original size. The output of the keypoint detector has $N$ channels, representing the confidence maps of the keypoints, over which we compute the exact location of each keypoint by calculating the spatial expectation. We use the operation described in Equation 10 to get the transported feature map. The refiner network, consisting of a few transposed convolutional operators, transforms the feature map back to the original size of the target image.
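The spatial expectation that turns the confidence maps into coordinates can be written as a softmax-weighted average over the image grid; a sketch, assuming normalized [-1, 1] coordinates, is below.

```python
import torch
import torch.nn.functional as F

def spatial_expectation(conf_maps):
    """conf_maps: [B, N, H, W] raw keypoint confidence maps; returns [B, N, 2]
    keypoint coordinates as the softmax-weighted expectation over the grid."""
    B, N, H, W = conf_maps.shape
    probs = F.softmax(conf_maps.view(B, N, -1), dim=-1).view(B, N, H, W)
    ys = torch.linspace(-1.0, 1.0, H).view(1, 1, H, 1)
    xs = torch.linspace(-1.0, 1.0, W).view(1, 1, 1, W)
    x = (probs * xs).sum(dim=(-2, -1))
    y = (probs * ys).sum(dim=(-2, -1))
    return torch.stack([x, y], dim=-1)
```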

We optimize using the Adam optimizer Kingma and Ba (2014) with a fixed learning rate.

C.2 Predicting the directed edge set using the inference module

We use simple multilayer perceptrons (MLPs) to instantiate the object encoder, $f^{\text{enc}}_O$, and the relation encoder, $f^{\text{enc}}_R$. To aggregate the temporal information, we use three blocks of 1-D convolutional layers for $\mathrm{CNN}_v$ and $\mathrm{CNN}_e$. The use of convolutional operators allows the model to handle time series of different lengths, and the output of the CNNs is fed through a max-pooling layer to compute a fixed-dimensional feature vector.

C.3 Joint optimization of the inference module and the dynamics module

We train the inference module, $g_\phi$, and the dynamics module, $h_\psi$, jointly by optimizing the loss function defined in Section 2.5 using stochastic gradient descent via the Adam optimizer with a fixed learning rate.

For the exact network architecture and more details in the training procedures of the individual modules, please refer to our code.

Appendix D Additional experimental results

D.1 Unsupervised keypoint detection

Figure 8: Unsupervised keypoint detection. We show some more qualitative results of our perception module and visualize the intermediate results. In each block, the first row shows the input images, and the second row illustrates an overlay between the predicted keypoints and the image. The third and the fourth rows show the intermediate results: the heatmap spanned by the keypoints and the reconstructed target image.

The combination of the keypoint-based bottleneck layer and the downstream reconstruction task allows the perception module to extract temporally-consistent keypoints dispersing over the foreground of the images. The model accurately tracks the movement of the objects and can naturally handle deformable objects. Figure 8 shows some more qualitative examples of our perception module in both the Multi-Body and the Fabric environments.

D.2 Future prediction in the Fabric environment

Figure 9: Future prediction in the Fabrics environment. When making long-term predictions into the future, our method outperforms the baseline that does not perform causal discovery.

Figure 9 shows a comparison between our model and the baseline, which is the same as our model except that it does not contain an inference module to perform causal discovery. Our model can make more accurate future predictions, indicating the importance of an accurate modeling of the causal mechanisms in the underlying physical system.