Introduction
As is their wont, politicians talk—on television, on the floor of the legislature, in printed quotations, and on their websites and social media feeds. If you read and listen to all these statements, you might notice common tropes and turns of phrase that groups of politicians use to describe some issue [Grimmer and Stewart 2013]. You might even discover a list of “talking points” underlying this common behavior. (The U.S. State Department, for example, produced a much-discussed set of talking-points memos in response to the 2012 attack in Benghazi.) Similarly, you might be reading the literature in a scientific field and find that a paper from another discipline starts getting cited repeatedly. Which previous paper, or papers, introduced the new technique? Or perhaps you read several news stories about a new product from some company and then find that they all share text with a press release put out by the company.
In each of these cases, we might be interested in structures at differing levels of detail. We might be interested in individual links, e.g., knowing which previous paper it was that later papers were mining for further citations; in cascades, e.g., knowing which news stories are copying from which press releases or from each other; and in networks, e.g., knowing which politicians are most likely to share talking points or which newspapers are most likely to publish press releases from particular businesses or universities. Depending on our data source, some of these structures could be directly observed. With the right API calls, we might observe retweets (links), chains of retweets (cascades), and follower relations (networks) on Twitter. We might also be interested in inferring an underlying social network for which the Twitter follower relation is partial evidence. In contrast, politicians interviewed on television do not explicitly cite the sources of their talking points, which must be inferred.
Observing the diffusion process often reduces to keeping track of when nodes (newspapers, bills, people, etc.) mention a piece of information, reuse a text, get infected, or, in a general sense, exhibit a contagion. When the structure of the contagion’s propagation is hidden and we cannot tell which node infected which, all we have is the result of the diffusion process—that is, the timestamps, and possibly other information, recorded when nodes become infected. We want to infer the diffusion process itself by using such information to predict the links of the underlying network. There have been increasing efforts to uncover and model different types of information cascades on networks [Brugere, Gallagher, and Berger-Wolf 2016]: modeling hidden networks from observed infections [Stack et al. 2012; Rodriguez et al. 2014], modeling topic diffusion in networks [Gui et al. 2014], predicting social influence on individual mobility [Mastrandrea, Fournet, and Barrat 2015], and so on.
This prior work all focuses on parametric models of the time differences between infections. Such models are useful when the only information we can extract from the result of a diffusion process is the timestamps of infections. We can hope to make better predictions, however, with access to additional features, such as the location of each node, the similarity between the messages received by two nodes, etc. Popular parametric models cannot incorporate such features into unsupervised training.
In this paper, we propose an edge-factored, conditional log-linear directed spanning tree (DST) model with an unsupervised, contrastive training procedure to infer the link structure of information cascades. After reviewing related work, we describe the DST model in detail, including an efficient inference algorithm using the directed matrix-tree theorem and the gradient of our maximum conditional likelihood optimization problem. We then report experiments on the ICWSM Spinn3r dataset, where we can observe the true hyperlink structure for evaluation, and compare the proposed method to MultiTree [Rodriguez and Schölkopf 2012], InfoPath [Rodriguez et al. 2014], and some simple but effective baselines. We conclude by discussing directions for future work.
Related Work
There has been a great deal of work on trying to infer underlying network structure using information cascades, most of it based on the independent cascade (IC) model [Saito, Nakano, and Kimura 2008]. We evaluate the DST model against a transmission-based model from Rodriguez and Schölkopf [2012], which, like our work, also uses directed spanning trees to represent cascades, but employs a submodular parameter-optimization method and fixed activation rates. In addition, we compare our work with a more advanced model from Rodriguez et al. [2014], which uses a generative probabilistic model for inferring both static and dynamic diffusion networks. This is part of a line of work that began with generative models using fixed activation rates [Gomez Rodriguez, Leskovec, and Krause 2010; Rodriguez and Schölkopf 2012; Myers and Leskovec 2010] and later developed methods for inferring the activation rates between nodes to reveal the network structure [Rodriguez, Balduzzi, and Schölkopf 2011; Gomez Rodriguez, Leskovec, and Schölkopf 2013; Snowsill et al. 2011; Rodriguez, Leskovec, and Schölkopf 2013; Rodriguez et al. 2014].
Zhai, Wu, and Xu [2015] use a Markov chain Monte Carlo approach for the inference problem. Linderman and Adams [2014] propose a probabilistic model based on mutually interacting point processes and also use MCMC for inference. Gui et al. [2014] model topic diffusion in multi-relational networks. An interesting approach by Amin, Heidari, and Kearns [2014] infers the unknown network structure, assuming that the detailed timestamps for the spread of the contagion are not observed but that “seeds” for cascades can be identified or even induced experimentally. Wang, Ermon, and Hopcroft [2012] propose feature-enhanced probabilistic models for diffusion network inference while still maintaining the requirement that exact propagation times be observed and modeled. Daneshmand et al. [2014] and Abrahao et al. [2013] perform theoretical analyses of transmission-based cascade inference models. While the foregoing approaches are all based on parametric models of propagation time between infections, Rong, Zhu, and Cheng [2016] experiment with a non-parametric approach to discriminating the distributions of diffusion times between connected and unconnected nodes. Recently, Brugere, Gallagher, and Berger-Wolf [2016] compiled a survey of the methods and applications for different network structure inference problems.
Tutte’s directed matrix-tree theorem, which plays a key role in our approach, has been used in natural language processing to infer posterior probabilities for edges in non-projective syntactic dependency trees [Smith and Smith 2007; Koo et al. 2007; McDonald and Satta 2007] and for inferring semantic hierarchies (i.e., ontologies) over words [Bansal et al. 2014].
Method
In this section, we present our modeling and inference approaches. We first present a simple log-linear, edge-factored directed spanning tree (DST) model of cascades over network nodes. This allows us to talk concretely about the likelihood objective for supervised and unsupervised training, for which we present a contrastive objective function. We note that other models besides the DST model could be trained with this contrastive objective. Finally, we derive the gradient of this objective and its efficient computation using Tutte’s directed matrix-tree theorem.
Log-linear Directed Spanning Tree Model
For each cascade, define a set of activated nodes $x = \{x_1, \dots, x_n\}$, each of which might be associated with a timestamp and other information that are the input to the model. Nodes thus correspond to (potentially) dateable entities such as webpages or posts, and not aggregates, such as websites or users. Let $t$ be a directed spanning tree of $x$, which is a map from child indices to parent indices of the cascade: $t(i) = j$ means that $x_j$ is the parent of $x_i$. To the range of this mapping we add a new index 0, which represents a dummy “root” node $x_0$. This allows us both to model single cascades and to disentangle multiple cascades on a set of nodes $x$, since more than one “seed” node might attach to the dummy root. In the experiments below, we model datasets with both single-rooted (“separated”) and multi-rooted (“merged”) cascades.
A valid directed spanning tree is by definition acyclic. Every node $x_i$ has exactly one parent $x_{t(i)}$, connected by the edge $(t(i), i)$, except that the root node $x_0$ has in-degree 0. We might wish to impose additional constraints on the set of spanning trees: for instance, we might require that edges not connect nodes with timestamps known to be in reverse order. Let $\mathcal{T}_C(x)$ be the set of all valid directed spanning trees that satisfy the rules in constraint set $C$ over $x$, and let $\mathcal{T}(x)$ be the set of all directed spanning trees over the same sequence $x$ without any constraint imposed.
Define a log-linear model for trees $t$ over $x$. The unnormalized probability of the tree is thus:

$\tilde{p}(t \mid x; \theta) = \exp\left(\theta \cdot f(t, x)\right)$   (1)

where $f(t, x)$ is a feature vector function on the cascade and $\theta$ parameterizes the model. Following [McDonald, Crammer, and Pereira 2005], we assume that features are edge-factored:

$f(t, x) = \sum_{i=1}^{n} f(t(i), i, x)$   (2)

where $\theta \cdot f(t(i), i, x)$ is the score of the directed edge $(t(i), i)$. In other words, given the sequence $x$ and the fact that the cascade is a directed spanning tree, this directed spanning tree (DST) model assumes that the edges in the tree are all conditionally independent of each other.
Despite the constraints they impose on features, edge-factored models allow inference with tractable algorithms, which is one of the advantages this model brings. Since $\tilde{p}(t \mid x; \theta)$ is not a normalized probability, we divide it by the sum over all possible directed spanning trees, which gives us:

$p(t \mid x; \theta) = \dfrac{\exp\left(\theta \cdot f(t, x)\right)}{Z(x; \theta)}$   (3)

where $Z(x; \theta) = \sum_{t' \in \mathcal{T}(x)} \exp\left(\theta \cdot f(t', x)\right)$ denotes the sum of the log-linear scores of all directed spanning trees, i.e., the partition function.
If, for a given set of parameters $\theta$, we merely wish to find the best tree $\arg\max_{t \in \mathcal{T}(x)} p(t \mid x; \theta)$, we can pass the scores for each candidate edge to the Chu-Liu-Edmonds maximum directed spanning tree algorithm [Chu and Liu 1965; Edmonds 1967].
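On a toy cascade, this decoding problem can be checked by exhaustive enumeration; the sketch below (with a made-up score matrix, not the paper's implementation) returns the same argmax that Chu-Liu-Edmonds computes efficiently for large cascades:

```python
from itertools import product

def map_tree(score, n, root=0):
    """Brute-force MAP directed spanning tree for a tiny cascade.
    score[j][i] is the (log) score of the edge parent j -> child i.
    Enumerates every parent assignment and keeps the best acyclic one;
    Chu-Liu-Edmonds finds the same argmax without enumeration."""
    nodes = [i for i in range(n) if i != root]
    best_score, best_tree = float("-inf"), None
    for parents in product(range(n), repeat=len(nodes)):
        t = dict(zip(nodes, parents))
        if any(t[i] == i for i in nodes):          # no self-loops
            continue

        def reaches_root(i):                       # rejects cyclic assignments
            seen, j = set(), i
            while j != root:
                if j in seen:
                    return False
                seen.add(j)
                j = t[j]
            return True

        if not all(reaches_root(i) for i in nodes):
            continue
        total = sum(score[t[i]][i] for i in nodes)
        if total > best_score:
            best_score, best_tree = total, t
    return best_tree

# Example: 3 nodes, where node 0 is the dummy root.
edge_scores = [[0.0, 2.0, 0.5],
               [0.0, 0.0, 1.5],
               [0.0, 1.0, 0.0]]   # edge_scores[j][i]: parent j -> child i
best = map_tree(edge_scores, 3)   # {1: 0, 2: 1}
```

Here the tree root→1→2 scores 2.0 + 1.5 = 3.5, beating the flat tree's 2.5, so node 2 attaches to node 1 rather than to the root.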
Likelihood of a cascade
When we observe all the directed links in a training set of cascades, this machinery lets us perform supervised training by maximum conditional likelihood: we simply maximize the likelihood of the true directed spanning tree for each cascade in our training set, using the gradient computations discussed below.
When we do not observe the true links in a cascade, we need a different objective function. While we cannot restrict the numerator in the likelihood function (3) to a single, true tree, we can restrict it to the set of trees that obey some constraints on valid cascades. As mentioned above, these constraints might, for instance, require that links point forward in time or avoid long gaps. We can now write the likelihood function for each cascade as a sum of the probabilities of all directed spanning trees that meet the constraints $C$:

$p(\mathcal{T}_C(x) \mid x; \theta) = \sum_{t \in \mathcal{T}_C(x)} p(t \mid x; \theta) = \dfrac{Z_C(x; \theta)}{Z(x; \theta)}$   (4)

where $Z_C(x; \theta) = \sum_{t \in \mathcal{T}_C(x)} \exp\left(\theta \cdot f(t, x)\right)$ denotes the sum of the log-linear scores of all valid directed spanning trees under constraint set $C$.
This is a contrastive objective function that, instead of maximizing the likelihood of a single outcome, maximizes the likelihood of a neighborhood of possible outcomes contrasted with implicit negative evidence [Smith and Eisner 2005]. A similar objective could be used to train other cascade models besides the log-linear DST model presented above, e.g., models such as the Hawkes process of Linderman and Adams [2014].
As noted above, cascades on a given set of nodes are assumed to be independent. We thus have a log-likelihood over all cascades:

$\mathcal{L}(\theta) = \sum_{x} \log \dfrac{Z_C(x; \theta)}{Z(x; \theta)}$   (5)
Maximizing Likelihood
Our goal is to find the parameters $\theta^*$ that solve the following maximization problem:

$\theta^* = \arg\max_{\theta} \sum_{x} \log \dfrac{Z_C(x; \theta)}{Z(x; \theta)}$   (6)

To solve this problem with quasi-Newton numerical optimization methods such as L-BFGS [Liu and Nocedal 1989], we need to compute the gradient of the objective function, which for a given parameter $\theta_k$ is given by the following equation:

$\dfrac{\partial \mathcal{L}}{\partial \theta_k} = \sum_{x} \left( \dfrac{\partial \log Z_C(x; \theta)}{\partial \theta_k} - \dfrac{\partial \log Z(x; \theta)}{\partial \theta_k} \right)$   (7)
For a cascade that contains $n$ nodes, even if the number of valid directed spanning trees in $\mathcal{T}_C(x)$ is tractable, there are $(n+1)^{n-1}$ (by Cayley’s formula) possible directed spanning trees contributing to the normalization factor $Z(x; \theta)$, which makes naive computation intractable. Fortunately, there exists an efficient algorithm that can compute $Z(x; \theta)$, or $Z_C(x; \theta)$, in $O(n^3)$ time.


Table 1: Cascade-level inference of DST with different feature sets, in the unsupervised learning setting (Table 1(a)), in comparison with the naive attach-everything-to-earliest baseline, as well as in the supervised learning setting (Table 1(b)).

Matrix-Tree Theorem and Laplacian Matrix
Tutte [1984] proves that, for a set of nodes $x$, the sum of the scores of all directed spanning trees over $x$ rooted at $x_0$ is

$Z(x; \theta) = \det\left(L^{(0)}(x; \theta)\right)$   (8)

where $L^{(0)}$ is the matrix produced by deleting the 0th row and column from the Laplacian matrix $L$.
Before we define the Laplacian matrix, we first denote:

$s(i, j) = \exp\left(\theta \cdot f(j, i, x)\right)$   (9)

where $j = t(i)$, the parent of $x_i$ in $t$. Recall that we define the unnormalized score of a spanning tree over $x$ as a log-linear model using edge-factored scores (Eq. 1, 2). Therefore, we have:

$\tilde{p}(t \mid x; \theta) = \prod_{i=1}^{n} s(i, t(i))$   (10)

where $s(i, t(i))$ represents the multiplicative contribution of the edge from parent $x_{t(i)}$ to child $x_i$ to the total score of the tree.
Now we can define the Laplacian matrix for directed spanning trees by:

$L_{ji} = \begin{cases} \sum_{j' \neq i} s(i, j') & \text{if } j = i \\ -s(i, j) & \text{otherwise} \end{cases}$   (11)

where $j$ represents a parent node and $i$ represents a child node. For the set of valid directed spanning trees $\mathcal{T}_C(x)$, we set to 0 all entries whose edge from parent $x_j$ to child $x_i$ does not satisfy the specified constraint set. For the set of all directed spanning trees $\mathcal{T}(x)$, however, the constraint set is empty, that is, all possible edges are allowed.
We can use LU factorization to compute the matrix inverse, so that the determinant of the Laplacian matrix can be computed in $O(n^3)$ time. Meanwhile, the Laplacian matrix is diagonally dominant, because we use positive edge scores to create the matrix; the matrix is therefore guaranteed to be invertible.
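As a sanity check on this computation, the sketch below builds the Laplacian from a made-up score matrix and takes the determinant of the root minor (illustrative code, not the paper's implementation; a production version would work in log space for numerical stability):

```python
def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-12:
            return 0.0
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def partition_function(s, root=0):
    """Z via the directed matrix-tree theorem.
    s[j][i] > 0 is the score of edge parent j -> child i."""
    n = len(s)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                L[j][i] = -s[j][i]        # off-diagonal: negated edge score
                L[i][i] += s[j][i]        # diagonal: total score entering child i
    minor = [[L[r][c] for c in range(n) if c != root]
             for r in range(n) if r != root]
    return det(minor)

# Example: node 0 is the root; the three arborescences rooted at 0 are
# {0->1, 0->2}, {0->1, 1->2}, and {0->2, 2->1}.
s = [[0.0, 2.0, 3.0],
     [0.5, 0.0, 1.5],
     [2.5, 0.7, 0.0]]
Z = partition_function(s)   # 2*3 + 2*1.5 + 3*0.7 = 11.1
```

The result matches brute-force enumeration, `s[0][1]*s[0][2] + s[0][1]*s[1][2] + s[0][2]*s[2][1]`, confirming the determinant construction on this toy case.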
Gradient
Smith and Smith [2007] use a similar inference approach for probabilistic models of non-projective dependency trees. They derive that, for any parameter $\theta_k$,

$\dfrac{\partial \log Z(x; \theta)}{\partial \theta_k} = \sum_{i, j} \dfrac{\partial \log Z(x; \theta)}{\partial s(i, j)} \cdot \dfrac{\partial s(i, j)}{\partial \theta_k}$   (12)

Also, for an arbitrary invertible matrix $A$, they derive the gradient of $\det A$ with respect to any cell using the determinant and the entries of the inverse matrix:

$\dfrac{\partial \det A}{\partial A_{jk}} = \det(A)\,(A^{-1})_{kj}$   (13)
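This determinant identity is easy to verify numerically; the snippet below (a self-contained check on a 2×2 example, not part of any training code) compares the analytic gradient against central finite differences:

```python
def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A = [[3.0, 1.0], [2.0, 4.0]]
d = det2(A)                                  # 10.0
inv = [[A[1][1] / d, -A[0][1] / d],          # explicit 2x2 inverse
       [-A[1][0] / d, A[0][0] / d]]

eps = 1e-6
for j in range(2):
    for k in range(2):
        Ap = [row[:] for row in A]; Ap[j][k] += eps
        Am = [row[:] for row in A]; Am[j][k] -= eps
        fd = (det2(Ap) - det2(Am)) / (2 * eps)   # finite-difference gradient
        analytic = d * inv[k][j]                  # det(A) * (A^{-1})_{kj}
        assert abs(fd - analytic) < 1e-6
```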
Dataset             Cascade types       Method          Recall  Precision  F1     AP
ICWSM 2011          Separated Cascades  MultiTree       0.367   0.242      0.292  N/A
                                        InfoPath        0.414   0.273      0.329  N/A
                                        DST Basic       0.557   0.368      0.443  0.279
                                        DST Enhanced    0.842   0.556      0.670  0.599
                                        Naive Baseline  0.622   0.595      0.608  0.385
                    Merged Cascades     DST Basic       0.052   0.034      0.041  0.003
                                        DST Enhanced    0.057   0.038      0.045  0.003
                                        Naive Baseline  0.015   0.019      0.017  0.001
ICWSM 2011          Separated Cascades  MultiTree       0.249   0.196      0.220  N/A
(tree-constrained)                      InfoPath        0.375   0.294      0.330  N/A
                                        DST Basic       0.618   0.486      0.544  0.452
                                        DST Enhanced    0.950   0.747      0.836  0.915
                                        Naive Baseline  0.941   0.933      0.937  0.892
                    Merged Cascades     DST Basic       0.083   0.065      0.073  0.012
                                        DST Enhanced    0.207   0.163      0.182  0.047
                                        Naive Baseline  0.043   0.043      0.043  0.005
Experiments
One of the hardest tasks in network inference problems is gathering information about the true network structure. Most existing work has conducted experiments both on synthetic data with different parameter settings and on real-world networks that match the assumptions of the proposed method. Generating synthetic data, however, is less feasible if we want to exploit complex textual features, which would negate one of the advantages of the DST model. Generating child text from parent documents is beyond the scope of this paper, although we believe it to be a promising direction for future work. In this paper, therefore, we train and test on documents from the ICWSM 2011 Spinn3r dataset [Burton, Kasch, and Soboroff 2011]. This allows us to compare our method with MultiTree [Rodriguez and Schölkopf 2012] and InfoPath [Rodriguez et al. 2014], both of which output a network given a set of cascades. We also analyze the performance of DST at the cascade level, an ability that MultiTree, InfoPath, and similar methods lack.
Dataset Description
The ICWSM 2011 Spinn3r dataset consists of 386 million web posts, such as blog posts, news articles, social media content, etc., made between January 13 and February 14, 2011. We first exclude hyperlinks that connect two posts from the same website, as these could simply be intra-website navigation links. In addition, we enforce a strict chronological order from source post to destination post to filter out erroneous date fields. Then, by backtracing hyperlinks to the earliest ancestors and computing connected components, we obtain about 75 million clusters, each of which serves as a separate cascade. We keep only the cascades containing between 5 and 100 posts, inclusive. This yields 22,904 cascades containing 205,234 posts from 61,364 distinct websites. We create ground truth for each cascade by using the hyperlinks as a proxy for real information flow. For the time-varying network, we include in each day’s network only the edges that appear on that day, while for the static network we simply include every existing edge regardless of timestamp.
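The clustering step can be sketched as follows (a simplified illustration, assuming `links` already holds only cross-site, chronologically consistent hyperlinks; the post identifiers are made up):

```python
def cascades_from_links(links, lo=5, hi=100):
    """Group posts into weakly connected components via union-find,
    keeping only components whose size lies in [lo, hi]."""
    parent = {}

    def find(u):
        parent.setdefault(u, u)
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    for src, dst in links:                  # union the two endpoints
        parent[find(src)] = find(dst)

    groups = {}
    for u in list(parent):
        groups.setdefault(find(u), set()).add(u)
    return [g for g in groups.values() if lo <= len(g) <= hi]
```

For example, a chain of five posts survives the size filter, while an isolated pair of posts is discarded.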
Of these cascades, approximately 61% do not have a tree structure, and among the remaining cascades, 84% have flat structures, meaning that, within each cascade, all nodes other than the earliest one are in fact attached to the earliest node in the ground truth. In this paper we report the performance of our models both on the original dataset and on a dataset where the cascades are guaranteed to be trees. To construct the tree cascades, before finding the connected components using hyperlinks, we remove posts that have hyperlinks coming from more than one source website. Selecting, as above, cascades whose sizes are between 5 and 100 nodes, this yields 20,424 separate cascades containing 201,875 posts from 63,576 distinct websites.
We also merge all cascades that start within an hour of each other, to make the inference problem more challenging and realistic: if we do not know the links, we are unlikely to know exactly which nodes belong to which cascade. In the original ICWSM 2011 dataset, we obtain 789 merged cascades, and for the tree-constrained data, we obtain 938 merged cascades. When merging cascades, we change only the links to the dummy root node; the underlying network structure remains the same. The DST model can learn different parameters depending on whether we train it on separated cascades or merged cascades. We report comparisons of both with MultiTree and InfoPath.
Feature Sets
Most existing work on network structure inference described in Related Work uses only the time difference between two nodes as the feature for learning and inference. Our model can incorporate arbitrary features, as shown in Eq. 1 and 2. Hence in this paper we experiment with different features and report on the following sets:

the basic feature set, which includes only the node information and the timestamp difference, resembling what the other models use; and

the enhanced feature set, which includes the basic feature set, as well as the languages used by the two nodes of an edge, the content types assigned by Spinn3r (blog, news, etc.), whether a node is the earliest node in the cluster, and the Jaccard distance between the normalized texts of the two nodes.
We use one-hot encoding to represent the feature vectors. In addition, we discretize real-valued features by binning them.
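As an illustration of this discretization, the sketch below bins a hypothetical time-lag feature and one-hot encodes the bin index; the bin boundaries are our own assumptions, not the paper's actual bins:

```python
import bisect

# Hypothetical real-valued feature: time lag (in hours) between a
# candidate parent post and a child post. Boundaries are illustrative:
# 1 hour, 6 hours, 1 day, 1 week.
bin_edges = [1.0, 6.0, 24.0, 168.0]

def one_hot_lag(lag_hours):
    """Discretize a lag into a bin, then one-hot encode the bin index."""
    b = bisect.bisect_right(bin_edges, lag_hours)   # bin index in 0..len(bin_edges)
    vec = [0.0] * (len(bin_edges) + 1)
    vec[b] = 1.0
    return vec
```

For instance, a 12-minute lag falls in the first bin, while a 27.5-hour lag falls in the fourth (1-day-to-1-week) bin.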
Result of Unsupervised Learning at Cascade Level
In practice, we use Apache Spark to parallelize the computation and speed up the optimization process. We choose batch gradient descent with a fixed learning rate and report the result after a fixed number of iterations. Inspecting the results of the last two iterations confirms that all training runs converge. The constraint set contains edges that satisfy (1) the time constraints and (2) the requirement that only nodes within the first hour of a specific cascade may attach to the root.
The DST model outputs finer-grained structure than existing approaches and predicts a tree for each cascade, with as many edges as nodes. We report the micro-averaged recall, precision, and F1 for the whole dataset.
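Micro-averaging here means pooling edge counts across all cascades before computing the metrics, which the following sketch makes concrete (the helper name and edge-set representation are our own illustration):

```python
def micro_prf(pred_edges_per_cascade, gold_edges_per_cascade):
    """Micro-averaged precision/recall/F1 over predicted edge sets,
    pooling true/false positive counts across all cascades."""
    tp = fp = fn = 0
    for pred, gold in zip(pred_edges_per_cascade, gold_edges_per_cascade):
        tp += len(pred & gold)    # predicted edges that are in the gold tree
        fp += len(pred - gold)    # predicted edges not in the gold tree
        fn += len(gold - pred)    # gold edges the model missed
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```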
The top half of Table 1(a) shows the results of training the DST model in an unsupervised setting with different feature sets on both the separated-cascades dataset and the merged-cascades dataset. We also include a naive baseline that simply attaches all other nodes to the earliest node in a cascade.
Table 1(a) shows the flatness problem. The naive baseline already achieves 45% recall and 58.7% precision, while knowing only the websites and time lags yields 34.8% recall and 45.4% precision; we attribute this partly to the time constraints we apply when creating the Laplacian matrix, which let the model at least get the earliest node, and one of the edges leaving that node, right. The enhanced feature set, on the other hand, makes use of features from the textual content of posts, such as the Jaccard distance. This information helps the DST model outperform the naive baseline. In the merged-clusters setting, instead of only one seed per cascade being attached to the implicit root, multiple seeds occurring within the same hour are attached to the root. Hence, the naive baseline strategy can at most get right the original cascade to which the earliest node belongs. DST with either feature set achieves a better result. We believe that adding more content-based features will further boost performance in the future. We expect, however, that disentangling multiple information flow paths will remain a challenging problem in many domains.
Result of Unsupervised Learning at Network Level
In this section, we evaluate effectiveness at inferring the network structure, comparing with MultiTree and InfoPath. The DST model outputs a tree for each cascade with posterior probabilities for each edge. To convert to a network, we sum all posteriors for each edge to get a combined score, from which we obtain a ranked list of edges between websites. We report two different sets of quantitative measurements: recall/precision/F1 and average precision.
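The conversion from per-cascade edge posteriors to a ranked website-level edge list, and the average-precision metric, can be sketched as follows (function names and data shapes are our own illustration):

```python
from collections import defaultdict

def rank_network_edges(cascade_posteriors):
    """Aggregate per-cascade edge posteriors into one ranked list of
    website-level edges by summing the posterior mass of each edge."""
    score = defaultdict(float)
    for posteriors in cascade_posteriors:        # one {edge: posterior} per cascade
        for edge, p in posteriors.items():
            score[edge] += p
    return sorted(score, key=score.get, reverse=True)

def average_precision(ranked_edges, gold_edges):
    """Average precision of a ranked edge list against a gold edge set."""
    hits, ap = 0, 0.0
    for k, e in enumerate(ranked_edges, start=1):
        if e in gold_edges:
            hits += 1
            ap += hits / k                       # precision at each hit
    return ap / len(gold_edges) if gold_edges else 0.0
```

For example, an edge predicted with posterior 0.9 in one cascade and 0.5 in another outranks an edge seen once with posterior 0.4.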
When using InfoPath, we assume an exponential edge transmission model and sample 20,000 cascades for the stochastic gradient descent. The output gives the activation rate of each edge in the network per day. We keep those edges that have a nonzero activation rate and are actually present on that day, to counteract the decaying assumption in InfoPath. We then compute the recall, precision, and average precision for each day. To compare the DST model with InfoPath on the time-varying network, we pick edges from the DST model’s ranked list on each day, matching the number chosen by InfoPath. We exclude MultiTree because it cannot model a dynamic network.
Figure 1 shows the comparison between InfoPath and the DST model with different feature sets. We can see that the DST model outperforms InfoPath by a large margin on every metric with the enhanced feature set being the best.
We can also compare the DST model with MultiTree and InfoPath on the static network. We include every edge in the output of InfoPath. The top part of Table 2 shows a comparison among the models, analogous to the comparisons above, where the number of edges from the DST model equals the total number of edges selected by InfoPath. As for MultiTree, we keep all parameters at their defaults while setting the number of edges to match InfoPath’s as well. Since MultiTree assumes a fixed activation rate, while InfoPath gives activation rates that depend on the time step, there is no way to rank the edges in the static networks these methods infer; therefore, we do not report average precision for them.
The DST model also outperforms MultiTree and InfoPath in inferring static network structure. Notably, the recall/precision of InfoPath is much higher than the recall/precision per day (Figure 1). This is due to the fact that edges InfoPath correctly selects in the static network might not be correct on that specific day.
Enforcing Tree Structure on the Data
In the ICWSM 2011 dataset, 61% of the cascades are DAGs. Since DST, MultiTree, and InfoPath all assume that cascades are trees, we evaluate performance on data where this constraint is satisfied, i.e., the tree-constrained dataset described above. The bottom part of Table 1(a) shows that the naive baseline for separated cascades achieves 94.1% recall/precision because of flatness. DST with enhanced features beats it by a mere 0.5%. This leaves very little room for DST to improve on the cascade structure inference problem for separated cascades. For merged cascades, the naive baseline can at most get right the original cascade to which the earliest node belongs. DST with the basic feature set does adequately at finding the earliest nodes but finds very few correct edges inside the cascades, while the enhanced feature set is better at reconstructing the cascade structures thanks to its knowledge of textual features, which leads to about a 600% margin. With only 24.6% recall/precision, there is still room for improvement on this very hard inference problem. On network inference, DST with the enhanced feature set also performs best on recall and average precision but lags on precision. Table 2 and Figures 1(d), 1(e), and 1(f) show similar performance when comparing with MultiTree and InfoPath on inferring different types of network structure.
Result of Supervised Learning at Cascade Level
Our proposed model can perform both supervised and unsupervised learning, with different objective functions. One of the main contributions of the DST model is its ability to learn cascade-level structure in a feature-enhanced, unsupervised way. Supervised learning, however, can establish an upper bound for unsupervised performance when trained with the same features.
Table 1(b) shows the results of supervised learning using DST on the merged cascades with tree structure enforced. Since there are only 938 merged cascades, we perform 10-fold cross-validation on both datasets and report the result over 5 folds. We split the training and test sets by interleaved round-robin sampling from the merged-cascades dataset. Although not precisely comparable to DST in the unsupervised setting because of this jackknifing, Table 1(b) still shows results about twice as high as for unsupervised training.
Conclusion
We have proposed a method to uncover the network structure of information cascades using an edge-factored, conditional log-linear model, which can incorporate more features than most comparable models. This directed spanning tree (DST) model can also infer finer-grained structure at the cascade level, in addition to the global network structure. We use the matrix-tree theorem to show that the likelihood of the conditional model can be computed in cubic time and to derive a contrastive, unsupervised training procedure. We show that on the ICWSM 2011 Spinn3r dataset, our proposed method outperforms the baseline MultiTree and InfoPath methods in terms of recall, precision, and average precision. In the future, we expect that applications of this technique could benefit from richer textual features—including full generative models of child document text—and from different model structures trained with the contrastive approach.
Acknowledgements
This research was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under award number R01DC009834 and by the Andrew W. Mellon Foundation. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or the Mellon Foundation.
References
 [Abrahao et al.2013] Abrahao, B.; Chierichetti, F.; Kleinberg, R.; and Panconesi, A. 2013. Trace complexity of network inference. In KDD, 491–499.
 [Amin, Heidari, and Kearns2014] Amin, K.; Heidari, H.; and Kearns, M. 2014. Learning from contagion (without timestamps). In ICML, 1845–1853.
 [Bansal et al.2014] Bansal, M.; Burkett, D.; de Melo, G.; and Klein, D. 2014. Structured learning for taxonomy induction with belief propagation. In ACL, 1041–1051.
 [Brugere, Gallagher, and Berger-Wolf2016] Brugere, I.; Gallagher, B.; and Berger-Wolf, T. Y. 2016. Network structure inference, a survey: Motivations, methods, and applications. ArXiv e-prints.
 [Burton, Kasch, and Soboroff2011] Burton, K.; Kasch, N.; and Soboroff, I. 2011. The ICWSM 2011 Spinn3r dataset. In ICWSM.
 [Chu and Liu1965] Chu, Y., and Liu, T. 1965. On the shortest arborescence of a directed graph. Scientia Sinica 14:1396–1400.
 [Daneshmand et al.2014] Daneshmand, H.; Gomez-Rodriguez, M.; Song, L.; and Schoelkopf, B. 2014. Estimating diffusion network structures: Recovery conditions, sample complexity & soft-thresholding algorithm. In ICML, 793–801.
 [Edmonds1967] Edmonds, J. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards 71B:233–240.
 [Gomez Rodriguez, Leskovec, and Krause2010] Gomez Rodriguez, M.; Leskovec, J.; and Krause, A. 2010. Inferring networks of diffusion and influence. In KDD, 1019–1028.
 [Gomez Rodriguez, Leskovec, and Schölkopf2013] Gomez Rodriguez, M.; Leskovec, J.; and Schölkopf, B. 2013. Structure and dynamics of information pathways in online media. In WSDM, 23–32.
 [Grimmer and Stewart2013] Grimmer, J., and Stewart, B. M. 2013. Text as data: The promise and pitfalls of automatic content analysis methods for political texts. Political Analysis 21(3):267–297.
 [Gui et al.2014] Gui, H.; Sun, Y.; Han, J.; and Brova, G. 2014. Modeling topic diffusion in multirelational bibliographic information networks. In CIKM, 649–658.
 [Koo et al.2007] Koo, T.; Globerson, A.; Carreras Pérez, X.; and Collins, M. 2007. Structured prediction models via the matrixtree theorem. In EMNLPCoNLL, 141–150.
 [Linderman and Adams2014] Linderman, S. W., and Adams, R. P. 2014. Discovering latent network structure in point process data. In ICML, 1413–1421.
 [Liu and Nocedal1989] Liu, D. C., and Nocedal, J. 1989. On the limited memory BFGS method for large scale optimization. In Mathematical Programming, volume 45. Springer. 503–528.
 [Mastrandrea, Fournet, and Barrat2015] Mastrandrea, R.; Fournet, J.; and Barrat, A. 2015. Contact patterns in a high school: A comparison between data collected using wearable sensors, contact diaries and friendship surveys. PLoS ONE 10(9):e0136497.
 [McDonald and Satta2007] McDonald, R., and Satta, G. 2007. On the complexity of nonprojective datadriven dependency parsing. In IWPT, 121–132.
 [McDonald, Crammer, and Pereira2005] McDonald, R.; Crammer, K.; and Pereira, F. 2005. Online large-margin training of dependency parsers. In ACL, 91–98.
 [Myers and Leskovec2010] Myers, S., and Leskovec, J. 2010. On the convexity of latent social network inference. In NIPS, 1741–1749.
 [Rodriguez and Schölkopf2012] Rodriguez, M. G., and Schölkopf, B. 2012. Submodular inference of diffusion networks from multiple trees. In ICML, 1–8.
 [Rodriguez, Balduzzi, and Schölkopf2011] Rodriguez, M. G.; Balduzzi, D.; and Schölkopf, B. 2011. Uncovering the temporal dynamics of diffusion networks. In ICML, 561–568.
 [Rodriguez et al.2014] Rodriguez, M. G.; Leskovec, J.; Balduzzi, D.; and Schölkopf, B. 2014. Uncovering the structure and temporal dynamics of information propagation. Network Science 2(1):26–65.
 [Rodriguez, Leskovec, and Schölkopf2013] Rodriguez, M. G.; Leskovec, J.; and Schölkopf, B. 2013. Modeling information propagation with survival theory. In ICML, 666–674.
 [Rong, Zhu, and Cheng2016] Rong, Y.; Zhu, Q.; and Cheng, H. 2016. A modelfree approach to infer the diffusion network from event cascade. In CIKM, 1653–1662.
 [Saito, Nakano, and Kimura2008] Saito, K.; Nakano, R.; and Kimura, M. 2008. Prediction of information diffusion probabilities for independent cascade model. In Knowledge-based intelligent information and engineering systems, 67–75. Springer.
 [Smith and Eisner2005] Smith, N. A., and Eisner, J. 2005. Contrastive estimation: Training loglinear models on unlabeled data. In ACL.
 [Smith and Smith2007] Smith, D. A., and Smith, N. A. 2007. Probabilistic models of nonprojective dependency trees. In EMNLPCoNLL.
 [Snowsill et al.2011] Snowsill, T. M.; Fyson, N.; De Bie, T.; and Cristianini, N. 2011. Refining causality: Who copied from whom? In KDD, 466–474.
 [Stack et al.2012] Stack, J. C.; Bansal, S.; Kumar, V. A.; and Grenfell, B. 2012. Inferring populationlevel contact heterogeneity from common epidemic data. Journal of the Royal Society Interface rsif20120578.
 [Tutte1984] Tutte, W. T. 1984. Graph Theory, volume 21 of Encyclopedia of Mathematics and its Applications. AddisonWesley.

 [Wang, Ermon, and Hopcroft2012] Wang, L.; Ermon, S.; and Hopcroft, J. E. 2012. Feature-enhanced probabilistic models for diffusion network inference. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 499–514.
 [Zhai, Wu, and Xu2015] Zhai, X.; Wu, W.; and Xu, W. 2015. Cascade source inference in networks: A Markov chain Monte Carlo approach. Computational Social Networks 2(1).