1 Introduction
Numerous planning tools produce plans that call for executing tasks non-linearly. Usually, such plans are represented as a tree, where the leaves indicate primitive tasks, and other nodes represent compound tasks consisting of executing their subtasks either in parallel (also called “concurrent” tasks [11]) or in sequence [9, 22, 21, 17].
Given such a hierarchical plan representation, it is frequently of interest to evaluate its desirability in terms of resource consumption, such as fuel, cost, or time. The answer to such questions can be used to decide which of a set of plans, all valid as far as achieving the goals is concerned, is better given a user-specified utility function. Another reason to compute these distributions is to support runtime monitoring of resources, generating alerts to the execution software or human operator if resource consumption in practice has a high probability of surpassing a given threshold.
While most tools aim at good average performance of the plan, in which case one may ignore the full distribution and consider only the expected resource consumption [3], our paper focuses on providing guarantees on the probability of meeting deadlines. This type of analysis is needed, e.g., in Service-Level Agreements (SLAs), where guarantees of the form “response time less than 1 msec in at least 95% of the cases” are common [5].
We assume that a hierarchical plan is given in the form of a tree, with the uncertain resource consumption of the primitive actions provided as probability distributions. The problem is to compute a property of interest of the distribution for the entire task network. In this paper, we focus mainly on the issue of computing the probability of satisfying a deadline (i.e. that the makespan of the plan is less than a given value). Since the applications mentioned above require results in real time (for monitoring) or multiple such computations (when comparing candidate plans), efficient computation is crucial here, more so than in, e.g., offline planning. We show that computing this probability is NP-hard even for a simple sum of independent random variables (r.v.s), the first contribution of this paper. A deterministic polynomial-time approximation scheme for this problem is proposed, the second contribution of this paper. Error bounds are analyzed and are shown to be tight. For discrete r.v.s with finite support, finding the distribution of the maximum can be done in low-order polynomial time. However, when compounded with errors generated due to approximation in subtrees, handling this case requires careful analysis of the resulting error. The approximations developed for both sequence and parallel nodes are combined into an overall algorithm for task trees, with an analysis of the resulting error bounds, yielding a polynomial-time (additive error) approximation scheme for computing the probability of satisfying a deadline for the complete network, another contribution of this paper.
We also briefly consider computing the expected makespan. Since for discrete r.v.s one can compute an exact distribution for parallel nodes efficiently, it is easy to compute the expected makespan in this case, as well as for sequence nodes. Despite that, we show that for trees with both parallel and sequence nodes, computing the expected makespan is NP-hard.
Experiments are provided in order to examine the quality of the approximation in practice, compared to the theoretical error bounds. A simple sampling scheme is also provided as a yardstick, even though sampling does not come with error guarantees, only bounds in probability. Finally, we examine our results in light of related work in the fields of planning and scheduling, as well as probabilistic reasoning.
2 Problem statement
We are given a hierarchical plan represented as a task tree consisting of three types of nodes: primitive actions as leaves, sequence nodes, and parallel nodes. Primitive action nodes contain distributions over their resource consumption. Although any other resource can be represented, we will assume henceforth, in order to be more concrete, that the only resource of interest is time. A sequence node v denotes a task that has been decomposed into subtasks, represented by the children of v, which must be executed in sequence in order to execute v. We assume that a subtask of v begins as soon as its predecessor in the sequence terminates. Task v terminates when its last subtask terminates. A parallel node v also denotes a decomposed task, but its subtasks begin execution in parallel immediately when task v begins execution; v terminates as soon as all of the children of v terminate.
Resource consumption is uncertain, and described as probability distributions in the leaf nodes. We assume that the distributions are independent (but not necessarily identically distributed). We also assume initially that the r.v.s are discrete and have finite support (i.e. a finite number of values for which the probability is nonzero). As the resource of interest is assumed to be completion time, let each leaf node have a completion-time distribution, in some cases represented in cumulative distribution function (CDF) form.

The main computational problem tackled in this paper is: given a task tree T and a deadline d, compute the probability that T satisfies the deadline (i.e. terminates in time t ≤ d). We show that this problem is NP-hard and provide an approximation algorithm. The above deadline problem reflects a step utility function: a constant positive utility for all completion times t less than or equal to the deadline d, and zero for all t > d. We also briefly consider a linear utility function, requiring computation of the expected completion time of T, and show that this expectation problem is also NP-hard.
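For a discrete completion-time distribution represented explicitly, the deadline probability is simply the CDF evaluated at d. A minimal sketch (the dict-based pmf representation and the function name are our own illustration, not from the paper):

```python
def deadline_probability(pmf, deadline):
    """P(T <= deadline) for a task with a discrete completion-time pmf.

    pmf: dict mapping completion-time values to probabilities
    (assumed to sum to 1).
    """
    return sum(p for t, p in pmf.items() if t <= deadline)

# A primitive task that takes 2, 3, or 5 time units:
print(deadline_probability({2: 0.5, 3: 0.3, 5: 0.2}, 3))  # 0.8
```

The hard part, addressed in the rest of the paper, is obtaining the distribution of the tree's makespan in the first place.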
3 Sequence nodes
Since the makespan of a sequence node is the sum of the durations of its components, the deadline problem on sequence nodes entails computing (part of) the CDF of a sum of r.v.s, which is an NP-hard problem (shown in Section 6). Thus, there is a need for an approximation algorithm for sequence nodes. The main ingredient in our approximation scheme is the Trim operator, specified as follows:
Definition 1 (The Trim operator).
For a discrete r.v. X and a parameter ε > 0, consider the sequence x_1 < x_2 < … of elements in the support of X defined recursively by: x_1 = min(supp(X)) and, if the set {x ∈ supp(X) : P(x_i < X ≤ x) > ε} is not empty, x_{i+1} = min{x ∈ supp(X) : P(x_i < X ≤ x) > ε}. Let m be the length of this sequence, i.e., let m be the first index for which the set above is empty. For notational convenience, define x_{m+1} = ∞. Now, define Trim_ε(X) to be the random variable specified by: P(Trim_ε(X) = x_i) = P(x_i ≤ X < x_{i+1}), for 1 ≤ i ≤ m.
For example, if X is a r.v. such that P(X = 1) = 0.4 and P(X = i) = 0.2 for i ∈ {2, 3, 4}, the r.v. Trim_0.3(X) is given by P(Trim_0.3(X) = 1) = 0.6 and P(Trim_0.3(X) = 3) = 0.4. In words, the Trim_ε operator removes runs of consecutive domain values whose accumulated probability is at most ε and adds their probability mass to the element in the support that precedes them.
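A sketch of the Trim operator under the semantics just described (the dict-based representation is our own illustration, not the paper's Algorithm 1):

```python
def trim(pmf, eps):
    """Trim operator sketch: scan the support in increasing order and
    fold runs whose accumulated probability does not exceed eps into
    the preceding kept support value."""
    items = sorted(pmf.items())
    vals = [items[0][0]]
    mass = [items[0][1]]
    acc = 0.0  # probability accumulated since the last kept value
    for x, p in items[1:]:
        if acc + p > eps:   # x becomes the next kept support value
            vals.append(x)
            mass.append(p)
            acc = 0.0
        else:               # fold x's mass into the preceding kept value
            mass[-1] += p
            acc += p
    return dict(zip(vals, mass))

print(trim({1: 0.4, 2: 0.2, 3: 0.2, 4: 0.2}, 0.3))
```

The result has support of size 2, well under 1/0.3 + 1, in line with the support bound proven below.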
Using the Trim operator, we are now ready to introduce the main operator of this section:
Definition 2.
Let Sequence({X_1, …, X_n}, ε) be Trim_ε(… Trim_ε(Trim_ε(X_1 + X_2) + X_3) … + X_n).

Sequence takes a set of r.v.s and computes a r.v. that represents an approximation of their sum, by applying the Trim_ε operator after adding each of the variables. The parameter ε (see below) specifies the accuracy of the approximation.
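Definition 2 translates directly into code: convolve one variable at a time, trimming after each addition so the intermediate support never blows up. A sketch with an illustrative dict-based pmf representation (not the paper's Algorithm 1); trim follows the semantics of Definition 1:

```python
def trim(pmf, eps):
    """Fold runs of <= eps accumulated probability into the preceding kept value."""
    items = sorted(pmf.items())
    vals, mass, acc = [items[0][0]], [items[0][1]], 0.0
    for x, p in items[1:]:
        if acc + p > eps:
            vals.append(x); mass.append(p); acc = 0.0
        else:
            mass[-1] += p; acc += p
    return dict(zip(vals, mass))

def convolve(pmf_a, pmf_b):
    """Exact pmf of the sum of two independent discrete r.v.s."""
    out = {}
    for a, pa in pmf_a.items():
        for b, pb in pmf_b.items():
            out[a + b] = out.get(a + b, 0.0) + pa * pb
    return out

def sequence(pmfs, eps):
    """Sequence operator sketch: trim after each convolution."""
    acc = pmfs[0]
    for pmf in pmfs[1:]:
        acc = trim(convolve(acc, pmf), eps)
    return acc

coin = {1: 0.5, 2: 0.5}
print(sequence([coin, coin], 0.0))  # exact: {2: 0.25, 3: 0.5, 4: 0.25}
```

With eps = 0 the result is the exact convolution; a positive eps shrinks the support at the cost of a bounded CDF error.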
The Sequence operator can be implemented by the procedure outlined in Algorithm 1. The algorithm computes the distribution of the sum using convolution (the Convolve() operator in line 3) in a straightforward manner. Computing the convolution is itself straightforward and not further discussed here. However, since the support of the resulting distribution may be exponential in the number of convolution operations, the algorithm must trim the distribution representation to avoid this exponential blowup. Trimming decreases the support size, while introducing error. The trick is to keep the support size under control, while making sure that the error does not increase beyond a desired tolerance. Note that the size of the support can also be decreased by simple “binning” schemes, but these do not provide the desired guarantees. In the algorithm, the PDF of a r.v. X is represented by the list L_X, which consists of (x, p) pairs, where x is a support value and p is the probability P(X = x). We assume that L_X is kept sorted in increasing order of x.

We proceed to show that Algorithm 1 indeed approximates the sum of the r.v.s, and analyze its accuracy/efficiency tradeoff. A notion of approximation relevant to deadlines is:
Definition 3.
For r.v.s X, Y, and ε ≥ 0, we write X ⪯_ε Y if 0 ≤ P(X ≤ t) − P(Y ≤ t) ≤ ε for all t.
Note that this definition is asymmetric because, as shown below, our algorithm never underestimates the exact probability. For the proof of Lemma 1 below, we establish the following technical claim (which can be proven by induction on n):
Claim 1.
Let a_1, …, a_n and b_1, …, b_n be sequences of real numbers such that 0 ≤ Σ_{j≤i} (a_j − b_j) ≤ ε for all i, and let c_1 ≥ c_2 ≥ … ≥ c_n ≥ 0. Then 0 ≤ Σ_{i=1}^n a_i c_i − Σ_{i=1}^n b_i c_i ≤ ε c_1.
We now bound the approximation error of sums of r.v.s:
Lemma 1.
For discrete r.v.s X, X′, Y, and Y′: if X′ ⪯_{ε_X} X and Y′ ⪯_{ε_Y} Y, then X′ + Y′ ⪯_{ε_X + ε_Y} X + Y.
Proof.
Let t be given, and enumerate the union of the supports of X and X′ in increasing order. We can write

P(X′ + Y′ ≤ t) − P(X + Y ≤ t) = Σ_x (P(X′ = x) − P(X = x)) P(Y ≤ t − x) + Σ_x P(X′ = x) (P(Y′ ≤ t − x) − P(Y ≤ t − x)).

The weights c_x = P(Y ≤ t − x) are nonincreasing in x and lie in [0, 1], and the partial sums of the P(X′ = x) exceed those of the P(X = x) by at most ε_X (this is exactly X′ ⪯_{ε_X} X), so by Claim 1 the first term is at most ε_X; the second term is at most ε_Y, since Y′ ⪯_{ε_Y} Y and the P(X′ = x) sum to 1. Finally, the difference is also nonnegative: the first term here is nonnegative by Claim 1, the second is nonnegative because it is a sum of nonnegative numbers. ∎
We now show that Trim_ε(X) is an ε-approximation of X:
Lemma 2. For any discrete r.v. X and ε > 0, Trim_ε(X) ⪯_ε X.
Proof.
Let t be given, and let x_i be the greatest element in the support of Trim_ε(X) such that x_i ≤ t (so that x_{i+1} > t, in the notation of Definition 1). We have

P(Trim_ε(X) ≤ t) = P(Trim_ε(X) ≤ x_i) = P(X < x_{i+1}),   (1)

because, after Trim, the probabilities of elements that were removed from the support are assigned to the element that precedes them. From Equation (1) we get:

0 ≤ P(Trim_ε(X) ≤ t) − P(X ≤ t) = P(t < X < x_{i+1}) ≤ P(x_i < X < x_{i+1}) ≤ ε.

The last inequality follows from the observation that, for all x < x_{i+1}, P(x_i < X ≤ x) is never greater than ε in Algorithm 1. ∎
To bound the amount of memory needed for our approximation algorithm, the next lemma bounds the size of the support of the trimmed r.v.:
Lemma 3. For any discrete r.v. X and ε > 0, the size of the support of Trim_ε(X) is at most 1/ε + 1.
Proof.
Let m be the size of the support of Trim_ε(X), and let x_1 < x_2 < … < x_m be that support, as in Definition 1. According to Algorithm 1, lines 11–12, P(x_i < X ≤ x_{i+1}) > ε for all 1 ≤ i ≤ m − 1. These events are disjoint, therefore (m − 1)ε < Σ_{i=1}^{m−1} P(x_i < X ≤ x_{i+1}). Using the fact that the probabilities sum to at most 1, we get m ≤ 1/ε + 1. ∎
These lemmas highlight the main idea behind our approximation algorithm: the Trim operator trades off approximation error for a reduced support size. The fact that this tradeoff is linear allows us to obtain a linear approximation error in polynomial time, as shown below:
Theorem 1.
If Y = Σ_{i=1}^n X_i and Y′ = Sequence({X_1, …, X_n}, ε), then Y′ ⪯_δ Y, where δ = (n − 1)ε.
Proof.
By induction on n, using Lemmas 1 and 2: each of the n − 1 Trim applications introduces an error of at most ε (Lemma 2), and errors accumulate additively under addition (Lemma 1). ∎
Theorem 2.
Assuming that M ≤ 1/ε, the Sequence({X_1, …, X_n}, ε) procedure can be computed in time O(n(M/ε) log(M/ε)) using O(M/ε) memory, where M is the size of the largest support of any of the X_i.
Proof.
From Lemma 3, the size of the list L in Algorithm 1 is at most M(1/ε + 1) = O(M/ε) just after the convolution, after which it is trimmed, so the space complexity is O(M/ε). Convolve thus takes time O((M/ε) log(M/ε)), where the logarithmic factor is required internally for sorting. Since the runtime of the Trim operator is linear in the support size, and the outer loop iterates n − 1 times, the overall runtime of the algorithm is O(n(M/ε) log(M/ε)). ∎
Example 1.
The error bound provided in Theorem 1 is tight, i.e. Sequence may result in error approaching (n − 1)ε: let n and ε be such that (n − 1)ε < 1. Consider, for very small δ > 0, the r.v. X_1 defined by P(X_1 = 0) = 1,
and, for i > 1, let the r.v.s X_i be such that P(X_i = 0) = 1 − ε + δ, P(X_i = δ^i) = ε − δ, and zero otherwise. Each of the n − 1 Trim applications merges the single positive support value back into 0, so Sequence({X_1, …, X_n}, ε) is a point mass at 0, while P(Σ_i X_i ≤ 0) = (1 − ε + δ)^{n−1}; the error at t = 0 is thus 1 − (1 − ε + δ)^{n−1}, which approaches (n − 1)ε as δ → 0 and (n − 1)ε becomes small.
4 Parallel nodes
Unlike sequence composition, the deadline problem for parallel composition is easy to compute, since the execution time of a parallel composition is the maximum of the durations:

P(max_{1≤i≤n} X_i ≤ t) = P(X_1 ≤ t, …, X_n ≤ t) = Π_{i=1}^n P(X_i ≤ t),   (2)

where the last equality follows from independence of the r.v.s. We denote the construction of the CDF using Equation (2) by Parallel({X_1, …, X_n}). If the r.v.s are all discrete with finite support, Parallel incurs linear space, and low-order polynomial computation time.
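Equation (2) translates directly into code; a sketch using the same illustrative dict-based pmfs as before:

```python
def parallel_cdf(pmfs, t):
    """P(max_i X_i <= t) for independent discrete r.v.s: the product
    of the individual CDFs at t, per Equation (2)."""
    prob = 1.0
    for pmf in pmfs:
        prob *= sum(p for x, p in pmf.items() if x <= t)
    return prob

pmfs = [{1: 0.5, 3: 0.5}, {2: 1.0}]
print(parallel_cdf(pmfs, 2))  # 0.5: the first child finishes by 2 w.p. 0.5
print(parallel_cdf(pmfs, 3))  # 1.0
```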
If the task tree consists only of parallel nodes, one can compute the exact CDF, with the same overall runtime. However, when the task tree contains both sequence and parallel nodes, we may get only approximate CDFs as input, and the above straightforward computation can compound the errors. When the input CDFs are themselves approximations, we bound the resulting error:
Lemma 4.
For discrete r.v.s X_1, …, X_n and Y_1, …, Y_n, if X_i ⪯_{ε_i} Y_i for all 1 ≤ i ≤ n, then, for any t, we have 0 ≤ Π_i P(X_i ≤ t) − Π_i P(Y_i ≤ t) ≤ δ, where δ = Σ_{i=1}^n ε_i.
Proof.
For any t, write Π_i P(X_i ≤ t) − Π_i P(Y_i ≤ t) as the telescoping sum Σ_i [Π_{j<i} P(Y_j ≤ t)] (P(X_i ≤ t) − P(Y_i ≤ t)) [Π_{j>i} P(X_j ≤ t)]. Every factor other than the middle one is at most 1, so each summand is at most ε_i, giving the upper bound δ = Σ_i ε_i. Since P(X_i ≤ t) ≥ P(Y_i ≤ t) for each i, this expression is nonnegative. ∎
5 Task trees: mixed sequence/parallel
Given a task tree T and an accuracy requirement ε, we generate a distribution for a r.v. approximating the true duration distribution of the task tree. We introduce the algorithm and prove that it indeed returns an approximation of the completion time of the plan. For a node v, let T(v) be the subtree with v as root, and let ch(v) be the set of children of v. We use the notation |T| to denote the total number of nodes in T.
Algorithm 2, which implements the operator Network, is a straightforward postorder traversal of the task tree. The only remaining issue is handling the error, in an amortized fashion, as seen in the proof of the following theorem.
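Algorithm 2 itself is not reproduced in the text, so the following is a hedged sketch of the postorder traversal it describes: leaves return their pmfs, parallel nodes take an exact maximum, and sequence nodes convolve and trim. The per-node accuracy parameter eps_node stands in for whatever amortized error budget Algorithm 2 assigns; the tuple encoding of trees is ours.

```python
def trim(pmf, eps):
    items = sorted(pmf.items())
    vals, mass, acc = [items[0][0]], [items[0][1]], 0.0
    for x, p in items[1:]:
        if acc + p > eps:
            vals.append(x); mass.append(p); acc = 0.0
        else:
            mass[-1] += p; acc += p
    return dict(zip(vals, mass))

def convolve(pa, pb):
    out = {}
    for a, qa in pa.items():
        for b, qb in pb.items():
            out[a + b] = out.get(a + b, 0.0) + qa * qb
    return out

def max_pmf(pmfs):
    """Exact pmf of the max of independent discrete r.v.s via CDF products."""
    support = sorted({x for pmf in pmfs for x in pmf})
    out, prev = {}, 0.0
    for t in support:
        c = 1.0
        for pmf in pmfs:
            c *= sum(p for x, p in pmf.items() if x <= t)
        if c > prev:
            out[t] = c - prev
        prev = c
    return out

def network(node, eps_node):
    """Postorder evaluation of a task tree given as nested tuples:
    ("leaf", pmf), ("seq", [children]), or ("par", [children])."""
    kind, payload = node
    if kind == "leaf":
        return payload
    children = [network(c, eps_node) for c in payload]
    if kind == "par":
        return max_pmf(children)
    acc = children[0]            # "seq": convolve, trimming after each step
    for pmf in children[1:]:
        acc = trim(convolve(acc, pmf), eps_node)
    return acc

tree = ("par", [("seq", [("leaf", {1: 1.0}), ("leaf", {2: 1.0})]),
                ("leaf", {2: 0.5, 4: 0.5})])
print(network(tree, 0.01))  # {3: 0.5, 4: 0.5}
```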
Theorem 3.
Given a task tree T and accuracy requirement ε, let Y be a r.v. representing the true distribution of the completion time of the network. Then Network(T, ε) ⪯_ε Y.
Proof.
By induction on |T|. Base: if |T| = 1, the node must be primitive, and Network will just return its distribution unchanged, which is obviously an approximation of itself (with zero error). Suppose the claim is true for all trees of size less than |T|, and let r be the root of T. If r is a sequence node with children v_1, …, v_k, the induction hypothesis provides the required approximation for each subtree T(v_i), and by Theorem 1 the sequence step adds at most (k − 1) times the trim accuracy; the total accumulated error therefore remains within the error budget allotted to T, as required. If r is a parallel node, the induction hypothesis again bounds the error of each child, and by Lemma 4 the error of the parallel combination is at most the sum of the children's errors, which again remains within the budget of T. ∎
Theorem 4.
Let N be the size of the task tree T, and M the size of the maximal support of any of the primitive tasks. If M ≤ N/ε and n ≤ N/ε, where n is the maximal number of children of any node, the Network approximation algorithm runs in O((N³/ε²) log(N/ε)) time, using O(N²/ε²) memory.
Proof.
The runtime and space bounds can be derived from the bounds on Sequence and on Parallel, as follows. In the Network algorithm, the trim accuracy parameter ε′ is at most ε/N. The support size (called M in Theorem 2) of the variables input to Sequence is O(1/ε′) = O(N/ε), by Lemma 3 and the assumptions. Therefore, the complexity of the Sequence algorithm at a node with n children is O(n(N²/ε²) log(N/ε)), and the complexity of the Parallel operator is dominated by it. The time and space for sequence nodes dominate; summing over all internal nodes, whose child counts total less than N, the total time complexity is O((N³/ε²) log(N/ε)), and the space complexity is that of Sequence. ∎
If the constraining assumptions on M and n in Theorem 4 are lifted, the complexity is still polynomial: in the runtime complexity expression, replace one factor of N/ε by max(N/ε, M), and the other by max(N/ε, n).
6 Complexity results
We show that the deadline problem is NP-hard, even for a task tree consisting only of primitive tasks and one sequence node, i.e. linear plans.
Lemma 5.
Let {X_1, …, X_n} be a set of discrete real-valued r.v.s specified by probability mass functions with finite supports, let Y = Σ_{i=1}^n X_i, and let t and q be rational numbers. Then, deciding whether P(Y ≤ t) ≥ q is NP-hard.
Proof.
By reduction from SubsetSum [12, problem number SP13]. Recall that SubsetSum is: given a set S = {s_1, …, s_n} of integers and an integer target value G, is there a subset of S whose sum is exactly G? Given an instance of SubsetSum, create the two-valued r.v.s X_1, …, X_n with P(X_i = 0) = 1/2 and P(X_i = s_i) = 1/2. By construction, there exists a subset of S summing to G if and only if P(Y = G) > 0.
Suppose that an algorithm A can decide whether P(Y ≤ t) ≥ q in polynomial time. Then, since the X_i are two-valued uniform r.v.s, the only possible values of P(Y ≤ t) are integer multiples of 2^{−n}, and we can compute P(Y ≤ t) exactly using a binary search on q with n calls to A. To determine whether P(Y = G) > 0, simply use this scheme twice, since P(Y = G) > 0 is true if and only if P(Y ≤ G) > P(Y ≤ G − 1). ∎
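The reduction can be illustrated concretely: each integer s_i becomes a two-valued r.v., and the support of the sum is exactly the set of subset sums, which is also why the exact convolution may need exponential space (a small demonstration of ours, not part of the paper):

```python
def subset_sum_via_pmf(values, target):
    """SubsetSum via the Lemma 5 construction: P(X_i = 0) = P(X_i = s_i) = 1/2,
    so P(sum = target) > 0 iff some subset of values sums to target."""
    pmf = {0: 1.0}
    for s in values:
        nxt = {}
        for v, p in pmf.items():
            for w in (0, s):
                nxt[v + w] = nxt.get(v + w, 0.0) + p * 0.5
        pmf = nxt            # support doubles (up to collisions) each step
    return pmf.get(target, 0.0) > 0.0

print(subset_sum_via_pmf([3, 5, 8], 11))  # True  (3 + 8)
print(subset_sum_via_pmf([3, 5, 8], 12))  # False
```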
Theorem 5.
Finding the probability that a task tree satisfies a deadline is NP-hard.
Proof.
Given a task tree consisting of n leaf nodes, all of them children of a single sequence node, its makespan is the sum of the completion times of the leaves. The theorem follows immediately from Lemma 5. ∎
Finally, we consider the linear utility function, i.e. the problem of computing the expected makespan of a task network. Note that although for linear plans the deadline problem is NP-hard, the expectation problem is trivial, because the expectation of a sum of r.v.s is equal to the sum of their expectations. For parallel nodes, it is easy to compute the CDF and therefore also the expected value. Despite that, for task networks consisting of both sequence nodes and parallel nodes these methods cannot be effectively combined, and in fact we have:
Theorem 6.
Computing the expected completion time of a task network is NP-hard.
Proof.
By reduction from SubsetSum. Construct r.v.s (“primitive tasks”) X_1, …, X_n as in the proof of Lemma 5, and denote by S the r.v. Σ_{i=1}^n X_i. Construct one parallel node with two children, one being a sequence node having the completion-time distribution defined by S, the other being a primitive task that has completion time c with probability 1. (We will use more than one such case; the cases differ only in the value of c, hence the subscript below.) Denote by M_c the r.v. that represents the completion-time distribution of the parallel node, using this construction, with the respective c. Now consider computing the expectation of M_c for the cases c = G and c = G + 1. Thus we have, for c ∈ {G, G + 1}, by construction and the definition of expectation:

E[M_c] = E[max(S, c)] = c · P(S ≤ c) + Σ_{k > c} k · P(S = k),

where the second equality uses the fact that the X_i are all integer-valued r.v.s (and therefore S is also integer-valued). Subtracting these expectations, we have E[M_{G+1}] − E[M_G] = P(S ≤ G). Therefore, using the computed expected values, we can compute P(S ≤ G), and thus also P(S = G) (by repeating the computation for G − 1), in polynomial time. ∎
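The subtraction step boils down to the identity E[max(S, c + 1)] − E[max(S, c)] = P(S ≤ c) for an integer-valued r.v. S, which is easy to check numerically (the pmf below is a hypothetical example of ours):

```python
def expect_max(pmf, c):
    """E[max(S, c)] for a discrete r.v. S given by pmf."""
    return sum(p * max(v, c) for v, p in pmf.items())

pmf_S = {1: 0.2, 3: 0.5, 6: 0.3}   # hypothetical makespan distribution
c = 3
lhs = expect_max(pmf_S, c + 1) - expect_max(pmf_S, c)
rhs = sum(p for v, p in pmf_S.items() if v <= c)
print(lhs, rhs)  # both equal P(S <= 3) = 0.7
```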
7 Empirical Evaluation
We examine our approximation bounds in practice, and compare the results to exact computation of the CDF and to a simple stochastic sampling scheme. Three types of task trees are used in this evaluation: task trees used as execution plans for the (blinded for review) team entry in the DARPA robotics challenge (DRC simulation phase), linear plans (Seq), and plans for the Logistics domain (from IPC2, http://ipc.icaps-conference.org/). The primitive task distributions were uniform distributions, discretized to a finite number of values.

In the Logistics domain, packages are to be transported by trucks or airplanes. Hierarchical plans were generated by the JSHOP2 planner [21] for this domain, and consisted of one parallel node (packages delivered in parallel) whose children are all sequential plans. The duration distributions of all primitive tasks are uniform, but the support parameters were determined by the type of the task: for some tasks the distribution is fixed (such as for load and unload), and for others it depends on the velocity of the vehicle and on the distance to be travelled.
Table 1: Error comparison.

Task Tree      |T|   M    Approx. alg., ε = 0.1   Approx. alg., ε = 0.01   Sampling, # samples
Drive (DRC)    47    2    [0.005, 0.009]          [0.0004, 0.0004]         0.0072    0.0009
Drive (DRC)    47    4    [0.01, 0.02]            [0.0009, 0.001]          0.0075    0.0011
Drive (DRC)    47    10   [0.01, 0.03]            [0.001, 0.003]           0.0083    0.0015
Seq            10    4    [0.03, 0.04]            [0.003, 0.004]           0.008     0.0016
Seq            10    10   [0.03, 0.06]            [0.003, 0.007]           0.0117    0.001
Logistics      45    4    [0.004, 0.004]          [0.0004, 0.0004]         0.008     0.0006
Logistics      45    10   [0.005, 0.006]          [0.0004, 0.0006]         0.013     0.001
After running our approximation algorithm, we also ran a variant that uses an inverted version of the Trim operator, providing a lower bound on the CDF to complement the upper bound generated by Algorithm 2. Running both variants allows us to bound the actual error, at the cost of only doubling the runtime. Despite the fact that our error bound is theoretically tight, in practice, with actual distributions, the resulting error of the algorithm was usually much better than the theoretical bound, as seen in Table 1.
Table 2: Runtime comparison.

Task Tree      |T|   M    Exact     Approx., ε = 0.1   Approx., ε = 0.01   Sampling, # samples
Drive (DRC)    47    2    1.49      0.141              1.14                1.92     190.4
Drive (DRC)    47    4    18.9      0.34               7.91                2.1      211.5
Drive (DRC)    47    10   timeout   1.036              32.94               2.81     279.1
Seq            10    4    0.23      0.003              0.02                0.545    54.22
Seq            10    10   10.22     0.008              0.073               0.724    72.4
Logistics      45    4    373.3     0.2                7                   2.5      256
Logistics      45    10   timeout   2.19               120                 3.12     314
We ran the exact algorithm, our approximation algorithm with ε ∈ {0.1, 0.01}, and a simple simulation with a range of sample counts, on networks from the DRC implementation, on sequence nodes with 10, 20, and 50 children, and on 20 Logistics domain plans, with several values of M. Results for a typical indicative subset (regretfully reduced due to space limits) are shown in Tables 1 (error comparison) and 2 (runtime comparison). The exact algorithm times out in some cases. Both our approximation algorithm and the sampling algorithm handle all these cases: our algorithm's runtime is polynomial in |T|, M, and 1/ε, as is the sampling algorithm's (time linear in the number of samples).
The advantage of the approximation algorithm is mainly in providing bounds with certainty, as opposed to the in-probability bounds provided by sampling. Additionally, as predicted by theory, the accuracy of the approximation algorithm improves linearly with ε (at a near-linear cost in runtime), whereas the accuracy of sampling improves only as the square root of the number of samples. Thus, even in cases where sampling initially outperformed the approximation algorithm, as the required accuracy increased for both algorithms, the approximation algorithm eventually overtook the sampling algorithm.
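For reference, the sampling yardstick can be as simple as the following sketch (the nested-tuple tree encoding is our own illustration):

```python
import random

def sample_makespan(node):
    """Draw one makespan sample from a task tree given as
    ("leaf", pmf) / ("seq", children) / ("par", children)."""
    kind, payload = node
    if kind == "leaf":
        values, weights = zip(*payload.items())
        return random.choices(values, weights)[0]
    times = [sample_makespan(child) for child in payload]
    return max(times) if kind == "par" else sum(times)

def deadline_prob_mc(tree, deadline, n_samples=10_000):
    """Monte Carlo estimate of P(makespan <= deadline)."""
    hits = sum(sample_makespan(tree) <= deadline for _ in range(n_samples))
    return hits / n_samples
```

The estimate's standard error shrinks as 1/sqrt(n_samples), which is the square-root behaviour referred to above.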
8 Discussion
Numerous issues remain unresolved, briefly discussed below. Simple improvements to the Trim operator are possible, such as the inverse version of the operator used to generate a lower bound for the empirical results. Other candidate improvements include not performing trimming (or even stopping a trimming operation midway) if the current support size is already small, which may increase accuracy but also runtime. Another point is that in the combined algorithm, space and time complexity can be reduced by adding some Trim operations, especially after processing a parallel node, which is not done in our version. This may reduce accuracy, a tradeoff yet to be examined. Another option, when given a specific threshold, is to aim for higher accuracy in just the region of the threshold, but how to do that is nontrivial. For sampling schemes such methods are known, including adaptive sampling [4, 19], stratified sampling, and other schemes. It may be possible to apply such schemes to deterministic algorithms as well; an interesting issue for future work.
Extension to continuous distributions: our algorithm can handle them by pre-running a version of the Trim operator on the primitive task distributions. Since one cannot iterate over support values in a continuous distribution, start with the smallest support value (even if it is −∞), and repeatedly find the value at which the CDF increases by ε. This requires access to the inverse of the CDF, which is available, either exactly or approximately, for many types of distributions.
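A sketch of that discretization via the quantile function (the exponential quantile used below is standard; placing each slice's mass at its lower quantile mirrors Trim's move-mass-to-the-preceding-value rule, so the CDF is overestimated by at most ε everywhere):

```python
import math

def trim_continuous(inv_cdf, eps):
    """Discretize a continuous distribution: one support point per eps
    of probability mass, placed at the lower quantile of its slice.
    inv_cdf is the quantile function F^{-1}."""
    support, probs, q = [], [], 0.0
    while q + eps < 1.0:
        support.append(inv_cdf(q))  # value where the CDF reaches q
        probs.append(eps)
        q += eps
    support.append(inv_cdf(q))      # remaining tail mass
    probs.append(1.0 - q)
    return dict(zip(support, probs))

# Exponential(1) has inverse CDF -ln(1 - q):
pmf = trim_continuous(lambda q: -math.log(1.0 - q), 0.25)
```

The result is a finite-support pmf that can be fed to the discrete machinery above.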
We showed that the expectation problem is also NP-hard. A natural question is whether approximation algorithms exist for the expectation problem, but the answer here is not so obvious. Sampling algorithms may run into trouble if the target distribution contains major outliers, i.e. values very far from other values but with extremely low probability. Our approximation algorithm can also be used as-is to estimate the CDF and then to approximate the expectation, but we do not expect it to perform well, because our current Trim operator only limits the amount of probability mass moved at each location to ε, but does not limit the “distance” over which it is moved. The latter may be arbitrarily bad for estimating the expectation. Possibly adding simple binning schemes to the Trim operator, in addition to limiting the moved probability mass to ε, may work; this is another issue for future research.

Related work on computing makespan distributions includes [16], which examines sums of Bernoulli distributed r.v.s. Other work examines both deterministic [20] and Monte-Carlo techniques [4, 19]. The distribution of the maximum of r.v.s was studied in [8], with a focus mostly on continuous distributions. The complexity of finding the probability that the makespan is under a given threshold in task networks was shown to be NP-hard in [14], even when the completion time of each task has a Bernoulli distribution. Nevertheless, our results are orthogonal, as the source of the complexity in [14] is the graph structure, whereas in our setting the complexity is due to the size of the support. In fact, for linear plans (an NP-hard case in our setting), the probability of meeting the deadline can be computed in low-order polynomial time for Bernoulli distributions, using straightforward dynamic programming. Makespan distributions in series-parallel networks in the i.i.d. case were examined in [13], without considering algorithmic issues. There is also a significant body of work on estimating the makespan of plans and schedules [15, 10, 1] within the context of a planner or scheduler. The analysis in these papers is based on averaging or on limit theorems, and does not provide a guaranteed approximation scheme.
Computing the distribution of the makespan in trees is a seemingly trivial problem in probabilistic reasoning [23]. Given the task network, it is straightforward to represent the distribution using a Bayes network (BN) that has one node per task, and where the children of a node v in the task network are represented by BN nodes that are parents of the BN node representing v. This results in a tree-shaped BN, where it is well known that probabilistic reasoning can be done in time linear in the number of nodes, e.g. by belief propagation (message passing) [23, 18]. The difficulty is in the potentially exponential size of the variable domains, which our algorithm, essentially a limited form of approximate belief propagation, avoids by trimming.
Looking at makespan distribution computation as probabilistic reasoning leads to interesting issues for future research, such as how to handle task completion times that have dependencies, represented as a BN. Since reasoning in BNs is NP-hard even for binary-valued variables [7, 6], an efficient general scheme is unlikely. But for cases where the BN topology is tractable, such as BNs with bounded treewidth [2] or directed-path singly connected BNs [24], a deterministic polynomial-time approximation scheme for the makespan distribution may be achievable. The research literature contains numerous randomized approximation schemes that handle dependencies [23, 25], especially for the case with no evidence. In fact, our original implementation of the sampling scheme in our DARPA robotics challenge entry handled dependent durations. It is unclear whether such sampling schemes can be adapted to handle dependencies together with arbitrary evidence, such as: “the completion time of compound task v in the network is known to be exactly 1 hour from now”. Finally, one might consider additional commonly used utility functions, such as a “soft” deadline: the utility is a constant before the deadline d, decreasing linearly to 0 until d + g for some “grace” duration g, and 0 thereafter.
References
 [1] J Christopher Beck and Nic Wilson. Proactive algorithms for job shop scheduling with probabilistic durations. J. Artif. Intell. Res. (JAIR), 28:183–232, 2007.
 [2] Hans L. Bodlaender. Treewidth: Characterizations, applications, and computations. In Proceedings of the 32nd International Conference on Graph-Theoretic Concepts in Computer Science, WG'06, pages 1–14, Berlin, Heidelberg, 2006. Springer-Verlag.
 [3] Alessio Bonfietti, Michele Lombardi, and Michela Milano. Disregarding duration uncertainty in partial order schedules? Yes, we can! In Integration of AI and OR Techniques in Constraint Programming, pages 210–225. Springer, 2014.
 [4] Christian G Bucher. Adaptive sampling: an iterative fast Monte Carlo procedure. Structural Safety, 5(2):119–126, 1988.
 [5] Rajkumar Buyya, Saurabh Kumar Garg, and Rodrigo N Calheiros. SLA-oriented resource provisioning for cloud computing: Challenges, architecture, and solutions. In Cloud and Service Computing (CSC), 2011 International Conference on, pages 1–10. IEEE, 2011.
 [6] Gregory F. Cooper. The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence, 42(2–3):393–405, 1990.
 [7] Paul Dagum and Michael Luby. Approximating probabilistic inference in Bayesian belief networks is NP-hard. Artificial Intelligence, 60(1):141–153, 1993.
 [8] Luc Devroye. Generating the maximum of independent identically distributed random variables. Computers & Mathematics with Applications, 6(3):305–315, 1980.
 [9] Kutluhan Erol, James Hendler, and Dana S Nau. HTN planning: Complexity and expressivity. In AAAI, volume 94, pages 1123–1128, 1994.
 [10] Na Fu, Pradeep Varakantham, and Hoong Chuin Lau. Towards finding robust execution strategies for RCPSP/max with durational uncertainty. In ICAPS, pages 73–80, 2010.
 [11] Alfredo Gabaldon. Programming hierarchical task networks in the situation calculus. In AIPS’02 Workshop on Online Planning and Scheduling, 2002.
 [12] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, NY, USA, 1990.
 [13] WJ Gutjahr and G Ch Pflug. Average execution times of series–parallel networks. Séminaire Lotharingien de Combinatoire, 29:9, 1992.
 [14] Jane N Hagstrom. Computational complexity of PERT problems. Networks, 18(2):139–147, 1988.
 [15] Willy Herroelen and Roel Leus. Project scheduling under uncertainty: Survey and research potentials. European journal of operational research, 165(2):289–306, 2005.

 [16] Yili Hong. On computing the distribution function for the Poisson binomial distribution. Computational Statistics & Data Analysis, 59:41–51, 2013.
 [17] John Paul Kelly, Adi Botea, and Sven Koenig. Offline planning with Hierarchical Task Networks in video games. In AIIDE, pages 60–65, 2008.
 [18] Jin H. Kim and Judea Pearl. A computation model for causal and diagnostic reasoning in inference systems. In Proceedings of the 6th International Joint Conference on AI, 1983.
 [19] Richard J Lipton, Jeffrey F Naughton, and Donovan A Schneider. Practical selectivity estimation through adaptive sampling. In SIGMOD, pages 1–11. ACM, 1990.
 [20] Sophie Mercier. Discrete random bounds for general random variables and applications to reliability. European journal of operational research, 177(1):378–405, 2007.
 [21] Dana S Nau, Tsz-Chiu Au, Okhtay Ilghami, Ugur Kuter, J William Murdock, Dan Wu, and Fusun Yaman. SHOP2: An HTN planning system. J. Artif. Intell. Res. (JAIR), 20:379–404, 2003.
 [22] Dana S Nau, Stephen JJ Smith, Kutluhan Erol, et al. Control strategies in HTN planning: Theory versus practice. In AAAI/IAAI, pages 1127–1133, 1998.
 [23] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA, 1988.
 [24] S. E. Shimony and C. Domshlak. Complexity of probabilistic reasoning in directed-path singly connected Bayes networks. Artificial Intelligence, 151:213–225, 2003.

 [25] Changhe Yuan and Marek J. Druzdzel. Importance sampling algorithms for Bayesian networks: Principles and performance. Mathematical and Computer Modelling, 43(9–10):1189–1207, 2006.