On Constrained Open-World Probabilistic Databases

02/27/2019
by Tal Friedman, et al.

Increasing amounts of available data have led to a heightened need for representing large-scale probabilistic knowledge bases. One approach is to use a probabilistic database, a model with strong assumptions that allow for efficiently answering many interesting queries. Recent work on open-world probabilistic databases strengthens the semantics of these probabilistic databases by discarding the assumption that any information not present in the data must be false. While intuitive, these semantics are not sufficiently precise to give reasonable answers to queries. We propose overcoming these issues by using constraints to restrict this open world. We provide an algorithm for one class of queries, and establish a basic hardness result for another. Finally, we propose an efficient and tight approximation for a large class of queries.


1 Introduction

A ubiquitous pursuit in the study of knowledge base representation is the search for a model that can represent uncertainty while simultaneously answering interesting queries efficiently. The key underlying challenge is that these goals are at odds with each other. Modelling uncertainty requires additional model complexity. At the same time, the ability to answer meaningful queries usually demands fewer model assumptions. Both of these properties are at odds with the key limiting factor of tractability: success in the first two goals is not nearly as impactful if it is not achieved efficiently. Unfortunately, probabilistic reasoning is often computationally hard, even on databases (Roth, 1996; Dalvi and Suciu, 2012).

One approach towards achieving this goal is to begin with a simple model such as a probabilistic database (PDB) (Suciu et al., 2011; Van den Broeck and Suciu, 2017). A PDB models uncertainty, but is inherently simple: it makes very strong independence and closed-world assumptions, allowing for tractability on a very large class of queries (Dalvi and Suciu, 2007, 2012). However, PDBs can fall short under non-ideal circumstances, as their semantics are brittle to incomplete knowledge bases (Ceylan et al., 2016).

To bring PDBs closer to the desired goal, Ceylan et al. (2016) propose open-world probabilistic databases (OpenPDBs), wherein the semantics of a PDB are strengthened to relax the closed-world assumption. While OpenPDBs maintain a large class of tractable queries, their semantics are so relaxed that these queries lose their precision: they model further uncertainty, but in exchange give less useful query answers.

In this work, we aim to overcome these querying challenges, while simultaneously maintaining the degree of uncertainty modeled by OpenPDBs. To achieve this, we propose further strengthening the semantics of OpenPDBs by constraining the mean probability allowed for a relation. These constraints work at the schema level, meaning no additional per-item information is required. They are practically motivated by knowledge of summary statistics: how many tuples we expect to be true. A theoretical analysis shows that, despite their simplicity, such constraints fundamentally change the difficulty landscape of queries, leading us to propose a general-purpose approximation scheme.

The rest of the paper is organized as follows: Section 2 provides necessary background on relational logic and PDBs, as well as an introduction to OpenPDBs. Section 3 motivates and introduces our construction for constraining OpenPDBs. Section 4 analyses exact solutions subject to these constraints, providing a class of tractable queries along with an algorithm. It also shows that the problem is in general hard, even in some cases where standard PDB queries are tractable. Section 5 investigates an efficient and provably bounded approximation scheme. Section 6 discusses our findings, and summarizes interesting directions that we leave as open problems.

2 Background

This section provides background and motivation for probabilistic databases and their open-world counterparts. Notation and definitions are adapted from Ceylan et al. (2016).

2.1 Relational Logic and Databases

We now describe necessary background from function-free finite-domain first-order logic. An atom $P(t_1, \dots, t_n)$ consists of a predicate $P$ of arity $n$, together with $n$ arguments. These arguments can either be constants or variables. A ground atom is an atom that contains no variables. A formula is a series of atoms combined with conjunctions ($\wedge$) or disjunctions ($\vee$), and with quantifiers $\exists, \forall$. A substitution $Q[x/t]$ replaces all occurrences of variable $x$ by $t$ in the formula $Q$.

A relational vocabulary $\sigma$ is comprised of a set of predicates $\mathcal{R}$ and a domain $\mathcal{D}$. Using the Herbrand semantics (Hinrichs and Genesereth, 2006), the Herbrand base of $\sigma$ is the set of all ground atoms possible given $\mathcal{R}$ and $\mathcal{D}$. A $\sigma$-interpretation $\omega$ is then an assignment of truth values to every element of the Herbrand base of $\sigma$. We say that $\omega$ models a formula $Q$ whenever $\omega$ satisfies $Q$. This is denoted by $\omega \models Q$.

Scientist: Einstein; Erdős; von Neumann
CoAuthor: (Einstein, Erdős); (Erdős, von Neumann)
Figure 1: Example relational database. Notice that the first row of the right table corresponds to the atom CoAuthor(Einstein, Erdős).

A reasonable starting point for the target knowledge base to construct would be to use a traditional relational database. Using the standard model-theoretic view (Abiteboul et al., 1995), a relational database for a vocabulary $\sigma$ is a $\sigma$-interpretation $\omega$. Less formally, a relational database consists of a series of relations, each of which corresponds to a predicate. Each relation consists of a series of rows, also called tuples, each of which corresponds to an atom of the predicate being true. Any atom not appearing as a row in the relation is considered to be false, following the closed-world assumption (Reiter, 1981). Figure 1 shows an example database.

2.2 Probabilistic Databases

Scientist: Einstein 0.8; Erdős 0.8; von Neumann 0.9; Shakespeare 0.2
CoAuthor: (Einstein, Erdős) 0.8; (Erdős, von Neumann) 0.9; (von Neumann, Einstein) 0.5
Figure 2: Example probabilistic database. Tuples are now of the form $\langle t : p \rangle$, where $p$ is the probability of the tuple $t$ being present.

Despite the success of relational databases, their deterministic nature leads to a few shortcomings. A common way to gather a large knowledge base is to apply some sort of statistical model (Carlson et al., 2010; Suchanek et al., 2007; Peters et al., 2014; Dong et al., 2014) which returns a probability value for potential tuples. Adapting the output of such a model to a relational database involves thresholding on the probability value, discarding valuable information along the way. A probabilistic database (PDB) circumvents this problem by assigning each tuple a probability.

Definition 1.

A (tuple-independent) probabilistic database $\mathcal{P}$ for a vocabulary $\sigma$ is a finite set of tuples of the form $\langle t : p \rangle$, where $t$ is a $\sigma$-atom and $p \in [0, 1]$. Furthermore, each $t$ can appear at most once.

Given such a collection of tuples and their probabilities, we now define a distribution over relational databases. The semantics of this distribution are given by treating each tuple as an independent random variable.

Definition 2.

A probabilistic database $\mathcal{P}$ for vocabulary $\sigma$ induces a probability distribution over $\sigma$-interpretations $\omega$:

$$P_{\mathcal{P}}(\omega) = \prod_{t \in \omega} p_t \cdot \prod_{t \notin \omega} (1 - p_t),$$

where $p_t$ is the probability of tuple $t$ in $\mathcal{P}$, taken to be $0$ when $t$ does not appear in $\mathcal{P}$.

Notice this last statement is again making the closed-world assumption: any tuple that we have no information about is assigned probability zero. Figure 2 shows an example PDB.
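To make Definition 2 concrete, here is a minimal brute-force sketch in Python (hypothetical helper names, not from the paper) that enumerates all worlds of a tuple-independent PDB and evaluates a Boolean query by summing world probabilities. It is exponential in the number of tuples and is only meant to pin down the semantics:

from itertools import product

# The PDB of Figure 2, as atom -> probability.
pdb = {
    ("Scientist", "Einstein"): 0.8,
    ("Scientist", "Erdos"): 0.8,
    ("Scientist", "vonNeumann"): 0.9,
    ("Scientist", "Shakespeare"): 0.2,
    ("CoAuthor", "Einstein", "Erdos"): 0.8,
    ("CoAuthor", "Erdos", "vonNeumann"): 0.9,
    ("CoAuthor", "vonNeumann", "Einstein"): 0.5,
}

def worlds(db):
    # Enumerate all 2^n interpretations with their probabilities (Definition 2).
    atoms = list(db)
    for bits in product([False, True], repeat=len(atoms)):
        world = {a for a, b in zip(atoms, bits) if b}
        prob = 1.0
        for a, b in zip(atoms, bits):
            prob *= db[a] if b else 1.0 - db[a]
        yield world, prob

def query_prob(db, q):
    # P(Q) is the total probability of worlds where the Boolean query q holds.
    return sum(p for w, p in worlds(db) if q(w))

# Q = exists x, y: Scientist(x) and CoAuthor(x, y).
def q(world):
    return any(a[0] == "CoAuthor" and ("Scientist", a[1]) in world
               for a in world)

print(query_prob(pdb, q))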

Probabilistic Queries

In relational databases, the fundamental task we are interested in solving is how to answer queries. The same is true for probabilistic databases, with the only difference being that we are now interested in probabilities over queries. In particular, we are interested in queries that are fully quantified, also known as Boolean queries. On a relational database, this corresponds to a query whose answer is True or False.

For example, on the database given in Figure 1, we might ask if there is a scientist who is a coauthor:

$$Q = \exists x \exists y\; \mathrm{Scientist}(x) \wedge \mathrm{CoAuthor}(x, y).$$

If we instead asked this query of the probabilistic database in Figure 2, we would be computing the probability by summing over the worlds in which the query is true:

$$P(Q) = \sum_{\omega \models Q} P_{\mathcal{P}}(\omega).$$

Queries of this form that are a conjunction of atoms are called conjunctive queries. They are commonly shortened as:

$$Q = \mathrm{Scientist}(x), \mathrm{CoAuthor}(x, y),$$

with all variables implicitly existentially quantified.

A disjunction of conjunctive queries is known as a union of conjunctive queries (UCQ). UCQs have been shown to live in a dichotomy of efficient evaluation (Dalvi and Suciu, 2012): computing the probability of a UCQ is either polynomial in the size of the database, or it is #P-hard. This property can be checked through the syntax of a query, and we say that a UCQ is safe if it admits efficient evaluation. In the literature on probabilistic databases (Suciu et al., 2011; Dalvi and Suciu, 2012), as well as throughout the rest of this paper, UCQs are the primary query object studied.

2.3 Open-World Probabilistic Databases

In the context of automatically constructing a knowledge base, as is done in, for example, NELL (Carlson et al., 2010) or Google's Knowledge Vault (Dong et al., 2014), making the closed-world assumption is conceptually unreasonable. Conversely, it is also not feasible to include all possible tuples and their probabilities in the knowledge base. The resulting difficulty is that there are an enormous number of probabilistic facts that can be scraped from the internet, and by definition these tools will keep only those with the very highest probability. As a result, knowledge bases like NELL (Carlson et al., 2010), PaleoDeepDive (Peters et al., 2014), and YAGO (Suchanek et al., 2007) consist almost entirely of tuples with very high probability. This tells us that the knowledge base we are looking at is fundamentally incomplete. In response to this problem, Ceylan et al. (2016) propose the notion of a completion for a probabilistic database.

Definition 3.

A $\lambda$-completion of a probabilistic database $\mathcal{P}$ is another probabilistic database obtained as follows. For each atom $t$ that does not appear in $\mathcal{P}$, we add a tuple $\langle t : p \rangle$ to $\mathcal{P}$ for some $p \in [0, \lambda]$.

Then, we can define the open world of possible databases in terms of the set of distributions induced by all completions.

Definition 4.

An open-world probabilistic database (OpenPDB) is a pair $\mathcal{G} = (\mathcal{P}, \lambda)$, where $\mathcal{P}$ is a probabilistic database and $\lambda \in [0, 1]$. $\mathcal{G}$ induces a set of probability distributions $\mathcal{K}_{\mathcal{G}}$ such that a distribution P belongs to $\mathcal{K}_{\mathcal{G}}$ iff P is induced by some $\lambda$-completion of probabilistic database $\mathcal{P}$.

Open-World Queries

OpenPDBs specify a set of probability distributions rather than a single one, meaning that a given query produces a set of possible probabilities rather than a single value. We focus on computing the minimum and maximum probability values that can be achieved by completing the database.

Definition 5.

The probability interval of a Boolean query $Q$ in OpenPDB $\mathcal{G}$ is $[\underline{P}(Q), \overline{P}(Q)]$, where

$$\underline{P}(Q) = \min_{\mathrm{P} \in \mathcal{K}_{\mathcal{G}}} \mathrm{P}(Q) \qquad \text{and} \qquad \overline{P}(Q) = \max_{\mathrm{P} \in \mathcal{K}_{\mathcal{G}}} \mathrm{P}(Q).$$

In general, computing the probability interval for some first-order query $Q$ is not tractable. As observed in Ceylan et al. (2016), however, the situation is different for UCQ queries, because they are monotone (they contain no negations). For UCQs, the upper and lower bounds are given respectively by the full completion (where all unknown probabilities are $\lambda$) and the closed-world database. This is a direct result of the fact that OpenPDBs form a credal set: a closed convex set of probability measures, meaning that probability bounds always come from extreme points (Cozman, 2000). Furthermore, Ceylan et al. (2016) also provide an algorithm for efficiently computing the upper bound corresponding to a full completion, and show that it works whenever the UCQ is safe.
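For monotone UCQs, these bounds are easy to state operationally. A small sketch, reusing query_prob from the earlier example and assuming an illustrative $\lambda = 0.3$ and a hand-picked set of open atoms (both hypothetical): the lower bound is the closed-world probability, and the upper bound comes from the full $\lambda$-completion.

lam = 0.3
open_atoms = [("CoAuthor", "Shakespeare", "Einstein"),   # atoms absent from the PDB
              ("Scientist", "Newton")]

lower = query_prob(pdb, q)                                # closed world
completed = {**pdb, **{a: lam for a in open_atoms}}       # full lambda-completion
upper = query_prob(completed, q)
print(lower, "<= P(Q) <=", upper)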

3 Mean-Constrained Completions

This section motivates the need to strengthen the OpenPDB semantics, and introduces our novel probabilistic data model.

3.1 Motivation

The ability to perform efficient query evaluation provides an appealing case for OpenPDBs. They give a more reasonable semantics, better matching their use, and for a large class of queries they come at no extra cost in comparison to traditional PDBs. However, in practice computing an upper bound in this way tends to give results very close to 1. Intuitively, this makes sense: our upper bound comes from simultaneously assuming that every possible missing atom has some reasonable probability. While such a bound is easy to compute, it is too strong a relaxation of the closed-world assumption.

The crux of this issue is that OpenPDBs consider every possible circumstance for unknown tuples, even ones that are clearly unreasonable. For example, suppose that a table in our database describes whether or not a person is a scientist. The OpenPDB model considers the possibility that every person it knows nothing about has a nontrivial probability of being a scientist. This will clearly return nonsensical query results, as we know that fewer than 1% of the population are scientists.

In order to consider a restricted subset of completions representing reasonable situations, we propose directly incorporating these summary statistics. Specifically, we place constraints on the overall probability of a relation across the entire population. In the scientist example, our model only considers completions in which the mean probability of being a scientist stays below 1%. This allows us to include more information at the schema level, without having more information about each individual.

To illustrate the effect this has, consider a schema with three relations: LA, denoting whether one lives in Los Angeles; Springfield, denoting whether one lives in Springfield; and Scientist, denoting whether one is a scientist. Using a vocabulary of 500 people where each person is present in at most one relation, Table 1 shows the resulting upper probability bound under different model assumptions, where the constrained open world restricts the total probability mass allowed on each of the three relations. In particular, notice how extreme the difference is in the upper bound with and without constraints being imposed. The closed-world probability of both of these queries is always 0, as each person in our database only has a known probability for at most one relation. It is clear that of these three options, the constrained open world is the most reasonable; the rest of this section formalizes this idea and investigates the resulting properties.

Table 1: Comparison of upper bounds for the same query and database with different model assumptions: Closed-World (CW), Open-World (OW), and Constrained Open-World (COW).

3.2 Formalization

We begin here by defining mean-based constraints, before examining some immediate observations about the structure of the resulting constrained database.

Definition 6.

Suppose we have a PDB $\mathcal{P}$, and let $\mathcal{T}_R$ be the set of probabilistic tuples in relation $R$. Let $q \in [0, 1]$ be a probability threshold. Then a mean tuple probability constraint (MTP constraint) is a linear constraint of the form

$$\frac{1}{|\mathcal{T}_R|} \sum_{\langle t : p \rangle \in \mathcal{T}_R} p \;\leq\; q.$$

Definition 7.

We say that a $\lambda$-completion is $C$-constrained if the $\lambda$-completed database satisfies the MTP constraint $C$. If it satisfies every constraint in a set $\mathcal{C}$, then we say it is $\mathcal{C}$-constrained.

Being $\mathcal{C}$-constrained is not a property of OpenPDBs, but of their PDB completions. Hence, we are interested in the subset of completions that satisfy this property.
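As a quick illustration of Definitions 6 and 7, the following sketch (illustrative names only, in the same encoding as the earlier examples) checks whether a completed relation satisfies an MTP constraint by comparing its mean tuple probability against the threshold q:

def satisfies_mtp(completed_db, relation, q_threshold):
    # Mean tuple probability of `relation` in the completed database.
    probs = [p for atom, p in completed_db.items() if atom[0] == relation]
    return sum(probs) / len(probs) <= q_threshold

print(satisfies_mtp(completed, "Scientist", 0.7))   # `completed` from the sketch above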

Definition 8.

An OpenPDB $\mathcal{G}$ together with MTP constraints $\mathcal{C}$ induces a set of probability distributions $\mathcal{K}_{\mathcal{G}, \mathcal{C}}$, where a distribution P belongs to $\mathcal{K}_{\mathcal{G}, \mathcal{C}}$ iff P is induced by some $\mathcal{C}$-constrained $\lambda$-completion of $\mathcal{P}$.

Much like with standard OpenPDBs, for a Boolean query $Q$ we are interested in computing bounds on $\mathrm{P}(Q)$.

Definition 9.

The probability interval of a Boolean query $Q$ in OpenPDB $\mathcal{G}$ with MTP constraints $\mathcal{C}$ is $[\underline{P}(Q), \overline{P}(Q)]$, where

$$\underline{P}(Q) = \min_{\mathrm{P} \in \mathcal{K}_{\mathcal{G}, \mathcal{C}}} \mathrm{P}(Q) \qquad \text{and} \qquad \overline{P}(Q) = \max_{\mathrm{P} \in \mathcal{K}_{\mathcal{G}, \mathcal{C}}} \mathrm{P}(Q).$$

3.3 Completion Properties

A necessary property of OpenPDBs for efficient query evaluation is that they are credal; this is what allows us to consider only a finite subset of possible completions. MTP-constrained OpenPDBs maintain this property. (Proofs of all theorems and lemmas are given in the appendix, available at: https://anonymousfiles.io/oFn92Ti2/.)

Proposition 1.

Suppose we have an OpenPDB $\mathcal{G}$ together with MTP constraints $\mathcal{C}$. Then the induced set of probability distributions $\mathcal{K}_{\mathcal{G}, \mathcal{C}}$ is credal.

This property allows us to examine only a finite subset of configurations when looking at potential completions, since query probability bounds of a credal set are always achieved at points of extrema (Cozman, 2000). Next, we would like to characterize these points of extrema, by showing that the number of tuples not on their own individual boundaries (that is, at $0$ or $\lambda$) is bounded by the number of MTP constraints.

Theorem 2.

Suppose we have an OpenPDB $\mathcal{G}$ with MTP constraints $\mathcal{C}$, and a UCQ $Q$. If $\mathcal{P}'$ is a $\mathcal{C}$-constrained $\lambda$-completion satisfying $P_{\mathcal{P}'}(Q) = \overline{P}(Q)$, then all but at most $|\mathcal{C}|$ of the completed tuples $\langle t : p \rangle$ have $p \in \{0, \lambda\}$.

That is, our upper bound is given by a completion that has at most $|\mathcal{C}|$ added tuples with probability not exactly $0$ or $\lambda$. Intuitively, each MTP constraint contributes a single non-boundary tuple, which can be thought of as the "leftover" probability mass once the rest has been assigned in full.

This insight allows us to treat MTP query evaluation as a combinatorial optimization problem for the rest of this paper. Thus, we only consider the case where achieving the mean tuple probability exactly leaves us with every individual tuple at its boundary. To see that we can do this, we observe that Theorem 2 leaves a single tuple per MTP constraint not necessarily on the boundary. But this tuple can always be forced onto the boundary by very slightly increasing the mean of the constraint, as follows.

Corollary 3.

Suppose we have an OpenPDB $\mathcal{G}$ with MTP constraints $\mathcal{C}$, and a UCQ $Q$. Suppose further that each relation in $\mathcal{P}$ has at most one constraint in $\mathcal{C}$, and that each constraint allows adding open-world probability mass exactly divisible by $\lambda$. Then if $\mathcal{P}'$ is a $\mathcal{C}$-constrained $\lambda$-completion of $\mathcal{P}$ with $P_{\mathcal{P}'}(Q) = \overline{P}(Q)$, every completed tuple has probability exactly $0$ or $\lambda$.

Our investigation into the algorithmic properties of MTP query evaluation will focus on constraining a single relation, subject to a single combinatorial budget constraint.

4 Exact MTP Query Evaluation

With Section 3 formalizing MTP constraints and showing that computing upper bounds subject to MTP constraints is a combinatorial problem of choosing which $\lambda$-probability tuples to add in the completion, we now investigate exact solutions.

4.1 An Algorithm for Inversion-Free Queries

We begin by describing a class of queries which admits poly-time evaluation subject to an MTP constraint. We first need to define some syntactic properties of queries.

Definition 10.

Let $Q$ be a conjunctive query, and let $at(x)$ denote the set of relations of $Q$ containing variable $x$. We say that $Q$ is hierarchical if for any two variables $x, y$, we have either $at(x) \subseteq at(y)$, $at(y) \subseteq at(x)$, or $at(x) \cap at(y) = \emptyset$.

Intuitively, a conjunctive query being hierarchical indicates that it can either be separated into independent parts (the $at(x) \cap at(y) = \emptyset$ case), or there is some variable that appears in every atom. This simple syntactic property is the basis for determining whether query evaluation on a conjunctive query can be done in polynomial time (Dalvi and Suciu, 2007).
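A brute-force check of Definition 10 is straightforward; the sketch below uses an illustrative encoding (a conjunctive query as a list of (relation, variables...) atoms, not the paper's notation) and tests whether every pair of variable atom-sets is nested or disjoint.

def is_hierarchical(cq):
    at = {}
    for i, (_, *variables) in enumerate(cq):
        for v in variables:
            at.setdefault(v, set()).add(i)   # at(v): atoms containing v
    return all(a <= b or b <= a or not (a & b)
               for a in at.values() for b in at.values())

print(is_hierarchical([("R", "x"), ("S", "x", "y")]))              # True
print(is_hierarchical([("R", "x"), ("S", "x", "y"), ("T", "y")]))  # False: classic non-hierarchical pattern

We can further expand on this definition in the context of UCQs.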

Definition 11.

A UCQ is inversion-free if each of its conjuncts is hierarchical, and they all share the same hierarchy; see Jha and Suciu (2011) for a more detailed definition. If $Q$ is not inversion-free, we say that it has an inversion.

This query class remains tractable under MTP constraints.

Theorem 4.

For any inversion-free query $Q$, evaluating the upper probability bound $\overline{P}(Q)$ subject to an MTP constraint is in PTIME.

In order to prove Theorem 4, we provide a polytime algorithm for MTP query evaluation on inversion-free queries. As with OpenPDBs, our algorithm depends on Algorithm 1, the standard lifted inference algorithm for PDBs. Algorithm 1 proceeds in steps, recursively processing the query $Q$ to compute query probabilities in polynomial time for safe queries (Dalvi and Suciu, 2012). Further details of the algorithm, including the necessary preprocessing steps and notation, can be found in Dalvi and Suciu (2012) and Gribkoff et al. (2014a).

Input: UCQ $Q$, prob. database $\mathcal{P}$ with constants $\mathcal{D}$.
Output: The probability $P_{\mathcal{P}}(Q)$.
Step 0  Base of Recursion
     if $Q$ is a single ground atom $t$
         if $\langle t : p \rangle \in \mathcal{P}$ return $p$, else return $0$
Step 1  Rewriting of Query
     Convert $Q$ to a conjunction of UCQs: $Q = Q_1 \wedge \dots \wedge Q_m$
Step 2  Decomposable Conjunction
     if $Q = Q_1 \wedge Q_2$ where $Q_1$ and $Q_2$ are independent
         return $L(Q_1) \cdot L(Q_2)$
Step 3  Inclusion-Exclusion
     if $Q = Q_1 \wedge Q_2$ but has no independent decomposition
         return $L(Q_1) + L(Q_2) - L(Q_1 \vee Q_2)$
Step 4  Decomposable Disjunction
     if $Q = Q_1 \vee Q_2$ where $Q_1$ and $Q_2$ are independent
         return $1 - (1 - L(Q_1)) \cdot (1 - L(Q_2))$
Step 5  Decomposable Existential Quantifier
     if $Q$ has a separator variable $x$
         return $1 - \prod_{c \in \mathcal{D}} (1 - L(Q[x/c]))$
Step 6  Fail (the query is #P-hard)
Algorithm 1: Lift($Q$, $\mathcal{P}$), abbreviated by $L(Q)$
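To make Step 5 concrete, here is a sketch of lifted evaluation for the safe query $Q = \exists x \exists y\; \mathrm{Scientist}(x) \wedge \mathrm{CoAuthor}(x, y)$, assuming hypothetical probability lookups pS and pR that return 0 for atoms absent under the closed world. Since $x$ is a separator variable, groundings with different values of $x$ are independent, so the loop runs in time quadratic in the domain size rather than enumerating worlds.

def lifted_prob(domain, pS, pR):
    p_query_false = 1.0
    for c in domain:
        # P(exists y: CoAuthor(c, y)) = 1 - prod_d (1 - pR(c, d))
        p_no_coauthor = 1.0
        for d in domain:
            p_no_coauthor *= 1.0 - pR(c, d)
        # The grounding at c is true iff Scientist(c) holds and some coauthor exists.
        p_query_false *= 1.0 - pS(c) * (1.0 - p_no_coauthor)
    return 1.0 - p_query_false

domain = ["Einstein", "Erdos", "vonNeumann", "Shakespeare"]
pS = lambda c: {"Einstein": 0.8, "Erdos": 0.8,
                "vonNeumann": 0.9, "Shakespeare": 0.2}.get(c, 0.0)
pR = lambda c, d: {("Einstein", "Erdos"): 0.8, ("Erdos", "vonNeumann"): 0.9,
                   ("vonNeumann", "Einstein"): 0.5}.get((c, d), 0.0)
print(lifted_prob(domain, pS, pR))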

We now present an algorithm for doing exact MTP query evaluation on inversion-free queries. For brevity, we present the binary case; the general case follows similarly and can be found in the appendix. Suppose that we have a probabilistic database $\mathcal{P}$, a domain of constants denoted $\mathcal{D}$, a query $Q$, and an MTP constraint on relation $R$ allowing us to add exactly $k$ tuples with probability $\lambda$. Suppose that $Q$ immediately reaches Step 5 of Algorithm 1 (other steps will be discussed later), implying that the query has a separator variable $x$. We let $M[c, j]$ denote the upper query probability of $Q[x/c]$ subject to an MTP constraint allowing budget $j$ on the relevant portion of $R$. That is, $M[c, j]$ tells us the highest probability we can achieve for a partial assignment given a fixed budget. Observe that we can compute all entries of $M$ using a slight modification of Algorithm 1, where we compute probabilities with and without each added tuple. This will take time polynomial in $|\mathcal{D}|$ and $k$.

Next, we impose an ordering $c_1, \dots, c_n$ on the domain. Then we let $M'[i, j]$ denote the upper query probability of

$$\bigwedge_{l \leq i} Q[x/c_l]$$

with a budget of $j$ on the relevant portions of $R$. Then $M'$ considers all possible substitutions in our first index, meaning we have effectively removed a variable. Doing this repeatedly would allow us to perform exact MTP query evaluation. However, $M'$ is non-trivial to compute, and cannot be computed by simply modifying Algorithm 1. Instead, we observe the following recurrence:

$$M'[i, j] = \max_{0 \leq b \leq j} M'[i-1, j-b] \cdot M[c_i, b].$$

Intuitively, this recurrence says that since the tuples from each fixed constant are all independent, we do not need to store which budget configuration on the first $i - 1$ constants got us our optimal solution. Thus, when we add the $i$-th constant, we just need to check each possible budget we could assign to our new constant, and see which gives the overall highest probability. This recurrence can be implemented efficiently, yielding a dynamic programming algorithm that runs in time polynomial in the domain size and budget.
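Under the assumptions above, the recurrence can be implemented directly; the sketch below (hypothetical names) takes a precomputed table M, where M[i][b] is the best probability for $Q[x/c_i]$ with budget b, and returns the best overall probability with total budget k. Because groundings at distinct constants are independent, probabilities multiply, and the loops run in time O(n * k^2).

def mtp_upper_bound(M, k):
    n = len(M)                 # number of domain constants
    best = [1.0] * (k + 1)     # empty prefix: probability 1 at any budget
    for i in range(n):
        # best[j]: highest probability over the first i+1 constants, budget <= j
        best = [max(best[j - b] * M[i][b] for b in range(j + 1))
                for j in range(k + 1)]
    return best[k]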

Finally, we would like to generalize this algorithm beyond assuming that $Q$ immediately reaches Step 5 of Algorithm 1. Looking at the other cases, we see that Steps 0 and 1 have no effect on this recurrence, and Steps 2 and 4 correspond to multiplicative factors. For a query that reaches Step 3 (inclusion-exclusion), we need to construct such an $M$ and $M'$ for each term in the inclusion-exclusion sum, and follow the analogous recurrence. Notice that the modified algorithm only works in the case where we can always pick a common variable for all sub-queries to do dynamic programming on; that is, when the query is inversion-free, as was our assumption.

4.2 Queries with Inversion

We now show that allowing for inversions in safe queries can cause MTP query evaluation to become NP-hard. Interestingly, this means that MTP constraints fundamentally change the difficulty landscape of query evaluation.

To show this, we investigate a particular UCQ $Q$ over a ternary relation $R$. A key observation here is that $Q$ is a safe UCQ. That is, if we ignore constraints and evaluate it subject to the closed- or open-world semantics, computing the probability of the query is polynomial in the size of the database. We now show that this is not the case for open-world query evaluation subject to a single MTP constraint on $R$.

Theorem 5.

Evaluating the upper query probability bound $\overline{P}(Q)$ subject to an MTP constraint on $R$ is NP-hard.

The full proof of Theorem 5 can be found in the appendix; it shows a reduction from the NP-complete 3-dimensional matching problem to computing $\overline{P}(Q)$ with an MTP constraint on $R$. It uses the following intuitive correspondence.

Definition 12.

Let $X$, $Y$, $Z$ be finite disjoint sets representing nodes, and let $E \subseteq X \times Y \times Z$ be the set of available hyperedges. Then $M \subseteq E$ is a matching if for any distinct triples $(x_1, y_1, z_1), (x_2, y_2, z_2) \in M$, we have that $x_1 \neq x_2$, $y_1 \neq y_2$, and $z_1 \neq z_2$. The 3-dimensional matching decision problem is to determine, for a given $E$ and positive integer $k$, whether there exists a matching $M \subseteq E$ with $|M| \geq k$.

The set of available tuples for $R$ will correspond to all edges in $E$. Our MTP constraint forces a decision on which subset of $E$ to take and include in the $\lambda$-completion.

However, if we simply queried to maximize the probability of the $R$-atoms alone, this completion need not correspond to a matching. Instead, the query contains a conjunct which is maximized when each tuple chosen from $E$ has a different x-value. Similar conjuncts for $Y$ and $Z$ ensure that the query is maximized when using distinct y- and z-values. Putting all of these together ensures that the query probability is maximized when the subset of tuples chosen to complete $R$ forms a matching.

Finally, the last part of the query ensures that inference on $Q$ is tractable, but it is unaffected by the choice of tuples in $R$.
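For reference, the decision problem of Definition 12 can be stated as a few lines of (exponential) brute force; the reduction above asks the MTP oracle to answer exactly this question. Names here are illustrative.

from itertools import combinations

def has_matching(edges, k):
    # Is there a set of k hyperedges, pairwise distinct in x, y, and z?
    for cand in combinations(edges, k):
        xs, ys, zs = zip(*cand)
        if len(set(xs)) == len(set(ys)) == len(set(zs)) == k:
            return True
    return False

print(has_matching([(1, 1, 1), (2, 2, 2), (1, 2, 2)], 2))   # True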

5 Approximate MTP Query Evaluation

With Section 4.2 answering definitively that a general-purpose algorithm for evaluating MTP query bounds is unlikely to exist, even when restricted to safe queries, an approximation is the logical next step. We now restrict our discussion to situations where we constrain a single relation, and dig deeper into the properties of MTP constraints to show their submodular structure. We then exploit this property to achieve efficient bounds with guarantees.

5.1 On the Submodularity of Adding Tuples

To formally define and prove the submodular structure of the problem, we analyze query evaluation as a set function on adding tuples. We begin with a few relevant definitions.

Definition 13.

Suppose that we have an OpenPDB $\mathcal{G} = (\mathcal{P}, \lambda)$, with an MTP constraint on a single relation $R$, and we let $T$ be the set of possible tuples we can add to $R$. Then the set query probability function $f_Q : 2^T \to [0, 1]$ is defined as

$$f_Q(S) = P_{\mathcal{P}_S}(Q),$$

where $\mathcal{P}_S$ is the completion of $\mathcal{P}$ that adds exactly the tuples in $S$, each with probability $\lambda$.

Intuitively, this function describes the probability of the query as a function of which open tuples have been added. It provides a way to reason about the combinatorial properties of this optimization problem. Observe that $f_Q(\emptyset)$ is the closed-world probability of the query, while $f_Q(T)$ is the open-world probability.

We want to show that $f_Q$ is a submodular set function.

Definition 14.

A submodular set function is a function $f : 2^T \to \mathbb{R}$ such that for every $S_1 \subseteq S_2 \subseteq T$, and every $t \in T \setminus S_2$, we have that $f(S_1 \cup \{t\}) - f(S_1) \geq f(S_2 \cup \{t\}) - f(S_2)$.

Theorem 6.

The set query probability function $f_Q$ is submodular for any tuple-independent probabilistic database and UCQ query $Q$ without self-joins.

This gives us the desired submodularity property, which we can exploit to build efficient approximation algorithms.

5.2 From Submodularity to Approximation

Given the knowledge that the probability of a safe query without self-joins is submodular in the completion of a single relation, we are now tasked with using this to construct an efficient approximation. Since we further know the probability is also monotone, as we have restricted our language to UCQs, Nemhauser et al. (1978) tells us that we can get a $(1 - 1/e)$-approximation using a simple greedy algorithm. The final requirement to achieve the approximation described in Nemhauser et al. (1978) is that our set function must satisfy $f(\emptyset) = 0$. This can be achieved in a straightforward manner as follows.

Definition 15.

In the context of the set query probability function of Definition 13, the normalized set query probability function is defined as

$$\hat{f}_Q(S) = f_Q(S) - f_Q(\emptyset).$$

Proposition 7.

Any normalized set query probability function $\hat{f}_Q$ is monotone, submodular, and satisfies $\hat{f}_Q(\emptyset) = 0$.

By simply normalizing the set query probability function, we can now directly apply the greedy approximation described in Nemhauser et al. (1978). We slightly modify Algorithm 1 to efficiently compute the next best tuple to add based on the current database, and add it. This is repeated until adding another tuple would violate the MTP constraint. Finally, we say that $\hat{f}_Q(S_g)$ is the approximation given by this greedy algorithm, where $S_g$ is the greedily chosen set, and recall that the true upper bound is achieved by some optimal set $S^*$, so that $\overline{P}(Q) = f_Q(\emptyset) + \hat{f}_Q(S^*)$. We observe that $\hat{f}_Q(S_g) \leq \hat{f}_Q(S^*)$. Furthermore, Nemhauser et al. (1978) tells us the following:

$$\hat{f}_Q(S_g) \geq \left(1 - \frac{1}{e}\right) \hat{f}_Q(S^*).$$

Combining these and multiplying through gives us the following upper and lower bounds on the desired probability:

$$f_Q(\emptyset) + \hat{f}_Q(S_g) \;\leq\; \overline{P}(Q) \;\leq\; f_Q(\emptyset) + \frac{e}{e - 1}\, \hat{f}_Q(S_g).$$

It should be noted that, depending on the query and database, it is possible for this upper bound to exceed 1.
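A sketch of the greedy procedure and the resulting bracket on the upper bound, assuming a monotone submodular oracle f (for instance, the normalized set query probability function evaluated via the modified Algorithm 1) and a budget of k tuples; all names are illustrative.

def greedy(f, candidates, k):
    chosen = []
    for _ in range(k):
        if not candidates:
            break
        best = max(candidates, key=lambda t: f(chosen + [t]))
        if f(chosen + [best]) <= f(chosen):   # no marginal gain left
            break
        chosen.append(best)
        candidates = [t for t in candidates if t != best]
    return chosen

# With g = f(greedy set) and f_closed the closed-world probability:
#   f_closed + g  <=  true upper bound  <=  f_closed + g * e / (e - 1),
# where the right-hand side may exceed 1 and can then be clipped.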

6 Discussion, Future & Related Work

We proposed the novel problem of constraining open-world probabilistic databases at the schema level, without any additional ground information about individuals. We introduced a formal mechanism for doing this, by limiting the mean tuple probability allowed in any given completion, and then sought to compute bounds subject to these constraints. We now discuss remaining open problems and related work.

Section 4 showed that there exists a query that is NP-hard to compute exactly, and also presented a tractable algorithm for a class of inversion-free queries. The question remains how hard the other queries are; in particular, is the presented algorithm complete? Is there a complexity dichotomy, that is, a set of syntactic properties that determines the hardness of a query subject to MTP constraints? Questions of this form are a central object of study in probabilistic databases. They have been explored for conjunctive queries (Dalvi and Suciu, 2007), UCQs (Dalvi and Suciu, 2012), and a more general class of queries with negation (Fink and Olteanu, 2016).

The central goal of our work is to find stronger semantics based on OpenPDBs, while still maintaining their desirable tractability. This notion of achieving a powerful semantics while maintaining tractability is a common topic of study. Raedt and Kimmig (2015) study this problem by using a probabilistic interpretation of logic programs to define a model, leading to powerful semantics but a more limited scope of tractability (Fierens et al., 2015). Description logics (Nardi et al., 2003) are a knowledge representation formalism that can be used as the basis for a semantics. This is implemented in a probabilistic setting in, for example, probabilistic ontologies (Riguzzi et al., 2012, 2015), probabilistic description logics (Heinsohn, 1994), probabilistic description logic programs (Lukasiewicz, 2005), and Bayesian description logics (Ceylan and Peñaloza, 2014).

Probabilistic databases in particular are of interest due to their simplicity and practicality. Foundational work defines a few types of probabilistic semantics, and provides efficient algorithms as well as conditions under which they can be applied (Dalvi and Suciu, 2004, 2007, 2012). These algorithms, along with practical improvements, are implemented in industrial-level systems such as MystiQ (Ré and Suciu, 2008), SPROUT (Olteanu et al., 2009), MayBMS (Huang et al., 2009), and Trio, which implements the closely related uncertainty-lineage databases (Benjelloun et al., 2007). Problems beyond simple query evaluation are also of interest for PDBs, for example finding the most probable database (Gribkoff et al., 2014b) or ranking the top-k results (Ré et al., 2007). In the context of OpenPDBs in particular, Grohe and Lindner (2018) study the notion of an infinite open world, using techniques from analysis to explore when this is feasible.

References

  • Abiteboul et al. [1995] Serge Abiteboul, Richard Hull, and Victor Vianu. Foundations of databases. 1995.
  • Benjelloun et al. [2007] Omar Benjelloun, Anish Das Sarma, Alon Y. Halevy, Martin Theobald, and Jennifer Widom. Databases with uncertainty and lineage. The VLDB Journal, 17:243–264, 2007.
  • Carlson et al. [2010] Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka, and Tom M. Mitchell. Toward an architecture for never-ending language learning. In AAAI, 2010.
  • Ceylan and Peñaloza [2014] İsmail İlkan Ceylan and Rafael Peñaloza. The Bayesian description logic BEL. In Stéphane Demri, Deepak Kapur, and Christoph Weidenbach, editors, Automated Reasoning, pages 480–494, Cham, 2014. Springer International Publishing.
  • Ceylan et al. [2016] Ismail Ilkan Ceylan, Adnan Darwiche, and Guy Van den Broeck. Open-world probabilistic databases. In KR, 2016.
  • Cozman [2000] Fábio Gagliardi Cozman. Credal networks. Artif. Intell., 120:199–233, 2000.
  • Dalvi and Suciu [2004] Nilesh N. Dalvi and Dan Suciu. Efficient query evaluation on probabilistic databases. The VLDB Journal, 16:523–544, 2004.
  • Dalvi and Suciu [2007] Nilesh N. Dalvi and Dan Suciu. The dichotomy of conjunctive queries on probabilistic structures. In PODS, 2007.
  • Dalvi and Suciu [2012] Nilesh N. Dalvi and Dan Suciu. The dichotomy of probabilistic inference for unions of conjunctive queries. J. ACM, 59:30:1–30:87, 2012.
  • Dong et al. [2014] Xin Luna Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, New York, NY, USA, August 24-27, 2014, pages 601–610, 2014.
  • Fierens et al. [2015] Daan Fierens, Guy Van den Broeck, Joris Renkens, Dimitar Sht. Shterionov, Bernd Gutmann, Ingo Thon, Gerda Janssens, and Luc De Raedt. Inference and learning in probabilistic logic programs using weighted boolean formulas. TPLP, 15:358–401, 2015.
  • Fink and Olteanu [2016] Robert Fink and Dan Olteanu. Dichotomies for queries with negation in probabilistic databases. ACM Trans. Database Syst., 41:4:1–4:47, 2016.
  • Gribkoff et al. [2014a] Eric Gribkoff, Guy Van den Broeck, and Dan Suciu. Understanding the complexity of lifted inference and asymmetric weighted model counting. In UAI, 2014.
  • Gribkoff et al. [2014b] Eric Gribkoff, Guy Van den Broeck, and Dan Suciu. The most probable database problem. Proc. BUDA, 2014.
  • Grohe and Lindner [2018] Martin Grohe and Peter Lindner. Probabilistic databases with an infinite open-world assumption. CoRR, abs/1807.00607, 2018.
  • Heinsohn [1994] Jochen Heinsohn. Probabilistic description logics. In UAI, 1994.
  • Hinrichs and Genesereth [2006] Timothy Hinrichs and Michael Genesereth. Herbrand logic. Technical Report LG-2006-02, Stanford University, 2006.
  • Huang et al. [2009] Jiewen Huang, Lyublena Antova, Christoph Koch, and Dan Olteanu. Maybms: a probabilistic database management system. In SIGMOD Conference, 2009.
  • Jha and Suciu [2011] Abhay Jha and Dan Suciu. Knowledge compilation meets database theory: Compiling queries to decision diagrams. Theory of Computing Systems, 52:403–440, 2011.
  • Lukasiewicz [2005] Thomas Lukasiewicz. Probabilistic description logic programs. In ECSQARU, 2005.
  • Nardi et al. [2003] D. Nardi, Werner Nutt, and Francesco M. Donini. The description logic handbook: Theory, implementation, and applications. In Description Logic Handbook, 2003.
  • Nemhauser et al. [1978] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions—i. Mathematical Programming, 14(1):265–294, Dec 1978.
  • Olteanu et al. [2009] Dan Olteanu, Jiewen Huang, and Christoph Koch. Sprout: Lazy vs. eager query plans for tuple-independent probabilistic databases. 2009 IEEE 25th International Conference on Data Engineering, pages 640–651, 2009.
  • Peters et al. [2014] Shanan E Peters, Ce Zhang, Miron Livny, and Christopher Ré. A machine reading system for assembling synthetic paleontological databases. In PloS one, 2014.
  • Raedt and Kimmig [2015] Luc De Raedt and Angelika Kimmig. Probabilistic (logic) programming concepts. Machine Learning, 100:5–47, 2015.
  • Ré and Suciu [2008] Christopher Ré and Dan Suciu. Managing probabilistic data with mystiq: The can-do, the could-do, and the can’t-do. In SUM, 2008.
  • Ré et al. [2007] Christopher Ré, Nilesh N. Dalvi, and Dan Suciu. Efficient top-k query evaluation on probabilistic data. 2007 IEEE 23rd International Conference on Data Engineering, pages 886–895, 2007.
  • Reiter [1981] Raymond Reiter. On closed world data bases. In Readings in Artificial Intelligence, pages 119–140. Elsevier, 1981.
  • Riguzzi et al. [2012] Fabrizio Riguzzi, Elena Bellodi, Evelina Lamma, and Riccardo Zese. Epistemic and statistical probabilistic ontologies. In URSW, 2012.
  • Riguzzi et al. [2015] Fabrizio Riguzzi, Elena Bellodi, Evelina Lamma, and Riccardo Zese. Reasoning with probabilistic ontologies. In IJCAI, 2015.
  • Roth [1996] Dan Roth. On the hardness of approximate reasoning. Artificial Intelligence, 82(1-2):273–302, 1996.
  • Suchanek et al. [2007] Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. Yago: A Core of Semantic Knowledge. In 16th International Conference on the World Wide Web, pages 697–706, 2007.
  • Suciu et al. [2011] Dan Suciu, Dan Olteanu, Christopher Ré, and Christoph Koch. Probabilistic Databases. Morgan & Claypool Publishers, 1st edition, 2011.
  • Van den Broeck and Suciu [2017] Guy Van den Broeck and Dan Suciu. Query Processing on Probabilistic Data: A Survey. Foundations and Trends in Databases. Now Publishers, August 2017.

Appendix A Proofs of Theorems, Lemmas, and Propositions

A.1 Proof of Proposition 1

Proof.

To prove this, we need to show that $\mathcal{K}_{\mathcal{G}, \mathcal{C}}$ is both closed and convex.

Due to the way our constraints are defined, we know that $\mathcal{K}_{\mathcal{G}, \mathcal{C}} = \mathcal{K}_{\mathcal{G}} \cap H_{\mathcal{C}}$, where $H_{\mathcal{C}}$ is the set of all completions satisfying $\mathcal{C}$ (but not necessarily having all tuple probabilities at most $\lambda$). We already know that $\mathcal{K}_{\mathcal{G}}$ is credal, and thus closed and convex. Each MTP constraint defines a half-space, which we also know is closed and convex. The intersection of closed sets is closed, and the intersection of convex sets is convex, so $\mathcal{K}_{\mathcal{G}, \mathcal{C}}$ is credal. ∎

A.2 Proof of Theorem 2

Proof.

Since $\mathcal{K}_{\mathcal{G}, \mathcal{C}}$ is credal, we are interested here in determining the points of extrema of $\mathcal{K}_{\mathcal{G}, \mathcal{C}}$, as these will tell us precisely which completions can represent boundaries.

Consider the construction of the set $\mathcal{K}_{\mathcal{G}, \mathcal{C}}$, and suppose that there are $n$ possible open-world tuples, meaning that completions live in $\mathbb{R}^n$. As we observed in the proof of Proposition 1, $\mathcal{K}_{\mathcal{G}, \mathcal{C}} = \mathcal{K}_{\mathcal{G}} \cap H_{\mathcal{C}}$, where $H_{\mathcal{C}}$ is the set of all completions satisfying $\mathcal{C}$. We now make three key observations about these sets:

  1. Each individual possible open-world tuple is described by the intersection of 2 half-spaces: that is, the tuple on dimension $i$ is described by $p_i \geq 0$ and $p_i \leq \lambda$. $\mathcal{K}_{\mathcal{G}}$ is the intersection of all of these half-spaces.

  2. For any individual open-world tuple, the boundaries of the two half-spaces that describe it cannot intersect each other.

  3. An MTP constraint is a linear constraint, and thus can be described by a single half-space. So $\mathcal{K}_{\mathcal{G}, \mathcal{C}}$ is described by the intersection of all $2n + |\mathcal{C}|$ of these half-spaces.

Observations 1 and 3, together with Lemma 10, tell us that any point of extrema of $\mathcal{K}_{\mathcal{G}, \mathcal{C}}$ must be given by the intersection of the boundaries of at least $n$ of the half-spaces that form it. Observation 3 tells us that at most $|\mathcal{C}|$ of these half-spaces come from MTP constraints, leaving the boundaries of at least $n - |\mathcal{C}|$ half-spaces which come from $\mathcal{K}_{\mathcal{G}}$. Finally, observation 2 tells us that each of these half-spaces describes a different open-world tuple. But this means we must have at least $n - |\mathcal{C}|$ tuples which lie on the boundary of one of their defining half-spaces: they must be either $0$ or $\lambda$. ∎

A.3 Proof of Theorem 5

Before we present the formal proof, we state and prove two lemmas we will need.

Lemma 8.

Suppose we have two completions $\mathcal{P}_1$ and $\mathcal{P}_2$ of $\mathcal{P}$, which only differ on a single triple, that is, $\mathcal{P}_1 = \mathcal{P}_0 \cup \{\langle R(x_1, y, z) : \lambda \rangle\}$ and $\mathcal{P}_2 = \mathcal{P}_0 \cup \{\langle R(x_2, y, z) : \lambda \rangle\}$. Further suppose that $x_1 \neq x_2$, that $\lambda > 0$, and that $\mathcal{P}_0$ contains no triples with x-value $x_1$, but does contain at least 1 triple with x-value $x_2$. Then $P_{\mathcal{P}_1}(Q) > P_{\mathcal{P}_2}(Q)$.

Proof.

We will apply a similar technique here to the one used to prove Theorem 6, where we directly examine $gr(Q)$, the logical formula found by grounding $Q$. Since $Q$ is a union of conjunctive queries, $gr(Q)$ must be a DNF. Each conjunct either does not contain $R$, in which case it does not vary with the choice of completion, or it contains it exactly once. Any conjunct containing an atom of $R$ not assigned probability by a completion is logically false.

In order to prove that $P_{\mathcal{P}_1}(Q) > P_{\mathcal{P}_2}(Q)$, let us compare the ground formulas that result from each. It is clear that the only spot on which they differ is on conjuncts involving $R(x_1, y, z)$ or $R(x_2, y, z)$. Any conjuncts involving one of these atoms together with atoms outside of $R$ will also have an identical effect on the probability of the query, since the completions are identical outside of the differing triple.

Finally, this means we need to compare the contribution of the conjuncts containing $R(x_1, y, z)$ with that of the conjuncts containing $R(x_2, y, z)$. Observe that we know $\mathcal{P}_0$ contains triples with x-value $x_2$, which means the latter only contributes new probability mass when $R(x_2, y, z)$ is true and none of the other triples involving $x_2$ are true. However, $\mathcal{P}_0$ does not contain any triples with x-value $x_1$, so the former contributes the maximum probability possible. Thus, for any choice of probabilities such that $\lambda > 0$, we have that $P_{\mathcal{P}_1}(Q) > P_{\mathcal{P}_2}(Q)$. ∎

Lemma 9.

The upper bound $\overline{P}(Q)$ is achieved if and only if $\mathcal{P}'$ is a completion formed by a matching of size $k$, where $k$ is the maximum number of tuples with probability $\lambda$ that can be added to $R$ under the MTP constraint.

Proof.

Observe that if we begin with a completion given by a matching, we can repeatedly apply Lemma 8 to arrive at any other completion, with each application only decreasing the probability. Thus a completion given by a matching must have higher probability than any completion not given by a matching. ∎

Finally, we are ready to present the proof of Theorem 5.

Proof.

Suppose we are given an instance $E \subseteq X \times Y \times Z$ of a 3-dimensional matching problem and an integer $k$. We construct a probabilistic database in which the auxiliary relations of $Q$ have probability 1 on the atoms corresponding to elements of $X$, $Y$, or $Z$ respectively, and 0 everywhere else. Additionally, let $R(x, y, z)$ be an open-world tuple for any $(x, y, z) \in E$, and 0 otherwise. Finally, we place an MTP constraint on $R$ ensuring that at most $k$ tuples can be added, and let $\lambda = 1$. Then Lemma 9 tells us that $Q$ evaluated on this database will be maximized if and only if the completion used corresponds to a matching of size $k$. We determine the probability $p^*$ that such a completion would yield using a standard probabilistic database query algorithm, fixing $R$ to have entries for some arbitrary disjoint set of $k$ triples.

Finally, we use our oracle for MTP-constrained query evaluation to check $\overline{P}(Q)$ on the database we constructed from the matching problem. We compare the upper bound given by the oracle to $p^*$: if it is equal to $p^*$, Lemma 9 tells us that a matching of size $k$ does exist. Similarly, if the upper bound given by the oracle is lower than $p^*$, Lemma 9 tells us a matching of size $k$ does not exist. ∎

Lemma 10.

If $S \subseteq \mathbb{R}^n$ is a nonempty set formed by the intersection of the boundaries of fewer than $n$ half-spaces, then $S$ contains no points of extrema.

Proof.

Written as a system of linear equalities with fewer than $n$ equations in $n$ unknowns, the solution set must have at least 1 degree of freedom. This indicates that for any potential extreme point $s \in S$, one can move in either direction along this degree of freedom to construct an open line through $s$ entirely contained in $S$, so $s$ cannot be extreme. ∎

A.4 Proof of Theorem 6

Proof.

Without directly computing probabilities, let us inspect $gr(Q)$, the logical formula we get by grounding $Q$. $Q$ is a union of conjunctive queries, and thus $gr(Q)$ is a very large disjunction of conjuncts. Each conjunct can contain our constrained relation $R$ at most once, due to the query not having self-joins, and any one of these conjuncts containing an atom of $R$ not assigned any probability is logically false.

Next, to show that $f_Q$ is submodular, let $S_1 \subseteq S_2 \subseteq T$, and let $t \in T \setminus S_2$ be given. We assign names to the following subformulas of $gr(Q)$:

  • $F_1$ ($F_2$) is the disjunction of all conjuncts of $gr(Q)$ which are not logically false due to missing tuples in $S_1$ ($S_2$)

  • $F_t$ is the disjunction of all conjuncts of $gr(Q)$ containing the tuple $t$

Additionally, since $S_1 \subseteq S_2$, we also know that $F_1 \Rightarrow F_2$. Now, we make a few observations relating these quantities with our desired values for submodularity:

$$f_Q(S_i \cup \{t\}) - f_Q(S_i) = P(F_i \vee F_t) - P(F_i) = P(\neg F_i \wedge F_t), \quad i \in \{1, 2\}.$$

Finally, we have the following:

$$P(\neg F_1 \wedge F_t) \geq P(\neg F_2 \wedge F_t),$$

since $F_1 \Rightarrow F_2$ implies $\neg F_2 \Rightarrow \neg F_1$, which establishes $f_Q(S_1 \cup \{t\}) - f_Q(S_1) \geq f_Q(S_2 \cup \{t\}) - f_Q(S_2)$. ∎

Appendix B General Algorithm for Inversion-Free Queries

We now present an algorithm for doing exact MTP query evaluation on inversion-free queries. Suppose that we have a probabilistic database $\mathcal{P}$, a domain of constants denoted $\mathcal{D}$, a query $Q$, and an MTP constraint on relation $R$ allowing us to add $k$ tuples. For any constant $c$ and budget $j$, we let $M[c, j]$ denote the upper query probability of $Q[x/c]$ subject to an MTP constraint allowing budget $j$ on the relevant portion of $R$. That is, $M[c, j]$ tells us the highest probability we can achieve for a partial assignment given a fixed budget. Observe that we can compute all entries of $M$ using a slight modification of Algorithm 1. This will take time polynomial in $|\mathcal{D}|$ and $k$.

Next, we impose an ordering $c_1, \dots, c_n$ on the domain. For any $i$ and $j$, we let $M'[i, j]$ denote the upper query probability of

$$\bigwedge_{l \leq i} Q[x/c_l] \tag{1}$$

with a budget of $j$ on the relevant portions of $R$. Then $M'$ considers all possible substitutions in our first index, meaning we no longer need to worry about it. Doing this repeatedly would allow us to perform exact MTP query evaluation. However, $M'$ is non-trivial to compute, and cannot be computed by simply modifying Algorithm 1. Instead, we observe the following recurrence:

$$M'[i, j] = \max_{0 \leq b \leq j} M'[i-1, j-b] \cdot M[c_i, b].$$

Intuitively, this recurrence is saying that since the tuples from each fixed constant are independent of each other, we can add a new constant to our vocabulary by simply considering all combinations of budget assignments. This recurrence can be implemented efficiently, yielding a dynamic programming algorithm that runs in time polynomial in the domain size and budget.

The keen reader will now observe that the above definition and recurrence only make sense if $Q$ immediately reaches Step 5 of Algorithm 1. While this is true, we see that Steps 0 and 1 have no effect on this recurrence, and Steps 2 and 4 correspond to multiplicative factors. For a query that reaches Step 3 (inclusion-exclusion), we indeed need to construct such matrices for each sub-query. Notice that the modified algorithm only works in the case where we can always pick a common separator variable for all sub-queries to do dynamic programming on; that is, when the query is inversion-free.