1 Introduction

Data analysis tasks are becoming increasingly important in information systems. Although these tasks are currently implemented using code written in standard programming languages, in recent years there has been a significant shift towards declarative solutions where the definition of the task is clearly separated from its implementation [Alvaro et al.2010, Markl2014, Seo et al.2015, Wang et al.2015, Shkapsky et al.2016, Kaminski et al.2017].
Languages for declarative data analysis are typically rule-based, and they have already been implemented in reasoning engines such as BOOM [Alvaro et al.2010], DeALS [Shkapsky et al.2016], Myria [Wang et al.2015], SociaLite [Seo et al.2015], Overlog [Loo et al.2009], Dyna [Eisner and Filardo2011], and Yedalog [Chin et al.2015].
Formally, such declarative languages can be seen as variants of logic programming equipped with means for capturing quantitative aspects of the data, such as arithmetic function symbols and aggregates. It is, however, well-known since the ’90s that the combination of recursion with numeric computations in rules easily leads to semantic difficulties [Mumick et al.1990, Kemp and Stuckey1991, Beeri et al.1991, Van Gelder1992, Consens and Mendelzon1993, Ganguly et al.1995, Ross and Sagiv1997, Mazuran et al.2013], and/or undecidability of reasoning [Dantsin et al.2001, Kaminski et al.2017]. In particular, undecidability carries over to the languages underpinning the aforementioned reasoning engines for data analysis.
Kaminski et al. [2017] have recently proposed the language of limit Datalog programs—a decidable variant of negation-free Datalog equipped with arithmetic functions over the integers that is expressive enough to capture many data analysis tasks. The key feature of limit programs is that all intensional predicates with a numeric argument are limit predicates, whose extensions represent minimal (min) or maximal (max) bounds on numeric values. For instance, if we encode a weighted directed graph in the obvious way as facts over a ternary edge predicate and a unary node predicate, then the following rules encode the all-pairs shortest path problem, where a ternary min limit predicate encodes the distance from any node to any other node in the graph as the length of a shortest path between them.
The semantics of limit predicates is defined such that a fact over the distance predicate with numeric value m is entailed from these rules and a dataset if and only if the corresponding distance is at most m; as a result, all facts with larger numeric values are also entailed. This is in contrast to standard first-order predicates, where there is no semantic relationship between facts that differ only in their numeric value. The intended semantics of limit predicates can be axiomatised using rules over standard predicates; in particular, our example limit program is equivalent to a standard logic program consisting of rules (1), (2), and the following rule (3), where the distance predicate is now treated as a regular first-order predicate:
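To make the intended semantics concrete, the following sketch (an illustration, not the paper's formalism; all function and predicate names are our own) computes all-pairs shortest paths and checks entailment of a min limit fact, which holds exactly when the true distance is at most the stated value:

```python
# Illustrative sketch of the min limit semantics for the shortest-path example.
# A min limit fact dist(a, b, m) is entailed iff the shortest-path distance
# from a to b is at most m, so entailment is upward closed in m.

def shortest_paths(nodes, edges):
    """Floyd-Warshall over a weighted directed graph given as
    {(u, v): weight} with non-negative weights."""
    INF = float("inf")
    dist = {(u, v): INF for u in nodes for v in nodes}
    for u in nodes:
        dist[(u, u)] = 0
    for (u, v), w in edges.items():
        dist[(u, v)] = min(dist[(u, v)], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[(i, k)] + dist[(k, j)] < dist[(i, j)]:
                    dist[(i, j)] = dist[(i, k)] + dist[(k, j)]
    return dist

def entails_dist(dist, a, b, m):
    """dist(a, b, m) holds iff the true distance is at most m."""
    return dist[(a, b)] <= m
```

For example, if the distance from a to c is 5, then the facts for every bound m ≥ 5 are entailed, and no fact with a smaller bound is.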
Kaminski et al. [2017] showed that, under certain restrictions on the use of multiplication, reasoning (i.e., fact entailment) over limit programs is decidable and coNP-complete in data complexity; they then proposed a practical fragment with tractable data complexity.
Limit Datalog programs as defined in prior work are, however, positive and hence do not allow for negation-as-failure in the body of rules. Non-monotonic negation applied to limit atoms can be useful, not only to express a wider range of data analysis tasks, but also to declaratively obtain solutions to problems where the cost of such solutions is defined by a positive limit program. For instance, our example limit program consisting of rules (1) and (2) provides the length of a shortest path between any two nodes, but does not provide access to any of the paths themselves—an issue that we will be able to solve using non-monotonic negation.
In this paper, we study the language of limit programs with stratified negation-as-failure. Our language extends both positive limit Datalog as defined in prior work and plain (function-free) Datalog with stratified negation. We argue that our language provides useful additional expressivity, but at the expense of increased complexity of reasoning; for programs with restricted use of multiplication, complexity jumps from coNP-completeness in the case of positive programs to Δ^p_2-completeness for programs with stratified negation. We also show that the tractable fragment of positive limit programs defined in [Kaminski et al.2017] can be seamlessly extended with stratified negation while preserving tractability of reasoning; furthermore, the extended fragment is sufficiently expressive to capture the relevant data analysis tasks.
The proofs of all our results are given in the appendix.
2 Preliminaries

In this section we recapitulate the syntax and semantics of Datalog programs with integer arithmetic and stratified negation (see, e.g., [Dantsin et al.2001] for an excellent survey).
Syntax We assume a fixed vocabulary of countably infinite, mutually disjoint sets of predicates equipped with non-negative arities, objects, object variables, and numeric variables. Each position of an n-ary predicate is of either object or numeric sort. An object term is an object or an object variable. A numeric term is an integer, a numeric variable, or of the form (s + t), (s − t), or (s × t), where s and t are numeric terms and +, −, and × are the standard arithmetic functions. A constant is an object or an integer. A standard atom is of the form A(t1, …, tn), with A an n-ary predicate and each ti a term matching the sort of the i-th position of A. A (standard) positive literal is a standard atom, and a (standard) negative literal is of the form not α, for α a standard atom. A comparison atom is of the form (s < t) or (s ≤ t), with < and ≤ the usual comparison predicates over the integers, and s and t numeric terms. We write (s ≐ t) as an abbreviation for (s ≤ t) ∧ (t ≤ s). A term, atom, or literal is ground if it has no variables.
A rule r has the form φ → α, where the body φ is a possibly empty conjunction of standard literals and comparison atoms, and the head α is a standard atom. We assume without loss of generality that standard body literals are function-free; indeed, a conjunction containing a functional term s can be equivalently rewritten by replacing s with a fresh numeric variable m and adding the comparison (m ≐ s) to the conjunction. A rule r is safe if each object variable in r occurs in a positive literal in the body of r. A ground instance of r is obtained from r by substituting each variable by a constant of the right sort.
A fact is a rule with empty body and a function-free standard atom in the head that has no variables in object positions and no repeated variables in numeric positions. Intuitively, a variable in a fact says that the fact holds for every integer in that position. As a convention, we will omit → and use the symbol ∗ instead of variables when writing facts. A dataset is a finite set of facts. A dataset D is ordered if (i) it contains facts first(a1), succ(a1, a2), …, succ(an−1, an), and last(an), for some repetition-free enumeration a1, …, an of all objects in D; and (ii) it contains no other facts over the order predicates first, succ, and last. A program is a finite set of safe rules; without loss of generality we assume that distinct rules do not share variables. A predicate A is intensional (IDB) in a program P if A occurs in P in the head of a rule that is not a fact; otherwise, A is extensional (EDB) in P. Program P is positive if it has no negative literals, and it is semi-positive if negation occurs only in front of EDB atoms. A stratification of P is a function Λ mapping each predicate to a positive integer such that, for each rule with head over a predicate A and each standard body literal over a predicate B, we have Λ(B) ≤ Λ(A) if the literal is positive, and Λ(B) < Λ(A) if it is negative. Program P is stratified if it admits a stratification. Given a stratification Λ, we write P^i_Λ for the i-th stratum of P over Λ—that is, the set of all rules in P whose head predicates A satisfy Λ(A) = i. Note that each stratum is a semi-positive program.
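The stratification conditions can be checked constructively. The following sketch (our own illustration; a program is abstracted as dependency triples, an assumption not made in the paper) computes the least stratification by iterating the constraints, and detects the failure case, a cycle through negation:

```python
# Sketch of stratification checking.  deps is a list of triples
# (head_pred, body_pred, is_negative), one per head/body predicate pair.
# We iterate s[head] >= s[body] (positive) and s[head] >= s[body] + 1
# (negative); if a level exceeds the number of predicates, there is a
# cycle through negation and no stratification exists.

def stratify(preds, deps):
    level = {p: 1 for p in preds}
    n = len(preds)
    changed = True
    while changed:
        changed = False
        for head, body, neg in deps:
            need = level[body] + (1 if neg else 0)
            if level[head] < need:
                level[head] = need
                if level[head] > n:        # cycle through negation
                    return None
                changed = True
    return level
```

For instance, a program where r depends negatively on q, which depends positively on p, is stratified with p and q in stratum 1 and r in stratum 2, whereas a predicate depending negatively on itself admits no stratification.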
Semantics A (Herbrand) interpretation I is a possibly infinite set of ground facts (i.e., facts without ∗). Interpretation I satisfies a ground atom α, written I ⊨ α, if either (i) α is a standard atom such that evaluating the arithmetic functions in α under the usual semantics over the integers produces a fact in I; or (ii) α is a comparison atom that evaluates to true under the usual semantics. Interpretation I satisfies a ground negative literal not α, written I ⊨ not α, if I ⊭ α. The notion of satisfaction is extended to conjunctions of ground literals, rules, and programs as in first-order logic, with all variables in rules implicitly universally quantified. If I satisfies a program P, then I is a model of P. For I a Herbrand interpretation and P a (possibly infinite) semi-positive set of rules, let T_P(I) be the set of facts α such that φ → α is a ground instance of a rule in P and I ⊨ φ. Given a program P and a stratification Λ of P, for each stratum index i and each k ≥ 0 we define the interpretation I^i_k by induction on i and k: I^1_0 = ∅, I^i_{k+1} = I^i_k ∪ T_{P^i_Λ}(I^i_k), and I^{i+1}_0 = ⋃_{k≥0} I^i_k.
The materialisation of P is the interpretation ⋃_{k≥0} I^n_k, for n the greatest number such that Λ maps some predicate of P to n. The materialisation of a program does not depend on the chosen stratification. A stratified program P entails a fact α, written P ⊨ α, if the materialisation of P satisfies every ground instance of α. For positive programs, this definition coincides with the usual first-order notion of entailment: for P positive and α a fact, P ⊨ α if and only if α is satisfied in every model of P.
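The stratum-by-stratum fixpoint construction above can be sketched for the ground, function-free case as follows (an illustration under our own rule encoding, not the paper's algorithm):

```python
# Sketch of stratified materialisation for ground rules.  Each rule is
# (head, positive_body, negative_body) over ground atoms; strata is a list
# of rule sets ordered by a stratification.  Within a stratum, the immediate-
# consequence operator is iterated to a fixpoint; negative literals are
# evaluated against the interpretation of the lower strata only, as every
# stratum is a semi-positive program.

def materialise(strata):
    interp = set()
    for rules in strata:
        frozen = set(interp)   # lower strata: negation is checked here
        changed = True
        while changed:
            changed = False
            for head, pos, neg in rules:
                if all(a in interp for a in pos) and all(a not in frozen for a in neg):
                    if head not in interp:
                        interp.add(head)
                        changed = True
    return interp
```

Note that atoms derived within a stratum become visible to negation only in later strata, which is exactly why the inductive definition restarts the fixpoint at each stratum.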
Reasoning We study the computational properties of checking whether P ∪ D ⊨ α, for P a program, D a dataset, and α a fact. We are interested in data complexity, which assumes that only D and α form the input while P is fixed. Unless otherwise stated, all numbers in the input are coded in binary, and the size ‖D‖ of D is the size of its representation. Checking P ∪ D ⊨ α is undecidable even if the only arithmetic function in P is + [Dantsin et al.2001] and predicates have at most one numeric position [Kaminski et al.2017].
We use standard definitions of the basic complexity classes such as LogSpace, P, NP, and coNP. Given a complexity class C, P^C is the class of decision problems solvable in polynomial time by deterministic Turing machines with an oracle for a problem in C; the functional class FP^C is defined similarly. Finally, Δ^p_2 is a synonym for P^NP.
3 Stratified Limit Programs
We introduce stratified limit programs as a language that can be seen as either a semantic or a syntactic restriction of Datalog with integer arithmetic and stratified negation. Our language is also an extension of the language in [Kaminski et al.2017] with stratified negation.
A stratified limit program is a pair (P, τ) where
P is a stratified program in which each predicate either has no numeric position, in which case it is an object predicate, or only its last position is numeric, in which case it is a numeric predicate, and
τ is a partial function from numeric predicates to {min, max} that is total on the IDB predicates in P and on the predicates occurring in non-ground facts.
A numeric predicate A is a min (or max) limit predicate if τ(A) = min (or τ(A) = max, respectively). Numeric predicates that are not limit predicates are ordinary. An atom, fact, or literal is numeric, limit, etc., if so is its predicate.
All notions defined on ordinary Datalog programs (such as EDB and IDB predicates, stratification, etc.) transfer to limit programs by applying them to P. We often abuse notation and write P instead of (P, τ) when τ is clear from the context or immaterial. Whenever we consider a union of two limit programs, we silently assume that they coincide on τ. Finally, we denote ≥ (or ≤) by ≤_A if A is a min (or, respectively, max) limit predicate.
Intuitively, a limit fact A(a⃗, ℓ) says that the value of A for the tuple of objects a⃗ is ℓ or more, if A is max, or ℓ or less, if A is min. For example, a limit fact in our all-pairs shortest path example says that one node is reachable from another node via a path of cost ℓ or less. The intended semantics of limit predicates can be axiomatised using standard rules, as given next.
An interpretation satisfies a limit program (P, τ) if it satisfies the program P ∪ P_τ, where P_τ contains the following rule for each limit predicate A in P, in which ≤_A stands for ≥ if A is min and for ≤ if A is max:

A(x⃗, m) ∧ (m′ ≤_A m) → A(x⃗, m′)
The materialisation of (P, τ) is the materialisation of P ∪ P_τ; and (P, τ) entails a fact α, written (P, τ) ⊨ α, if P ∪ P_τ ⊨ α.
We next demonstrate the use of stratified negation with examples. One of the main uses of negating a limit atom is to ‘access’ the limit value (e.g., the length of a shortest path) attained by the atom in the materialisation of previous strata, and then to exploit such values in further computations. To facilitate this use of negation in our examples, we introduce a new operator as syntactic sugar.
The least upper bound expression of a min (or max) limit atom A(s⃗, m) is the conjunction A(s⃗, m) ∧ not A(s⃗, m′) ∧ (m′ ≐ m − 1) (or (m′ ≐ m + 1), respectively), where m′ is a fresh variable.
Clearly, for an interpretation I and a ground atom A(a⃗, ℓ), interpretation I satisfies the least upper bound expression of A(a⃗, ℓ) if and only if ℓ is the limit integer for A and a⃗ in I.
An input of the single-pair shortest path problem can be encoded in the obvious way as a dataset D using a ternary ordinary numeric predicate to represent the graph’s weighted edges, and unary facts to identify the source node s and the target node t, respectively. The stratified limit program given next computes, together with D (where all edge weights are positive), a DAG over a binary object predicate such that every maximal path in the DAG is a shortest path from s to t.
The first stratum consists of rules (4) and (5), and computes the length of a shortest path from s to every other node using a min limit predicate; in particular, the least upper bound of this predicate for a node is attained exactly at the length of a shortest path from s to that node. Then, in a second stratum, the program computes the binary object predicate representing the DAG, which contains an edge if and only if that edge is part of a shortest path from s to t. ∎
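The two strata of this example can be mirrored procedurally. The following sketch (our own illustration; names and the edge encoding are assumptions) first computes distances from s and to t, and then keeps exactly the edges lying on some shortest s-t path:

```python
# Sketch of the two-stratum computation: stratum 1 computes min limit
# distances; stratum 2 selects the DAG edges on shortest s-t paths.

def sp_dag(nodes, edges, s, t):
    INF = float("inf")
    # stratum 1: shortest distances from s (Bellman-Ford, positive weights)
    dist = {v: INF for v in nodes}
    dist[s] = 0
    for _ in range(len(nodes) - 1):
        for (u, v), w in edges.items():
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # distances *to* t, computed over reversed edges
    rdist = {v: INF for v in nodes}
    rdist[t] = 0
    for _ in range(len(nodes) - 1):
        for (u, v), w in edges.items():
            if rdist[v] + w < rdist[u]:
                rdist[u] = rdist[v] + w
    # stratum 2: keep edge (u, v) iff it lies on a shortest s-t path
    return {(u, v) for (u, v), w in edges.items()
            if dist[u] + w + rdist[v] == dist[t]}
```

Note how the second step needs the exact distance values rather than mere bounds; in the limit program, this is precisely what negation over the first stratum provides.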
The closeness centrality of a node u in a strongly connected weighted directed graph is a measure of how central the node is in the graph [Sabidussi1966]; variants of this measure are useful, for instance, for the analysis of market potential. Most commonly, the closeness centrality of a node u is defined as 1 / Σ_v d(u, v), where d(u, v) is the length of a shortest path from u to v; the sum in the denominator is often called the farness centrality of u. We next give a limit program computing a node of maximal closeness centrality in a given directed graph. We encode a graph as an ordered dataset using, as before, a unary object predicate and a ternary ordinary numeric predicate. The program consists of rules (8)–(16), which use several auxiliary min limit and object predicates.
The first stratum consists of rules (8)–(12). Rules (8) and (9) compute the distance (i.e., the length of a shortest path) between any two nodes. Rules (10)–(12) then compute the farness centrality of each node from these distances; for this, the program exploits the order predicates to iterate over the nodes in the graph while recording the best value obtained so far using an auxiliary predicate. In the second stratum (rules (13)–(16)), the program uses negation to compute the node of minimum farness centrality (and hence of maximum closeness centrality), which is recorded using a dedicated predicate; the order is again exploited to iterate over the nodes, and an auxiliary predicate records the current node of the iteration together with the node of best centrality encountered so far. ∎
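The centrality computation can likewise be sketched directly (an illustration with assumed names, replacing the program's iteration over the ordered domain by ordinary loops):

```python
# Sketch of the closeness-centrality example: all-pairs distances,
# farness as the sum of distances from each node, and the node of
# minimal farness (hence maximal closeness).

def most_central(nodes, edges):
    INF = float("inf")
    dist = {(u, v): (0 if u == v else INF) for u in nodes for v in nodes}
    for (u, v), w in edges.items():
        dist[(u, v)] = min(dist[(u, v)], w)
    for k in nodes:                      # Floyd-Warshall
        for i in nodes:
            for j in nodes:
                d = dist[(i, k)] + dist[(k, j)]
                if d < dist[(i, j)]:
                    dist[(i, j)] = d
    farness = {u: sum(dist[(u, v)] for v in nodes) for u in nodes}
    best = min(sorted(nodes), key=lambda u: farness[u])
    return best, farness[best]
```

As in the program, picking the minimum requires comparing exact farness values, which is where the second stratum's negation comes in.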
4 Stratified Limit-Linear Programs
By results in [Kaminski et al.2017], checking fact entailment is undecidable even for positive limit programs. Essentially, this follows from the fact that checking rule applicability over a set of facts requires solving arbitrary non-linear inequalities over the integers—that is, solving Hilbert’s 10th problem, which is undecidable. To regain decidability, Kaminski et al. proposed a restriction on positive limit programs, called limit-linearity, which ensures that every program satisfying the restriction can be transformed using a grounding technique so that all numeric terms in the resulting program are linear. In particular, this implies that rule applicability can be determined by solving a system of linear inequalities over the integers, which is feasible in NP. As a result, fact entailment for positive limit-linear programs is coNP-complete in data complexity.
We next extend the notion of limit-linearity to programs with stratified negation, and define semi-grounding as a way to simplify a limit-linear program by replacing certain types of variables with constants. We then prove that fact entailment is Δ^p_2-complete in data complexity for such programs. All programs in our previous examples are limit-linear as per the definition given next.
A numeric variable m is guarded in a rule r of a stratified limit program if
either m occurs in a positive ordinary literal in r;
or the body of r contains the literals A(s⃗, m) ∧ not A(s⃗, m′) ∧ (m′ ≐ m − 1) (or (m′ ≐ m + 1)),
where A is a min (or max, respectively) predicate and m′ is a numeric variable not occurring elsewhere in r.
A rule r is limit-linear if each numeric term in r is of the form s0 + s1 · m1 + ⋯ + sk · mk, where each mi is a distinct numeric variable not occurring in r in a (positive or negative) ordinary numeric literal, term s0 uses only variables occurring in a positive ordinary literal in r, and each term si with 1 ≤ i ≤ k uses only variables that are guarded in r and does not use m1, …, mk. A limit-linear program contains only limit-linear rules.
A rule r is semi-ground if all variables in r are numeric and occur only in limit and comparison atoms. The semi-grounding of a program P is obtained by replacing, in every rule of P, each object variable and each numeric variable occurring in an ordinary numeric atom with a constant occurring in P, in all possible ways.
It is easily seen that the semi-grounding of a limit-linear program entails the same facts as the original program for every dataset. Furthermore, as in prior work, Definition 6 ensures that the semi-grounding of a positive limit-linear program contains only linear numeric terms; finally, for programs with stratified negation, it ensures that negation can be eliminated while preserving limit-linearity when the program is materialised stratum by stratum, as we discuss in detail later on.
Decidability of fact entailment for positive limit-linear programs is established by first semi-grounding the program and then reducing fact entailment over the resulting program to the validity problem for Presburger formulas [Kaminski et al.2017]—that is, first-order formulas interpreted over the integers and composed using only variables, the constants 0 and 1, the functions + and −, and the comparisons < and ≤.
The extension of such a reduction to stratified limit programs, however, is complicated by the fact that in the presence of negation-as-failure, entailment no longer coincides with classical first-order entailment. We thus adopt a different approach, where we show decidability and establish data complexity upper bounds according to the following steps.
Step 1. We extend the results in [Kaminski et al.2017] for positive programs by showing that, for every positive limit-linear program P and dataset D, we can compute in FP^NP a finite representation of the (possibly infinite) materialisation of P ∪ D (see Lemma 8 and Corollary 9). This representation is called the pseudo-materialisation of P ∪ D.
Step 2. We further extend the results in Step 1 to semi-positive limit-linear programs, where negation occurs only in front of EDB predicates. For this, we show that fact entailment for such programs can be reduced, in polynomial time in the size of the data, to fact entailment over semi-ground positive limit-linear programs by exploiting the notion of a reduct (see Definition 10 and Lemma 11). Thus, we can assume the existence of an oracle for computing the pseudo-materialisation of a semi-positive limit-linear program.
Step 3. We provide an algorithm (see Algorithm 1) that decides entailment of a fact by a stratified limit-linear program using the oracle from Step 2. The algorithm maintains a pseudo-materialisation, which is initially empty and is constructed bottom-up stratum by stratum. In each step i, the algorithm updates the pseudo-materialisation by applying the oracle to the union of the pseudo-materialisation for stratum i − 1 and the rules in the i-th stratum. The final pseudo-materialisation, from which entailment of the input fact is read off, is computed using a number of oracle calls that is constant in the size of the data, which yields a Δ^p_2 upper bound in data complexity (Proposition 13 and Theorem 15).
In what follows, we specify each of these steps. We start by formally defining the notion of a pseudo-materialisation of a stratified limit program, which compactly represents its materialisation. Intuitively, the materialisation can be infinite because it can contain, for a limit predicate A and a tuple a⃗ of objects of suitable arity, an infinite number of facts of the form A(a⃗, ℓ). However, if the materialisation contains facts of this form, then either there is a limit value ℓ such that the materialisation contains A(a⃗, k) exactly for the integers k with k ≥ ℓ (if A is min) or k ≤ ℓ (if A is max), or the materialisation contains A(a⃗, k) for every integer k. As argued in prior work, it then suffices for the pseudo-materialisation to contain only the single fact A(a⃗, ℓ) in the former case, or the fact A(a⃗, ∞) in the latter case.
A pseudo-interpretation is a set of facts in which ∞ occurs only in facts over limit predicates and such that ℓ = ℓ′ for all facts A(a⃗, ℓ) and A(a⃗, ℓ′) in the set with A a limit predicate.
The pseudo-materialisation of a limit program P, written PM(P), is the (unique) pseudo-interpretation such that
an object or ordinary numeric fact is contained in PM(P) if and only if it is contained in the materialisation of P; and
for each limit predicate A, object tuple a⃗, and integer ℓ,
A(a⃗, ℓ) ∈ PM(P) if and only if the materialisation of P contains A(a⃗, ℓ) but does not contain A(a⃗, ℓ − 1) if A is min (or A(a⃗, ℓ + 1) if A is max), and
A(a⃗, ∞) ∈ PM(P) if and only if the materialisation of P contains A(a⃗, k) for all integers k.
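This compact representation can be sketched as a small data structure (our own illustration; `None` stands in for the ∞ marker, and the polarity map plays the role of the function assigning limit types):

```python
# Sketch of pseudo-interpretations: one value per limit predicate and
# object tuple, with None standing in for the infinity marker.

INF_MARK = None  # represents the divergence marker ∞

class PseudoInterpretation:
    def __init__(self, polarity):
        self.polarity = polarity      # predicate -> "min" or "max"
        self.best = {}                # (predicate, objects) -> value or INF_MARK

    def add(self, pred, objs, value):
        key = (pred, objs)
        cur = self.best.get(key, "absent")
        if cur == "absent":
            self.best[key] = value
        elif cur is INF_MARK or value is INF_MARK:
            self.best[key] = INF_MARK
        elif self.polarity[pred] == "min":
            self.best[key] = min(cur, value)   # stronger min fact: smaller value
        else:
            self.best[key] = max(cur, value)   # stronger max fact: larger value

    def entails(self, pred, objs, value):
        cur = self.best.get((pred, objs), "absent")
        if cur == "absent":
            return False
        if cur is INF_MARK:                     # diverges: all integers entailed
            return True
        return value >= cur if self.polarity[pred] == "min" else value <= cur
```

The `entails` check implements the closure of the stored limit value under weakening, so the infinite set of entailed limit facts never needs to be stored.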
We now strengthen the results in [Kaminski et al.2017] by establishing a bound on the size of pseudo-materialisations of positive, limit-linear programs.
Let P be a semi-ground, positive, limit-linear program, and let D be a limit dataset. Then the number of facts in PM(P ∪ D), as well as the magnitude of each integer occurring in PM(P ∪ D), is bounded polynomially in the largest magnitude of an integer in P ∪ D, exponentially in ‖P ∪ D‖, and double-exponentially in the maximal number of distinct variables in a rule of P, where ‖P ∪ D‖ stands for the size of the representation of P ∪ D assuming that all numbers take unit space.
By Lemma 8, the pseudo-materialisation of P ∪ D contains at most linearly many facts in the size of the data; furthermore, the size of each such fact is bounded polynomially once P is considered fixed. Hence, the pseudo-materialisation of P ∪ D can be computed in FP^NP in data complexity, even if P is not semi-ground.
Let P be a positive, limit-linear program. Then the function mapping each limit dataset D to PM(P ∪ D) is computable in FP^NP in the size of D.
In our second step, we extend this result to semi-positive programs. For this, we start by defining the notion of a reduct of a semi-positive limit-linear program. The reduct is obtained by first computing the semi-grounding of the program and then eliminating all negative literals while preserving fact entailment. Intuitively, negative literals can be eliminated because they involve only EDB predicates; as a result, their extension can be computed in polynomial time from the facts in the program alone. To eliminate a ground negative literal not α, it suffices to check whether α is entailed by the facts in the program and to simplify all rules containing not α accordingly; in turn, limit literals involving a numeric variable m can be rewritten as comparisons of m with a constant computed from the facts in the program.
Let P be a semi-positive, limit-linear program and let F be the set of all facts in P. The reduct of P is obtained by first computing the semi-grounding of P and then applying the following transformations to each rule r and each negative body literal not α in the result:
if α is a ground atom, delete r if F ⊨ α, and delete not α from r otherwise;
if not α is a non-ground limit literal of the form not A(a⃗, m) with m a numeric variable, then
delete r if F ⊨ A(a⃗, k) for each integer k;
delete not α from r if F ⊭ A(a⃗, k) for each integer k; and
replace not α in r with the comparison (m < ℓ) if A is min (or (m > ℓ) if A is max) otherwise, where ℓ is the limit value for A and a⃗ in F.
Note that semi-ground programs disallow non-ground negative literals over ordinary numeric predicates, which is why these are not considered in Definition 10. As shown by the following lemma, reducts allow us to reduce fact entailment for semi-positive, limit-linear programs to semi-ground, positive, limit-linear programs.
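The transformations of the reduct can be sketched as follows (an illustration under our own rule and fact encodings; `limits` maps each limit atom's object tuple to its limit value, with `None` for divergence):

```python
# Sketch of the reduct construction for semi-ground rules.  A rule is
# (head, body, neg); each negative literal is either ("ground", atom) or
# ("limit", pred, objs, var).  polarity maps predicates to "min"/"max".

def reduct(rules, ground_facts, limits, polarity):
    out = []
    for head, body, neg in rules:
        new_body = list(body)
        keep = True
        for lit in neg:
            if lit[0] == "ground":
                _, atom = lit
                if atom in ground_facts:
                    keep = False          # not atom is false: delete the rule
                    break
                # otherwise not atom is true: simply drop the literal
            else:                          # ("limit", pred, objs, var)
                _, pred, objs, var = lit
                if (pred, objs) not in limits:
                    continue               # no fact at all: literal always holds
                ell = limits[(pred, objs)]
                if ell is None:            # diverges: literal can never hold
                    keep = False
                    break
                # not A(objs, m) holds iff m < limit (min) or m > limit (max)
                op = "<" if polarity[pred] == "min" else ">"
                new_body.append(("cmp", var, op, ell))
        if keep:
            out.append((head, new_body))
    return out
```

The resulting rules are positive, with negation traded for comparisons against constants, which is exactly what makes the reduction to the positive case possible.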
For P a semi-positive, limit-linear program, D a limit dataset, P′ the reduct of P ∪ D, and α a fact, we have P ∪ D ⊨ α if and only if P′ ⊨ α. Moreover, P′ can be computed in polynomial time in ‖D‖, and its size is polynomially bounded in ‖D‖.
Let P be a semi-positive, limit-linear program. Then the function mapping each limit dataset D to PM(P ∪ D) is computable in FP^NP in the size of D.
We are now ready to present Algorithm 1, which decides entailment of a fact α by a stratified limit-linear program. The algorithm uses an oracle for computing the pseudo-materialisation of a semi-positive program. The existence of such an oracle and its computational bounds are ensured by Lemma 12. Algorithm 1 constructs the pseudo-materialisation stratum by stratum in a bottom-up fashion. For each stratum, the algorithm uses the oracle to compute the pseudo-materialisation of the program consisting of the rules in the current stratum and the facts in the pseudo-materialisation computed for the previous stratum. Once the final pseudo-materialisation has been constructed, entailment of α is checked directly over it.
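The loop of Algorithm 1 can be sketched as a skeleton parameterised by the oracle (an illustration; the oracle interface and fact encodings are our own assumptions):

```python
# Skeleton of the stratum-by-stratum construction: each call hands the
# oracle the rules of the current stratum together with the facts
# representing the pseudo-materialisation of the previous strata.

def stratified_pseudo_materialisation(strata, dataset, oracle):
    pseudo = set(dataset)            # current pseudo-materialisation
    for stratum_rules in strata:     # bottom-up over the strata
        pseudo = oracle(stratum_rules, pseudo)
    return pseudo

def entailed(strata, dataset, fact, oracle):
    return fact in stratified_pseudo_materialisation(strata, dataset, oracle)
```

Since the number of strata is fixed by the program, the number of oracle calls is constant in the size of the data, which is the crux of the complexity argument.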
Correctness of the algorithm is immediate by the properties of the oracle and the correspondence between pseudo-materialisations and materialisations. Moreover, if the oracle runs in FP^C in data complexity, for some complexity class C, then it can only return a pseudo-interpretation that is polynomially bounded in the size of the data; as a result, Algorithm 1 runs in P^C, since the number of strata of the input program does not depend on the input dataset.
If the oracle is computable in FP^C in data complexity, then Algorithm 1 runs in P^C in data complexity.
For P a stratified, limit-linear program, D a limit dataset, and α a fact, deciding P ∪ D ⊨ α is in Δ^p_2 in data complexity.
The matching lower bound is obtained by reduction from the OddMinSAT problem [Krentel1988]. An instance of OddMinSAT consists of a repetition-free tuple x1, …, xn of propositional variables and a satisfiable propositional formula φ over these variables. The question is whether the satisfying truth assignment σ of φ for which the tuple ⟨σ(x1), …, σ(xn)⟩ is lexicographically minimal, assuming 0 < 1, among all satisfying truth assignments of φ has σ(xn) = 1. In our reduction, φ is encoded as a dataset using object predicates to encode the structure of φ and numeric predicates to encode the order of the variables x1, …, xn; a fixed, two-strata program then goes through all assignments in ascending lexicographic order and evaluates the encoding of φ until it finds some assignment σ that makes φ true; it then derives a goal fact if and only if σ(xn) = 1. Thus, the goal fact is entailed if and only if the instance belongs to the language of OddMinSAT.
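For intuition, OddMinSAT itself can be decided by brute force (exponential, for illustration only; the formula is represented here as an arbitrary Python predicate, an encoding assumption of ours):

```python
# Brute-force sketch of OddMinSAT: enumerate assignments to x1,...,xn in
# ascending lexicographic order (0 < 1, x1 most significant), stop at the
# first satisfying one, and test whether it sets the last variable to 1.

from itertools import product

def odd_min_sat(n, phi):
    for sigma in product((0, 1), repeat=n):   # lexicographically ascending
        if phi(sigma):                        # first hit = lexicographic minimum
            return sigma[-1] == 1
    return None                               # phi is unsatisfiable
```

The fixed program in the reduction performs essentially this enumeration declaratively, using the ordering of the data to step through assignments and negation to detect the first satisfying one.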
For P a stratified, limit-linear program, D a limit dataset, and α a fact, deciding P ∪ D ⊨ α is Δ^p_2-complete in data complexity. The lower bound holds already for programs with two strata.
5 A Tractable Fragment
Tractability in data complexity is an important requirement in data-intensive applications. In this section, we propose a syntactic restriction on stratified, limit-linear programs that suffices to ensure tractability of fact entailment in data complexity. Our restriction extends the notion of type consistency from prior work to account for negation. The programs in Examples 4 and 5 are type-consistent.
A semi-ground, limit-linear rule r is type-consistent if
each numeric term in r is of the form k0 + k1 · m1 + ⋯ + kj · mj, where k0 is an integer and each ki with 1 ≤ i ≤ j is a nonzero integer, called the coefficient of the variable mi in the term;
each numeric variable occurs in exactly one standard body literal;
each numeric variable in a negative literal is guarded;
if the head of r is a limit atom, then each unguarded variable occurring in the head with a positive (or negative) coefficient also occurs in the body in a (unique) positive limit literal of the same (or different, respectively) type (i.e., min vs. max) as the head;
for each comparison (s < t) or (s ≤ t) in r, each unguarded variable occurring in s with a positive (or negative) coefficient also occurs in a (unique) positive max (or min, respectively) body literal, and each unguarded variable occurring in t with a positive (or negative) coefficient occurs in a (unique) positive min (or max, respectively) body literal.
A semi-ground, stratified, limit-linear program is type-consistent if all of its rules are type-consistent. A stratified limit-linear program is type-consistent if the program obtained by first semi-grounding it and then simplifying all numeric terms as much as possible is type-consistent.
Similarly to type-consistency for positive programs, Definition 16 ensures that divergence of limit facts to ∞ can be detected in polynomial time when constructing a pseudo-materialisation (see [Kaminski et al.2017] for details). Furthermore, the conditions in Definition 16 have been crafted such that the reduct of a semi-positive type-consistent program (and hence of any intermediate program considered while materialising a stratified program) can be trivially rewritten into a positive type-consistent program. For this, it is essential to require a guarded use of negation (see the third condition in Definition 16).
For P a semi-positive, type-consistent program and D a limit dataset, the reduct of P ∪ D is polynomially rewritable to a positive, semi-ground, type-consistent program P′ such that, for each fact α, P ∪ D ⊨ α if and only if P′ ⊨ α.
Lemma 17 allows us to extend the polynomial-time algorithm in [Kaminski et al.2017] for computing the pseudo-materialisation of a positive type-consistent program to semi-positive programs, thus obtaining a tractable implementation of the oracle restricted to type-consistent programs. This suffices because Algorithm 1, when given a type-consistent program as input, only applies the oracle to type-consistent programs. Thus, by Proposition 13, we obtain a polynomial-time upper bound on the data complexity of fact entailment for type-consistent programs with stratified negation. Since plain Datalog is already P-hard in data complexity, this upper bound is tight.
For P a stratified, type-consistent program, D a limit dataset, and α a fact, deciding P ∪ D ⊨ α is P-complete in data complexity.
Finally, as we show next, our extended notion of type consistency can be efficiently recognised.
Checking whether a stratified, limit-linear program is type-consistent is in LogSpace.
6 Conclusion and Future Work
Motivated by declarative data analysis applications, we have extended the language of limit programs with stratified negation-as-failure. We have shown that the additional expressive power provided by our extended language comes at a computational cost, but we have also identified sufficient syntactic conditions that ensure tractability of reasoning in data complexity.

There are many avenues for future work. First, it would be interesting to formally study the expressive power of our language. Since type-consistent programs extend plain (function-free) Datalog with stratified negation, it is clear that they capture P on ordered datasets [Dantsin et al.2001], and we conjecture that the full language of stratified limit-linear programs captures Δ^p_2. From a more practical perspective, we believe that limit programs can naturally express many tasks that admit a dynamic programming solution (e.g., variants of the knapsack problem, among many others). Conceptually, a dynamic programming approach can be seen as a three-stage process: first, one constructs an acyclic ‘graph of subproblems’ that orders the subproblems from smallest to largest; then, one computes a shortest/longest path over this graph to obtain the value of optimal solutions; finally, one computes the actual solution by tracing back through the graph. Capturing the third stage seems to always require non-monotonic negation (as illustrated in our path computation example), whereas the first stage may or may not require it depending on the problem; the second stage can be realised with a (recursive) positive program. Second, our formalism should be extended with aggregate functions. Although certain forms of aggregation can be simulated using arithmetic functions and iteration over the object domain by exploiting the ordering, having aggregation available explicitly would allow us to express certain tasks more naturally.
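The three-stage dynamic programming view can be illustrated on 0/1 knapsack (a sketch of ours, not part of the paper; the DP table plays the role of the subproblem graph, and the traceback is the stage that would need negation in a limit program):

```python
# Knapsack as the three stages: build the subproblem table, read off the
# optimal value, and trace back the items realising it.

def knapsack(weights, values, capacity):
    n = len(weights)
    # stage 1: subproblem graph as a table best[i][c] = best value using
    # the first i items within capacity c
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if weights[i - 1] <= c:
                take = best[i - 1][c - weights[i - 1]] + values[i - 1]
                if take > best[i][c]:
                    best[i][c] = take
    # stage 2: the optimal value (a longest path through the table)
    opt = best[n][capacity]
    # stage 3: trace back which items realise the optimum
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return opt, sorted(chosen)
```

Stage 3 compares exact table entries, which mirrors how a limit program would use negation over a lower stratum to access the attained optimal values.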
Third, we would like to go beyond stratified negation and investigate the theoretical properties of limit Datalog under the well-founded semantics [Van Gelder et al.1991] or the stable model semantics [Gelfond and Lifschitz1988]. Finally, we plan to implement our reasoning algorithms and test them in practice.
This research was supported by the EPSRC projects DBOnto, MaSI3, and ED3.
- [Alvaro et al.2010] Peter Alvaro, Tyson Condie, Neil Conway, Khaled Elmeleegy, Joseph M. Hellerstein, and Russell Sears. BOOM analytics: exploring data-centric, declarative programming for the cloud. In EuroSys 2010, pages 223–236. ACM, 2010.
- [Beeri et al.1991] Catriel Beeri, Shamim A. Naqvi, Oded Shmueli, and Shalom Tsur. Set constructors in a logic database language. J. Log. Program., 10(3&4):181–232, 1991.
- [Chin et al.2015] Brian Chin, Daniel von Dincklage, Vuk Ercegovac, Peter Hawkins, Mark S. Miller, Franz Josef Och, Christopher Olston, and Fernando Pereira. Yedalog: Exploring knowledge at scale. In SNAPL 2015, volume 32 of LIPIcs, pages 63–78. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2015.
- [Chistikov and Haase2016] Dmitry Chistikov and Christoph Haase. The taming of the semi-linear set. In ICALP, volume 55 of LIPIcs, pages 128:1–128:13. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2016.
- [Consens and Mendelzon1993] Mariano P. Consens and Alberto O. Mendelzon. Low complexity aggregation in GraphLog and Datalog. Theor. Comput. Sci., 116(1):95–116, 1993.
- [Dantsin et al.2001] Evgeny Dantsin, Thomas Eiter, Georg Gottlob, and Andrei Voronkov. Complexity and expressive power of logic programming. ACM Comput. Surv., 33(3):374–425, 2001.
- [Eisner and Filardo2011] Jason Eisner and Nathaniel Wesley Filardo. Dyna: Extending datalog for modern AI. In Datalog 2010, volume 6702 of LNCS, pages 181–220. Springer, 2011.
- [Ganguly et al.1995] Sumit Ganguly, Sergio Greco, and Carlo Zaniolo. Extrema predicates in deductive databases. J. Comput. Syst. Sci., 51(2):244–259, 1995.
- [Gelfond and Lifschitz1988] Michael Gelfond and Vladimir Lifschitz. The stable model semantics for logic programming. In ICLP/SLP 1988, pages 1070–1080. MIT Press, 1988.
- [Kaminski et al.2017] Mark Kaminski, Bernardo Cuenca Grau, Egor V. Kostylev, Boris Motik, and Ian Horrocks. Foundations of declarative data analysis using limit datalog programs. In IJCAI 2017, pages 1123–1130. ijcai.org, 2017.
- [Kemp and Stuckey1991] David B. Kemp and Peter J. Stuckey. Semantics of logic programs with aggregates. In ISLP 1991, pages 387–401. MIT Press, 1991.
- [Krentel1988] Mark W. Krentel. The complexity of optimization problems. J. Comput. System Sci., 36(3):490–509, 1988.
- [Loo et al.2009] Boon Thau Loo, Tyson Condie, Minos N. Garofalakis, David E. Gay, Joseph M. Hellerstein, Petros Maniatis, Raghu Ramakrishnan, Timothy Roscoe, and Ion Stoica. Declarative networking. Commun. ACM, 52(11):87–95, 2009.
- [Markl2014] Volker Markl. Breaking the chains: On declarative data analysis and data independence in the big data era. PVLDB, 7(13):1730–1733, 2014.
- [Mazuran et al.2013] Mirjana Mazuran, Edoardo Serra, and Carlo Zaniolo. Extending the power of datalog recursion. VLDB J., 22(4):471–493, 2013.
- [Mumick et al.1990] Inderpal Singh Mumick, Hamid Pirahesh, and Raghu Ramakrishnan. The magic of duplicates and aggregates. In VLDB 1990, pages 264–277. Morgan Kaufmann, 1990.
- [Ross and Sagiv1997] Kenneth A. Ross and Yehoshua Sagiv. Monotonic aggregation in deductive databases. J. Comput. System Sci., 54(1):79–97, 1997.
- [Sabidussi1966] Gert Sabidussi. The centrality index of a graph. Psychometrika, 31(4):581–603, 1966.
- [Seo et al.2015] Jiwon Seo, Stephen Guo, and Monica S. Lam. SociaLite: An efficient graph query language based on datalog. IEEE Trans. Knowl. Data Eng., 27(7):1824–1837, 2015.
- [Shkapsky et al.2016] Alexander Shkapsky, Mohan Yang, Matteo Interlandi, Hsuan Chiu, Tyson Condie, and Carlo Zaniolo. Big data analytics with datalog queries on Spark. In SIGMOD 2016, pages 1135–1149. ACM, 2016.
- [Van Gelder et al.1991] Allen Van Gelder, Kenneth A. Ross, and John S. Schlipf. The well-founded semantics for general logic programs. J. ACM, 38(3):620–650, 1991.
- [Van Gelder1992] Allen Van Gelder. The well-founded semantics of aggregation. In PODS 1992, pages 127–138. ACM Press, 1992.
- [Wang et al.2015] Jingjing Wang, Magdalena Balazinska, and Daniel Halperin. Asynchronous and fault-tolerant recursive datalog evaluation in shared-nothing engines. PVLDB, 8(12):1542–1553, 2015.