1 Introduction
Prolog’s success with advanced applications demonstrated the ability of declarative languages to express powerful algorithms as “logic + control.” Then, after observing that in relational database management systems “control” and optimization are provided implicitly by the system, Datalog researchers sought the ability to express powerful applications using only declarative logic-based constructs. After initial successes, which, e.g., led to the introduction of recursive queries in SQL, Datalog encountered two major obstacles as data analytics grew increasingly complex: (i) lack of expressive power at the language level, and (ii) lack of scalability and performance at the system level.
These problems became clear with the rise of more complex descriptive and predictive BigData analytics. For instance, the in-depth study of data mining algorithms [10] carried out in the late 90s by the IBM DB2 team concluded that the best way to carry out predictive analytics is to load the data from an external database into main memory and then mine the data from the cache with an efficient implementation written in a procedural language. However, recent advances in architectures supporting in-memory parallel and distributed computing have led to the renaissance of powerful declarative-language-based systems like LogicBlox [4], BigDatalog [12], SociaLite [11], BigDatalog-MC [14], Myria [13] and RaSQL [8] that can scale efficiently on multicore machines as well as on distributed clusters. In fact, some of these general-purpose systems, like BigDatalog and RaSQL, have outperformed commercial graph engines like GraphX on many classical graph-analytic tasks. This has brought the focus back onto the first challenge (i): how to express the wide spectrum of predictive and prescriptive analytics in declarative query languages. This problem has assumed great significance today with the revolution of machine-learning-driven data analytics, since “in-database analytics” can save data scientists considerable time and effort, which is otherwise repeatedly spent in extracting features from databases via multiple joins, aggregations and projections, and then exporting the dataset for use in external learning tools to generate the desired analytics [2]. Researchers have worked toward this “in-database analytics” solution by writing user-defined functions in procedural languages or using other low-level system interfaces, which the query engines can then import [7]. However, this approach raises three fundamental challenges:

Productivity and Developability: Writing efficient implementations of advanced data-analytic applications (or even modifying them) using low-level system APIs requires data science knowledge as well as system engineering skills. This can strongly hinder the productivity of data scientists and thus the development of these advanced applications.

Portability: User-defined functions written in one system-level API may not be directly portable to other systems where the architecture and underlying optimizations differ.

Optimization: Here, the application developer is entrusted with the responsibility of writing an optimal user-defined function, which is contrary to the work and vision of the database community in the 90s [9], which aspired to a high-level declarative language like SQL supported by implicit query-optimization techniques.
In this paper, we argue that these problems can be addressed by simple extensions that enable the use of aggregate functions in the recursive definitions of logic-based languages, such as Datalog, Prolog, and even SQL. To that effect, we use different case studies to show that simple aggregates in declarative recursive computation can express concisely and declaratively a host of advanced applications, ranging from graph analytics and dynamic programming (DP) based optimization problems to data mining and machine learning (ML) algorithms. While the use of non-monotonic aggregates in recursive programs raises difficult semantic issues, the newly introduced notion of pre-mappability (PreM) [15] can ensure the equivalence of such programs to aggregate-stratified programs under certain conditions. Following this notion of PreM, we further illustrate step by step how a data scientist or an application developer can very easily verify the semantic correctness of the declarative programs that provide these complex ML/AI-powered data-analytic solutions. Before diving into these case studies, let us briefly introduce PreM.
2 Pre-Mappable Constraints in Graph Queries
We consider a Datalog query to compute the shortest paths between all pairs of vertices in a graph given by the relation arc(X, Y, D), where D is the distance between vertices X and Y. In this query, the aggregate min is defined on the group-by variables X and Y, at a stratum higher than the recursive rules. Thus, we use the compact head notation often used in the literature for aggregates.
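Concretely, such a query can be sketched as follows (a hedged reconstruction; the predicate names path and shortestpath, and the min&lt;D&gt; head notation, are assumed from the cited literature):

```prolog
% Exit rule: a single arc is a path.
path(X, Y, D) :- arc(X, Y, D).
% Recursive rule: extend a path by one arc.
path(X, Z, D) :- path(X, Y, D1), arc(Y, Z, D2), D = D1 + D2.
% Aggregate rule, at a higher stratum, in compact head notation:
% min<D> groups by X and Y and keeps the minimal distance.
shortestpath(X, Y, min<D>) :- path(X, Y, D).
```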
The min and max aggregates can also be viewed as constraints enforced upon the results returned in the head of the rule: i.e., for the example at hand, the min constraint is enforced on the distance D. This view allows us to define the semantics of the program by re-expressing the aggregate with negation. This guarantees that the program has a perfect-model semantics, although the iterated fixpoint computation of such a model can be very inefficient and even non-terminating in the presence of cycles.
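For instance, the min aggregate can be re-expressed with negation along these lines (a sketch; the auxiliary predicate name shorter is assumed):

```prolog
% A path is shortest if no strictly shorter path exists between the same vertices.
shortestpath(X, Y, D) :- path(X, Y, D), ~shorter(X, Y, D).
shorter(X, Y, D) :- path(X, Y, D), path(X, Y, D1), D1 < D.
```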
The aforementioned inefficiency can be cured with PreM, whereby the min aggregate can be pushed inside the recursion, within the same stratum. Because of PreM, this transformation is equivalence-preserving [5], since the transformed program has a minimal fixpoint and computes the atoms of the original program in a finite number of iterations.
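Pushing min into the recursive rules yields a sketch like the following (assumed syntax; the predicate name spath is hypothetical):

```prolog
% min<D> is now applied at every step of the fixpoint computation,
% so only the current shortest distance per (X, Y) pair is retained.
spath(X, Y, min<D>) :- arc(X, Y, D).
spath(X, Z, min<D>) :- spath(X, Y, D1), arc(Y, Z, D2), D = D1 + D2.
```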
In general, this transformation holds true for a constraint γ and the Immediate Consequence Operator T (defined over the recursive rules) if γ(T(I)) = γ(T(γ(I))) holds for every interpretation I of the program.
Testing that PreM was satisfied during the execution of a program is straightforward [8]. Furthermore, simple formal tools [17] are at hand to prove that PreM holds for any possible execution of a given program, but due to space limitations we will simply use the reasoning embedded in those tools to prove PreM for the cases at hand. For example, PreM is always satisfied by non-recursive base rules [16], and hence we only need to prove the property for the recursive rule: i.e., we prove that an additional min constraint can be imposed on the recursive predicate in the body without changing the result returned in the head, inasmuch as this is already constrained by the min in the head. Indeed, every body value that violates the constraint in the body produces a head value that violates the constraint in the head, and it is thus eliminated. So the addition of the constraint in the body does not change the result when the constraint in the head is also in place. An even more dramatic situation occurs if the result computed in the head of the rule is invariant w.r.t. the value derived in the body; then we could even select the min of these values. In other words, we here have that T(I) = T(γ(I)). Obviously this is a special case of PreM that will be called intrinsic PreM. Another special case of PreM, called radical PreM, occurs when the equality γ(T(I)) = T(γ(I)) holds. This is for instance the case when a condition specifying that we are only interested in the paths that originate in a given source vertex is added to the recursive rule. Then this condition can be pushed all the way to the non-recursive base rule, leaving the recursive rule unchanged and thus amenable to the min optimization previously described. While the use of radical PreM in pushing constants was widely studied in the Datalog literature, the use of intrinsic PreM and full PreM in dealing with non-monotonic constraints was introduced in [16].
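As an illustration of radical PreM, a condition restricting paths to a source vertex (the constant a below is hypothetical) can be pushed from the query down to the exit rule (a sketch, reusing the assumed spath notation):

```prolog
% The condition X = a, pushed into the exit rule, specializes the whole
% computation to paths originating in a; the recursive rule is unchanged
% and remains amenable to the min optimization.
spath(a, Y, min<D>) :- arc(a, Y, D).
spath(X, Z, min<D>) :- spath(X, Y, D1), arc(Y, Z, D2), D = D1 + D2.
```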
3 Dynamic Programming based Optimization Problem
Consider the classic coin change problem: given a value V and an infinite supply of coins of given denominations, what is the minimum number of coins needed to make change for the amount V? Traditionally, declarative programming languages attempt to solve this through a stratified program: the lower stratum recursively enumerates all the possible ways to make up the value V, while the min aggregate is applied at the next stratum to select the desired answer. Obviously, such simple stratified recursive solutions are computationally extremely inefficient. In procedural languages, these problems are solved efficiently with dynamic programming (DP) based optimization. Such DP-based solutions utilize the “optimal substructure property” of the problem, i.e., the optimal solution of the given problem can be evaluated from the optimal solutions of its subproblems, which are, in turn, progressively calculated and stored in memory (memoization). For example, consider an extensional predicate coins with the atoms coins(2), coins(3) and coins(6), which represent coins with values 2 cents, 3 cents and 6 cents respectively. Now, we need at least 2 coins to make up the value 9 cents (3 cents + 6 cents). Note that we can also make up 6 cents using 3 coins of 2 cents each. However, the optimal solution to make up 9 cents should also in turn use the best alternative available to make up 6 cents, which is to use 1 coin of 6 cents itself. Based on this discussion, the example program below shows how this solution can be succinctly expressed in Datalog with the min aggregate in recursion. This program can be executed in a top-down fashion, and the optimal number of coins required to make up the change is determined by passing the value of V (9 in our example) to the recursive predicate num (as shown by the query goal).
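A hedged sketch of such a program (syntax assumed from the earlier examples; num(V, N) states that N coins suffice for the value V):

```prolog
% Exit rule: value 0 needs 0 coins.
num(0, 0).
% Recursive rule: pick a coin C and solve the subproblem for V - C;
% min<N> retains only the optimal count for each value V.
num(V, min<N>) :- coins(C), V >= C, num(V - C, N1), N = N1 + 1.
% Query goal: minimum number of coins for V = 9.
?- num(9, N).
```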
The successive bindings for the predicate num are calculated from the coin value C under consideration (as V - C) and are passed in a top-down manner (top-down information passing) till the exit rule is reached. The min aggregate inside the recursion ensures that for every top-down recursive call (subproblem) only the optimal solution is retained. With this materialization of the intensional predicate num (analogous to memoization), the program execution is almost akin to a DP-based solution, except for one difference: pure DP-based implementations are usually executed in a bottom-up manner. In the same vein, it is worth mentioning that many interesting DP algorithms (e.g., computing the minimum number of operations required for a chain matrix multiplication) can also be effectively computed with queries containing aggregates in recursion, using bottom-up semi-naive evaluation identical to the DP implementations. We next focus our attention on validating PreM for the above program. Note that the definition of PreM, intrinsic PreM or radical PreM does not refer to any evaluation strategy for processing the recursive query, i.e., the definitions are agnostic of top-down, bottom-up or magic-sets-based recursive query evaluation strategies. Interestingly, the use of the “optimal substructure property” in DP algorithms itself guarantees the validity of PreM. This can be illustrated as follows with respect to the min constraint: consider inserting an additional min constraint on the recursive predicate num in the body of the recursive rule. Naturally, any subproblem solution that does not satisfy this constraint will produce an N that violates the min aggregate in the head of the rule and hence will be discarded. Since the imposition of the min constraint in the rule body does not change the result when the min in the head is applied, the min constraint can be pushed inside the recursion, i.e., γ(T(I)) = γ(T(γ(I))), thus validating PreM.
4 K-Nearest Neighbors Classifier
K-nearest neighbors is a popular non-parametric, instance-based, lazy classifier, which stores all instances of the training data. Classification of a test point is computed based on a simple majority vote among the K nearest training instances of the test point (based on metrics like Euclidean distance), where the test point is assigned to the class that the majority of its neighbors belong to. In the Datalog program, the predicate te(Id,X,Y) denotes a relational instance of two-dimensional test points represented by their Id and coordinates (X,Y). Likewise, the predicate tr(Id,X,Y,Label) denotes the relational instance of training points represented by their Id, coordinates (X,Y) and corresponding class Label. In this example, one rule calculates the Euclidean distance between each test point and all the training points, while the recursive rule with the min aggregate determines the K nearest neighbors of each test point. Symbolically, the predicate nearestK(IdA,D,IdB,J) represents that the training instance IdB is the Jth nearest neighbor of the test point IdA, located at a distance of D. Finally, the remaining rules aggregate the votes for the different classes and perform the classification by majority voting. cMax is a special construct that extracts the corresponding class Label that received the maximum votes for a given test point. The rule using cMax can alternatively be expressed without it. In terms of simple relational algebra, the constructs cMin and cMax can be thought of as denoting the projection of specific columns (attributes) from a tuple that satisfies the min or max aggregate constraint respectively. However, these special constructs are mere syntactic sugar, since equivalent rules can be written without any of them.
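A hedged sketch of the classifier follows (K fixed to 3 for illustration; the rule structure and the count and cMax constructs are assumed from the description above):

```prolog
% Squared Euclidean distance between every test and training point
% (monotone in the true distance, so the ordering of neighbors is preserved).
dist(IdA, IdB, D) :- te(IdA, X1, Y1), tr(IdB, X2, Y2, _),
                     D = (X1 - X2) * (X1 - X2) + (Y1 - Y2) * (Y1 - Y2).
% The 1st nearest neighbor, and recursively the (J+1)th nearest (K = 3 assumed).
nearestK(IdA, min<D>, IdB, 1) :- dist(IdA, IdB, D).
nearestK(IdA, min<D>, IdB, J1) :- nearestK(IdA, D1, _, J), J < 3,
                                  dist(IdA, IdB, D), D > D1, J1 = J + 1.
% Count the votes per class and pick the majority label.
votes(IdA, Label, count<IdB>) :- nearestK(IdA, _, IdB, _), tr(IdB, _, _, Label).
class(IdA, cMax<(V, Label)>) :- votes(IdA, Label, V).
```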
We now verify that the min aggregate in the recursive rule satisfies PreM and ensures semantic correctness. Note that the exit rule always trivially satisfies the PreM definition, since the interpretation of the recursive predicate is initially an empty set. Thus, we focus our attention only on the recursive rule. We now prove that it satisfies intrinsic PreM: consider inserting an additional min constraint on the recursive predicate nearestK in the body of the rule (creating the interpretation γ(I) in the rule body). If this min constraint in the body ensures that, for a given IdA and J, D is the minimum distance of the Jth nearest neighbor, then for the corresponding valid (J+1)th candidates the rule without the min aggregate in the head produces all potential (J+1)th neighbors whose distances are higher than D (i.e., the distance of the Jth neighbor), thereby being identical to the result over the unconstrained body. Thus, we have T(I) = T(γ(I)), validating that the rule satisfies intrinsic PreM, since it remains invariant to the inclusion of the additional constraint in the rule body.
Similar to the K-nearest neighbor classifier, several other data mining algorithms like spanning-tree-based graph clustering, vertex- and edge-based clustering, tree approximation of Bayesian networks, etc. all depend on the discovery of a subsequence of elements in sorted order and can likewise be expressed with aggregates in recursion. It is also worth observing that while our declarative K-nearest-neighbors algorithm requires more lines of code than the other cases presented in this paper, it can still be expressed with only seven lines of logical rules, as compared to standard learning tools like Scikit-learn that implement this in 150+ lines of procedural or object-oriented code.

5 Iterative-Convergent Machine Learning Models
Iterative-convergent machine learning (ML) models like SVM, perceptron, linear regression and logistic regression models are often trained with batch gradient descent and can be written declaratively as Datalog programs with XY-stratification, as shown in [3]. A simple XY-stratified program template to train a typical iterative-convergent machine learning model uses the following predicates: J denotes the temporal argument, training_data is an extensional predicate representing the training set, and model(J, M) is an intensional predicate defining the model M learned at iteration J. The model is initialized using the predicate init_model, and one rule computes the corresponding error and gradient at every iteration, based on the current model and the training data, using the predicate compute (defined according to the learning algorithm under consideration). The final rule assigns the new model for the next iteration, based on the current model and the associated gradient, using the update predicate (also defined according to the learning algorithm at hand). Since many iterative-convergent ML models are formulated as convex optimization problems, the error gradually reduces over the iterations and the model converges when the error falls below a threshold. Interestingly, an equivalent version of this program can be expressed with aggregates and pre-mappable constraints in recursion. The stopping criterion pushed inside the recursion satisfies radical PreM, since the original and the transformed program would both generate the same atoms in find, where the error E is above the threshold (assuming a convex optimization function). Also note that the max aggregate defined over the recursive predicate find trivially satisfies intrinsic PreM.
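Such a template can be sketched as follows (a hedged reconstruction; the bodies of compute and update are left abstract, and the threshold constant epsilon is assumed):

```prolog
% Exit rule: the model at iteration 0 comes from init_model.
model(0, M) :- init_model(M).
% Compute the error and the gradient for the current model.
error(J, E, G) :- model(J, M), training_data(D), compute(M, D, E, G).
% While the error is above the threshold, derive the model for the next iteration.
model(J + 1, M1) :- model(J, M), error(J, E, G), E > epsilon, update(M, G, M1).
```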
6 Conclusion
Today, BigData applications are often developed and operated in silos, which only support a particular family of tasks, e.g., only descriptive analytics, only graph analytics, or only some ML models. This lack of a unifying model makes development extremely ad hoc and hard to port efficiently over multiple platforms. For instance, on many graph applications native Scala with Apache Spark cannot match the performance of systems like RaSQL, which can plan the best data partitioning/swapping strategy for the whole query and optimize the semi-naive evaluation accordingly [8]. However, as demonstrated in this paper, a simple extension to the declarative programming model, which allows the use of aggregates and easily verifiable pre-mappable constraints in recursion, can enable developers to write concise declarative programs (in Datalog, Prolog or SQL) expressing a plethora of applications, ranging from graph analytics to data mining and machine learning algorithms. This will also increase the productivity of developers and data scientists, since they can work only on the logical aspect of the program, without being concerned about the underlying physical optimizations.
References
 [2] Mahmoud Abo Khamis, Hung Q. Ngo, XuanLong Nguyen, Dan Olteanu & Maximilian Schleich (2018): In-Database Learning with Sparse Tensors. In: SIGMOD/PODS’18, doi:http://dx.doi.org/10.1145/3196959.3196960.
 [3] Vinayak R. Borkar et al. (2012): Declarative Systems for Large-Scale Machine Learning. In: Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, doi:http://dx.doi.org/10.1.1.362.4961.
 [4] Molham Aref, Balder ten Cate, Todd J. Green, Benny Kimelfeld, Dan Olteanu, Emir Pasalic, Todd L. Veldhuizen & Geoffrey Washburn (2015): Design and Implementation of the LogicBlox System. In: SIGMOD’15, doi:http://dx.doi.org/10.1145/2723372.2742796.
 [5] Tyson Condie, Ariyam Das, Matteo Interlandi, Alexander Shkapsky, Mohan Yang & Carlo Zaniolo (2018): Scaling-up reasoning and advanced analytics on BigData. TPLP 18(5-6), pp. 806–845, doi:http://dx.doi.org/10.1017/S1471068418000418.
 [6] Ariyam Das & Carlo Zaniolo (2019): A Case for Stale Synchronous Distributed Model for Declarative Recursive Computation. In: 35th International Conference on Logic Programming, ICLP’19.
 [7] Xixuan Feng, Arun Kumar, Benjamin Recht & Christopher Ré (2012): Towards a Unified Architecture for inRDBMS Analytics. In: SIGMOD’12, pp. 325–336, doi:http://dx.doi.org/10.1145/2213836.2213874.
 [8] Jiaqi Gu, Yugo Watanabe, William Mazza, Alexander Shkapsky, Mohan Yang, Ling Ding & Carlo Zaniolo (2019): RaSQL: Greater Power and Performance for Big Data Analytics with Recursive-aggregate-SQL on Spark. In: SIGMOD’19, doi:http://dx.doi.org/10.1145/3299869.3324959.
 [9] Tomasz Imielinski & Heikki Mannila (1996): A Database Perspective on Knowledge Discovery. Commun. ACM 39(11), pp. 58–64, doi:http://dx.doi.org/10.1145/240455.240472.
 [10] Sunita Sarawagi, Shiby Thomas & Rakesh Agrawal (2000): Integrating Association Rule Mining with Relational Database Systems: Alternatives and Implications. Data Mining and Knowledge Discovery 4(2), doi:http://dx.doi.org/10.1145/276304.276335.
 [11] Jiwon Seo, Jongsoo Park, Jaeho Shin & Monica S. Lam (2013): Distributed SociaLite: A Datalog-based Language for Large-scale Graph Analysis. Proc. VLDB Endow. 6(14), pp. 1906–1917, doi:http://dx.doi.org/10.14778/2556549.2556572.
 [12] Alexander Shkapsky, Mohan Yang, Matteo Interlandi, Hsuan Chiu, Tyson Condie & Carlo Zaniolo (2016): Big Data Analytics with Datalog Queries on Spark. In: SIGMOD’16, doi:http://dx.doi.org/10.1145/2882903.2915229.
 [13] Jingjing Wang, Magdalena Balazinska & Daniel Halperin (2015): Asynchronous and Fault-tolerant Recursive Datalog Evaluation in Shared-nothing Engines. Proc. VLDB Endow. 8(12), pp. 1542–1553, doi:http://dx.doi.org/10.14778/2824032.2824052.
 [14] Mohan Yang, Alexander Shkapsky & Carlo Zaniolo (2017): Scaling up the performance of more powerful Datalog systems on multicore machines. VLDB J. 26(2), pp. 229–248, doi:http://dx.doi.org/10.1007/s00778-016-0448-z.
 [15] Carlo Zaniolo, Mohan Yang, Ariyam Das & Matteo Interlandi (2016): The Magic of Pushing Extrema into Recursion: Simple, Powerful Datalog Programs. In: AMW’16.
 [16] Carlo Zaniolo, Mohan Yang, Matteo Interlandi, Ariyam Das, Alexander Shkapsky & Tyson Condie (2017): Fixpoint semantics and optimization of recursive Datalog programs with aggregates. TPLP 17(5-6), pp. 1048–1065, doi:http://dx.doi.org/10.1017/S1471068417000436.
 [17] Carlo Zaniolo, Mohan Yang, Matteo Interlandi, Ariyam Das, Alexander Shkapsky & Tyson Condie (2018): Declarative BigData Algorithms via Aggregates and Relational Database Dependencies. In: AMW’18.