1 Database research for data science
Database research is thriving in the data science era. Relational data remains ubiquitous. According to a recent Kaggle survey [16], most data scientists use relational data. The widespread use of relational data maintains the relevance of existing relational processing techniques. Furthermore, the new requirements brought by the machine learning workloads have led to new relational processing techniques. In this tutorial, we overview some of these existing and new techniques. They rely to a varying degree on an integration of relational database engines and machine learning libraries.
This first part of the tutorial overviews two main approaches at the interface of data processing engines and machine learning libraries. The main message is that this interface provides a fruitful and exciting opportunity for database research to shine. A tighter integration of the database and machine learning computation uncovers new research challenges and can lead to significant performance improvements.
A typical approach to learning over relational data involves the construction of the training dataset using a feature extraction query that joins the input relations and constructs new features using aggregates over the data columns. This query can be expressed in SQL and executed using a database system, e.g., PostgreSQL or SparkSQL [50], or Python Pandas [26] for Jupyter notebooks. The desired model is then learned using a machine learning library, e.g., scikit-learn [38], R [40], TensorFlow [1], or MLlib [27]. This approach ignores the structure of the underlying relational data at the expense of runtime performance. It is dubbed structure-agnostic in this tutorial. It puts together two black-box specialised systems for data processing and machine learning and may work for virtually any dataset and model. This approach has several significant downsides: the materialisation of the result of the feature extraction query; the export of this result from the data processing system to the machine learning library; high maintenance cost in case of changes to the underlying data; and the fact that the limitations of each of the two systems become limitations of their combination. These downsides hinder the runtime performance of a data science solution built in this way.
The tutorial will also present an alternative approach, dubbed structure-aware learning. It exploits the data sparsity and the structure of the relational data, in particular the various dependencies (multivalued, functional) in the data and in the result of the feature extraction query. In this approach, the learning algorithm is opened up and rewritten such that its data-intensive components are moved closer to the data, inside the relational query processor. Such components can be expressed as (group-by) aggregates over the feature extraction query; Part 2.1 of the tutorial is dedicated to them. Their output size is much smaller than that of the feature extraction query, and they can be computed asymptotically faster than the feature extraction query itself. Since this approach avoids the materialisation of the training dataset and its move between the two systems, it enjoys excellent runtime performance. Opening up the learning black box also allows one to consider known mechanisms for maintaining the data-intensive components under data updates [33].
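As a concrete reference point, the structure-agnostic pipeline can be sketched in a few lines using SQLite (standing in for the database system) and plain Python (standing in for the learning library). The schema, table names, and the single-feature ridge model below are hypothetical, chosen only to make the two-step pipeline visible.

```python
import sqlite3

# Hypothetical schema: Sales(store, item, units) and Items(item, price).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Sales(store TEXT, item TEXT, units REAL);
    CREATE TABLE Items(item TEXT, price REAL);
    INSERT INTO Sales VALUES ('s1','i1',3),('s1','i2',5),('s2','i1',2);
    INSERT INTO Items VALUES ('i1',1.5),('i2',4.0);
""")

# Step 1: materialise the feature extraction query (the training dataset),
# here with a single feature (price) and the response (units).
rows = conn.execute(
    "SELECT I.price, S.units FROM Sales S JOIN Items I ON S.item = I.item"
).fetchall()

# Step 2: export the materialised result to the learning code; here a
# closed-form ridge linear regression with one parameter and penalty lam.
lam = 0.1
sxx = sum(x * x for x, _ in rows)
sxy = sum(x * y for x, y in rows)
theta = sxy / (sxx + lam)
```

The materialised `rows` list is precisely the intermediate result whose computation and export the structure-aware approach avoids.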
The tutorial will contrast the two aforementioned approaches and present experimental evidence for the superior runtime performance of structure-aware over structure-agnostic learning [42]. This runtime advantage can in fact be translated into an accuracy advantage: within the time budget of training one model with the structure-agnostic approach, the structure-aware approach may train many possible models and eventually choose the one with the best accuracy. This part will conclude with a brief account of several instantiations of these two approaches in the literature.
2 Structure-aware Learning
Part 1 of the tutorial sets the scene for Part 2, which overviews principles behind the structure-aware approach for learning over relational data. We divide this second part of the tutorial into three blocks. The first block looks at how to turn the learning problem into a database problem.
The second block overviews techniques for computing and maintaining batches of aggregates that arise from mapping the learning problem to a database problem. It considers a variety of types of structure that can be exploited for improving the runtime performance of such techniques, including algebraic, combinatorial, statistical, and geometric structure. Techniques presented in this block aim at lowering the asymptotic computational complexity.
Finally, the third block overviews engineering tools that proved effective for improving runtime performance, such as parallelisation, specialisation to workload and dataset, and low-level sharing of data access between different components of the code. Such tools can effectively lower the constant factors of the computation time.
2.1 Turn the ML Problem into a DB Problem
The tutorial will explain how to express the data-intensive computation of the learning task using various forms of aggregation over the data matrix that are readily supported by database query languages. A particular focus will be on the learning tasks for ridge linear regression, support vector machines, and tree-structured Bayesian networks (Chow-Liu trees) – as representatives of wider classes of models and objective functions for the learning task. An in-depth treatment of specific models and objective functions is provided in the literature, e.g., [24, 43, 4, 3, 42, 5, 2, 20].
For learning using the least-squares loss function, the gradient vector of this function is built up using sum-product aggregates over the model features and parameters: for each pair of features (database attributes) X and Y, there is one such aggregate SUM(X * Y) that sums, over all tuples in the training dataset, the product of the values of the two attributes. In case an attribute corresponds to a categorical feature, it is promoted from the sum to the group-by clause. If both attributes are in the group-by clause, then the sum becomes the count of the number of occurrences of each pair of categories of the two attributes: COUNT(*) GROUP BY X, Y. This approach applies to ridge linear regression [43, 4] and polynomial regression models in general [35], factorisation machines [3, 4, 5], sum-product networks [10], principal component analysis [4], quadratically regularised low-rank models [15], and QR and SVD decompositions [49].
For decision trees, the computation of the cost function for each attribute and condition at a decision tree node can be expressed by a sum-product aggregate with a filter condition. The cost functions used by algorithms such as CART [9] for constructing regression trees rely on aggregates that compute the variance of the response Y conditioned on a filter restricting the value of an attribute X to be equal to, less than, or greater than a given constant (threshold t): VARIANCE(Y) WHERE X op t. For a categorical attribute, the filter condition expresses its membership in a set of possible categories. The thresholds and categories are decided in advance based on the distribution of values of X. The variance aggregate is expressed using the sum of squares, the square of the sum, and the count. For classification trees, the aggregates encode the entropy or the Gini index using group-by counts to compute value frequencies in the data matrix.
A large class of models, including support vector machines, is trained using subgradient descent. Such models use non-polynomial loss functions, such as the (ordinal) hinge, Huber, scalene, and epsilon-insensitive losses, that are defined by multiple cases conditioned on additive inequalities of the form θ1·x1 + ... + θn·xn ≥ c, where the coefficients θ1, ..., θn and the constant c are given and x1, ..., xn are the features. The efficient computation of aggregates conditioned on additive inequalities calls for new algorithms beyond the classical ones for theta joins [2, 20]. Similar aggregates are derived for k-means clustering [2].
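To ground these aggregates, the following sketch (plain Python over a toy data matrix with hypothetical column names) computes the three flavours of sum-product aggregate that arise in the least-squares setting: continuous–continuous, categorical–continuous with group-by promotion, and categorical–categorical as a group-by count.

```python
from collections import defaultdict

# Toy data matrix with hypothetical columns: country (categorical),
# price (continuous feature), units (continuous response).
data = [("us", 2.0, 5.0), ("us", 3.0, 7.0), ("de", 2.0, 4.0)]

# Continuous-continuous: SUM(price * units) over all tuples.
sum_price_units = sum(p * u for _, p, u in data)

# Categorical-continuous: country is promoted to the group-by clause,
# i.e., SUM(price) GROUP BY country.
by_country = defaultdict(float)
for c, p, _ in data:
    by_country[c] += p

# Categorical-categorical: the sum degenerates into a group-by count
# over the categorical attributes (here, country alone).
counts = defaultdict(int)
for c, _, _ in data:
    counts[c] += 1
```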
2.2 Exploit Structure of Data and Problem
This part of the tutorial overviews principles behind new efficient algorithms for batches of group-by aggregates, worst-case optimal equality joins, and additive inequality joins. Such algorithms power the structure-aware learning paradigm by systematically exploiting the structure of the relational data to lower the computational complexity and improve the runtime performance. The tutorial discusses the algebraic, combinatorial, statistical, and geometric structure of relational data, with a focus on the algebraic structure.
A. Algebraic structure. A relation is a sum-product expression, where the sum is the set union and the product is the Cartesian product. The computation expressed using relational algebra can be captured using (semi)rings. There is extensive work in the literature on relations over provenance semirings [13], generalised multiset relations [21], and factors over multiple semirings [6]. The tutorial will overview particular properties that make rings effective for computing and maintaining aggregates, as required by structure-aware learning.
A.1. Distributivity of product over sum. This law allows one to factor out data blocks common to several tuples in a relation, represent them once, and compute over them once. It is the main conceptual ingredient of factorised databases [36, 35]. By systematically applying the distributivity law, relations can be represented more succinctly, yet losslessly, as directed acyclic graphs with fewer repetitions of data values. The factorisations of relations representing the answers to relational queries can be asymptotically smaller than the standard representation as lists of tuples. This applies to a rich class of queries made up of joins [8, 36], selections, projections, unions, group-by aggregates [7], and order-by clauses [7]. More relevant to our tutorial, factorisation can also lower the computational complexity of machine learning over feature extraction queries [43, 35, 4, 2]. The tutorial will exemplify factorised computation for joins and for aggregates such as those discussed in Part 2.1. The framework of Functional Aggregate Queries [6] generalises factorised databases to semirings beyond sum-product and shows that many problems across computer science can benefit from factorised computation. LMFAO [42, 41], F-IVM [32, 33], and IVM [19, 18] employ factorised query computation and maintenance. These algorithms factorise the query into a hierarchy of increasingly simpler views, which are maintained bottom-up under data updates.
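The effect of distributivity can be illustrated on a two-relation join: pushing the aggregate past the join replaces a pass over the (potentially quadratic) join result with per-relation group-by sums. A minimal sketch with hypothetical relations R(a, b) and S(b, c):

```python
from collections import defaultdict

# R(a, b) joins S(b, c) on b. Distributivity lets us compute
# SUM(a * c) over the join without ever materialising the join:
# SUM(a * c) = sum over b of (SUM(a) for b in R) * (SUM(c) for b in S).
R = [(1, "x"), (2, "x"), (3, "y")]
S = [("x", 10), ("x", 20), ("y", 30)]

sum_a = defaultdict(int)          # SUM(a) GROUP BY b over R
for a, b in R:
    sum_a[b] += a
sum_c = defaultdict(int)          # SUM(c) GROUP BY b over S
for b, c in S:
    sum_c[b] += c

# Factorised aggregate: time linear in the input relations.
fast = sum(sum_a[b] * sum_c[b] for b in sum_a.keys() & sum_c.keys())

# Naive check over the materialised join (quadratic blow-up in general).
naive = sum(a * c for a, b1 in R for b2, c in S if b1 == b2)
assert fast == naive
```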
A.2. Sum-product abstraction. By conveniently overloading the sum and product operations in a (semi)ring, we can capture the computation of many different tasks over relational data, including the types of aggregates needed for training models. This is exemplified extensively in the FAQ framework [6], where the same structure of computation, but with possibly different definitions of sum and product, can be used to compute, among others, database queries, matrix chains, and inference queries (marginals and MAP) in probabilistic graphical models. The tutorial will exemplify this for covariance matrices [32] and mutual information [33] over feature extraction queries, used in the context of training models over relational data.
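A minimal illustration of this abstraction, not drawn from the cited systems: the same aggregation loop over a two-relation join, parameterised by the semiring operations, computes the join cardinality under (+, ×) and join non-emptiness under (or, and). The function name and toy relations are hypothetical.

```python
from collections import defaultdict

# One aggregation loop over the join of R(a, b) and S(b, c),
# parameterised by the (semi)ring operations plus/times/zero/one.
def agg_over_join(R, S, plus, times, zero, one):
    left = defaultdict(lambda: zero)
    for _, b in R:                 # map each R-tuple to the ring element `one`
        left[b] = plus(left[b], one)
    total = zero
    for b, _ in S:                 # combine with each matching S-tuple
        total = plus(total, times(left[b], one))
    return total

R = [(1, "x"), (2, "x"), (3, "y")]
S = [("x", 10), ("x", 20), ("y", 30)]

# Counting semiring (+, *): the number of tuples in the join result.
count = agg_over_join(R, S, lambda u, v: u + v, lambda u, v: u * v, 0, 1)

# Boolean semiring (or, and): whether the join result is non-empty.
nonempty = agg_over_join(R, S, lambda u, v: u or v, lambda u, v: u and v,
                         False, True)
```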
A.3. Additive inverse. To accommodate efficient mechanisms for the incremental maintenance of learned models over feature extraction queries, we extend tuples to carry payloads, which are elements of a ring. These payloads can, for instance, capture the multiplicities of the corresponding tuples or, in case of tuples in the query result, the number of derivations of a tuple from the input tuples via the query. Similarly, the aggregates needed for training a specific model may be represented as payloads. Tuple inserts and deletes to the underlying data can be modelled uniformly as inserts with payloads p and respectively -p, which satisfy the additive inverse law of the ring: p + (-p) = 0, where 0 is the neutral element for summation in the ring. Whenever a tuple's payload becomes 0, the tuple is no longer part of the data. In the simplest case, the payload is an integer representing the tuple's multiplicity [21, 22, 17]. Further examples discussed in the literature consider rings for factorised data representation and for covariance matrices [32, 33]. The efficient maintenance of covariance matrices makes it possible to keep ridge linear regression models fresh under high-throughput data changes [32]. A recent tutorial overviews advances in incremental view maintenance [12]. Our tutorial will go through several examples showing how rings can be used to maintain models under data updates.
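A minimal sketch of payload-based maintenance over the ring of integers: a view computing a group-by count over a hypothetical Sales table is updated with payload +1 for an insert and the additive inverse -1 for a delete, and a group whose payload reaches the neutral element 0 drops out of the view.

```python
from collections import defaultdict

# View: COUNT(*) GROUP BY item over a hypothetical Sales table,
# maintained with Z-valued payloads. A delete is simply an insert
# with the additive inverse payload -1.
view = defaultdict(int)

def apply_delta(item, payload):
    view[item] += payload
    if view[item] == 0:   # payload reached the neutral element 0:
        del view[item]    # the tuple is no longer part of the view

apply_delta("i1", +1)     # insert two i1 tuples and one i2 tuple
apply_delta("i1", +1)
apply_delta("i2", +1)
apply_delta("i1", -1)     # delete one i1 tuple
apply_delta("i2", -1)     # i2's payload hits 0: it leaves the view
```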
B. Combinatorial structure. The combinatorial structure prevalent in relational data is captured by notions such as the width measure of the query and the degree of a data value. For reasons of time limitation, the tutorial will only mention this type of structure. A brief overview is given in a recent keynote [34]. If a feature extraction query has width w, then its data complexity is Õ(N^w) for a database of size N, where Õ hides logarithmic factors in N. Similar complexities have been shown for learning a variety of models over feature extraction queries [43, 35, 2] using prototypes such as F [43] and LMFAO [42]. There are several width measures proposed in the literature, yet they are beyond the scope of our tutorial. The degree information captures the number of occurrences of a data value in the input database [31]. Several existing query processing and maintenance algorithms, e.g., worst-case optimal join algorithms [30] and the worst-case optimal incremental maintenance of triangle [17, 18] and hierarchical [19] queries, adapt their execution strategy to the degrees of data values, with different strategies for high-degree and low-degree values. A special form of bounded degree is given by functional dependencies. They can be used to lower the learning runtime for ridge polynomial regression models and factorisation machines: instead of learning a given model, one can learn a reparameterised model with fewer parameters and then map it back to the original model [4, 5].
C. Statistical structure. In the case of very large datasets, a feasible approach is to learn approximately over data samples [28]. When learning over feature extraction queries, we would like to sample from the input data through the queries. Prior work considered the problem of sampling through selection conditions and joins, e.g., ripple joins [14] and wander joins [25], and for specific classes of machine learning models [37].
D. Geometric structure. This type of structure becomes relevant whenever we use distance measures. Clustering algorithms can exploit such measures, e.g., the optimal transport distance between two probability measures, and the distance-based triangle inequality. The relational k-means (Rk-means) [11] is a prime example of a structure-aware learning approach that exploits the geometric structure of the underlying data. It achieves a constant-factor approximation of the k-means objective by clustering over a small coreset instead of the full result of the feature extraction query.
2.3 Engineering Tools of DB Researcher
Towards taming the computational challenge raised by structure-aware learning, a large effort focuses on a toolbox of systems techniques, such as: specialisation for workload, data, and hardware; observing the memory hierarchy and blocking operations; and distribution and parallelisation. The tutorial will highlight several recent efforts in this space, in particular on compiling the task of learning specific models over feature extraction queries into efficiently executable low-level code. Such techniques can lead to significant runtime performance improvements, as reported for the AC/DC [3], F-IVM [32, 33], LMFAO [42, 41], and IFAQ [47, 48] prototypes. The tutorial will make the case for such a compilation approach. These systems build on prior work on specialisation for queries and database schemas [29, 45, 46]. The LMFAO system also systematically shares computation across the batch of aggregates for structure-aware learning and takes advantage of multicore CPU architectures for domain and task parallelism.
3 Future Research
The tutorial will conclude with reflections on the state of machine learning over relational data and will pinpoint several directions of future research in systems and theory for structureaware learning. In particular, it will consider questions on the limits of structureaware learning and how to make it readily useful for practical data science projects. It will also give a glimpse of ongoing work of the authors on maintaining machine learning models under updates.
References
 [1] M. Abadi, P. Barham, J. Chen, et al. TensorFlow: A system for large-scale machine learning. In OSDI, pages 265–283, 2016.
 [2] M. Abo Khamis, R. R. Curtin, B. Moseley, H. Q. Ngo, X. Nguyen, D. Olteanu, and M. Schleich. On functional aggregate queries with additive inequalities. In PODS, pages 414–431, 2019.
 [3] M. Abo Khamis, H. Q. Ngo, X. Nguyen, D. Olteanu, and M. Schleich. AC/DC: In-database learning thunderstruck. In DEEM, pages 8:1–8:10, 2018.
 [4] M. Abo Khamis, H. Q. Ngo, X. Nguyen, D. Olteanu, and M. Schleich. In-database learning with sparse tensors. In PODS, pages 325–340, 2018.
 [5] M. Abo Khamis, H. Q. Ngo, X. Nguyen, D. Olteanu, and M. Schleich. Learning models over relational data using sparse tensors and functional dependencies. ACM Trans. Database Syst., 45(2), 2020.
 [6] M. Abo Khamis, H. Q. Ngo, and A. Rudra. FAQ: questions asked frequently. In PODS, pages 13–28, 2016.
 [7] N. Bakibayev, T. Kociský, D. Olteanu, and J. Závodný. Aggregation and ordering in factorised databases. PVLDB, 6(14):1990–2001, 2013.
 [8] N. Bakibayev, D. Olteanu, and J. Závodný. FDB: A query engine for factorised relational databases. PVLDB, 5(11):1232–1243, 2012.
 [9] L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth and Brooks, Monterey, CA, 1984.
 [10] G. Chichirim. Structured Sum-Product Networks, May 2020. BSc thesis, University of Oxford.
 [11] R. R. Curtin, B. Moseley, H. Q. Ngo, X. Nguyen, D. Olteanu, and M. Schleich. Rk-means: Fast clustering for relational data. In AISTATS, volume 108 of Proc. ML Res., pages 2742–2752, 2020.
 [12] I. Elghandour, A. Kara, D. Olteanu, and S. Vansummeren. Incremental techniques for large-scale dynamic query processing. In CIKM, pages 2297–2298, 2018. Tutorial.
 [13] T. J. Green and V. Tannen. The semiring framework for database provenance. In PODS, pages 93–99, 2017.
 [14] P. J. Haas and J. M. Hellerstein. Ripple joins for online aggregation. In SIGMOD, pages 287–298, 1999.
 [15] G.-R. Inelus. Quadratically Regularised Principal Component Analysis over multi-relational databases, 2019. MSc thesis, University of Oxford.
 [16] Kaggle. The State of Data Science and Machine Learning, 2017. https://www.kaggle.com/surveys/2017.
 [17] A. Kara, H. Q. Ngo, M. Nikolic, D. Olteanu, and H. Zhang. Counting triangles under updates in worst-case optimal time. In ICDT, pages 4:1–4:18, 2019.
 [18] A. Kara, H. Q. Ngo, M. Nikolic, D. Olteanu, and H. Zhang. Maintaining triangle queries under updates. ACM Trans. Database Syst., 2020.
 [19] A. Kara, M. Nikolic, D. Olteanu, and H. Zhang. Tradeoffs in static and dynamic evaluation of hierarchical queries. In PODS, pages 375–392, 2020.
 [20] M. A. Khamis, R. R. Curtin, B. Moseley, H. Q. Ngo, X. Nguyen, D. Olteanu, and M. Schleich. Functional aggregate queries with additive inequalities. ACM Trans. Database Syst., 45(4):17:1–17:41, 2020.
 [21] C. Koch. Incremental query evaluation in a ring of databases. In PODS, pages 87–98, 2010.
 [22] C. Koch, Y. Ahmad, O. Kennedy, M. Nikolic, A. Nötzli, D. Lupei, and A. Shaikhha. DBToaster: higher-order delta processing for dynamic, frequently fresh views. VLDB J., 23(2):253–278, 2014.
 [23] A. Kumar, M. Boehm, and J. Yang. Data management in machine learning: Challenges, techniques, and systems. In SIGMOD, pages 1717–1722, 2017.
 [24] A. Kumar, J. F. Naughton, and J. M. Patel. Learning generalized linear models over normalized data. In SIGMOD, pages 1969–1984, 2015.
 [25] F. Li, B. Wu, K. Yi, and Z. Zhao. Wander join and XDB: online aggregation via random walks. ACM Trans. Database Syst., 44(1):2:1–2:41, 2019.
 [26] W. McKinney. Pandas: a foundational python library for data analysis and statistics. Python for High Performance and Scientific Computing, 14, 2011.
 [27] X. Meng, J. Bradley, et al. MLlib: Machine learning in Apache Spark. J. Mach. Learn. Res., 17(1):1235–1241, 2016.
 [28] K. P. Murphy. Machine learning: a probabilistic perspective. MIT Press, Cambridge, Mass., 2013.
 [29] T. Neumann. Efficiently Compiling Efficient Query Plans for Modern Hardware. PVLDB, 4(9):539–550, 2011.
 [30] H. Q. Ngo, E. Porat, C. Ré, and A. Rudra. Worstcase optimal join algorithms. In PODS, pages 37–48, 2012.
 [31] H. Q. Ngo, C. Ré, and A. Rudra. Skew strikes back: New developments in the theory of join algorithms. In SIGMOD Rec., pages 5–16, 2013.
 [32] M. Nikolic and D. Olteanu. Incremental view maintenance with triple lock factorization benefits. In SIGMOD, pages 365–380, 2018.
 [33] M. Nikolic, H. Zhang, A. Kara, and D. Olteanu. F-IVM: learning over fast-evolving relational data. In SIGMOD, pages 2773–2776, 2020.
 [34] D. Olteanu. The relational data borg is learning. Proc. VLDB Endow., 13(12):3502–3515, 2020.
 [35] D. Olteanu and M. Schleich. Factorized databases. SIGMOD Rec., 45(2):5–16, 2016.
 [36] D. Olteanu and J. Závodný. Size bounds for factorised representations of query results. ACM Trans. Database Syst., 40(1):2, 2015.
 [37] Y. Park, J. Qing, X. Shen, and B. Mozafari. BlinkML: Efficient maximum likelihood estimation with probabilistic guarantees. In SIGMOD, pages 1135–1152, 2019.
 [38] F. Pedregosa, G. Varoquaux, A. Gramfort, et al. Scikit-learn: Machine learning in Python. J. Machine Learning Research, 12:2825–2830, 2011.
 [39] N. Polyzotis, S. Roy, S. E. Whang, and M. Zinkevich. Data management challenges in production machine learning. In SIGMOD, pages 1723–1726, 2017.
 [40] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Stat. Comp., www.r-project.org, 2013.
 [41] M. Schleich and D. Olteanu. LMFAO: An engine for batches of group-by aggregates. PVLDB, 13(12):2945–2948, 2020.
 [42] M. Schleich, D. Olteanu, M. Abo Khamis, H. Q. Ngo, and X. Nguyen. A layered aggregate engine for analytics workloads. In SIGMOD, pages 1642–1659, 2019.
 [43] M. Schleich, D. Olteanu, and R. Ciucanu. Learning linear regression models over factorized joins. In SIGMOD, pages 3–18, 2016.
 [44] M. Schleich, D. Olteanu, M. A. Khamis, H. Q. Ngo, and X. Nguyen. Learning models over relational data: A brief tutorial. In SUM, pages 423–432, 2019.
 [45] A. Shaikhha, Y. Klonatos, and C. Koch. Building efficient query engines in a high-level language. ACM Trans. Database Syst., 43(1):4:1–4:45, 2018.
 [46] A. Shaikhha, Y. Klonatos, L. Parreaux, L. Brown, M. Dashti, and C. Koch. How to architect a query compiler. In SIGMOD, pages 1907–1922, 2016.
 [47] A. Shaikhha, M. Schleich, A. Ghita, and D. Olteanu. Multi-layer optimizations for end-to-end data analytics. In CGO, pages 145–157, 2020.
 [48] A. Shaikhha, M. Schleich, and D. Olteanu. An intermediate representation for hybrid database and machine learning workloads. PVLDB, to appear, 2021.
 [49] B. van Geffen. QR decomposition of normalised relational data, 2018. MSc thesis, University of Oxford.
 [50] M. Zaharia, M. Chowdhury, et al. Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing. In NSDI, pages 2–2, 2012.