1. Introduction
In this paper we report our ongoing work on the design and implementation of AC/DC, a gradient descent solver for a class of optimization problems including ridge linear regression, polynomial regression, and factorization machines. It extends our prior system F for factorized learning of linear regression models
(Schleich et al., 2016) to capture nonlinear models, categorical features, and model reparameterization under functional dependencies (FDs). Its design is but one fruit of our exploration of the design space for the AI engine currently under development at RelationalAI. It subscribes to a recent effort to bring analytics inside the database (Hellerstein et al., 2012; Feng et al., 2012; Kumar et al., 2015; Schleich et al., 2016; Elgamal et al., 2017) and thereby avoid the nontrivial time spent on data import/export at the interface between database systems and statistical packages. AC/DC¹ solves optimization problems over design matrices defined by feature extraction queries over databases of possibly many relations. It is a unified approach for computing both the optimizations and the underlying database queries; the two tasks not only live in the same process space, they are intertwined in one execution plan with asymptotically lower complexity than that of either one in isolation. This is possible due to several key contributions.
¹AC/DC supports both categorical and continuous features and fast processing. Its name alludes to the duality of alternating and discrete currents and to the fast-paced sound of a homonymous rock band.
First, AC/DC decomposes
a given optimization problem into a set of aggregates whose answers are fed into a gradient descent solver that iteratively approximates the solution to the problem until it reaches convergence. The aggregates capture the combinations of features in the input data, as required for computing the gradients of an objective function. They are group-by aggregates in case of combinations with at least one categorical feature and plain scalars for combinations of continuous features only. The former aggregates are grouped by the query variables with a categorical domain. Prior work on in-database machine learning mostly considered continuous features and one-hot encoded categorical features, e.g.,
(Feng et al., 2012; Kumar et al., 2015; Schleich et al., 2016; Elgamal et al., 2017). We avoid the expensive one-hot encoding of categorical features by using a sparse representation based on group-by aggregates (Abo Khamis et al., 2018b). Several tools, e.g., libFM (Rendle, 2012; Fan et al., 2008) for factorization machines and LIBSVM (Chang and Lin, 2011) for support vector machines, employ sparse data representations that avoid the redundancy introduced by one-hot encoding. These are computed on the result of the feature extraction query once it is exported out of the database.
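To make the contrast concrete, the following Python sketch (a toy illustration with made-up data; AC/DC itself is implemented in C++) compares a dense one-hot encoding of a categorical column against the sparse group-by count representation:

```python
from collections import Counter

# Hypothetical toy column of a categorical feature "city".
rows = ["oslo", "paris", "oslo", "oslo", "paris"]

# Dense one-hot encoding: one indicator column per category, one row per tuple.
categories = sorted(set(rows))                      # ['oslo', 'paris']
one_hot = [[1 if r == c else 0 for c in categories] for r in rows]

# Sparse alternative in the spirit of AC/DC: a group-by count aggregate,
# i.e. SELECT city, count(*) FROM rows GROUP BY city.
group_by_count = Counter(rows)

# The sparse map stores one entry per category that actually occurs,
# instead of |rows| x |categories| mostly-zero entries.
assert len(one_hot) * len(categories) == 10
assert group_by_count["oslo"] == 3 and group_by_count["paris"] == 2
```

The gap widens with the number of categories: the dense matrix grows multiplicatively, while the group-by map grows only with the number of categories present in the data.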
Second, AC/DC factorizes the computation of these aggregates over the feature extraction query to achieve the lowest known complexity. We recently pinpointed the complexity of AC/DC (Abo Khamis et al., 2018b). The factorized computation of the queries obtained by decomposing optimization problems can be asymptotically faster than the computation of the underlying join query alone. This means that all machine learning approaches that work on a design matrix defined by the result of the database join are asymptotically suboptimal. The only other end-to-end in-database learning system (Kumar et al., 2015) that may outperform the underlying database join works for generalized linear models over key–foreign key joins, does not decompose the task to AC/DC's granularity, and cannot recover the good complexity of AC/DC since it does not employ factorized computation.
Third, AC/DC massively shares computation across the aggregate-join queries. These different queries use the same underlying join, and their aggregates have similar structures.
Fourth, AC/DC exploits functional dependencies
(FDs) in the input database to reduce the dimensionality of the optimization problem. Prior work exploited FDs for Naïve Bayes classification and feature selection
(Kumar et al., 2016). AC/DC can reparameterize (non)linear regression models with nonlinear regularizers (Abo Khamis et al., 2018b). This reparameterization requires taking the inverses of matrices, which are sums of identity matrices and vector dot products. To achieve performance improvements by model reparameterization, AC/DC uses an interplay of its own data structure and the Eigen library for linear algebra (Guennebaud et al., 2010).
In this paper we report on the performance of AC/DC against MADlib (Hellerstein et al., 2012), R (R Core Team, 2013), libFM (Rendle, 2012), TensorFlow (Abadi et al., 2016), and our earlier prototype F (Schleich et al., 2016). We used a real-world dataset with five relations of 86M tuples in total and up to 154K features to train a ridge linear regression model, a polynomial regression model of degree two, and a factorization machine of degree two. AC/DC is the fastest system in our experiments. It is orders of magnitude faster than the competitors, or finishes successfully where the others exceed the memory limitation, the 24-hour timeout, or internal design limitations.
The performance gap is attributed to the optimizations of AC/DC, which none of our competitors support fully. TensorFlow, R, and libFM require the materialization of the feature extraction query, as well as exporting/importing the query result from the database system to their system. They also require a transformation of the data into a sparse representation of the one-hot encoded training data before learning. MADlib does not materialize and export the query, but still requires an upfront one-hot encoding of the input relations, which comes with higher asymptotic complexity and prohibitively large relations with many zero entries. None of these systems benefits from factorized computation or exploits functional dependencies. F is designed for linear regression models. It uses factorized and shared computation of aggregates. It does not, however, exploit functional dependencies, and it requires the same one-hot encoding of categorical features in the input relations as MADlib.
Our results confirm a counterintuitive theoretical result from (Abo Khamis et al., 2018b) stating that, under certain conditions, by exploiting query structure and smart aggregation algorithms, one can train a model using batch gradient descent (BGD) faster
than scanning through the data once. In particular, this means BGD can be faster than one epoch of stochastic gradient descent (SGD), in contrast to the commonly accepted assumption that SGD is typically faster than BGD.
The paper is organized as follows. Section 2 presents the class of optimization problems supported by AC/DC. Section 3 overviews the foundations of AC/DC. Section 4 describes the data structures used by AC/DC, its optimizations for factorized and shared computation of aggregates. Section 5 discusses the reparameterization of optimization problems and their computation using the aggregates computed in a previous step. Section 6 reports on experiments.
2. Optimization Problems
We consider solving an optimization problem of a particular form inside a database; a more general problem formulation is presented in (Abo Khamis et al., 2018b). Suppose we have parameters θ and a training dataset D that contains tuples (x, y) with features x and response y. The features come in two flavors: continuous, e.g., price, and qualitative/categorical, e.g., city. The former are encoded as scalars, e.g., the price is 10.5. The latter are one-hot encoded as indicator vectors, e.g., if there were three cities, then the indicator vector [1, 0, 0] states that the first city appears in the training record.
The “learning” phase in machine learning typically comes down to solving the optimization problem θ* = argmin_θ J(θ), where J(θ) is the loss function. We use the square loss and the ℓ₂ regularizer:

(1)   J(θ) = (1/(2|D|)) ∑_{(x,y)∈D} (⟨g(θ), h(x)⟩ − y)² + (λ/2) ‖θ‖₂²
The model is ⟨g(θ), h(x)⟩, where g and h are parameter-mapping and, respectively, feature-mapping functions that uniformly capture continuous and categorical variables. In the case of continuous features only, g and h are vector-valued functions with m components each, where m is a positive integer. Each component function of g is a multivariate polynomial in the parameters. Each component function of h is a multivariate monomial in the features. If there are categorical variables as well, the components of g and h become tensors. See (Abo Khamis et al., 2018b) and Example 2.1 below for details.

The training dataset D is the result of a feature extraction query Q over an input database I consisting of several relations. It is common for Q to be the natural join of the relations in I and to select the columns hosting the desired features. Furthermore, Q can be enhanced with additional selections and aggregates to construct new (intensional) features based on the input (extensional) ones.
We next show how Eq. (1) captures various regression models.
Example 2.1.
Consider a feature extraction query that returns tuples over a set of variables, where some are continuous and city and country are categorical.
A ridge linear regression (LR) model with response y and the remaining variables as features can be learned by optimizing (1) with the identity functions g(θ) = θ and h(x) = x. The entries in θ and h(x) for city and country are vectors, because these features are indicator vectors.
To learn a degree-2 polynomial regression (PR₂) model, we extend the function h from the LR model with all pairwise combinations of features. We do not include the interactions of city and country with themselves, because they encode the same information as the original indicator vectors. The function g from the LR model is correspondingly extended with parameters for each interaction, e.g., a parameter vector for each vector of interactions involving a categorical feature.
Similarly, the degree-2 rank-r factorization machine (FaMa) model can be learned by extending the function h from LR with all pairwise interactions of distinct features, yet without the interactions of a categorical feature with itself. In contrast to PR₂, the parameters corresponding to interactions are now factorized: the entry in g(θ) corresponding to the interaction of two features is the dot product of the rank-r parameter vectors associated with those features.
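For continuous features, the factorized form of the interaction parameters admits a well-known rewriting of the pairwise term (due to Rendle) that avoids enumerating all pairs. The following Python sketch, with made-up toy numbers, checks the naive pairwise sum against that rewriting for a rank-2 factorization machine:

```python
import itertools

# Toy continuous feature vector and rank-2 factorization-machine parameters
# (all values hypothetical).
x = [1.0, 2.0, 3.0]
w0, w = 0.5, [0.1, -0.2, 0.3]
v = [[0.5, 1.0], [1.5, -0.5], [0.25, 2.0]]   # one rank-2 vector per feature
r = 2

def fama_naive(x):
    # w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j
    out = w0 + sum(wi * xi for wi, xi in zip(w, x))
    for i, j in itertools.combinations(range(len(x)), 2):
        out += sum(v[i][f] * v[j][f] for f in range(r)) * x[i] * x[j]
    return out

def fama_factorized(x):
    # Rewriting of the pairwise term:
    # sum_{i<j} <v_i,v_j> x_i x_j
    #   = 1/2 sum_f [ (sum_i v_if x_i)^2 - sum_i (v_if x_i)^2 ]
    out = w0 + sum(wi * xi for wi, xi in zip(w, x))
    for f in range(r):
        s = sum(v[i][f] * x[i] for i in range(len(x)))
        s2 = sum((v[i][f] * x[i]) ** 2 for i in range(len(x)))
        out += 0.5 * (s * s - s2)
    return out

assert abs(fama_naive(x) - fama_factorized(x)) < 1e-9
```

The rewriting turns a quadratic number of pairwise terms into a linear pass per rank component, which is one reason factorized parameterizations pair well with aggregate-based computation.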
3. Overview of AC/DC Foundations
AC/DC is a batch gradient descent (BGD) solver that optimizes the objective J(θ) over the training dataset D defined by a feature extraction query Q over a database I. Its high-level structure is given in Algorithm 1. The inner loop repeatedly computes the loss function J(θ) and its gradient ∇J(θ). This can be sped up massively by factoring out the data-dependent computation from the optimization loop (Schleich et al., 2016; Abo Khamis et al., 2018b). The former is cast as computing aggregates over joins, which benefit from recent algorithmic advances (Abo Khamis et al., 2016; Bakibayev et al., 2013).
From Optimization to Aggregates. Let us define the matrix Σ, the vector c, and the scalar s_Y by:

(2)   Σ = ∑_{(x,y)∈D} h(x) h(x)^⊤

(3)   c = ∑_{(x,y)∈D} y · h(x)

(4)   s_Y = ∑_{(x,y)∈D} y²

Then,

(5)   J(θ) = (1/(2|D|)) (g(θ)^⊤ Σ g(θ) − 2⟨g(θ), c⟩ + s_Y) + (λ/2) ‖θ‖₂²

(6)   ∇J(θ) = (1/|D|) (∂g(θ)/∂θ)^⊤ (Σ g(θ) − c) + λθ

The quantity h(x) h(x)^⊤ is an m × m matrix, and Σ is an m × m matrix of sparse tensors. Statistically, Σ is related to the covariance matrix, c to the correlation between the response and the regressors, and s_Y to the empirical second moment of the regressand. When all input features are continuous, each component function of h is a monomial giving a scalar value. In real-world workloads there is always a mix of categorical and continuous features, where the monomials become tensor products, and so the entries of Σ are also tensor products, represented by relational queries with group-by. The group-by variables for the aggregate query computing the sum of tensor products in an entry of Σ are precisely the categorical variables occurring in the monomials defining the component functions h_i and h_j. For example, if h_i(x) = A and h_j(x) = B, where A and B are continuous features, then there is no group-by variable and the above sum of tensor products is expressed as follows in SQL:

SELECT sum(A*B) FROM Q;
On the other hand, if both A and B are categorical variables, then the SQL encoding has two group-by variables A and B:
SELECT A, B, count(*) FROM Q GROUP BY A, B;
These aggregates exploit the sparsity of the representation of Σ over categorical features to achieve a succinct representation: the group-by clause ensures that only those combinations of categories for the query variables A and B that exist in the training dataset are considered. The quantities c and s_Y are treated similarly.
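The two SQL encodings above can be mimicked on a toy query result in Python (an illustrative sketch with made-up data, not AC/DC's actual C++ machinery):

```python
from collections import Counter

# Toy result of the feature extraction query Q, with two columns A and B.
Q = [(1.0, 2.0), (3.0, 4.0), (1.0, 2.0)]

# Continuous A, B: the Sigma entry is the scalar SELECT sum(A*B) FROM Q.
sigma_continuous = sum(a * b for a, b in Q)          # 2 + 12 + 2 = 16

# Categorical A, B: the Sigma entry is the sparse group-by aggregate
# SELECT A, B, count(*) FROM Q GROUP BY A, B -- a map over the (A, B)
# combinations that occur, not a wide one-hot matrix.
sigma_categorical = Counter(Q)

assert sigma_continuous == 16.0
assert sigma_categorical[(1.0, 2.0)] == 2
```

The categorical map has one entry per distinct (A, B) combination in the data, which is exactly the sparsity the group-by representation is after.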
The above rewriting allows us to compute the data-dependent quantities Σ, c, and s_Y in the loss function and its gradient once for all iterations of AC/DC. They can be computed efficiently inside the database as aggregates over the query Q. Aggregates with different group-by clauses may require different evaluation strategies to attain the best known complexity (Bakibayev et al., 2013; Abo Khamis et al., 2016). AC/DC settles instead for one strategy for all aggregates, cf. Section 4. This has two benefits. First, the underlying join is only computed once for all aggregates. Second, the computation of the aggregates can be shared massively. These benefits may easily dwarf the gain of using specialised evaluation strategies for individual aggregates in the case of very many aggregates (hundreds of millions) and large databases.
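The payoff of this rewriting can be sketched for plain linear regression (g(θ) = θ, h(x) = x) in pure Python; the data, the learning rate, and the omission of the regularizer and the 1/|D| normalization are simplifications for the toy setting:

```python
# Toy training data D: rows (x1, x2, y), all continuous.
D = [(1.0, 2.0, 5.0), (2.0, 0.0, 4.0), (0.0, 1.0, 1.0)]
n = 2

# Data-dependent aggregates, computed ONCE over D (in AC/DC: over the join).
Sigma = [[sum(r[i] * r[j] for r in D) for j in range(n)] for i in range(n)]
c = [sum(r[i] * r[n] for r in D) for i in range(n)]
s_Y = sum(r[n] ** 2 for r in D)

def grad(theta):
    # Gradient of 1/2 * sum (<theta, x> - y)^2, rewritten as Sigma*theta - c.
    return [sum(Sigma[i][j] * theta[j] for j in range(n)) - c[i]
            for i in range(n)]

# Sanity check against the per-tuple gradient at theta = 0 ...
theta = [0.0, 0.0]
direct = [sum((sum(t * xi for t, xi in zip(theta, r[:n])) - r[n]) * r[i]
              for r in D) for i in range(n)]
assert all(abs(a - b) < 1e-9 for a, b in zip(grad(theta), direct))

# ... then run BGD to convergence without ever touching D again.
for _ in range(200):
    theta = [t - 0.05 * gi for t, gi in zip(theta, grad(theta))]
assert all(abs(gi) < 1e-6 for gi in grad(theta))
```

Every BGD iteration costs only matrix-vector work on Σ and c, independent of |D|; the data is scanned exactly once, when the aggregates are formed.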
Reparameterization under Functional Dependencies (FDs). AC/DC exploits the FDs among variables in the feature extraction query to reduce the dimensionality of the optimization problem by eliminating functionally determined variables and reparameterizing the model. We thus only compute the quantities Σ, c, and s_Y over the subset of the features that are not functionally determined, and solve the lower-dimensional optimization problem. The effect on the loss function and its gradient is immediate and uniform across all optimization problems in our class: we have fewer terms to compute, since the functionally determined variables are dropped. The effect on the nonlinear penalty term is, however, nontrivial and depends on the model at hand (Abo Khamis et al., 2018b).
We next explain the reparameterization of the ridge linear regression model from Example 2.1 under the FD city → country (Abo Khamis et al., 2018b). The categorical features city and country are represented by indicator vectors x_city and x_country, and the latter can be recovered from the former using the mapping between values for city and country in the input database. We can extract this map as a sparse representation of a matrix R for which x_country = R x_city. In the model, the parameters θ_city and θ_country are then replaced by new parameters γ_city = θ_city + R^⊤ θ_country for the categorical feature x_city. This new form can be used in the loss function (1). We can further optimize θ_country out of the nonlinear penalty term by setting the partial derivative of J with respect to θ_country to 0. Then, the new penalty term becomes:

(λ/2) ( ∑_{j ∉ {city, country}} ‖θ_j‖₂² + γ_city^⊤ (I_city + R^⊤ R)^{−1} γ_city )

where I_city is the identity matrix in the order of the active domain size of city. A similar formulation of the penalty term holds for polynomial regression models.

AC/DC thus exploits FDs to compute fewer aggregates at the cost of a more complex regularizer term that requires matrix inversion and multiplication. While these matrix operations are expressible as database queries and can be evaluated similarly to the other aggregates (Abo Khamis et al., 2018b), AC/DC uses instead the Eigen library for linear algebra to achieve significant speedups for model reparameterization over the strawman approach that does not exploit FDs.
[Figure 1. The AC/DC algorithm aggregates(variable order, varMap, relation ranges) for the factorized computation of SQL aggregates over the feature extraction query.]
4. Aggregate Computation in AC/DC
An immediate approach to computing the aggregates in Σ, c, and s_Y for a given optimization problem is to first materialize the result of the feature extraction query using an efficient query engine, e.g., a worst-case optimal join algorithm, and then compute the aggregates in one pass over the query result. This approach, however, is suboptimal, since the listing representation of the query result is highly redundant and not necessary for the computation of the aggregates. AC/DC avoids this redundancy by factorizing the computation of aggregates over joins, as detailed in Section 4.1. In a nutshell, this factorized approach unifies three powerful ideas: worst-case optimality for join processing, query plans defined by fractional hypertree decompositions of join queries, and an optimization that partially pushes aggregates past joins. AC/DC further exploits similarities across the aggregates to massively share their computation, as detailed in Section 4.2.
4.1. Factorized Computation of Aggregates
Factorized aggregate computation relies on a variable order for the query to avoid redundant computation. In this paper, we assume that we are given a variable order. Prior work discusses this query optimization problem (Olteanu and Schleich, 2016; Bakibayev et al., 2013; Abo Khamis et al., 2016).
Variable Orders. State-of-the-art query evaluation uses relation-at-a-time query plans. We use variable-at-a-time query plans, which we call variable orders. These are partial orders on the variables in the query; they capture the join dependencies in the query and dictate the order in which we solve each join variable. For each variable, we join all relations with that variable. Our choice is motivated by the complexity of join evaluation: relation-at-a-time query plans are provably suboptimal, whereas variable-at-a-time query plans can be chosen to be optimal (Ngo et al., 2013).
Given a join query Q, one variable depends on another if both occur in the schema of a relation in Q.
Definition 4.1 (adapted from (Olteanu and Závodný, 2015)).
A variable order Δ for a join query Q is a pair (F, dep), where F is a rooted forest with one node per variable in Q, and dep is a function mapping each variable X to a set of variables in F. It satisfies the following constraints:

For each relation in Q, its variables lie along the same root-to-leaf path in F.

For each variable X, dep(X) is the subset of its ancestors in F on which the variables in the subtree rooted at X depend.
Without loss of generality, we use variable orders that are trees instead of forests. We can convert a forest into a tree by adding to each relation the same dummy join variable that takes a single value. For a variable X in the variable order Δ, anc(X) denotes the set of all ancestor variables of X in Δ. The set of variables in Δ (the schema of a relation R, respectively) is denoted by vars(Δ) (schema(R), respectively), and the variable at the root of Δ is denoted by root(Δ).
Example 4.2.
Figure 3(a) shows a variable order for the natural join of relations , , and . Then, and , i.e., has ancestors and , yet it only depends on . Given , the variables and are independent of each other. For queries with groupby variables, we choose a variable order where these variables sit above the other variables (Bakibayev et al., 2013).
Figure 1 presents the AC/DC algorithm for the factorized computation of SQL aggregates over the feature extraction query Q. The backbone of the algorithm, without the code in boxes, explores the factorized join of the input relations over a variable order Δ of Q. As it traverses Δ in depth-first preorder, it assigns values to the query variables. The assignments are kept in varMap and used to compute aggregates by the code in the boxes.
The relations are sorted following a depth-first preorder traversal of Δ. Each call takes a range of tuples in each relation; initially, these ranges span the entire relations. Once the root variable in Δ is assigned a value from the intersection of the possible values in the input relations, these ranges are narrowed down to those tuples that have this value for the root variable.
To compute an aggregate over the variable order rooted at a variable X, we first initialize the aggregate to zeros. This is needed since the aggregates might have been used earlier for different assignments of ancestor variables in Δ. We next check whether we previously computed the aggregate for the same assignments of the variables in dep(X), denoted by context, and cached it in a map. Caching is useful when dep(X) is strictly contained in anc(X), since this means that the aggregate computed at X does not need to be recomputed for distinct assignments of the remaining ancestor variables. In this case, we probe the cache using as key the assignments in varMap of the variables in dep(X). If we have already computed the aggregates over that assignment for dep(X), then we can just reuse the previously computed aggregates and avoid recomputation.
If X is a group-by variable, then we compute a map from each value of X to a function of that value and of the aggregates computed at the children of X, if any. If X is not a group-by variable, then we compute a map from the empty value to such a function; in this latter case we could have just computed the aggregate instead of the map, though we use the map for uniformity. In case there are group-by variables under X, the computation at X returns maps whose keys are tuples over all these group-by variables.
Example 4.3.
Consider the query with the variable order in Figure 3(a). We first compute the assignments for as . For each assignment , we then find assignments for variables under within the narrow ranges of tuples that contain . The assignments for in the context of are given by . For each , the assignments for and are given by and . Since depends on and not on , the assignments for under a given are repeated for every occurrence of with assignments for . The assignments for given are computed as .
Consider the count aggregate. The count at each variable is computed as the sum, over all value assignments of that variable, of the product of the counts at its children in the variable order; if the variable is a leaf, the product over its children is taken to be 1. For our variable order, this computation is captured by the following factorized expression:
(7) 
where is cached the first time we encounter the assignment for and reused for all subsequent occurrences of this assignment under assignments for .
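The shape of the factorized count in expression (7) can be sketched on a toy two-relation join in Python (hypothetical data; the point is that the join result is never materialized):

```python
from collections import Counter

# Toy relations joined on A (data made up for illustration).
R = [(1, "b1"), (1, "b2"), (2, "b1")]               # schema (A, B)
S = [(1, "c1"), (1, "c2"), (1, "c3"), (2, "c1")]    # schema (A, C)

# Flat approach: materialize the natural join, then count its tuples.
flat = [(a, b, c) for (a, b) in R for (a2, c) in S if a == a2]

# Factorized approach: for each value a of the join variable A, multiply the
# counts coming from the subtrees (here over B and over C), and sum over a.
count_B = Counter(a for a, _ in R)
count_C = Counter(a for a, _ in S)
factorized = sum(count_B[a] * count_C[a]
                 for a in count_B.keys() & count_C.keys())

assert len(flat) == factorized == 7   # 2*3 (for a=1) + 1*1 (for a=2)
```

The flat count does work proportional to the join size, while the factorized count does work proportional to the sizes of the inputs and of the per-value count maps.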
Summing all values of a given variable in the result of Q is done similarly, with the difference that at that variable we compute the sum of its values, each weighted by the product of the counts at its children. For instance, the aggregate is computed over our variable order by the following factorized expression:
(8) 
To compute the aggregate , we compute for each assignment for instead of marginalizing away . The result is a map from values to values of .
A good variable order may include variables that are not explicitly used in the optimization problem. This is the case for join variables whose presence in the variable order ensures a good factorization. For instance, if we remove the join variable from the variable order in Figure 3(a), the remaining variables are no longer independent and we cannot factorize the computation over them. AC/DC exploits the conditional independence enabled by such a variable, but computes no aggregate over it if this is not required by the problem.
4.2. Shared Computation of Aggregates
Section 4.1 explains how to factorize the computation of one aggregate in Σ, c, and s_Y over the join of database relations. In this section we show how to share computation across these aggregates.
Example 4.4.
Let us consider the factorized expression of the sum aggregates and over :
(9)  
(10) 
We can share computation across the expressions (7) to (10) since they are similar. For instance, given an assignment for , all these aggregates need . Similarly, for a given assignment for , the aggregates (8) and (10) can share the computation of the sum aggregate over . For assignments and , (8) and (9) can share the computation of the sum aggregate over .
AC/DC computes all aggregates together over a single variable order. It then shares as much computation as possible and significantly improves the data locality of the aggregate computation. AC/DC thus decidedly sacrifices the goal of achieving the lowest known complexity for individual aggregates for the sake of sharing as much computation as possible across these aggregates.
Aggregate Decomposition and Registration. For a model of degree d and a given set of variables, we have aggregates whose monomials are products of powers of those variables with exponents summing to at most 2d; each aggregate possibly comes with a group-by clause, and all categorical variables are turned into group-by variables. The bound 2d is due to the matrix Σ used to compute the gradient of the loss function (6), which pairs any two features of degree up to d. Each aggregate is thus defined uniquely by a monomial; we may discard the variables with exponent 0. For instance, the monomial for SUM(C·E) is CE, while for SUM(A·C·E) it is ACE.
Aggregates can be decomposed into shareable components. Consider a variable order Δ with root X and subtrees Δ₁ to Δₖ. We can decompose any aggregate α to be computed over Δ into components such that one component is for X and one is for each subtree Δⱼ; α is then computed as the product of its components. Each of these components is defined by the projection of the monomial of α onto X or onto the variables of Δⱼ. The component for Δⱼ is pushed down the variable order and computed over the subtree Δⱼ. If the projection of the monomial is empty, then the aggregate to be pushed down is the count aggregate, which computes the size of the join defined by Δⱼ. If several aggregates push the same aggregate down to the subtree Δⱼ, it is computed only once for all of them.
The decomposed aggregates form a hierarchy whose structure is that of the underlying variable order Δ. All aggregates are to be computed at the root of Δ, fewer at each of its children, and so on. This structure is the same regardless of the input data and can be constructed before data processing. We therefore construct at compile time, for each variable X in Δ, an aggregate register that is an array of all aggregates to be computed over the subtree of Δ rooted at X. This register is used as an index structure to facilitate the computation of the actual aggregates. More precisely, an entry for an aggregate α in the register of X is labeled by the monomial of α and holds an array of indices of the components of α located in the registers at the children of X in Δ and in the local register of X. Figure 2 depicts this construction.
The hierarchy of registers in Figure 3(b) forms an index structure that AC/DC uses to compute the aggregates. This index structure is stored as one contiguous array in memory, where the entry for an aggregate in a register comes with an auxiliary array holding the indices of its aggregate components. The aggregates are ordered in the register so as to increase sequential access, and thus cache locality, when they are updated.
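The compile-time decomposition of monomials into register entries can be sketched in Python. The variable names, subtree layout, and aggregate list below are hypothetical; the point is that equal monomial projections are registered once and shared:

```python
# Hypothetical setup: root A with two subtrees covering {B, C} and {D, E};
# aggregates are identified by their monomials ("" is the count aggregate).
root_var = "A"
subtree_vars = [{"B", "C"}, {"D", "E"}]
aggregates = ["AB", "AC", "AD", "CE", "A", ""]

def project(monomial, vars_):
    # Projection of a monomial onto a set of variables.
    return "".join(v for v in monomial if v in vars_)

# Build the registers: each aggregate decomposes into a local part for the
# root and one projected part per subtree; equal projections are shared.
registers = {i: set() for i in range(len(subtree_vars))}
decomposition = {}
for m in aggregates:
    local = project(m, {root_var})
    parts = tuple(project(m, vs) for vs in subtree_vars)
    for i, p in enumerate(parts):
        registers[i].add(p)          # "" stands for the pushed-down count
    decomposition[m] = (local, parts)

assert decomposition["AB"] == ("A", ("B", ""))
assert decomposition["CE"] == ("", ("C", "E"))
# AB, AC, AD, A, and the count all push the same count aggregate to some
# subtree, so the subtree registers stay small.
assert registers[0] == {"", "B", "C"}
```

At run time, each register entry is computed once per context and its value is reused by every parent aggregate that indexes it.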
Example 4.5.
Let us compute a regression model of degree 1 over a dataset defined by the join of three relations. We assume that two of the variables are categorical features, and all other variables are continuous. The quantities (Σ, c, s_Y) require the computation of the following aggregates: the count aggregate, SUM(X) for each variable X, and SUM(X·Y) for each pair of variables X and Y.
Figure 3(a) depicts a variable order Δ for the natural join of the three relations, and Figure 3(b) illustrates the aggregate register that assigns a list of aggregates to each variable in Δ. The aggregates are identified by their respective monomials (the names in the register entries). The categorical variables are shown in bold. Since they are treated as group-by variables, we do not need aggregates whose monomials include categorical variables with exponents higher than 1: any such aggregate is equivalent to the one whose monomial includes the categorical variable with degree 1 only.
The register for the root of Δ has all aggregates needed to compute the model. The register of a variable has all aggregates from the register of its parent that are defined over the variables in the subtree of Δ rooted at that variable. For the leaf nodes of Δ, the monomials of the aggregates in their registers are the respective variables only. We use additional local registers, which hold the aggregates corresponding to the projections of the monomials of the aggregates at a variable onto that variable; for a leaf node, the register and the local register coincide.
A path between two register entries in Figure 3(b) indicates that the aggregate in the register above uses the result of the aggregate in the register below. For instance, each aggregate in the root register is computed as a product of one aggregate from each of the registers below it. The fan-in of a register entry thus denotes the amount of sharing of its aggregate: all aggregates from registers above with incoming edges to this entry share its computation. For instance, the aggregates with monomials AB, AC, and AD share the computation of the aggregate with monomial A as well as of the count aggregate. Their computation uses a sequential pass over the register below, which improves performance and access locality, as that register can be stored in cache and accessed to compute all these aggregates.
Aggregate Computation. Once the aggregate registers are in place, we can ingest the input database and compute the aggregates over the join of the database relations following the factorized structure given by a variable order. The algorithm in Figure 1 does precisely this. Section 4.1 explained the factorized computation of a single aggregate over the join. We explain here the case of several aggregates organized into the aggregate registers. This is stated by the pseudocode in the red boxes.
Each aggregate is uniformly stored as a map from tuples over its categorical variables to payloads that represent the sums over the projection of its monomial onto the continuous variables. If the aggregate has no categorical variables, the key is the empty tuple.
For each possible value a of the current variable X, we first compute the array that consists of the projections of the monomials of the aggregates onto X. If X is categorical, then we only need to compute the powers 0 and 1 of a. If X is continuous, we need to compute all powers of a from 0 to 2d. If X is not a feature used in the model, then we only compute a trivial count aggregate.
We update the value of each aggregate using the index structure depicted in Figure 2 as we traverse the variable order bottom-up. Assume we are at a variable X in the variable order. In case X is a leaf, the update is a specific value in the local register of X. In case X has children in the variable order, the aggregate is updated with the Cartesian product of all its component aggregates, i.e., one value from the local register of X and one aggregate for each child of X. Recall that all aggregates are maps, which we may represent as relations with columns for keys and one column for the payload; the group-by variables of an aggregate are partitioned across X and its children. The update then combines the component maps by concatenating their keys and multiplying their payloads.
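This "Cartesian product of component maps" update can be sketched in Python (a hedged illustration with made-up component maps, not the system's actual update code):

```python
# Every aggregate is a map from tuples over its group-by (categorical)
# variables to a numeric payload. A parent aggregate is updated with the
# Cartesian product of its component maps: keys are concatenated,
# payloads are multiplied, and equal keys are accumulated.
def combine(*components):
    out = {}
    def rec(idx, key, payload):
        if idx == len(components):
            out[key] = out.get(key, 0.0) + payload
            return
        for k, p in components[idx].items():
            rec(idx + 1, key + k, payload * p)
    rec(0, (), 1.0)
    return out

# Component from a child over a categorical variable B, and a scalar local
# component (empty key) holding a sum over a continuous variable.
child = {("b1",): 2.0, ("b2",): 3.0}
local = {(): 10.0}
assert combine(child, local) == {("b1",): 20.0, ("b2",): 30.0}
```

An aggregate with no categorical variables degenerates to a map with the single empty key, so the same update code covers scalar and group-by aggregates uniformly.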
Further Considerations. The auxiliary arrays that provide the precomputed indices of aggregate components within registers speed up the computation of the aggregates. Nevertheless, they still represent one extra level of indirection, since each update to an aggregate first needs to fetch the indices and then use them to access the aggregate components in registers that may not necessarily be in the cache. We have been experimenting with an aggressive aggregate compilation approach that resolves all these indices at compile time and generates specific code for each aggregate update. In experiments with linear regression, this compilation leads to a 4× performance improvement. The downside, however, is that the AC/DC code gets much larger and the C++ compiler needs much more time to compile it. For higher-degree models, the C++ compiler can even crash. We are currently working on a hybrid approach that partially resolves the indices while maintaining a reasonable code size.
5. The Inner Loop of BGD
As shown in Section 3, the gradient descent solver repeatedly computes J(θ) and ∇J(θ), which require matrix-vector and vector-vector multiplications over the quantities (Σ, c, s_Y). We discuss here the computation that involves the matrix Σ.
It is possible that several entries in Σ map to the same aggregate query computed by AC/DC. By definition of Σ, any pairwise product of entries in the feature-mapping vector h(x) corresponds to one entry in Σ, and distinct pairs of entries (as well as their symmetric counterparts) can define the same monomial and hence the same aggregate. To avoid this redundancy, we use a sparse representation of Σ, which assigns to each distinct aggregate query a list of index pairs that contains one pair for each entry of Σ that maps to this query.
AC/DC operates directly over this sparse representation of Σ. Consider the matrix-vector product Σ g(θ). We compute it by iterating over all aggregate maps in the register at the root of the variable order: for each index pair (i, j) assigned to an aggregate, we add to the i-th entry of the result the product of the aggregate and the j-th entry of g(θ). If i ≠ j, we also add to the j-th entry of the result the product of the aggregate and the i-th entry of g(θ).
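The following Python sketch illustrates this sparse product on a toy 2×2 Σ (hypothetical values; symmetric pairs are stored once with i ≤ j):

```python
# Each distinct aggregate value is stored once, together with the list of
# (i, j) index pairs of the Sigma entries it stands for.
def sigma_times_g(sparse_sigma, g):
    d = [0.0] * len(g)
    for value, pairs in sparse_sigma:
        for i, j in pairs:
            d[i] += value * g[j]
            if i != j:                 # symmetric counterpart Sigma[j][i]
                d[j] += value * g[i]
    return d

# Toy Sigma = [[2, 5], [5, 3]], where the two off-diagonal entries share
# one aggregate value.
sparse_sigma = [(2.0, [(0, 0)]), (3.0, [(1, 1)]), (5.0, [(0, 1)])]
assert sigma_times_g(sparse_sigma, [1.0, 1.0]) == [7.0, 8.0]
```

Each aggregate is touched once per iteration, regardless of how many Σ entries it stands for.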
Regularizer under FDs. Section 3 explains how to rewrite the regularizer for a ridge linear regression model under the FD city → country. First, we need to construct the relation R that pairs each city with its country, and then compute the inverse of the matrix I + RᵀR. Note that each entry (c1, c2) in RᵀR is 1 if the two cities c1 and c2 are in the same country. Otherwise, the entry is zero.
To facilitate the construction of the matrix RᵀR, we represent R as a map that groups the tuples of R by country. That is, we construct a mapping from each country to the set of cities in this country that occur in the dataset, which can be computed during the computation of the factorized aggregates over the variable order.
We then use this representation of R to iterate over the payload set of each country and, for any two cities in the payload set, we increment the corresponding entry of RᵀR by one. We store RᵀR as a sparse matrix in the format used by the Eigen linear algebra library, and then use Eigen's sparse Cholesky decomposition to compute the inverse of I + RᵀR and, ultimately, the solution for the regularizer.
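A sketch of this construction, under the assumption that cities are integer-coded (all names are illustrative; the triplet output mirrors the (row, col, value) format one would hand to Eigen's setFromTriplets):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <tuple>
#include <utility>
#include <vector>

// Build the entries of R^T R from the relation R = {(city, country)}:
// entry (c1, c2) is 1 iff cities c1 and c2 belong to the same country.
// R is first grouped into a map country -> cities; the result is a list of
// (row, col, value) triplets for a sparse-matrix library such as Eigen.
std::vector<std::tuple<int, int, int>>
rtr_triplets(const std::vector<std::pair<int, std::string>>& R) {
    std::map<std::string, std::vector<int>> byCountry;
    for (const auto& [city, country] : R) byCountry[country].push_back(city);

    std::vector<std::tuple<int, int, int>> triplets;
    for (const auto& [country, cities] : byCountry) {
        (void)country;                       // only the grouping matters here
        for (int c1 : cities)
            for (int c2 : cities)
                triplets.emplace_back(c1, c2, 1);
    }
    return triplets;
}
```

Since each city occurs once in R (by the FD city → country), every same-country pair contributes exactly one unit entry.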
6. Experiments
We report on the performance of learning regression models and factorization machines over a real dataset used in retail applications; cf. the extended technical report (Abo Khamis et al., 2018a) for further experiments.
Systems. We consider two variants of our system: the plain AC/DC and its extension AC/DC+FD that exploits functional dependencies. We also report on five competitors: F learns linear regression models and one-hot encodes the categorical features (Schleich et al., 2016); MADlib (Hellerstein et al., 2012) 1.8 uses ols to compute the closed-form solution of polynomial regression models (MADlib also supports generalized linear models, but this is consistently slower than ols in our experiments and we do not report it here); R (R Core Team, 2013) 3.0.2 uses lm (linear model) based on QR-decomposition (Francis, 1961); libFM (Rendle, 2012) 1.4.2 supports factorization machines; and TensorFlow (Abadi et al., 2016) 1.6 uses the LinearRegressor estimator with FTRL optimization (McMahan et al., 2013), which is based on the conventional SGD optimization algorithm.
The competitors come with strong limitations. MADlib inherits the limitation of at most 1,600 columns per relation from its PostgreSQL host. The MADlib one-hot encoder transforms a categorical variable with n distinct values into n columns. Therefore, the number of distinct values across all categorical variables plus the number of continuous variables in the input data cannot exceed 1,600. R limits the number of values in its data frames to 2^31 − 1. There exist R packages, e.g., ff, that work around this limitation by storing data structures on disk and mapping only chunks of data into main memory. The biglm package can compute the regression model by processing one ff-chunk at a time. Chunking the data, however, can lead to rank deficiencies within chunks (feature interactions missing from chunks), which causes biglm to fail. Biglm fails in all our experiments due to this limitation, and we are thus unable to benchmark against it. LibFM requires as input a zero-suppressed encoding of the join result. Computing this representation is an expensive intermediate step between exporting the query result from the database system and importing the data. To compute the model, we used its more stable MCMC variant with a fixed number of runs (300); its SGD implementation requires a fixed learning rate and does not converge. AC/DC uses the adaptive learning rate from Algorithm 1 and runs until the parameters have converged with high accuracy (for FaMa, it uses 300 runs).
TensorFlow uses a user-defined iterator interface to load one batch of tuples from the training dataset at a time. This iterator defines a mapping from input tuples to (potentially one-hot encoded) features and is called directly by the learning algorithm. Learning over batches requires a random shuffling of the input data, which in TensorFlow requires loading the entire dataset into memory. This failed for our experiments, and we therefore report its performance without shuffling the input data. We benchmark TensorFlow for LR only, as it does not provide functionality to create all pairwise interaction terms for PR and FaMa; third-party implementations of these models relied on Python packages that failed to load our datasets. The optimal batch size for our experiments is 100,000 tuples: smaller batch sizes require loading too many batches, while very large batches cannot fit into memory. Since TensorFlow requires a fixed number of iterations, we report the times to do one epoch over the dataset (i.e., computing 840 batches). This means that the algorithm learned over each input tuple once. In practice, it is often necessary to optimize over several epochs to get a good model.
Experimental Setup. All experiments were performed on an Intel(R) Core(TM) i7-4770 3.40GHz/64-bit/32GB machine running Linux 3.13.0 and g++ 4.8.4. We report wall-clock times by running each system once and then reporting the average of four subsequent runs with warm cache. We do not report the times to load the database into memory for the join, as they are orthogonal to this work. All relations are given sorted by their join attributes.
Dataset. We experimented with a real-world dataset in the retail domain for forecasting user demands and sales. It has five relations: Inventory (storing the inventory units for each product (sku) in a store (locn) at a given date), Census (storing demographics per zipcode, such as population, median age, ethnicity breakdown, house units and how many are occupied, number of children per household, and numbers of males, females, and families), Location (storing the zipcode for each store and distances to several other stores), Item (storing the price and the category, subcategory, and categoryCluster for each product), and Weather (storing weather conditions, such as mean temperature and whether it rains, snows, or thunders, for each store at different dates). The feature extraction query is the natural join of these five relations. It is acyclic and has 43 variables. We compute the join over the variable order (locn (zip ((Census), (Location)), date (sku ((Item)), (Weather)))). The following 8 variables are categorical: zip, sku, category, subcategory, categoryCluster, snow, rain, thunder. The variables locn and date are not features in our models. We designed fragments of our dataset with an increasing number of categorical features. v1 is a partition of the entire dataset that is specifically tailored to work within the limitations of R; it includes all categorical variables as features except for sku and zip. v2 computes the same model as v1 but over all rows in the data. v3 extends v2 with zip; v2 and v3 are designed to work within the limitations of MADlib. v1 to v3 have no functional dependency. Finally, v4 has all variables but zip and the functional dependency sku → {category, subcategory, categoryCluster}.
We learned LR, degree-2 PR, and degree-2 FaMa models that predict the amount of inventory units based on all other features.
Summary of findings. Table 1 shows our findings. AC/DC+FD is the fastest system in our experiments. It needs up to 30 minutes and computes up to 46M aggregates. This is orders of magnitude faster than its competitors whenever they do not exceed the memory limitation, the 24-hour timeout, or internal design limitations. The performance gap is due to the optimizations of AC/DC: (1) it avoids materializing the join and the export-import step between database systems and statistical packages, which takes longer than computing an end-to-end LR model in AC/DC; instead, AC/DC performs the join together with the aggregates using one execution plan; (2) it factorizes the computation of the aggregates and the underlying join, which comes with a 20x compression factor; (3) it massively shares the computation of large sets of distinct nonzero aggregates (up to 46M), which makes their computation up to 16Kx faster than computing them individually; (4) it decouples the computation of the aggregates on the input data from the parameter convergence step and thus avoids scanning the join result for each of the up to 400 iterations; (5) it avoids the upfront one-hot encoding, which comes with higher asymptotic complexity and prohibitively large covariance matrices, by only computing non-identical, nonzero matrix entries; for PR and our dataset v4, this leads to a 259x reduction in the number of aggregates to compute; (6) it exploits the FD in the input data to reduce the number of features of the model, which leads to a 3.5x improvement factor.

Table 1 (all times in seconds):

                                                    v1            v2            v3            v4
Join representation   Listing                       774M          3.614G        3.614G        3.614G
(#values)             Factorized                    37M           169M          169M          169M
Compression           Fact/List                     20.9x         21.4x         21.4x         21.4x
Join computation (PSQL) for R, TensorFlow, libFM    50.63         216.56        216.56        216.56
Factorized computation of 43 counts over join       8.02          34.15         34.15         34.15

Linear regression
Features              without FDs                   33+55         33+55         33+1,340      33+3,702
(continuous+categorical)  with FDs                  same as above (no FDs)                    33+3,653
Aggregates            without FDs                   595+2,418     595+2,421     595+111,549   595+157,735
(scalar+group-by)     with FDs                      same as above (no FDs)                    595+144,589
MADlib (ols)          Learn                         1,898.35      8,855.11      –             –
R (QR)                Export/Import                 308.83        –             –             –
                      Learn                         490.13        –             –             –
TensorFlow (FTRL)     Export/Import                 74.72         372.70        372.70        372.70
(1 epoch, batch size 100K)  Learn                   2,762.50      11,866.37     11,808.66     11,817.05
F                     Aggregate                     93.31         424.81        OOM           OOM
                      Converge (runs)               0.01 (359)    0.01 (359)    –             –
AC/DC                 Aggregate                     25.51         116.64        117.94        895.22
                      Converge (runs)               0.02 (343)    0.02 (367)    0.42 (337)    0.66 (365)
AC/DC+FD              Aggregate                     same as AC/DC (no FDs)                    380.31
                      Converge (runs)               same as AC/DC (no FDs)                    8.82 (366)
Speedup of AC/DC+FD over  MADlib                    74.36         75.91         –             –
                      R                             33.28         –             –             –
                      TensorFlow                    113.12        106.77        104.75        31.88
                      F                             3.65          3.64          –             –
                      AC/DC                         same as AC/DC (no FDs)                    2.30

Polynomial regression (degree 2)
Features              without FDs                   562+2,363     562+2,366     562+110,209   562+154,033
(continuous+categorical)  with FDs                  same as above (no FDs)                    562+140,936
Aggregates            without FDs                   158k+742k     158k+746k     158k+65,875k  158k+46,113k
(scalar+group-by)     with FDs                      same as above (no FDs)                    158k+36,712k
MADlib (ols)          Learn                         –             –             –             –
AC/DC                 Aggregate                     131.86        512.00        820.57        7,012.84
                      Converge (runs)               2.01 (211)    2.04 (214)    208.87 (247)  115.65 (200)
AC/DC+FD              Aggregate                     same as AC/DC (no FDs)                    1,819.80
                      Converge (runs)               same as AC/DC (no FDs)                    219.51 (180)
Speedup of AC/DC+FD over  MADlib                    –             –             –             –
                      AC/DC                         same as AC/DC (no FDs)                    3.50

Factorization machine (degree 2)
Features              without FDs                   530+2,363     530+2,366     530+110,209   530+154,033
(continuous+categorical)  with FDs                  same as above (no FDs)                    562+140,936
Aggregates            without FDs                   140k+740k     140k+744k     140k+65,832k  140k+45,995k
(scalar+group-by)     with FDs                      same as above (no FDs)                    140k+36,595k
libFM (MCMC)          Export/Import                 412.84        1,462.54      3,096.90      3,368.06
                      Learn (runs)                  19,692.90 (300)  – (300)    – (300)       – (300)
AC/DC                 Aggregate                     128.97        498.79        772.42        6,869.47
                      Converge (runs)               3.03 (300)    3.05 (300)    262.54 (300)  166.60 (300)
AC/DC+FD              Aggregate                     same as AC/DC (no FDs)                    1,672.83
                      Converge (runs)               same as AC/DC (no FDs)                    144.07 (300)
Speedup of AC/DC+FD over  libFM                     152.70        >175.51       >86.68        >49.53
                      AC/DC                         same as AC/DC (no FDs)                    3.87
Categorical features. As we move from v1/v2 to v4, we increase the number of categorical features for LR (from 55 to 2.7K) and for PR and FaMa (from 2.4K to 154K). For LR, this increase only led to a moderate performance decrease for AC/DC, while MADlib did not finish (we stopped MADlib after 24 hours). For PR, this yields a larger performance decrease for AC/DC. This behavior remains the same for AC/DC's aggregate computation step with or without the convergence step, since the latter is dominated by the former by up to three orders of magnitude. This sublinear behavior is partly explained by the ability of AC/DC to process many aggregates much faster in bulk than individually: it takes 34 seconds to compute 43 count aggregates, one per variable, but only 1,819 seconds to compute 37M sum aggregates! It is also explained by the same-order increase in the number of aggregates: v4 has correspondingly more distinct nonzero aggregates than v2 for LR (resp. PR and FaMa).
The performance of TensorFlow is largely invariant to the increase in the number of categorical features, since its internal mapping from tuples in the training dataset to the sparse representation of the feature vector remains of similar size. Nevertheless, our system is consistently orders of magnitude faster than computing only a single epoch in TensorFlow.
Increasing database size. An increase in database size and join result from v1 to v2 leads to a similar decrease factor in performance for F and AC/DC on all models, since the number of features and aggregates stays roughly the same and the join is acyclic and processed in linear time. The performance of MADlib, TensorFlow, and libFM follows the same trend for LR and FaMa. MADlib runs out of time (24 hours) on both datasets for PR models. R cannot cope with the size increase due to internal design limitations.
One-hot encoding vs. sparse representations with group-by aggregates. One-hot encoding categorical features leads to a large number of zero and/or redundant entries in the Σ matrix. For instance, for v4 and PR, the number of features is 562 + 154,033 = 154,595, so the upper half of Σ would have about 1.2 × 10^10 entries! Most of these are either zero or repeating. In contrast, AC/DC's sparse representation only considers 46M nonzero and distinct aggregates. The number of aggregates is thus reduced by 259x!
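The arithmetic behind the 259x figure can be checked directly from the numbers in Table 1: with n = 562 + 154,033 one-hot features for PR on v4, the upper half of the symmetric Σ has n(n + 1)/2 entries, versus roughly 46.1M distinct nonzero group-by aggregates:

```cpp
#include <cassert>
#include <cstdint>

// Feature count for PR on v4 (Table 1): continuous + one-hot categorical.
constexpr std::int64_t n = 562 + 154033;
// Entries in the upper half (including the diagonal) of the symmetric Sigma.
constexpr std::int64_t dense_entries = n * (n + 1) / 2;
// Distinct nonzero group-by aggregates computed by AC/DC (~46.1M, Table 1).
constexpr std::int64_t aggregates = 46113000;
// Reduction factor of the sparse representation over one-hot encoding.
constexpr std::int64_t reduction = dense_entries / aggregates;
```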
Our competitors require the data to be one-hot encoded before learning. The static one-hot encoding took (in seconds): 28.42 for R on v1; 9.41 for F on v1 and v2; 2 for MADlib on v1 and v2; and slightly more than an hour for libFM, due to the expensive zero-suppression step. TensorFlow one-hot encodes on the fly during the learning phase, so its encoding time cannot be reported separately.
Functional dependencies. The FD in our dataset v4 has a twofold effect on AC/DC (the other systems do not exploit FDs): it effectively reduces the number of features and aggregates, which leads to better performance of the in-database precomputation step; yet it requires a more elaborate convergence step due to the more complex regularizer. For LR, the aggregate step becomes 2.4x faster, while the convergence step takes 13x longer. Nevertheless, the convergence step takes at most 2% of the overall compute time in this case. For degree-2 models, the FD brings an improvement by a factor of 3.5 for PR and 3.87 for FaMa. This is due to a 10% decrease in the number of categorical features, which leads to a 20% decrease in the number of group-by aggregates.
Acknowledgments. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 682588. XN is supported in part by grants NSF CAREER DMS-1351362, NSF CNS-1409303, and the Margaret and Herman Sokol Faculty Award.
References
Martín Abadi et al. 2016. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. CoRR abs/1603.04467. http://arxiv.org/abs/1603.04467
Mahmoud Abo Khamis, Hung Ngo, XuanLong Nguyen, Dan Olteanu, and Maximilian Schleich. 2018a. AC/DC: In-Database Learning Thunderstruck. CoRR abs/1803.07480. https://arxiv.org/abs/1803.07480
Mahmoud Abo Khamis, Hung Ngo, XuanLong Nguyen, Dan Olteanu, and Maximilian Schleich. 2018b. In-Database Learning with Sparse Tensors. In PODS.
Mahmoud Abo Khamis, Hung Q. Ngo, and Atri Rudra. 2016. FAQ: Questions Asked Frequently. In PODS. 13–28.
Nurzhan Bakibayev, Tomás Kociský, Dan Olteanu, and Jakub Závodný. 2013. Aggregation and Ordering in Factorised Databases. PVLDB 6, 14 (2013), 1990–2001.