1. Introduction
Query optimization is not a solved problem, and existing database management systems (DBMSs) still choose poor execution plans for some queries (Leis:15). Because query optimization must be efficient in time and resources, existing DBMSs implement the key step of cardinality estimation by making simplifying assumptions about the data (e.g., the inclusion principle, uniformity, or independence assumptions) (Kiefer:17; Leis:15). Additionally, while research papers have shown their benefits, optimizers shy away from using multidimensional histograms and sampling due to the increased overhead and complexity they bring (Eavis:07; Wu:16). As a result, in data sets with correlations and non-uniform data distributions, cardinality estimation errors are frequent, leading to suboptimal plan selections (Leis:17).

Recently, thanks to dropping hardware costs and growing datasets available for training, deep learning has successfully been applied to solving computationally intensive learning tasks in other domains. The advantage of these types of models comes from their ability to learn unique patterns and features of the data that are difficult to find or design manually (Goodfellow:16).
In this paper, we explore the idea of training a deep learning model to predict query cardinalities. Instead of relying entirely on basic statistics to estimate costs, we train a model to automatically learn important properties of the data to more accurately infer these estimates. Such a learned representation can result in better predictions than hand-designed feature selection (Goodfellow:16). These models can automatically learn to retain distinguishing properties of the data (Lesort:18), which in turn can help estimate cardinalities. As of today, few studies have used deep learning techniques to solve database problems, although some have started to raise awareness of the potential of this method in our field (Wang:16_2). Now is the time to explore this space, since we have the computational capabilities to run these models.

A key challenge of this approach is how to represent the queries and data in the model. If we build a model that takes as input a feature vector representing a complex query on a database, this model would essentially need to learn to predict cardinalities for all possible queries that could run on the data. Such a model would be complex and would require an impractical number of training examples.

To address the above problem, as a first contribution (Section 3), we develop an approach that trains a deep learning model to incrementally generate a succinct representation of each subquery's intermediate results: the model takes as input a subquery and a new operation and predicts the resulting subquery's properties. These properties can then serve to derive the subquery's cardinality.
As a second contribution (Section 4), we present our initial approach to using these representations to improve query plan enumeration through reinforcement learning. Reinforcement learning is a general-purpose framework for decision-making in contexts where a system must make step-by-step decisions to reach an ultimate goal and collect a reward. In our case, we propose to use this approach to incrementally build a query plan by modeling plan construction as a Markov process, where each decision is based on the properties of each state.
In Figure 1, we show an example that forms the basis of our deep reinforcement learning approach. Given a query and a database, the model incrementally executes a query plan through a series of state transitions. In the initial state in the figure, the system begins with a representation of the entire database. Given an action selected using reinforcement learning, the model transitions to a new state at time t+1. Each action represents a query operation and each state captures a representation of the subquery's intermediate results. We train a state transition function G (a neural network) to generate this representation. G is a recursive function that takes as input a previous subquery representation as well as an action at time t, and produces the subquery representation for time t+1.
Let us now motivate the setup laid out in Figure 1. Consider the dynamics of a query plan that is executed one operation (action) at a time. At any stage of the query plan execution, say a subquery has been executed; let s_t, the state at time t, be represented by an n-dimensional real vector. Applying the next action a_t to this current database state leads to the next state s_{t+1}. The mapping G: (s_t, a_t) -> s_{t+1} is called the state transition function.
In most applications of reinforcement learning, the state as well as the state transition function are known. For example, in the game of Go, each possible board position is a state, and the process of moving from one board position to the next (the transition) is well-defined.
Unfortunately, in the case of query plan enumeration, we cannot easily anticipate the state. The crux of our approach is to represent each state by a finite-dimensional real vector and to learn the state transition function using a deep learning model. To guide the training process for this network, we use input signals and context defined from observed variables that are intimately associated with the state of the database. For example, throughout this work, we use the cardinality of each subquery at any stage of the plan as an observed variable. If a state is represented succinctly with the right amount of information, then we should be able to learn a function, Obs, which maps the state s_t to predicted cardinalities at stage t. We show both G and Obs in Figure 2.
As we train this model, the parameters of the networks adjust based on longer sequences of query operations. With this model, each state learns to accurately capture a representation of the corresponding subquery. Once trained, we can fix this model and apply reinforcement learning to design an optimal action policy, leading to optimal query plans.
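As a concrete illustration, the recursive setup described above can be sketched with small, untrained stand-in networks. The layer sizes, random weights, and function names below are hypothetical choices for illustration only, not the paper's actual architecture:

```python
import numpy as np

HIDDEN = 8   # size of the subquery representation (hypothetical)
ACTION = 4   # size of the action encoding (hypothetical)

rng = np.random.default_rng(0)
W_st = rng.normal(size=(HIDDEN + ACTION, HIDDEN))  # transition weights (untrained)
W_obs = rng.normal(size=HIDDEN)                    # observation weights (untrained)

def transition(h_t, a_t):
    """State transition G: maps (h_t, a_t) to the next representation h_{t+1}."""
    x = np.concatenate([h_t, a_t])
    return np.tanh(x @ W_st)

def observe(h_t):
    """Observation function Obs: maps a representation to a predicted cardinality value."""
    return float(h_t @ W_obs)

# Roll a plan of three operations forward from an initial representation.
h = np.zeros(HIDDEN)
for _ in range(3):
    a = rng.normal(size=ACTION)      # stand-in for an encoded query operation
    h = transition(h, a)
    predicted_card = observe(h)      # in training, compared against the true cardinality
```

During training, the prediction error of `observe` would be backpropagated through `transition`, so that the representation `h` is forced to retain whatever information is needed to predict cardinalities.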
Before describing our approach in more detail, we first introduce fundamental concepts about deep learning and reinforcement learning in the following section.
2. Background
Deep Learning
Deep learning models, also known as feedforward neural networks, are able to approximate a nonlinear function f (Goodfellow:16). These networks define a mapping y = f(x; θ) from an input x to an output y, through a set of learned parameters θ across several layers. During training, the behavior of the inner layers is not defined by the input data; instead, these models must learn how to use the layers to produce the correct output. Since there is no direct interaction between these layers and the input training data, they are called hidden layers (Goodfellow:16).
These feedforward networks are critical in the context of representation learning. While training to meet some objective function, a neural network’s hidden layers can indirectly learn a representation, which could then be used for other tasks (Goodfellow:16, ). In representation learning, there is a tradeoff between preserving as much information as possible and learning useful properties about the data. Depending on the output of the network, the context of these representations can vary. It is up to the user to provide the network with enough hints and prior beliefs about the data to help guide the learning (Goodfellow:16, ).
Reinforcement Learning
Reinforcement learning models are able to map scenarios to appropriate actions, with the goal of maximizing a cumulative reward. Unlike supervised learning, the learner (the agent) is not explicitly shown which action is best. Instead, the agent must discover the best action through trial and error, by either exploiting current knowledge or exploring unknown states (Sutton:16). At each timestep t, the agent observes a state s_t of the environment and selects an action a_t. The action the agent selects depends on the policy π. This policy can enact several types of behaviors. As an example, it can either act greedily or balance between exploration and exploitation through an ε-greedy (or better) approach. The policy is driven by the expected rewards of each state, which the model must learn. Given the action selected, the model arrives at a new state s_{t+1}. At each step, the environment sends the agent a reward r_t, which signals the "goodness" of the action selected. The agent's goal is to maximize this total reward (Sutton:16). One approach is to use a value-based iteration technique, where the model records state-action values Q(s, a). These values specify the long-term desirability of a state by taking into account the rewards for the states that are likely to follow (Sutton:16).

3. Learning a Query Representation
Given as input a database D and a query Q, the first component of our approach applies deep learning to derive compact, yet informative, representations of queries and the relations they produce. To ensure that these representations are informative, we train them to predict subquery cardinalities.
3.1. Approach
There are two approaches we could take. In the first, we could transform the query into a feature vector and train a deep network to take such vectors as input and output a cardinality value. As discussed in the introduction, the problem with this approach is that the size of the feature vector would have to grow with the complexity of databases and queries. This would result in very long, sparse vectors, which would require large training datasets.
Instead, we take a different, recursive approach: we train a model to predict the cardinality of a query consisting of a single relational operation applied to a subquery, as illustrated in Figure 2. This model takes as input a pair (h_t, a_t), where h_t is a vector representation of a subquery and a_t is a single relational operation on that subquery. Importantly, h_t is not a manually specified feature vector; it is a latent representation that the model learns itself. The function G generates these representations by adjusting its weights based on feedback from the observation function Obs, which learns to map a subquery representation to a set of observed variables. As we train this model, we use backpropagation to adjust the weights of both functions. In this work, we focus only on predicting cardinalities, but we could extend the model to learn representations that capture additional properties, such as more detailed value distributions, or features about query execution plans, such as their memory footprint or runtime.
Before using the recursive model, we must learn an additional function, which we call G_init, as shown in Figure 3. G_init takes as input a pair (D, a_0), where D is a vector that captures the properties of the database and a_0 is a single relational operator. The model outputs the cardinality of the subquery that executes the operation encoded in a_0 on the database. We define the vector D to represent simple properties of the database. The list of properties we provide next is not definitive, and more features can certainly be added. Currently, for each attribute in the dataset, we use the following features to define D: the minimum value, the maximum value, the number of distinct values, and a representation of a one-dimensional histogram.
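A sketch of how such per-attribute properties might be assembled into the database vector is given below. The exact encoding (histogram bin count, normalization, feature order) is an assumption for illustration:

```python
import numpy as np

def attribute_features(values, n_bins=4):
    """Build the feature slice of the database vector for a single attribute:
    [min, max, #distinct values, normalized n_bins-bucket histogram]."""
    values = np.asarray(values, dtype=float)
    hist, _ = np.histogram(values, bins=n_bins)
    hist = hist / hist.sum()  # normalize so the histogram sums to 1
    return np.concatenate([[values.min(), values.max(), len(np.unique(values))],
                           hist])

# Hypothetical attribute column with skew:
feats = attribute_features([1, 2, 2, 3, 8, 8, 8, 9])
```

The full vector D would concatenate one such slice per attribute in the dataset.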
As shown in the figure, we then attach the recursive model G, which takes as input (h_t, a_t) and predicts the observed variables of the subqueries as well as the representation h_{t+1} of the new subquery. We combine these models and train them together. During training, the weights are adjusted based on the combined loss from the observed-variable predictions. Essentially, we want to learn a representation that captures not only enough information to predict the cardinality of the current subquery directly, but also that of other subqueries built by extending it.
3.2. Preliminary Results
We use the publicly available Internet Movie Data Base (IMDB) data set from the Join Order Benchmark (JOB) (Leis:15). Unlike TPC-H and TPC-DS, the IMDB data set is real and includes skew and correlations across columns (Leis:15). In our experiments, we use TensorFlow in Python to implement our approach (tensorflow2015whitepaper).

Training the initial model: As a first experiment, we initialize the database vector with properties of the IMDB dataset and train the initial cardinality model. Each query applies a conjunctive selection over attributes of a relation. We generate 20k queries, where 15k are used for training the model and the rest are used for testing.
The network contains 50 hidden nodes in its hidden layer. We update the model via stochastic gradient descent with a loss based on relative error and a learning rate of 0.01.
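A relative-error loss of the kind mentioned above might look as follows. The exact formulation used in the experiments is an assumption; the epsilon guard is our addition to avoid division by very small true cardinalities:

```python
import numpy as np

def relative_error(pred, true, eps=1.0):
    """Mean relative error between predicted and true cardinalities."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    return float(np.mean(np.abs(pred - true) / np.maximum(true, eps)))

# Two hypothetical queries: predictions 90 and 210 vs. true cardinalities 100 and 200.
loss = relative_error([90, 210], [100, 200])
```

Unlike absolute error, this loss weights a miss of 10 rows on a 100-row result as heavily as a miss of 1,000 rows on a 10,000-row result, which matters when cardinalities span orders of magnitude.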
In Figure 4(a), we show the cardinality estimation results for selection queries over a single column. On the x-axis, we show the number of epochs used during training, and on the y-axis we show the relative error, with error bars representing the standard deviation. We compare our approach to estimates from SQL Server (mssql); we use a commercial engine to ensure a strong baseline. With fewer epochs (less training), the model's cardinality predictions result in significant errors, but at the 6th epoch the model performs similarly to SQL Server, and it then starts to outperform the latter.

In Figure 4(b), we increase the number of columns in the selection. In general, we have observed that the model takes longer to converge once more columns are introduced. This is expected, as it must learn about joint distributions across more columns. Nevertheless, the model still manages to improve on SQL Server's estimates by the 9th epoch.
Training the combined model: In the previous experiment, we only trained the model for selection queries over base data. For this next experiment, we predict the cardinality of a query containing both a selection and a join operation by using the combined model. Here, the first action represents the selection, while the subsequent action represents the join. Through this combined model, we can ensure that h_0 (the hidden state for the selection) captures enough information to predict the cardinality after the join. In Figure 5, we show the cardinality predictions for both subqueries. In these scatter plots, the x-axis shows the real cardinality, while the y-axis shows the predicted cardinality from the model. Although there is some variance, h_0 was able to hold enough information about the underlying data to make reasonable predictions for the join.

4. Query Plan Enumeration with Reinforcement Learning
In this section, we present and discuss our design to leverage the subquery representations from the previous section, not only to estimate cardinalities, but also to build query plans. Given a query Q, we seek to identify a good query plan by combining our learned query representations with reinforcement learning.
We assume a model-free environment, where transition probabilities between states are not known. At time t=0, the model only knows about the database D and the query Q. At this initial state, no query operations have yet taken place. The reinforcement learning agent transitions to a new state by selecting an operation from query Q. At each state, we encode an additional contextual vector, u_t, which expresses the operations that remain to be done for Q. We now describe how to initialize this vector at time t=0.

Given database D, we have a set of relations, where each relation contains a set of attributes. The vector u_0 represents a fixed set of equi-join predicates and one-dimensional selection predicates. We set the i-th coordinate in u_0 accordingly if the corresponding predicate exists in the query Q. For example, a coordinate could represent the equi-join predicate between two specific attributes. If this predicate exists in Q, we encode it in u_0 by setting that coordinate to 1; otherwise we set it to 0. For selections, we track one-dimensional range selections over a single attribute. For now, we allow each attribute to have at most one range filter in Q. If a selection predicate exists in Q, we place its value in the corresponding element of u_0. Once the reinforcement learning agent selects an action (query operation) a_t, we update u_t by setting the corresponding element to 0.
To select good query plans, we need to provide the model with a reward. That reward must either be given at each state or once the entire query plan has been constructed. We have different options. One option is to use the negative cost estimate (computed by the underlying, traditional query optimizer) as the reward for the plan. The system would then learn to mimic that traditional optimizer, with the caveat that we currently build only left-deep plans. A better option, which we are currently experimenting with, is to use the negative of our system's cardinality estimates at each step. The limitation, of course, is that this approach only optimizes logical query plans. We plan to extend the reward function in future work to also capture physical query plan properties. In particular, one approach is to use the negative of the query execution time as the reward.
Ultimately, the goal of the agent is to discover an optimal policy π*. The policy determines which action the agent will take given the state. As the agent explores the states, the model can update the state-action values, the function Q(s, a), through Q-learning. Q-learning is an off-policy algorithm that uses two different policies to converge the state-action values (Sutton:16; silver:15; szepesvari:09). One is a behavior policy, which determines which actions to select next; in practice, this is usually an ε-greedy policy (silver:15; Mnih:15), but other policies can be used as well. The other is the target policy, usually a greedy strategy, which determines how values should be updated.
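The ε-greedy behavior policy can be illustrated with a toy selector over tabular state-action values. All state and action names here are illustrative stand-ins for encoded subqueries and query operations:

```python
import random

def epsilon_greedy(Q, state, actions, epsilon=0.1, rng=None):
    """With probability epsilon, explore a random action;
    otherwise exploit the action with the highest known Q-value."""
    rng = rng or random.Random(42)
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

# Toy table: from state "s0", the join currently looks more valuable.
Q = {("s0", "join"): 1.5, ("s0", "select"): 0.2}
action = epsilon_greedy(Q, "s0", ["join", "select"], epsilon=0.0)
```

With epsilon set to 0 the policy is purely greedy; raising epsilon trades exploitation for exploration of less-tried operations.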
Initially, all state-action values are set to random values. At each timestep, the agent selects an action (usually based on an ε-greedy policy) and observes the reward r_t at state s_t. As the agent explores the search space, these state-action values converge to represent the expected reward of the states in future timesteps.
At each state transition, each state-action value Q(s_t, a_t) is updated as follows:

Q(s_t, a_t) ← Q(s_t, a_t) + α [ r_t + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t) ]    (1)

where max_a Q(s_{t+1}, a) represents the maximum value over the actions available at s_{t+1}, as selected by the target greedy policy, α is the learning rate, and γ is the discount factor. We compute the subsequent state s_{t+1} using the state transition function G.
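Update rule (1) transcribes directly into tabular form. The α and γ values below, and the toy states and actions, are illustrative:

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """Apply one tabular Q-learning update for the transition (s, a, r, s_next)."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)  # target greedy policy
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

# Toy example: taking "select" in s0 costs reward -2.0 and lands in s1,
# where the best known action ("join") has value 1.0.
Q = {("s1", "join"): 1.0}
q_update(Q, "s0", "select", r=-2.0, s_next="s1", actions=["join", "select"])
```

Repeated over many sampled transitions, these updates propagate the reward signal backward through the plan-construction states.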
Open Problems: Many open problems remain for the above design. As we indicated above, the first open problem is the choice of reward function and its impact on query plan selection. Another open problem is that the state space is large even when we only consider selections and join operators as possible actions. Thus, the Q-learning algorithm as initially described is impractical, as the state-action values are estimated separately for each unique subquery (Mnih:15). In other words, for each query that we train on, it is unlikely that we will encounter the same exact series of states for a separate query. Thus, a better approach is to consider approximate solutions for the Q function. We can learn a function Q̂(s, a; w) to approximate Q(s, a) given parameter weights w. This allows the model to generalize the value of a state-action pair given previous experience with different (but similar) states. This function could be a linear function or even a neural network (silver:15).
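The linear variant of such a function approximator, trained by semi-gradient Q-learning, could be sketched as follows. The feature map phi(s, a) is a placeholder assumption; in our setting it would be derived from the learned subquery representation and the encoded action:

```python
import numpy as np

def q_hat(phi, w):
    """Linear approximation: Q_hat(s, a; w) = phi(s, a) . w"""
    return float(phi @ w)

def semi_gradient_update(w, phi, r, phi_next_best, alpha=0.1, gamma=0.9):
    """Move w toward the TD target; for a linear Q_hat, the gradient w.r.t. w is phi."""
    td_error = r + gamma * q_hat(phi_next_best, w) - q_hat(phi, w)
    return w + alpha * td_error * phi

# One toy update from zero-initialized weights on a terminal-like transition.
w = np.zeros(3)
phi = np.array([1.0, 0.0, 1.0])          # features of the current (state, action)
w = semi_gradient_update(w, phi, r=1.0, phi_next_best=np.zeros(3))
```

Because similar subqueries yield similar feature vectors, a single update generalizes to state-action pairs the agent has never visited, which is exactly what the tabular algorithm cannot do.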
5. Related Work
To correct optimizer errors, previous work has used adaptive query processing techniques. Eddies (Avnur:00) gets rid of the optimizer altogether: instead of building query plan trees, it uses an eddy to determine the sequence of operators based on a policy. Tzoumas et al. (tzoumas:08) took this a step further and transformed it into a reinforcement learning problem, where each state represents a tuple along with metadata about which operators still need to be applied, and each action represents which operator to run next.
Leo (Stillger:01) was one of the first approaches to automatically adjust an optimizer's estimates. It introduced a feedback loop, allowing the optimizer to learn from past mistakes; this requires successive runs of similar queries to make adjustments. Liu et al. (Liu:15) use neural networks to solve the cardinality estimation problem, but focus primarily on cardinality predictions for selection queries only. Work by Kraska et al. (Kraska:17) uses a mixture of neural networks to learn the distribution of an attribute in order to build fast indexes; instead, our goal is to learn the correlation across several columns and to build query plans. More recently, work by Marcus et al. (Marcus:18) uses a deep reinforcement learning technique to determine join order for a fixed database. Each of their states also represents a subquery, but our approach models each state as a latent vector that is learned through a neural network and is propagated to subsequent states. Their approach uses a policy gradient to determine the best action, while our technique proposes to use a value-based iteration approach.
6. Conclusion
In this work, we described a model that uses deep reinforcement learning for query optimization. By encoding basic information about the data, we use deep neural networks to incrementally learn state representations of subqueries. As future work, we propose to use these state representations in conjunction with a reinforcement learning model to learn optimal plans.
Acknowledgements This project was supported in part by NSF grant IIS-1247469 and by Teradata.
References

[1] Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[2] Ron Avnur et al. Eddies: Continuously adaptive query processing. SIGMOD Record, 2000.
[3] Todd Eavis et al. Rk-hist: An R-tree based histogram for multidimensional selectivity estimation. In CIKM '07.
[4] Ian Goodfellow et al. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
[5] Martin Kiefer et al. Estimating join selectivities using bandwidth-optimized kernel density models. VLDB, 2017.
[6] Tim Kraska et al. The case for learned index structures. CoRR, 2017.
[7] Viktor Leis et al. Cardinality estimation done right: Index-based join sampling. In CIDR 2017.
[8] Viktor Leis et al. How good are query optimizers, really? Proc. VLDB Endow., 2015.
[9] Timothée Lesort et al. State representation learning for control: An overview. CoRR, 2018.
[10] Henry Liu et al. Cardinality estimation using neural networks. In CASCON 2015.
[11] Ryan Marcus et al. Deep reinforcement learning for join order enumeration. CoRR, 2018.
[12] Volodymyr Mnih et al. Human-level control through deep reinforcement learning. Nature, 2015.
[13] SQL Server. https://www.microsoft.com/en-us/sql-server/sql-server-2017.
[14] David Silver. UCL Course on Reinforcement Learning, 2015.
[15] Michael Stillger et al. LEO - DB2's learning optimizer. In VLDB 2001.
[16] Richard S. Sutton et al. Reinforcement Learning: An Introduction, 2016.
[17] Csaba Szepesvari. Algorithms for Reinforcement Learning. Morgan and Claypool Publishers, 2009.
[18] Kostas Tzoumas et al. A reinforcement learning approach for adaptive query processing. DB Technical Report, 2008.
[19] Wei Wang et al. Database meets deep learning: Challenges and opportunities. SIGMOD Record, 2016.
[20] Wentao Wu et al. Sampling-based query re-optimization. SIGMOD 2016.