Modern relational database systems (rdbms) are ubiquitous today – they underlie technology used by most people every day, if not every hour. Consequently, a database systems course is widely offered at major universities around the world as part of the undergraduate computer science degree program. A core component of this course is the design and implementation of the query optimizer module, which an rdbms employs to automatically identify the most efficient strategy for executing the declarative sql queries submitted by users. The query optimization process produces a query execution plan (qep), which represents an execution strategy for the query. Optimization is a mandatory process in an rdbms since the difference between the costs of the best execution plan and a random choice can be orders of magnitude.
Unfortunately, query optimization is traditionally considered a difficult component to fathom in an undergraduate-level database course. Given a sql query, a student would typically like to understand how it is executed on the underlying rdbms by studying the associated qep. However, every commercial database vendor has its own secret sauce for the implementation of the query optimizer. Consequently, comprehending a qep demands not only deep knowledge of various query optimization-related concepts but also vendor-specific implementation details. We argue that this is an unrealistic expectation for undergraduate students learning database systems for the first time. They may be familiar with the syntax and semantics of sql but not necessarily with dbms-specific implementation details. Consider the following example scenario.
Example 1.1.
Bob is an undergraduate sophomore majoring in computer science at a reputable university. He is currently enrolled in a database course, which uses PostgreSQL 9.6 to teach fundamental concepts related to relational databases. Bob is a keen learner and is excited about learning the underlying technology behind relational database systems. Specifically, he is comfortable writing sql queries and is now trying to learn about the query optimization module. To this end, he wishes to understand the qep of the sql query in Figure 1 on a tpc-h benchmark dataset (http://www.tpc.org). Figure 2 (partially) depicts the qep generated by PostgreSQL for this query. Bob observes that the textual description of the qep is not only verbose and lengthy but also contains unfamiliar terms (e.g., hash semi join, bucket, width). That is, it is not described concisely in a way that he can understand.
To comprehend it better, he switches to the visual tree representation of the qep shown in Figure 3. Although relatively succinct visually, it simply depicts the sequence of operators (e.g., hash → hash semi join → sort → aggregate → limit) used for processing the query, hiding additional details about query execution. In fact, Bob must manually delve into the details associated with each node for further implementation-related information.
Clearly, an easy and intuitive natural language-based interface can greatly enhance Bob’s comprehension of qeps for sql queries. In fact, natural language interfaces for rdbms have been explored by the database research community for decades (LJ14a, ; LJ14b, ; SF+16, ; KV+12, ). The majority of these efforts have focused on translating natural language sentences to sql queries or narrating sql queries in natural language to naïve users. Scant attention has been paid in the literature to natural language understanding of the query execution plans of sql queries.
In this demonstration, we present a novel framework called neuron (Natural LanguagE Understanding of QueRy ExecutiOn PlaN) for natural language interaction with qeps in PostgreSQL. Given the qep of a sql query, neuron analyzes it to automatically generate a simplified natural language description (in both text and voice form) of the key steps undertaken by the underlying rdbms to execute the query. Furthermore, it supports a question-answering system that allows a user to seek answers, in natural language, about a variety of concepts and features associated with the qep.
We believe that neuron can be used as a tool for pedagogical support by database instructors and students. Specifically, it can facilitate understanding of the various physical query plan-related concepts employed by an rdbms in executing sql queries. Furthermore, its benefit is not confined to pedagogy. It can also help database application developers understand the execution strategies employed for sql queries without requiring them to be knowledgeable about the syntax and semantics of rdbms-specific physical query plans. Note that application developers may have the programming and debugging expertise to formulate declarative sql queries but may not necessarily possess the knowledge to comprehend the syntax and semantics of rdbms-specific qeps.
In this demo, we will first present a walk-through of the neuron tool and explain how it provides a natural language interface for understanding the query execution plans of a modern rdbms. We will then show how it can be used to facilitate understanding of various concepts related to qeps through a natural language-based question-answering framework. For example, an end user may ask questions such as “What is a hash semi join?”, “How many tuples are left after Step 5?”, and “What is the most expensive operation?”. Finally, we will highlight how neuron has important implications for database education.
2. System Overview
neuron is implemented in Python on top of PostgreSQL 9.6. Figure 4 depicts the architecture of neuron, which mainly consists of the following modules.
The GUI module. Figure 5(a) is a screenshot of the visual interface of neuron. It consists of five panels. Panel 1 enables a user to connect to the underlying relational database. Panel 2 shows the schema of the underlying database. A user formulates a sql query (which may include aggregation, nesting, and joins, among other things) in textual format on this database in Panel 3. When the Generate button is clicked, the query is executed and the corresponding execution plan in natural language is generated and displayed in Panel 4. Note that neuron generates both textual and vocal forms of the execution plan using the Plan-to-Text Generator and Vocalizer modules, respectively. A user can click on the Pause or Replay buttons to interact with the vocalized form of the plan. Clicking on the View Plan button retrieves the original qep as generated by PostgreSQL. Panel 5 allows a user to pose questions related to the query execution plan in natural language by leveraging the Question Processor and Answer Generator modules.
The Parser module. The goal of this module is to parse and transform the qep of a sql query into an operator tree that will be exploited by subsequent modules. Once a user formulates and executes a sql query in Panel 3, this module first invokes the PostgreSQL api (using the Psycopg adapter) to obtain the corresponding qep in json format. Then, the plan is parsed and an operator tree is constructed. Specifically, each node in the operator tree contains relevant information associated with the plan, such as the operator type (e.g., hash join), the name of the relation being processed by the node, the alias given to intermediate results (e.g., subqueries), the column(s) used for grouping or sorting, the name of the index being processed by the node, the subplan id generated by PostgreSQL, the condition used for searching the hash table, the filtering condition used during a join or table scan, the conditions used for index-based search, and the number of rows left after an operation. This information is subsequently utilized to generate the natural language description of the qep as well as to support the question-answering framework. Note that this module ignores all information in the original qep that is not useful for realizing the neuron framework, such as the plan width or whether a node is parallel aware.
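The transformation from the json plan into an operator tree can be sketched as follows. This is a minimal illustration rather than neuron’s actual implementation; the `OperatorNode` class and the hand-written sample plan are ours, but the json keys (Node Type, Relation Name, Plans, Actual Rows, Actual Total Time) are those produced by PostgreSQL’s EXPLAIN (ANALYZE, FORMAT JSON):

```python
class OperatorNode:
    """One node of the operator tree, keeping only the plan fields we care about."""
    def __init__(self, plan):
        self.node_type = plan.get("Node Type")            # e.g., "Hash Join"
        self.relation = plan.get("Relation Name")         # table scanned, if any
        self.actual_rows = plan.get("Actual Rows")        # rows left after this step
        self.total_time = plan.get("Actual Total Time")   # ms, includes children
        self.children = [OperatorNode(p) for p in plan.get("Plans", [])]

def build_operator_tree(explain_output):
    """EXPLAIN (ANALYZE, FORMAT JSON) yields a list with one {"Plan": ...} entry."""
    return OperatorNode(explain_output[0]["Plan"])

# Hand-written stand-in for the json object returned by PostgreSQL via Psycopg:
sample = [{"Plan": {"Node Type": "Hash Join", "Actual Rows": 10,
                    "Plans": [{"Node Type": "Seq Scan", "Relation Name": "nation",
                               "Actual Rows": 25}]}}]
root = build_operator_tree(sample)
print(root.node_type, root.children[0].relation)  # Hash Join nation
```

In practice the `sample` dict would come from `json.loads` applied to the driver’s EXPLAIN output.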
The Plan-to-Text Generator module. The objective of this module is to take the operator tree as input and generate a textual description of the qep represented by a sequence of steps (e.g., Panel 4 in Figure 5). At first glance, it may seem that we could simply perform a postorder traversal on the operator tree and transform the information contained in each node into natural language. However, this naïve approach may generate a verbose description of a qep containing irrelevant and redundant information. This is because some nodes in an operator tree may not carry meaningful information as far as the textual description of a qep is concerned. For instance, the Result node is used in PostgreSQL to represent an intermediate relation for storing temporary results. Although it is an important step in executing a query, it is unnecessary to show it as an individual step in our output. Hence, this module first removes Result nodes from the operator tree.
The modified operator tree now contains two categories of nodes, namely critical and non-critical nodes. The former represent important operations (e.g., hash join, sort) in a qep and may contain a large amount of information. On the other hand, the latter are located near critical nodes (e.g., as parents or children) but do not carry important information on their own in comparison to the critical ones. Hence, we reduce the modified operator tree further by merging the non-critical nodes with the corresponding critical nodes. Some examples of such merge operations are as follows.
The Hash Join node and its child Hash are merged.
The Merge Join node and its children Sort are merged.
The Bitmap Heap Scan node and its child Bitmap Index Scan are merged.
The Aggregate node and its child Sort are merged.
The Unique node and its child Sort are merged.
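These merges can be expressed as a small table of (parent, child) rules applied while walking the tree. The sketch below is illustrative, not neuron’s code: it operates on plain dictionaries, and the rule table mirrors the list above.

```python
MERGE_RULES = {  # (parent type, child type) pairs collapsed into one step
    ("Hash Join", "Hash"), ("Merge Join", "Sort"),
    ("Bitmap Heap Scan", "Bitmap Index Scan"),
    ("Aggregate", "Sort"), ("Unique", "Sort"),
}

def reduce_tree(node):
    """Recursively merge non-critical children into their critical parents."""
    new_children = []
    for child in node["children"]:
        reduce_tree(child)
        if (node["type"], child["type"]) in MERGE_RULES:
            # absorb the child: record the merged operator, adopt its children
            node.setdefault("merged", []).append(child["type"])
            new_children.extend(child["children"])
        else:
            new_children.append(child)
    node["children"] = new_children
    return node

tree = {"type": "Hash Join", "children": [
    {"type": "Seq Scan", "children": []},
    {"type": "Hash", "children": [{"type": "Seq Scan", "children": []}]},
]}
reduce_tree(tree)
print([c["type"] for c in tree["children"]])  # ['Seq Scan', 'Seq Scan']
```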
An important issue to address while generating a natural language representation of a qep is the handling of subqueries in a sql query. PostgreSQL creates a corresponding subplan for each subquery in the qep whose return value can be referred to from other parts of the plan. It assigns a temporary name to this subplan for future referral. However, this name should not appear in the natural language representation of the qep. Thus, we use a dictionary to keep track of the subplan names and their corresponding relation names so that when other steps mention the output of the subquery, the referred name will be replaced by the corresponding relation name(s).
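A minimal sketch of this bookkeeping, with a hypothetical subplan-to-relation mapping and step text invented for the example:

```python
# Hypothetical mapping collected while parsing the qep:
subplan_names = {"SubPlan 1": "orders"}

def resolve_subplans(step_text, names):
    """Replace PostgreSQL's temporary subplan names with their source relations."""
    for subplan, relation in names.items():
        step_text = step_text.replace(subplan, relation)
    return step_text

print(resolve_subplans("Filter rows whose custkey appears in the result of SubPlan 1",
                       subplan_names))
```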
Based on the aforementioned strategies, this module generates the natural language representation of a qep from the reduced operator tree as follows. It traverses the tree in postorder fashion to generate a sequence of steps (identified by step id) describing the qep. Each node in the reduced operator tree generates a step, and each step is represented as a textual description of the node’s content based on its type. Specifically, we leverage different natural language templates for different node types to generate meaningful statements. In this context, each intermediate result is assigned an identifier. This allows a clear reference from a parent operator to its children’s results without any ambiguity. Filter and join conditions are parsed and converted to a human-readable natural language representation. For example, an Index Scan node is converted to the following step: “Perform index scan on table X (and filtering on X.b = 1) to get intermediate table A”. Figure 5 depicts an example of the output of this module (in Panel 4) for the qep in Figure 2.
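The traversal-plus-templates idea can be sketched as below. The template strings and the dictionary-based tree are simplified stand-ins for neuron’s richer ones:

```python
TEMPLATES = {
    "Seq Scan":   "Perform sequential scan on table {rel} to get intermediate table {out}.",
    "Index Scan": "Perform index scan on table {rel} to get intermediate table {out}.",
    "Hash Join":  "Hash join {ins} to get intermediate table {out}.",
    "Sort":       "Sort {ins} to get intermediate table {out}.",
}

def plan_to_text(node, steps, counter):
    """Postorder traversal: describe children first, then the node itself."""
    child_ids = [plan_to_text(c, steps, counter) for c in node.get("children", [])]
    counter[0] += 1
    out = f"T{counter[0]}"  # identifier assigned to this node's intermediate result
    template = TEMPLATES.get(node["type"],
                             "Perform {t} to get intermediate table {out}.")
    steps.append(f"Step {len(steps) + 1}: " + template.format(
        t=node["type"].lower(), rel=node.get("relation", ""),
        ins=" and ".join(child_ids), out=out))
    return out

steps = []
tree = {"type": "Hash Join", "children": [
    {"type": "Seq Scan", "relation": "nation", "children": []},
    {"type": "Seq Scan", "relation": "region", "children": []}]}
plan_to_text(tree, steps, [0])
for s in steps:
    print(s)
```

On this toy tree the traversal emits three steps, ending with “Step 3: Hash join T1 and T2 to get intermediate table T3.”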
It is worth noting that the textual description of the qep generated by this module is richer in implementation-specific information than the textual narrative generated from a declarative sql query by tools like Logos (KV+12, ). This is because execution-specific details (e.g., type of join, type of scan) of a sql query cannot simply be gleaned from its declarative statement.
The Vocalizer module. The goal of this module is to vocalize the natural language description of the qep generated by the Plan-to-Text Generator module. Specifically, text-to-speech conversion is performed using Google’s Text-to-Speech (gtts) api, and the result is played using the Pygame package (https://www.pygame.org/).
The Indexer module. This module is exploited by the question-answering (qa) framework of neuron. The qa subsystem accepts a user question as input and returns an answer as output (Panel 5). Note that not all questions related to a qep can be answered by analyzing the qep. For example, “what is a bitmap heap scan?” cannot be answered simply by analyzing the qep. To address this challenge, this module first extracts definitions of sql keywords and query plan operators from relevant Web sources (https://www.postgresql.org/docs/10/static/sql-commands.html, http://use-the-index-luke.com/sql/explain-plan/postgresql/operations, https://email@example.com) as well as the comments associated with the source code of PostgreSQL (https://github.com/postgres/postgres/blob/master/src/include/nodes/plannodes.h). Then a set of documents containing these definitions is indexed using an inverted index (we use the Whoosh Python library, https://pypi.python.org/pypi/Whoosh/), where each document contains the definition of a single sql keyword or query operator. The words in the documents are lemmatized and stop words are removed during this process.
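In the actual system, Whoosh provides the index; the pure-Python sketch below only illustrates the underlying idea, using two made-up definition documents (without lemmatization or stop-word removal):

```python
from collections import defaultdict

# Made-up definition documents; the real system indexes definitions scraped
# from the PostgreSQL documentation and source-code comments.
docs = {
    "hash semi join": "A hash semi join returns rows from the outer relation "
                      "that have at least one match in the hashed inner relation.",
    "sequential scan": "A sequential scan reads every row of a table in order.",
}

index = defaultdict(set)  # term -> keys of documents containing it
for key, text in docs.items():
    for term in text.lower().replace(".", "").split():
        index[term].add(key)

def lookup(keywords):
    """Return the definition(s) whose document contains all query keywords."""
    hits = [index.get(k.lower(), set()) for k in keywords]
    matched = set.intersection(*hits) if hits else set()
    return [docs[k] for k in sorted(matched)]

print(lookup(["semi", "join"]))
```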
The Question Processor module. Once a user enters a question related to the qep through Panel 5, the goal of this module is to classify the question and extract the part-of-speech (pos) tags and keywords in it. Accordingly, it consists of three submodules, namely the question classifier, the part-of-speech (pos) tagger, and the keyword extractor. We elaborate on them in turn.
The Question Classifier submodule. The current implementation of neuron supports five categories of questions: (a) definitions of various sql keywords and query plan operators; (b) the number of tuples generated at a specific step; (c) the list of operators used to evaluate the query; (d) the amount of time taken by specific step(s) in the qep; and (e) finding the dominant (i.e., most expensive) operator in the qep. Hence, given a user’s question, its category needs to be identified before it can be answered. The goal of this submodule is to classify a user’s question into one of these five categories. To this end, it adopts Naive Bayes, a learning-based classification method. A set of training questions was prepared manually together with their true categories. As there are only five categories, it is not necessary to generate a very large number of training questions (we use 67 questions for training). The feature used for classification is the bag of words. Our experiments show that this strategy is effective in classifying different questions accurately.
Given a user’s question, its bag-of-words feature is generated and the question category is obtained from the classifier.
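The classification step can be sketched as a multinomial Naive Bayes over bag-of-words counts. The ten training questions below are invented stand-ins for the 67 real ones, and the category labels are ours:

```python
import math
from collections import Counter, defaultdict

TRAIN = [  # (question, category) pairs; invented for illustration
    ("what is a hash join", "definition"),
    ("define bitmap heap scan", "definition"),
    ("how many tuples are left after step 3", "row_count"),
    ("number of rows after step 2", "row_count"),
    ("which operators are used in the plan", "operator_list"),
    ("list all operators", "operator_list"),
    ("how long did step 4 take", "total_time"),
    ("time taken by step 1", "total_time"),
    ("what is the most expensive operator", "dominant"),
    ("which step dominates the cost", "dominant"),
]

word_counts = defaultdict(Counter)   # category -> word frequencies
cat_counts = Counter()
vocab = set()
for question, cat in TRAIN:
    cat_counts[cat] += 1
    for w in question.split():
        word_counts[cat][w] += 1
        vocab.add(w)

def classify(question):
    """Multinomial Naive Bayes with add-one smoothing over bag-of-words."""
    words = question.lower().split()
    best, best_lp = None, float("-inf")
    for cat in cat_counts:
        lp = math.log(cat_counts[cat] / len(TRAIN))   # log prior
        total = sum(word_counts[cat].values())
        for w in words:                               # log likelihoods
            lp += math.log((word_counts[cat][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = cat, lp
    return best

print(classify("how many rows are left after step 5"))  # row_count
```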
The Part-of-speech (POS) Tagger submodule. This submodule extracts the part-of-speech (pos) tags in a question (our implementation uses the TextBlob Python library, https://pypi.python.org/pypi/textblob, a high-level nlp toolkit built on top of NLTK, with the Penn Treebank corpus). pos tags are used to find the step id (i.e., the id of a step in Panel 4) inside a question related to Categories (b) and (d).
The Keyword Extractor submodule. To answer questions related to Category (a), it is paramount to identify the keywords in the question so that we know what is being asked. This submodule extracts the keywords by first removing stop words. The list of English stop words is obtained from the nltk Python library (http://www.nltk.org/). The word “only” is excluded from this list as it is one of the keywords for query operators (e.g., Index Only Scan). The remaining words are lemmatized and duplicate words are eliminated.
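A simplified sketch of the extraction, with a tiny illustrative stop-word list in place of nltk’s and without the lemmatization step:

```python
# Tiny illustrative stop-word list; the real system loads nltk's English list
# and keeps "only" out of it, since "only" matters for Index Only Scan.
STOP_WORDS = {"what", "is", "a", "an", "the", "how", "does"}

def extract_keywords(question):
    """Strip punctuation and stop words; deduplicate while preserving order."""
    seen, keywords = set(), []
    for word in question.lower().replace("?", "").split():
        if word not in STOP_WORDS and word not in seen:
            seen.add(word)
            keywords.append(word)
    return keywords

print(extract_keywords("What is an index only scan?"))  # ['index', 'only', 'scan']
```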
The Answer Generator module. Given a question, the Question Processor module identifies its category, relevant keywords and step id. The Answer Generator module aims to retrieve the correct answer based on the question category. As there are five categories of questions, different submodules are designed to handle them.
The Concept Definition submodule. If the question belongs to Category (a) then it uses keywords extracted from it to retrieve the relevant document containing the definition using the index.
The Row Count submodule. To answer questions regarding the number of rows after a certain step (Category (b)), the step id must be supplied in the question. Note that questions in the form of “number of rows left after joining relations A and B” (i.e., without step id) are not supported. This is because it is possible that two or more joins on the same relations but different columns may be performed in a single query, leading to ambiguity.
This submodule extracts the step id by finding the word with the pos tag CD (cardinal number) in the question. After that, the operator tree is traversed to find the node the step id belongs to. The number of rows is retrieved from the Actual Rows element associated with the query plan node.
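As a simplified stand-in for the pos-based approach (a regular expression picks out the number the CD tag would flag), with a hypothetical step-id-to-node mapping:

```python
import re

def extract_step_id(question):
    """Simplified stand-in for CD-tag lookup: take the first number in the question."""
    match = re.search(r"\b(\d+)\b", question)
    return int(match.group(1)) if match else None

def find_row_count(steps, step_id):
    """steps: hypothetical mapping of step id -> plan node carrying 'Actual Rows'."""
    node = steps.get(step_id)
    return node["Actual Rows"] if node else None

steps = {5: {"Node Type": "Hash Semi Join", "Actual Rows": 42}}
sid = extract_step_id("How many tuples are left after step 5?")
print(sid, find_row_count(steps, sid))  # 5 42
```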
The Operator List submodule. To retrieve the operators used in a qep (Category (c)), the operator tree is traversed. Duplicate operators are removed and the final list is returned to Panel 5.
The Total Time submodule. To answer questions regarding Category (d), similar to Category (b) questions, the step id must be supplied in the question. It traverses the operator tree to retrieve the total time of a specific step, which is calculated based on the Actual Total Time element of the node itself and its children. The returned answer includes the actual time spent on the queried step.
The Dominant Operator submodule. To find the most expensive operator in the qep (Category (e)), neuron computes the total time taken by each operator and returns the one with the longest time.
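One plausible way to realize this, given that PostgreSQL’s Actual Total Time for a node includes its children, is to subtract the children’s times and keep the operator with the largest remainder. The sketch below runs on a hand-made tree, not neuron’s internal structures:

```python
def self_time(node):
    """Actual Total Time includes children, so subtract them to get the
    time spent in the operator itself."""
    return node["total_time"] - sum(c["total_time"] for c in node["children"])

def dominant_operator(node):
    """Return the (operator, self time) pair with the largest self time."""
    best = (node["type"], self_time(node))
    for child in node["children"]:
        cand = dominant_operator(child)
        if cand[1] > best[1]:
            best = cand
    return best

plan = {"type": "Sort", "total_time": 12.0, "children": [
    {"type": "Hash Join", "total_time": 10.5, "children": [
        {"type": "Seq Scan", "total_time": 8.0, "children": []},
        {"type": "Seq Scan", "total_time": 1.0, "children": []}]}]}
print(dominant_operator(plan))  # ('Seq Scan', 8.0)
```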
Note that the answers are formatted using natural language templates to generate meaningful statements. Figure 6 depicts example screenshots of several types of questions supported by the qa subsystem of neuron.
3. Related Systems and Novelty
Query optimizers have been studied extensively since the inception of relational databases, and several interesting features of query optimizers have been demonstrated at conference venues as well. For example, picasso (picasso, ) is a visualization tool for graphically profiling and analyzing the behavior of database query optimizers. QE3D (SDB15, ) is another query plan visualization tool that provides a holistic view of distributed query plans executed by the sap hana database management system. Stethoscope (GK12, ) is an interactive visual tool for analyzing plans in a columnar database. However, to the best of our knowledge, there has been no prior work on natural language understanding of query plans.
Natural language interfaces to databases have been studied for several decades. Such interfaces give users easy access to data without the need to learn a complex query language such as sql. Specifically, there have been natural language interfaces for relational databases (LJ14a, ; LJ14b, ; SF+16, ; PEK03, ; BH+18, ), video databases (ECC08, ), xml (LC+07, ), and graph-structured data (ZC+17, ). Given a logically complex English sentence as query input, the goal of the majority of these works is to translate it to the underlying query language, such as sql. On the other hand, frameworks such as Logos (KV+12, ) explain sql queries to naive users in natural language. neuron complements these efforts by providing a natural language explanation of the query execution plan of a given sql query. It further supports a natural language-based question-answering framework that enables users to ask questions related to the plan.
4. Demonstration Objectives
Our demonstration will be loaded with the tpc-h benchmark (we use tpc-h v2.17.3, http://www.tpc.org/tpc_documents_current_versions/current_specifications.asp) and dblp datasets. For dblp, we download the xml snapshot of the data and store it in 10 relations. Example sql queries on these datasets will be presented. Users can also write their own ad-hoc queries through our gui.
One of the key objectives of the demo is to enable the audience to interactively experience the benefits of this novel natural language interface for query execution plans in real time. The audience will be requested to formulate a sql query or select one from the list of benchmark queries using the neuron gui. Upon execution of the query, one will be able to view as well as hear the natural language description of the qep (through the Plan-to-Text Generator and Vocalizer modules). She may pause and replay the natural language description as she wishes. By clicking on the View Plan button, one can view the original qep generated by PostgreSQL and appreciate the difficulty of perusing and comprehending the details of the plan, highlighting the benefits of the natural language interaction brought by neuron. Lastly, the audience can pose the aforementioned types of questions related to a qep through the neuron gui and get accurate answers in real time. Such a qa session aims to facilitate further natural language-based clarification regarding the execution strategy deployed by the underlying query engine.
5. Illustration of Example Use Case
A short video illustrating the aforementioned features of neuron using an example use case on tpc-h benchmark data is available at https://youtu.be/wRIWuYbU2F0. Specifically, it emphasizes the ease with which a user can interact with neuron, the natural language description of the qep of an example query, and interactive question-answering sessions demonstrating the five categories of questions related to the qep.
- (2) F. Basik, B. Hättasch, et al. DBPal: A Learned NL-Interface for Databases. In SIGMOD, 2018.
- (3) G. Erozel, N. K. Cicekli, I. Cicekli. Natural language querying for video databases. Inf. Sci., 178(12), 2008.
- (4) M. Gawade, M. L. Kersten. Stethoscope: A platform for interactive visual analysis of query execution plans. In PVLDB, 5(12), 2012.
- (5) J. R. Haritsa. The Picasso Database Query Optimizer Visualizer. In PVLDB, 3(2), 2010.
- (6) A. Kokkalis, P. Vagenas, A. Zervakis, A. Simitsis, G. Koutrika, Y. E. Ioannidis. Logos: a system for translating queries into narratives. In SIGMOD, 2012.
- (7) F. Li, H. V. Jagadish. NaLIR: an interactive natural language interface for querying relational databases. In SIGMOD, 2014.
- (8) F. Li, H. V. Jagadish. Constructing an Interactive Natural Language Interface for Relational Databases. PVLDB, 8(1), 2014.
- (9) Y. Li, I. Chaudhuri, H. Yang, S. P. Singh, H. V. Jagadish. DaNaLIX: a domain-adaptive natural language interface for querying XML. In SIGMOD, 2007.
- (10) J. Lin, Y. Liu, J. Guo, J. Cleland-Huang, W. Goss, W. Liu, S. Lohar, N. Monaikul, A. Rasin. TiQi: a natural language interface for querying software project data. In ASE, 2017.
- (11) A.-M. Popescu, O. Etzioni, H. A. Kautz. Towards a theory of natural language interfaces to databases. In IUI, 2003.
- (12) D. Saha, A. Floratou, et al. ATHENA: An Ontology-Driven System for Natural Language Querying over Relational Data Stores. In PVLDB, 9(12), 2016.
- (13) D. Scheibli, C. Dinse, A. Boehm. QE3D: Interactive Visualization and Exploration of Complex, Distributed Query Plans. In SIGMOD, 2015.
- (14) W. Zheng, H. Cheng, L. Zou, J. X. Yu, K. Zhao. Natural Language Question/Answering: Let Users Talk With The Knowledge Graph. In CIKM, 2017.