One DSL to Rule Them All: IDE-Assisted Code Generation for Agile Data Analysis

04/18/2019 ∙ by Artur Andrzejak, et al. ∙ University of Heidelberg 0

Data analysis is at the core of scientific studies, a prominent task that researchers and practitioners typically undertake by programming their own sets of automated scripts. While there is no shortage of tools and languages for designing data analysis pipelines, users spend substantial effort learning the specifics of such languages/tools and often design solutions too project-specific to be reused in future studies. Furthermore, users need to put additional effort into making their code scalable, as parallel implementations are typically more complex. We address these problems by proposing an advanced code recommendation tool which facilitates developing data science scripts. Users formulate their intentions in a human-readable Domain Specific Language (DSL) for dataframe manipulation and analysis. The DSL statements can be converted into executable Python code during editing. To avoid the need to learn the DSL and to increase user-friendliness, our tool supports code completion in mainstream IDEs and editors. Moreover, DSL statements can generate executable code for different data analysis frameworks (currently we support Pandas and PySpark). Overall, our approach attempts to accelerate the programming of common data analysis tasks and to facilitate the conversion of implementations between frameworks. In a preliminary assessment based on a popular data processing tutorial, our tool was able to fully cover 9 out of 14 processing steps for Pandas and 10 out of 16 for PySpark, while partially covering 4 processing steps for each of the frameworks.


1. Introduction

Data analysis studies involve many steps of data manipulation and analysis. Typically, software tools with fixed processing workflows are not flexible enough to cover a large variety of scenarios encountered in research, and so some form of programming is mandatory. Consequently, users frequently create project-specific analysis pipelines using various frameworks and tools. These range from data-flow programming environments like KNIME to interactive data processing frameworks like JupyterLab (with Pandas, Scikit-Learn, TensorFlow etc.) to frameworks capable of massively parallel data processing like Apache Spark.

Most approaches are characterized by an inherent trade-off between ease-of-use and flexibility and/or performance. For example, due to the low development effort, MATLAB is popular in industry and science for prototyping data-centric applications. However, for production scenarios or to process massive data sets, such prototypes need to be rewritten from scratch in C/C++ or in distributed computing frameworks like Apache Spark. In general, researchers and practitioners are still faced with multiple challenges at the intersection of data science and software engineering:

Programming barrier. Proprietary data formats and specific requirements can imply a substantial amount of project-specific script programming. This seriously increases the duration and cost of data analysis projects, and makes it substantially harder for domain specialists (frequently with only limited programming skills) to interact with the data directly.

Reuse problem. A majority of the above-mentioned programming effort goes into adjusting code to the specifics of the project and its data sets. New and more widely applicable algorithms and libraries are frequently the result of additional coding effort performed only as a “side-effect” of a project. This creates an unfavorable overall ratio between reusable (library-like) and project-specific code bases.

Scalability problem. Implementing sequential versions of algorithms and software pipelines can already be quite challenging, yet their scalable (massively parallel) versions for large data sets are typically significantly more complex. Moreover, scalable versions require different data structures, libraries, and even programming paradigms (such as DAG operation graphs in Apache Spark), so significant costs (typically a complete re-implementation) are needed to provide scalable versions of software pipelines. Therefore, users frequently work with sequential scripts even if parallel processing would be beneficial, which can limit the considered data sizes and slow down the execution.

Multiple efforts have been undertaken to address these challenges, including novel human-in-the-loop approaches like Predictive Interaction (Heer et al., 2015), the development of new programming languages like Julia (Bezanson, 2015), and research on advanced methods like program synthesis for data analysis (Gulwani, 2016), (Barowy et al., 2015), (Jin et al., 2017). Nevertheless, many of these techniques address only selected special cases, are still in the research phase, or might require migrating to a new platform or programming environment. To our knowledge, for a majority of application scenarios there are still no comprehensive and effective solutions which are compatible with "legacy" software (i.e. frameworks, libraries, and programming languages) and which complement or support existing software ecosystems instead of trying to replace them.

We propose an approach which supports the development of data science scripts during the editing process. In essence, our approach can be understood as advanced code recommendation, where users formulate their intention in an abstract and user-friendly Domain Specific Language (DSL) for dataframe (table) manipulation and analysis. Such DSL statements are directly translated into executable Python code, while the DSL can remain in the scripts as comments.

Two elements of our solution are essential in addressing the above-mentioned challenges. First, we provide intelligent editing support for the DSL (code completion). Together with the self-explaining design of our DSL, this substantially lowers the initial effort to master the DSL and reduces the adoption barrier. Furthermore, the very same DSL can generate, according to user settings, code for various frameworks (“targets”). Our prototype currently offers code generation for Pandas (a popular Python library for in-memory processing and analysis of time series and tables) and for Apache PySpark (the Python bindings for Spark, a framework for distributed processing of massive data sets). Generating code for multiple targets supports converting Pandas scripts to Spark in scenarios where users first experiment and prototype on small data sets (using Pandas) but later need a scalable solution based on Spark (or vice versa).

DSLs are becoming increasingly popular in software solutions (Fowler, 2010), (Kats and Visser, 2010), (Campagne, 2016), (Bettini, 2016), (Dejanović et al., 2017), as they raise the abstraction level of code and facilitate communication with domain experts. So-called external DSLs are completely independent languages, with code typically not mixed with other languages. While such DSLs allow high syntactic flexibility, adjustment to the target domain, and IDE support, the effort of implementing functionality beyond the original DSL intention can be substantial. Typically, functionality extensions are only possible via custom User Defined Functions (UDFs) or by embedding the DSL into a general-purpose language via strings or separate files (similarly to how SQL is used from C++ or Java). In such cases, developers must learn and use additional APIs to interact between two languages. In addition, debugging as well as (static) code analysis become cumbersome.

Another approach is to use internal DSLs which are essentially libraries written in general-purpose programming languages with flexible syntax, e.g. Ruby, Scala, Kotlin, or F#. Such DSLs are easier to implement and avoid the interoperability issues, but have only a constrained syntax and no IDE support (Intellisense etc.). In all cases, using a DSL can lead to a lock-in effect and might be a barrier to new developers on a project.

In our solution we use an external DSL, yet attempt to circumvent the above-mentioned issues for such languages by allowing a “no-barrier” coexistence of the DSL and the generated code. First, developers can complement and modify the generated code directly during editing, which prevents the problem of missing DSL functionality. Our DSL is intended to provide support only for common, frequently used operations, while more specific requirements are implemented in the target language. Furthermore, developers are free to decide how to use our DSL: it can serve only as “active help” during editing; it can be used as comments/explanations for the generated code; or it can serve as the primary source code for simpler scenarios. We believe that this flexibility can aid the adoption of our solution and reduce the lock-in effect. An obvious disadvantage of our proposal is that the DSL code and the generated code might become out-of-sync during editing, in the worst case leading to a misleading code description (similar to outdated documentation). We will address this issue in further research, e.g. by automatically flagging DSL code which no longer fits the generated target code.

This paper is organized as follows. Section 2 describes the details of the approach. Section 3 outlines the implementation. We present the preliminary evaluation in Section 4 and discuss related work in Section 5. Section 6 describes conclusions and future work.

2. Approach

Figure 1. Screenshot of an exemplary editing session.

Our approach supports script developers via an IDE-supported DSL for generating Python code for common data science tasks. Users enter DSL statements as pseudo-comments while being assisted by code completions (Figure 1), and can choose to insert executable code generated from these DSL statements. It is also possible to specify the type of generated code, or the target framework (currently Pandas or PySpark). Apart from the changed editing experience (noticeable only in lines starting with the DSL prefix, here ##), there is no difference to a normal development and execution process. Because the DSL code is placed in comments, the normal execution and testing flow is not influenced and does not need to be changed. As noted in the introduction, users can deploy our tool in any editor/IDE supporting the Language Server Protocol, which currently covers all major IDEs. Consequently, we estimate the barrier to adoption of the tool to be low.
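The prefix check itself is trivial; a minimal sketch (names are ours, not the actual implementation) of how a tool could distinguish DSL pseudo-comments from ordinary Python lines:

```python
# Hypothetical helper: only lines starting with the DSL prefix "## " are
# treated as DSL statements; everything else keeps its normal behavior.
# Since "##" starts with "#", DSL lines are ordinary comments to Python.
DSL_PREFIX = "## "

def is_dsl_line(line: str) -> bool:
    """Return True if the editor line is a DSL pseudo-comment."""
    return line.lstrip().startswith(DSL_PREFIX)

print(is_dsl_line("## result = on df : select_cols a, b"))  # DSL statement
print(is_dsl_line("result = df[['a', 'b']]"))               # plain Python
```
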

The utility of this approach is largely determined by the design of the DSL, in particular the power of the DSL (or its “expressiveness”) in terms of common operations for the target frameworks. To ensure a high level of DSL coverage, we have analyzed several popular “cheat sheets” for Pandas and PySpark, as well as some tutorials for these frameworks. Based on this analysis we designed a DSL which attempts to cover most of the elements found in these sources (see Table 1). It should be noted that we purposefully do not attempt to cover all of the functionality in our DSL. This would greatly increase the implementation effort (DSL grammar design and code generation), and would make the DSL fragile to changes in the targeted frameworks. Instead, we assume that developers will implement more specific functions directly in Python code.

2.1. DSL for dataframe operations

Our DSL attempts to be easy to understand (or, in the best case, even self-explanatory) yet concise. Thanks to code completion, long keywords are acceptable, so in the DSL design we preferred readability over compact but ambiguous keywords. The hurdle of learning and understanding the DSL is further reduced by the explanations of the commands provided in the list of suggestions (see Figure 1).

Table 1 gives an overview of the essential parts of our DSL. It covers functionality for data I/O, simple selection, deleting rows or columns, aggregation, multiple types of data transformations, and essential data inspection. In addition, we provide operations which are suitable only for Spark code generation (in Pandas mode, these emit empty code). The DSL also has a meta-command, target_code spark | pandas, which specifies the framework for which code should be generated.

Many modern data science libraries allow the fluent API coding style, i.e. chaining of method calls as in var.func_A().func_B() to avoid the need for intermediate variables (in R, a similar effect is achieved by the “pipeline” operator ’%>%’). Our DSL also allows this style, with “:” chosen as the “pipeline” operator (this can easily be changed). Furthermore, to specify the dataframe on which the operation chain is to be performed, we use the on dataframe syntax. In Table 1, this is used in several examples (e.g. line 2).
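To illustrate how the “:” pipeline operator maps onto method chaining, here is a hypothetical, much-simplified translator (our own sketch, not the tool's grammar-based implementation) that turns a DSL chain into a fluent Python call chain:

```python
def translate_op(op: str) -> str:
    """Translate one chain element, e.g. 'select_cols a, b' -> "select_cols('a', 'b')"."""
    name, _, rest = op.partition(" ")
    if not rest.strip():
        return f"{name}()"
    args = ", ".join(repr(a.strip()) for a in rest.split(","))
    return f"{name}({args})"

def translate_chain(dsl: str) -> str:
    """Map 'on df : op1 : op2' onto a fluent call chain 'df.op1().op2()'."""
    parts = [p.strip() for p in dsl.split(":")]
    target = parts[0][len("on"):].strip()    # drop the leading 'on' keyword
    return target + "".join("." + translate_op(op) for op in parts[1:])

print(translate_chain("on df : select_cols a, b : count"))
# df.select_cols('a', 'b').count()
```
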

Category DSL examples (prefix ## is omitted)
I/O operations result = load as json ’some_path.json’
on df : save as csv to ’some_path.csv’
Selection result = on df : select_cols a, b, c
select_rows col1 == m or col2 < 3
select_rows col1 > 0 and col3 in [v1, v2, v3]
Deletion result = on df : drop_cols x, y, z
drop_rows col1 > 0 and col2 not in [v1, v2]
Aggregation result = on df : group_by col1 apply sum
group_by col1, col2 apply min
Transform. result = on df : on_missing fill_with value
on_missing drop_rows
replace old_value by new_value
apply_fun function on cols
apply_fun function on rows
append_col col_name
sort_by col_name
drop_duplicates
rename_cols c1 to p, c2 to q
Inspection on df : show
on df : describe
return_top_N 10
select_rows col1 == m : count
Spark only start_session named ’session name’
stop_session
s = schema col1 of int, col2 of str
result = load ’some_path.txt’ with_schema s
Pandas only append_row col_name default default_val
Options target_code = spark
target_code = pandas
Table 1. DSL overview. Keywords are in bold, choices from a list are underlined, and other parameters are emphasized.

2.2. Generated code

As explained in Section 3.1, DSL code is translated into executable (Python) code by our tool when a user requests a code completion action and the current DSL statement is syntactically correct. Table 2 shows some examples of generated code. Typically, the DSL code and the target code have a similar structure, e.g. the order of parameters. Therefore, mapping the DSL code to the target code is relatively easy. For example, in most cases we can map a DSL syntax node (or grammar rule) to the target code without considering other syntax nodes.
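Because of this structural similarity, much of the mapping can be thought of as per-framework string templates keyed by DSL operation. The following sketch is our own illustration (the template table is hypothetical), mirroring the select_cols rows of Table 2:

```python
from typing import Dict, List, Tuple

# Hypothetical template table: (DSL operation, target framework) -> code template.
TEMPLATES: Dict[Tuple[str, str], str] = {
    ("select_cols", "pandas"): "{df}[[{cols}]]",
    ("select_cols", "spark"):  "{df}.select({cols})",
}

def emit(op: str, target: str, df: str, cols: List[str]) -> str:
    """Fill the template for one DSL operation with quoted column names."""
    quoted = ", ".join(repr(c) for c in cols)
    return TEMPLATES[(op, target)].format(df=df, cols=quoted)

print(emit("select_cols", "pandas", "y", ["a", "b", "c"]))  # y[['a', 'b', 'c']]
print(emit("select_cols", "spark", "y", ["a", "b", "c"]))   # y.select('a', 'b', 'c')
```
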

Type Code (for DSL, prefix ## is omitted)
DSL x = load as csv some_path
P x = pd.read_csv(some_path)
DSL x = load as csv some_path with_schema S
S x = spark.read.csv(some_path, schema=S)
DSL x = on y : select_cols a, b, c : count
P x = y[[’a’, ’b’, ’c’]].count()
S x = y.select(’a’, ’b’, ’c’).count()
DSL x = on y : select_rows col1 == m and col3 in [v1, v2, v3]
P x = y[(y.col1 == m) & (y.col3.isin([v1, v2, v3]))]
S x = y.filter((y.col1 == m) & (y.col3.isin([v1, v2, v3])))
DSL x = on y : rename_cols c1 to p, c2 to q
P x = y.rename(columns={’c1’: ’p’, ’c2’: ’q’})
S x = y.withColumnRenamed(’c1’, ’p’).with…(’c2’, ’q’)
Table 2. Examples of generated target code. In the column Type, ’P’ indicates Pandas code, and ’S’ Spark code.

3. Implementation

3.1. Architecture

Figure 2. Architecture of our prototype.

Figure 2 outlines the architecture of our prototypical implementation. To reach developers with various preferred code editors or IDEs, we use the Language Server Protocol (LSP) (Corp., 2019). LSP decouples a particular editor/IDE from the “coding services” (including code completion, linting, and basic refactoring) via a JSON-RPC interface. Editors/IDEs supporting LSP (clients) include Visual Studio (Code), JetBrains products (PyCharm, Rider, IntelliJ IDEA, …), Eclipse (Che), Vim, Emacs, and others. Similarly, there exist a large number of language servers for major programming languages.

Our tool emulates a Language Server via a thin layer (“Hub”) which is responsible for dispatching client requests to sub-modules and forwarding responses from these sub-modules to the client. The dispatch mechanism essentially analyzes whether a code completion request (in LSP, a textDocument/completion request) has been issued in a code line containing DSL code or not. In the DSL case, the request is handled by our DSL-based Recommender (see Figure 2); otherwise it is forwarded (unchanged) to a regular language server (for Python, our prototype uses a server by Palantir Technologies, https://github.com/palantir/python-language-server). A few other LSP requests are duplicated to all sub-modules to ensure coherency of workspace/file caching, in particular DidChangeTextDocument.
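The dispatch rule can be sketched as follows (handler signatures are simplified assumptions; real LSP requests are JSON-RPC messages carrying a document position, not raw lines):

```python
# Sketch of the Hub's routing decision: requests issued on a DSL line go to
# the DSL Recommender, everything else is forwarded to the Python server.
DSL_PREFIX = "## "

def dispatch(line: str, dsl_recommender, python_server):
    """Route a completion request by inspecting the current editor line."""
    if line.lstrip().startswith(DSL_PREFIX):
        return dsl_recommender(line)         # handled by the DSL Recommender
    return python_server(line)               # forwarded to the Python server

# Stub handlers standing in for the real sub-modules.
print(dispatch("## on df : show", lambda l: "dsl", lambda l: "python"))  # dsl
print(dispatch("x = df.head()", lambda l: "dsl", lambda l: "python"))    # python
```
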

The DSL-based Recommender uses textX (Dejanović et al., 2017) and textX-LS (https://github.com/textX/textX-LS) to implement code completion suggestions for our DSL grammar. textX is a Python library for defining and implementing DSLs, and textX-LS is a generic language server implementation which provides syntax checking and code completion for any language defined with textX. A response from textX-LS is a list of ranked recommendations, as shown in Figure 1. To this list we add a preview of the generated target code (if the DSL syntax is valid) with the highest rank, i.e. at position 1. In this way, a user can insert the generated code without breaking the editing flow.
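The insertion of the code preview can be sketched as follows (the completion-item shape is simplified; real LSP CompletionItem objects carry more fields, and the function name is our own):

```python
from typing import List, Optional

def with_code_preview(completions: List[dict],
                      generated_code: Optional[str]) -> List[dict]:
    """Prepend a preview of the generated code to the ranked suggestions."""
    if generated_code is None:       # DSL line not (yet) syntactically valid
        return completions
    preview = {"label": generated_code, "sortText": "0"}  # sorts to position 1
    return [preview] + completions

items = with_code_preview([{"label": "select_cols", "sortText": "1"}],
                          "x = y[['a', 'b']]")
print(items[0]["label"])  # x = y[['a', 'b']]
```
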

3.2. Code generation

Figure 3. textX-generated Python objects for the DSL code “x = on y : select_cols a, b, c : group_by b apply unique : show”.

Code generation is implemented as a small set of classes on top of the textX library. When our tool starts, the DSL definition is parsed and remains in memory as a meta-model (note that, contrary to other DSL construction tools like Xtext or MPS, the grammar is parsed dynamically, without generating code for a parser/lexer). For each LSP request with DSL code in the current editor line, we use textX to parse this line against the prepared meta-model. Upon success, textX returns a tree of Python objects corresponding to an Abstract Syntax Tree (AST).

To generate code, we essentially traverse this tree using recursion, iteration, and if/else statements, and collect the generated code in a string buffer. Figure 3 shows an example AST for the DSL statement “x = on y : select_cols a, b, c : group_by b apply unique : show”. A Python method processes the AST node LineOperation by inspecting the corresponding Python object obj dynamically generated by textX. For example, if the property assignment of obj is not null, we add the variable name stored in the property assignment.name (on the l.h.s. in the DSL) and “=” to the generated code. Next, if the property chainOps of obj is not null, we iterate over the list stored in this property. For each list element we call a method to recursively handle each sub-type of a ChainOperation AST node.
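A hand-rolled sketch of this traversal follows; the dataclasses below stand in for the Python objects textX generates dynamically, and the class and property names only approximate Figure 3:

```python
from dataclasses import dataclass, field
from io import StringIO
from typing import List, Optional

@dataclass
class ChainOperation:
    name: str                                # e.g. 'select_cols'
    args: List[str] = field(default_factory=list)

@dataclass
class LineOperation:
    dataframe: str                           # target of the 'on <df>' clause
    chainOps: List[ChainOperation] = field(default_factory=list)
    assignment: Optional[str] = None         # variable name on the l.h.s., if any

def generate(node: LineOperation) -> str:
    buf = StringIO()                         # string buffer for the output
    if node.assignment is not None:
        buf.write(node.assignment + " = ")
    buf.write(node.dataframe)
    for op in node.chainOps:                 # iterate the operation chain
        args = ", ".join(repr(a) for a in op.args)
        buf.write(f".{op.name}({args})")
    return buf.getvalue()

ast = LineOperation("y",
                    [ChainOperation("select_cols", ["a", "b"]),
                     ChainOperation("count")],
                    assignment="x")
print(generate(ast))  # x = y.select_cols('a', 'b').count()
```
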

After acquiring some familiarity with the textX framework, it becomes a relatively straightforward task to implement new DSL elements. The required (Python) code is also of reasonable size: the code generator for our current DSL comprises about 240 LOC (Lines of Code, including comments) and 3 Python classes.

Different target frameworks are handled as sub-classes of a class for framework-agnostic code generation. Such sub-classes are typically small, which ensures that code generation for additional frameworks can be easily implemented. In our case, a subclass for Pandas code generation has 55 LOC and 12 short methods, and the subclass for Spark has 51 LOC and 11 methods.
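The subclassing scheme can be sketched as follows (class and method names are ours; drop_cols stands in for one example operation from Table 1):

```python
class CodeGenerator:
    """Framework-agnostic base; subclasses emit framework-specific code."""
    def drop_cols(self, df: str, cols) -> str:
        raise NotImplementedError

class PandasGenerator(CodeGenerator):
    def drop_cols(self, df, cols):
        quoted = ", ".join(repr(c) for c in cols)
        return f"{df}.drop(columns=[{quoted}])"

class SparkGenerator(CodeGenerator):
    def drop_cols(self, df, cols):
        quoted = ", ".join(repr(c) for c in cols)
        return f"{df}.drop({quoted})"

print(PandasGenerator().drop_cols("y", ["a", "b"]))  # y.drop(columns=['a', 'b'])
print(SparkGenerator().drop_cols("y", ["a", "b"]))   # y.drop('a', 'b')
```

Adding a new target framework then amounts to writing one more small subclass, which matches the LOC figures reported above.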

4. Preliminary Evaluation

A proper evaluation of the usefulness of our approach would require a well-designed user study with a sufficient number of participants. Due to time constraints, we refrain from performing such an analysis and focus instead on assessing the coverage of our initial DSL implementation. To that end, in this preliminary evaluation we answer the following question: Given a real-usage scenario, what fraction of the analysis and data processing steps is covered by our current DSL design?

We first need to select a scenario that represents a typical data analysis task. Given the popularity of and demand for data analysis in the current scientific landscape, tutorials are very common on the web. For our evaluation, we select a popular DataCamp tutorial, “Apache Spark Tutorial: ML with PySpark” (https://www.datacamp.com/community/tutorials/apache-spark-tutorial-machine-learning), that provides a high-quality use case for using PySpark dataframes for data processing and data analysis. This tutorial has multiple topics, ranging from Spark installation to the application of conventional machine-learning methods for Big Data. For this initial assessment, we focus only on the topics within the scope targeted by our DSL: data exploration and data pre-processing. These two topics contain a total of 16 processing steps. We define a processing step loosely as the smallest part of the code that can be executed on its own (typically a single line), or a single block of code in this tutorial.

To evaluate our DSL in the context of Spark, we use the unchanged source code from the tutorial. For Pandas, we manually translate each processing step into Pandas code, which yields only 14 processing steps, as some steps (such as creating RDDs to populate dataframes) are specific to PySpark. We do not use (another) tutorial directly for Pandas because we could not find a tutorial with a similar purpose/scenario as the one for Spark. Moreover, many introductory tutorials for Pandas explain functions related to selecting individual values (e.g. .loc and .iloc), and functions related to the (row) index data structure of Pandas. Such functions make Pandas code fundamentally difficult to translate to other programming paradigms or frameworks (e.g. SQL, Spark), and typically make the code non-scalable. In our scenario, we assume that a developer anticipates massive data sets and will avoid such functions from the outset.

In our evaluation we attempt to express each basic processing step with our DSL. For each such step, we estimate whether it can be expressed in the DSL completely, whether the DSL needs additional (Python) code, whether we need to substantially change the generated code, or whether our DSL cannot express this step at all. Table 3 presents the aggregated results of this initial assessment. Overall, our DSL fully covers 64.3% of the Pandas processing steps and 62.5% of the PySpark processing steps in the tutorial. In 4 processing steps for each of Pandas and PySpark, the translated code did not completely match the expected goal, and users would have to add Python code (either new parameters or function calls) to fulfill their initial intention. We found no case where the generated code needs to be rewritten, but three processing steps (one in Pandas and two in PySpark) could not be expressed or translated by our DSL (see NS in Table 3).

Category Pandas PySpark
Fully translated (FT) 9/14 (64.3%) 10/16 (62.5%)
Code added (CA) 4/14 (28.6%) 4/16 (25.0%)
Code modified (CM) 0/14 (0.0%) 0/16 (0.0%)
DSL not suitable (NS) 1/14 (7.1%) 2/16 (12.5%)
Table 3. Results of our DSL coverage assessment for Pandas and PySpark code translation.

In Table 4, we detail the processing steps our DSL fails to cover completely (CA) or cannot support at all (NS). Of the 11 steps that failed partially (CA) or completely (NS), 6 were covered only by a roughly equivalent function (L1). In this particular case, our DSL generated code to retrieve the top N rows of a dataframe through the head function call, while the tutorial prefers the show function. Both functions are similar in practice, but show prints the top N rows to the standard output, while head returns the top N rows as a new dataframe. In another case (L2), when sorting a dataframe, our DSL assumes ascending order, hence omitting the optional parameter for descending sorting (ascending=False). Case L3 evidences one limitation of our DSL, the lack of context awareness: in the current implementation, the DSL does not index or complete functions defined by the programmer, and hence cannot recommend calls to user-defined code. The last case (L4) concerns lambda functions, whose support is not yet implemented in our tool.

ID Cat. Limitation DSL code (## is omitted) Generated code Expected generated code #
L1 CA Target function not supported on df: return top_N 10 df.head(10) df.show(10) 6
L2 CA Parameter not supported on df : sort_by col df.sort(’col’) df.sort(’col’, ascending=False) 2
L3 NS Custom user-functions not supported df = convertColumn(df, columns, FloatType()) 2
L4 NS Lambda functions not supported df = rdd.apply(lambda x: x / 10).toDF() 1
Table 4. Examples of DSL limitations in fully translating our studied tutorial into PySpark code. The “Cat.” column references the categories of Table 3. The expected code shown in red and bold is currently not supported by our DSL.

5. Related Work

Domains relevant to our work are low-code data analysis, accelerated scripting and coding, and Domain Specific Languages.

Low-code data analysis. Multiple research fields tackle the challenge of making the process of data analysis and transformation more user-friendly and accelerating scripting and automation of processing. The essential directions are: visual analytics (MacInnes et al., 2010), mixed-initiative systems (Horvitz, 1999), (Makonin et al., 2016), facilitating user involvement in the data analysis or processing activities (Doan et al., 2017), (Rahman et al., 2018), learning data transformations by examples (Le and Gulwani, 2014), (Smith and Albarghouthi, 2016), (Jin et al., 2017), (Raza and Gulwani, 2017), and data wrangling in various flavors (Raman and Hellerstein, 2001), (Kandel et al., 2011), (Stonebraker et al., 2013), (Heer et al., 2015), (Rahman et al., 2018).

Data wrangling (or data munging) refers to the process of interactive data transformations on structured or unstructured data. The most mature tool in this domain is Wrangler (Kandel et al., 2011), which was commercialized by Trifacta (Machlis, 2016) and recently offered by Google as a Google Cloud service (Google, 2017). Another popular tool is OpenRefine (Verborgh and De Wilde, 2013) (originally Google Refine), which allows batch processing of tabular data via menu selection and a Domain Specific Language (DSL) named GREL. Albeit similar to Wrangler, it has more restricted functionality and offers no support for very large data sets.

The concept behind these tools is Predictive Interaction (Heer et al., 2015). Its key elements are: (i) a real-time preview of the effects of code on the processed data set, (ii) code recommendations based on the context of user interactions and data, and (iii) a DSL to describe data transformations in a way easily accessible to users. A disadvantage of the Wrangler tool is that it is a closed (and commercial) ecosystem. This creates a serious barrier for interoperability with mainstream libraries or frameworks. Moreover, its DSL has limited expressiveness (focusing on data preparation only), and extending this DSL requires developing User Defined Functions.

Learning data transformations by examples (Le and Gulwani, 2014), (Smith and Albarghouthi, 2016), (Jin et al., 2017), (Raza and Gulwani, 2017) is a special case of program synthesis techniques. Such approaches (while still immature) promise to greatly facilitate complex data analysis, especially for users with little or no programming skills. The (quite sophisticated) methods here include constraint-based program synthesis, program sketching (Solar-Lezama, 2008), version space algebra (Le and Gulwani, 2014; Gulwani, 2011), and searching in a state space graph (Jin et al., 2017). In the context of data extraction, transformation, and analysis, several interesting applications have been proposed (Gulwani, 2016), including extracting relations from spreadsheets (Barowy et al., 2015), data transformations (Jin et al., 2017), and synthesizing regular expressions. So far, only the FlashFill approach (Gulwani, 2015) has been practically relevant and available to many users (as a component of Excel 2013 and the ConvertFrom-String cmdlet in PowerShell). We consider program synthesis a possible extension of our work.

Accelerating scripting and coding (and, as a special case, end-user development/end-user programming) comprises a multitude of approaches from software engineering. The most visible progress in this category stems from novel programming languages, the introduction of software processes such as Scrum, the proliferation of software testing, and advances in development tools (including integrated development environments and syntax/error checkers). Nevertheless, the impact of the individual measures on programmers’ productivity is hard to measure. Other noteworthy approaches include program synthesis (discussed above), visual programming via dataflow languages (Johnston et al., 2004), block programming languages (Bau et al., 2017), and Domain Specific Languages (DSLs) (Fowler, 2010; Kosar et al., 2016) (discussed below), or more generally Language-Oriented Programming (Felleisen et al., 2015).

In the context of data analysis, dataflow languages (Johnston et al., 2004; Götze and Sattler, 2016) have gained some popularity via self-service data preparation tools (201, 2018) or KNIME (Berthold et al., 2008). Such approaches can greatly accelerate the creation of small data processing pipelines and have also proven suitable for educational purposes. On the other hand, they tend to slow down more experienced programmers (even the smallest operation, like “+”, must be selected from a menu/palette and connected to other blocks), do not provide intuitive support for control structures, and lack interoperability with other tools or libraries. For these reasons, they are rarely used in larger projects.

Domain Specific Languages (DSLs) (Fowler, 2010), (Kats and Visser, 2010), (Dejanović et al., 2017) have proven useful in a multitude of medium to large-scale projects by introducing highly readable and concise code with support for higher-level operations. While the underlying “theory” and scientific interest are still modest (Kats and Visser, 2010), (Felleisen et al., 2015), (Gupta, 2015), DSLs are becoming increasingly popular in industry (for example, the industrial-grade database management system SAP HANA internally uses over 200 DSLs).

A particular flavor of DSLs are internal or embedded DSLs, which can seamlessly inter-operate with the underlying (typically general-purpose) language. However, internal DSLs offer only a limited range of syntax and are typically not supported by IDEs. In contrast, external DSLs admit almost any syntax, and modern DSL engineering tools (like MPS (Campagne, 2016), Xtext (Bettini, 2016), textX (Dejanović et al., 2017), or Spoofax (Kats and Visser, 2010)) provide “automatic” editing support (syntax checking and code completion) for them. The disadvantage of external DSLs is the difficulty of interaction with (general-purpose) languages. This is paired with increased development effort when DSL capabilities are not sufficient and, e.g., writing project-specific User Defined Functions becomes necessary.

In our approach we generate code for a general-purpose language from an external DSL during the editing process, which largely eliminates the interoperability barrier. We also implemented a special Language Server to provide coding assistance to both our DSL and the “embedding” general-purpose language (here Python).

6. Conclusions and Future Work

We proposed a DSL-based approach to support data scientists in writing code for common tasks related to table analysis and processing. We use an external DSL to express such operations in a human-readable form, and generate executable Python code directly during editing. In this way we circumvent the frequently encountered problem of insufficient expressiveness of a DSL, since developers can directly use Python code to address more specialized cases (outside the power of the DSL). Our prototype works with a large number of IDEs and editors (all supporting the Language Server Protocol), and provides editing support (code recommendations) for the DSL.

Moreover, users can generate code for the Pandas “data wrangling” library or for Apache Spark. This facilitates the transition from low-effort yet typically non-scalable Pandas scripts, suitable for smaller data sets, to highly scalable Spark scripts. Our preliminary evaluation shows that for typical data pre-processing tasks, our DSL is capable of generating complete code in 10 out of 16 cases for Apache Spark and in 9 out of 14 cases for Pandas.
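The multi-target generation can be illustrated by a toy template-based generator (the templates and the `generate` helper below are hypothetical, not the tool's actual implementation): one abstract operation is rendered either as Pandas or as PySpark code, so the same DSL intent can serve both frameworks.

```python
# Hypothetical two-target code generator: one abstract operation,
# two rendered code strings. Templates are illustrative only.
TEMPLATES = {
    "pandas": {
        "filter": "{df} = {df}[{df}['{col}'] {op} {val}]",
        "dropna": "{df} = {df}.dropna()",
    },
    "pyspark": {
        "filter": "{df} = {df}.filter({df}['{col}'] {op} {val})",
        "dropna": "{df} = {df}.na.drop()",
    },
}

def generate(target, step, **kw):
    """Render one abstract processing step as code for the given target."""
    return TEMPLATES[target][step].format(**kw)

print(generate("pandas", "filter", df="titanic", col="Age", op=">", val=30))
# titanic = titanic[titanic['Age'] > 30]
print(generate("pyspark", "filter", df="titanic", col="Age", op=">", val=30))
# titanic = titanic.filter(titanic['Age'] > 30)
```

A real generator would of course work on a parsed DSL representation rather than string templates, but the sketch shows why switching the backend framework can be as simple as switching the template set.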

Despite these promising results, much work remains to understand and address the challenges of our approach and to provide tools of practical value. Our future work will include a controlled user study with interviews in order to identify such challenges and verify our hypotheses on user behavior. We will also implement more code generation targets for our DSL, including R (with dplyr/tidyr) and MATLAB. Another option is to provide and evaluate a DSL for deep learning frameworks such as TensorFlow, Microsoft Cognitive Toolkit, or PyTorch.

Further work on editing support will include mechanisms for synchronizing the DSL and the generated code, e.g. by marking DSL code that is no longer in sync with the Python code. Another option is to add support for “type providers” known from .NET languages, i.e. editor/compiler recommendations for the column names and types of the actual data sets processed in a script.

References

  • Predictive Analytics Today (2018) 2018. Top 21 Self Service Data Preparation Software - Compare Reviews, Features, Pricing in 2019. https://www.predictiveanalyticstoday.com/data-preparation-tools-and-platforms/
  • Barowy et al. (2015) Daniel W. Barowy, Sumit Gulwani, Ted Hart, and Benjamin Zorn. 2015. FlashRelate: Extracting relational data from semi-structured spreadsheets using examples. In Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM, 218–228. http://dl.acm.org/citation.cfm?id=2737952
  • Bau et al. (2017) David Bau, Jeff Gray, Caitlin Kelleher, Josh Sheldon, and Franklyn Turbak. 2017. Learnable Programming: Blocks and Beyond. Commun. ACM 60, 6 (May 2017), 72–80. https://doi.org/10.1145/3015455
  • Berthold et al. (2008) Michael R. Berthold, Nicolas Cebron, Fabian Dill, Thomas R. Gabriel, Tobias Kötter, Thorsten Meinl, Peter Ohl, Christoph Sieb, Kilian Thiel, and Bernd Wiswedel. 2008. KNIME: The Konstanz Information Miner. In Data Analysis, Machine Learning and Applications. Springer, Berlin, Heidelberg, 319–326. https://doi.org/10.1007/978-3-540-78246-9_38
  • Bettini (2016) Lorenzo Bettini. 2016. Implementing Domain Specific Languages with Xtext and Xtend - Second Edition (2nd ed.). Packt Publishing.
  • Bezanson (2015) Jeffrey Werner Bezanson. 2015. Abstraction in technical computing [Julia language]. Thesis. Massachusetts Institute of Technology. http://dspace.mit.edu/handle/1721.1/99811
  • Campagne (2016) Fabien Campagne. 2016. The MPS Language Workbench Volume I: The Meta Programming System (Volume 1) (3rd ed.). CreateSpace Independent Publishing Platform, USA.
  • Corp. (2019) Microsoft Corp. 2019. Language Server Protocol Specification. https://microsoft.github.io/language-server-protocol/specification
  • Dejanović et al. (2017) I. Dejanović, R. Vaderna, G. Milosavljević, and Ž. Vuković. 2017. TextX: A Python tool for Domain-Specific Languages implementation. Knowledge-Based Systems 115 (Jan. 2017), 1–4. https://doi.org/10.1016/j.knosys.2016.10.023
  • Doan et al. (2017) AnHai Doan, Adel Ardalan, Jeffrey Ballard, Sanjib Das, Yash Govind, Pradap Konda, Han Li, Sidharth Mudgal, Erik Paulson, G. C. Paul Suganthan, and Haojun Zhang. 2017. Human-in-the-Loop Challenges for Entity Matching: A Midterm Report. In Proceedings of the 2Nd Workshop on Human-In-the-Loop Data Analytics (HILDA’17). ACM, New York, NY, USA, 12:1–12:6. https://doi.org/10.1145/3077257.3077268
  • Felleisen et al. (2015) Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, Shriram Krishnamurthi, Eli Barzilay, Jay McCarthy, Sam Tobin-Hochstadt, and Marc Herbstritt. 2015. The Racket Manifesto. Technical Report. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH, Wadern/Saarbruecken, Germany. https://doi.org/10.4230/LIPIcs.SNAPL.2015.113
  • Fowler (2010) Martin Fowler. 2010. Domain Specific Languages (1st ed.). Addison-Wesley Professional.
  • Google (2017) Google. 2017. Cloud Dataprep - Data Preparation and Data Cleansing. https://cloud.google.com/dataprep/
  • Götze and Sattler (2016) Philipp Götze and Kai-Uwe Sattler. 2016. Rewriting and Code Generation for Dataflow Programs. GvD (2016), 6.
  • Gulwani (2011) Sumit Gulwani. 2011. Automating string processing in spreadsheets using input-output examples. In ACM SIGPLAN Notices, Vol. 46. ACM, 317–330. http://dl.acm.org/citation.cfm?id=1926423
  • Gulwani (2015) Sumit Gulwani. 2015. Automating Repetitive Tasks for the Masses. In Proceedings of the 42Nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL ’15). ACM, New York, NY, USA, 1–2. https://doi.org/10.1145/2676726.2682621
  • Gulwani (2016) Sumit Gulwani. 2016. Programming by Examples (and its Applications in Data Wrangling). In Verification and Synthesis of Correct and Secure Systems. IOS Press. https://www.microsoft.com/en-us/research/publication/programming-examples-applications-data-wrangling/
  • Gupta (2015) Gopal Gupta. 2015. Language-based Software Engineering. Sci. Comput. Program. 97, P1 (Jan. 2015), 37–40. https://doi.org/10.1016/j.scico.2014.02.010
  • Heer et al. (2015) Jeffrey Heer, Joseph Hellerstein, and Sean Kandel. 2015. Predictive Interaction for Data Transformation. In Conference on Innovative Data Systems Research (CIDR). http://idl.cs.washington.edu/papers/predictive-interaction
  • Horvitz (1999) Eric Horvitz. 1999. Principles of Mixed-initiative User Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’99). ACM, New York, NY, USA, 159–166. https://doi.org/10.1145/302979.303030
  • Jin et al. (2017) Zhongjun Jin, Michael R. Anderson, Michael Cafarella, and H. V. Jagadish. 2017. Foofah: Transforming Data By Example. ACM Press, 683–698. https://doi.org/10.1145/3035918.3064034
  • Johnston et al. (2004) Wesley M. Johnston, J. R. Paul Hanna, and Richard J. Millar. 2004. Advances in Dataflow Programming Languages. ACM Comput. Surv. 36, 1 (March 2004), 1–34. https://doi.org/10.1145/1013208.1013209
  • Kandel et al. (2011) Sean Kandel, Andreas Paepcke, Joseph Hellerstein, and Jeffrey Heer. 2011. Wrangler: Interactive Visual Specification of Data Transformation Scripts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11). ACM, New York, NY, USA, 3363–3372. https://doi.org/10.1145/1978942.1979444
  • Kats and Visser (2010) Lennart C.L. Kats and Eelco Visser. 2010. The Spoofax Language Workbench: Rules for Declarative Specification of Languages and IDEs. In Proceedings of the ACM International Conference on Object Oriented Programming Systems Languages and Applications (OOPSLA ’10). ACM, New York, NY, USA, 444–463. https://doi.org/10.1145/1869459.1869497
  • Kosar et al. (2016) Tomaž Kosar, Sudev Bohra, and Marjan Mernik. 2016. Domain-Specific Languages: A Systematic Mapping Study. Information and Software Technology 71 (March 2016), 77–91. https://doi.org/10.1016/j.infsof.2015.11.001
  • Le and Gulwani (2014) Vu Le and Sumit Gulwani. 2014. FlashExtract: A Framework for Data Extraction by Examples. In Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI ’14). ACM, New York, NY, USA, 542–553. https://doi.org/10.1145/2594291.2594333
  • Machlis (2016) Sharon Machlis. 2016. Data wrangling tool Trifacta aims to ease analysis pain. http://www.computerworld.com/article/3104769/data-analytics/data-wrangling-tool-trifacta-aims-to-ease-analysis-pain.html
  • MacInnes et al. (2010) Joseph MacInnes, Stephanie Santosa, and William Wright. 2010. Visual Classification: Expert Knowledge Guides Machine Learning. IEEE Comput. Graph. Appl. 30, 1 (Jan. 2010), 8–14. https://doi.org/10.1109/MCG.2010.18
  • Makonin et al. (2016) Stephen Makonin, Daniel McVeigh, Wolfgang Stuerzlinger, Khoa Tran, and Fred Popowich. 2016. Mixed-Initiative for Big Data: The Intersection of Human + Visual Analytics + Prediction. IEEE, 1427–1436. https://doi.org/10.1109/HICSS.2016.181
  • Rahman et al. (2018) Protiva Rahman, Courtney Hebert, and Arnab Nandi. 2018. ICARUS: Minimizing Human Effort in Iterative Data Completion. Proc. VLDB Endow. 11, 13 (Sept. 2018), 2263–2276. https://doi.org/10.14778/3275366.3284970
  • Raman and Hellerstein (2001) Vijayshankar Raman and Joseph M. Hellerstein. 2001. Potter’s Wheel: An Interactive Data Cleaning System. In Proceedings of the 27th International Conference on Very Large Data Bases (VLDB ’01). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 381–390. http://dl.acm.org/citation.cfm?id=645927.672045
  • Raza and Gulwani (2017) Mohammad Raza and Sumit Gulwani. 2017. Automated Data Extraction Using Predictive Program Synthesis. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, Satinder P. Singh and Shaul Markovitch (Eds.). AAAI Press, 882–890. http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/15034
  • Smith and Albarghouthi (2016) Calvin Smith and Aws Albarghouthi. 2016. MapReduce Program Synthesis. In Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI ’16). ACM, New York, NY, USA, 326–340. https://doi.org/10.1145/2908080.2908102
  • Solar-Lezama (2008) Armando Solar-Lezama. 2008. Program Synthesis by Sketching. PhD Thesis. University of California at Berkeley, Berkeley, CA, USA.
  • Stonebraker et al. (2013) Michael Stonebraker, Ihab F Ilyas, Stan Zdonik, George Beskales, and Alexander Pagan. 2013. Data Curation at Scale: The Data Tamer System. 6th Biennial Conference on Innovative Data Systems Research (2013).
  • Verborgh and De Wilde (2013) Ruben Verborgh and Max De Wilde. 2013. Using OpenRefine (1st ed.). Packt Publishing. http://openrefine.org/