TigerGraph: A Native MPP Graph Database

01/24/2019 ∙ Alin Deutsch, et al. ∙ TigerGraph and University of California, San Diego

We present TigerGraph, a graph database system built from the ground up to support massively parallel computation of queries and analytics. TigerGraph's high-level query language, GSQL, is designed for compatibility with SQL, while simultaneously allowing NoSQL programmers to continue thinking in Bulk-Synchronous Processing (BSP) terms and reap the benefits of high-level specification. GSQL is sufficiently high-level to allow declarative SQL-style programming, yet sufficiently expressive to concisely specify the sophisticated iterative algorithms required by modern graph analytics and traditionally coded in general-purpose programming languages like C++ and Java. We report very strong scale-up and scale-out performance over a benchmark we published on GitHub for full reproducibility.






1. Introduction

Graph database technology is among the fastest-growing segments in today’s data management industry. Since seeing early adoption by companies including Twitter, Facebook and Google, graph databases have evolved into a mainstream technology used today by enterprises across industries, complementing (and sometimes replacing) both traditional RDBMSs and newer NoSQL big-data products. Maturing beyond social networks, the technology is disrupting an increasing number of areas, such as supply chain management, e-commerce recommendations, cybersecurity, fraud detection, power grid monitoring, and many other areas in advanced data analytics.

While research on the graph data model (with associated query languages and academic prototypes) dates back to the late 1980s, in recent years we have witnessed the rise of several products offered by commercial software companies like Neo Technologies (supporting the graph query language Cypher (Technologies, 2018)) and DataStax (Enterprise, 2018) (supporting Gremlin (TinkerPop, 2018)). These languages are also supported by the commercial offerings of many other companies (Amazon Neptune (Amazon, [n. d.]), IBM Compose for JanusGraph (IBM, [n. d.]), Microsoft Azure CosmosDB (Microsoft, 2018), etc.).

We introduce TigerGraph, a new graph database product by the homonymous company. TigerGraph is a native parallel graph database, in the sense that its proprietary storage is designed from the ground up to store graph nodes, edges and their attributes in a way that supports an engine that computes queries and analytics in massively parallel processing (MPP) fashion for significant scale-up and scale-out performance.

TigerGraph allows developers to express queries and sophisticated graph analytics using a high-level language called GSQL. We subscribe to the requirements for a modern query language listed in the G-Core manifesto (Angles et al., 2018). To these, we add

  • facilitating adoption by the largest query developer community in existence, namely SQL developers. GSQL was designed for full compatibility with SQL in the sense that if no graph-specific primitives are mentioned, the queries become pure standard SQL. The graph-specific primitives include the flexible regular path expression-based patterns advocated in (Angles et al., 2018).

  • the support, beyond querying, of classical multi-pass and iterative algorithms as required by modern graph analytics (such as PageRank, weakly-connected components, shortest-paths, recommender systems, etc., all GSQL-expressible). This is achieved while staying declarative and high-level by introducing only two primitives: loops and accumulators.

  • allowing developers with NoSQL background (which typically espouses a low-level and imperative programming style) to preserve their Map/Reduce or graph Bulk-Synchronous Parallel (BSP) (Valiant, 2011) mentality while reaping the benefits of high-level declarative query specification. GSQL admits natural Map/Reduce and graph BSP interpretations and is actually implemented accordingly to support parallel processing.

A free TigerGraph developer edition can be downloaded from the company Web site (http://tigergraph.com), together with documentation, an e-book, white papers, a series of representative analytics examples reflecting real-life customer needs, as well as the results of a benchmark comparing our engine to other commercial graph products.

Paper Organization.  The remainder of the paper is organized as follows. Section 2 overviews TigerGraph’s key design choices, while its architecture is described in Section 3. We present the DDL and DML in Sections 4 and 5, respectively. We discuss evaluation complexity in Section 6, BSP interpretations in Section 7, and we conclude in Section 8. In Appendix B, we report on an experimental evaluation using a benchmark published for reproducibility in TigerGraph’s GitHub repository.

2. Overview of TigerGraph’s Native Parallel Graph Design

We overview here the main ingredients of TigerGraph’s design, showing how they work together to achieve speed and scalability.

A Native Distributed Graph.  TigerGraph was designed from the ground up as a native graph database. Its proprietary data store holds nodes, edges, and their attributes. We decided to avoid the more facile solution of building a wrapper on top of a more generic NoSQL data store because this virtual graph strategy incurs a double performance penalty. First, the translation from virtual graph manipulations to physical storage operations introduces overhead. Second, the underlying structure is not optimized for graph operations.

Compact Storage with Fast Access.  TigerGraph is not an in-memory database, because holding all data in memory is a preference but not a requirement (this is true for the enterprise edition, but not the free developer edition). Users can set parameters that specify how much of the available memory may be used for holding the graph. If the full graph does not fit in memory, then the excess is spilled to disk.

Data values are stored in encoded formats that effectively compress the data. The compression factor varies with the graph structure and data, but typical compression factors are between 2x and 10x. Compression reduces not only the memory footprint, and thus the cost to users, but also CPU cache misses, speeding up overall query performance. Decompression/decoding is efficient and transparent to end users, so the benefits of compression outweigh the small time delay for compression/decompression. In general, the encoding is homomorphic (Roth and Horn, 1993), that is, decompression is needed only for displaying the data; when values are used internally, they can often remain encoded and compressed. Internally, hash indices are used to reference nodes and edges. For this reason, accessing a particular node or edge in the graph is fast, and stays fast even as the graph grows in size. Moreover, maintaining the index as new nodes and edges are added to the graph is also very fast.

Parallelism and Shared Values.  TigerGraph was built for parallel execution, employing a design that supports massively parallel processing (MPP) in all aspects of its architecture. TigerGraph exploits the fact that graph queries are compatible with parallel computation. Indeed, the nature of graph queries is to follow the edges between nodes, traversing multiple distinct paths in the graph. These traversals are a natural fit for parallel/multithreaded execution. Various graph algorithms require these traversals to proceed according to certain disciplines, for instance in a breadth-first manner, keeping track of visited nodes and pruning traversals upon encountering them. The standard solution is to assign a temporary variable to each node, marking whether it has already been visited. While such marker-manipulating operations may suggest low-level, general-purpose programming languages, one can actually express complex graph traversals in a few lines of code (shorter than this paragraph) using TigerGraph’s high-level query language.

Storage and Processing Engines Written in C++.  TigerGraph’s storage engine and processing engine are implemented in C++. A key reason for this choice is the fine-grained control of memory management offered by C++. Careful memory management contributes to TigerGraph’s ability to traverse many edges simultaneously in a single query. While an alternative implementation relying on managed memory (as in the Java Virtual Machine) would be convenient, it would make it difficult for the programmer to optimize memory usage.

Automatic Partitioning.  In today’s big data world, enterprises need their database solutions to be able to scale out to multiple machines, because their data may grow too large to be stored economically on a single server. TigerGraph is designed to automatically partition the graph data across a cluster of servers while preserving high performance. The hash index is used to determine not only the data location within a server, but also which server holds the data. All the edges that connect out from a given node are stored on the same server.

Distributed Computing.  TigerGraph supports a distributed computation mode that significantly improves performance for analytical queries that traverse a large portion of the graph. In distributed query mode, all servers are asked to work on the query; each server’s actual participation is on an as-needed basis. When a traversal path crosses from server A to server B, the minimal amount of information that server B needs is passed to it. Since server B already knows about the overall query request, it can easily fit in its contribution. In a benchmark study, we tested the commonly used PageRank algorithm. This algorithm is a severe test of a graph database’s computational and communication speed because it traverses every edge, computes a score for every node, and repeats this traverse-and-compute step for several iterations. When the graph was distributed across eight servers, the PageRank query completed nearly seven times faster than on a single server (details in Appendix B.3.3). This attests to TigerGraph’s efficient use of distributed infrastructure.

Programming Abstraction: an MPP Computation Model. 
The low-level programming abstraction offered by TigerGraph integrates and extends the two classical graph programming paradigms of think-like-a-vertex (a la Pregel (Malewicz et al., 2010), GraphLab (Low et al., 2012) and Giraph (Giraph, [n. d.])) and think-like-an-edge (PowerGraph (Gonzalez et al., 2012)). Conceptually, each node or edge acts simultaneously as a parallel unit of storage and computation, being associated with a compute function programmed by the user. The graph becomes a massively parallel computational mesh that implements the query/analytics engine.

GSQL, a High-Level Graph Query Language.  TigerGraph offers its own high-level graph querying and update language, GSQL. The core of most GSQL queries is the SELECT-FROM-WHERE block, modeled closely after SQL. GSQL queries also feature two key extensions that support efficient parallel computation: a novel ACCUM clause that specifies node and edge compute functions, and accumulator variables that aggregate inputs produced by parallel executions of these compute functions. GSQL queries have a declarative semantics that abstracts from the underlying infrastructure and is compatible with SQL. In addition, GSQL queries admit equivalent MPP-aware interpretations that appeal to NoSQL developers and are exploited in implementation.

3. System Architecture

Figure 1. TigerGraph System Architecture

TigerGraph’s architecture is depicted by Figure 1 (in the blue boxes). The system vertical stack is structured in three layers: the top layer comprises the user programming interface, which includes the GSQL compiler, the GraphStudio visual interface, and REST APIs for other languages to connect to TigerGraph; the middle layer contains the standard built-in and user defined functions (UDFs); the bottom layer includes the graph storage engine (GSE) and the graph processing engine (GPE). We elaborate on each component next.

The GSQL compiler is responsible for query plan generation. Downstream it can send the query plan to a query interpreter, or it can generate native machine code based on the query plan. The compiler modules perform type checks, semantic checks, query transformations, plan generation, code-generation and catalog management. The GSQL compiler supports syntax for the Data Definition Language (DDL), Data Loading Language (DLL), and Data Manipulation Language (DML). A GSQL query text can be input either by typing into the GSQL shell console or via a REST API call. Once a query plan is generated by the compiler, depending on the user’s choice it can be sent to an interpreter to evaluate the query in real-time. Alternatively, the plan can be sent to the code generation module, where C++ UDFs and a REST endpoint are generated and compiled into a dynamic link library. If the compiled mode is chosen, the GSQL query is automatically deployed as a service that is invokable with different parameters via the REST API or the GSQL shell.

The GraphStudio visual SDK is a simple yet powerful graphical user interface. GraphStudio integrates all the phases of graph data analytics into one easy-to-use graphical web-browser user interface. GraphStudio is useful for ad hoc, interactive analytics and for learning to use the TigerGraph platform via drag-and-drop query building. Its components include a schema builder, a visual data loader, a graph explorer and the GSQL visual IDE. These components (except for the graph explorer) talk directly to the GSQL compiler. The graph explorer talks to the underlying engine via TigerGraph system built-in standard UDFs.

The REST APIs are the interfaces through which third-party processes talk to the TigerGraph system. The TigerGraph system includes a REST server programmed and optimized for high-throughput REST calls. All user-defined queries, topology CRUD (create, read, update, delete) standard operations, and data ingestion loading jobs are installed as REST endpoints. The REST server processes all HTTP requests, performing request validation and dispatching, and relaying JSON-formatted responses to the HTTP client.

The Standard and Custom UDFs are C++ functions encoding application logic. These UDFs serve as a bridge between the upper layer and the bottom layer. UDFs can reside in static library form, being linked at engine compile time. Alternatively, they can reside in a dynamically linked library loaded at engine runtime. Many standard graph operations and generic graph algorithms are coded in pre-built static libraries and linked with the engine at each system release. Ad hoc user queries are generated at GSQL compile time and linked to the engine at runtime.

GSE is the storage engine. It is responsible for ID management, metadata management, and topology storage management in different layers (cache, memory and disk) of the hardware. ID management involves allocating and de-allocating internal ids for graph elements (vertices and edges). Metadata management stores catalog information, persists system states and synchronizes different system components to collaboratively finish a task. Topology storage management focuses on minimizing the memory footprint of the graph topology description while maximizing parallel efficiency by slicing and compressing the graph into ideal parallel compute and storage units. It also provides kernel APIs to answer CRUD requests on the three data sets just mentioned.

GPE is the parallel engine. It processes UDFs in a bulk synchronous parallel (BSP) fashion. GPE manages the system resources automatically, including memory, cores, and machine partitions if TigerGraph is deployed in a cluster environment (this is supported in the Enterprise Edition). Besides resource management, GPE is mainly responsible for parallel processing of tasks from the task queue. A task could be a custom UDF, a standard UDF or a CRUD operation. GPE synchronizes, schedules and processes all these tasks to satisfy ACID transaction semantics while maximizing query throughput. Details can be found online (https://docs.tigergraph.com/dev/gsql-ref/querying/distributed-query-mode).

Components Not Depicted.  There are other miscellaneous components that are not depicted in the architecture diagram, such as a Kafka message service for buffering query requests and data streams, a control console named GAdmin for the system admin to invoke kernel function calls, backup and restore, etc.

4. GSQL DDL

In addition to the type-free setting, GSQL also supports an SQL-like mode with a strong type system. This may seem surprising, given that prior work on graph query languages traditionally touts schema freedom as a desirable feature (it is no accident that such data was called “semi-structured”). However, this feature was historically motivated by the driving application at the time, namely integrating numerous heterogeneous, third-party-owned, but individually relatively small data sources from the Web.

In contrast, TigerGraph targets enterprise applications, where the number and ownership of sources is not a concern, but their sheer size and resulting performance challenges are. In this setting, vertices and edges that model the same domain-specific entity tend to be uniformly structured and advance knowledge of their type is expected. Failing to exploit it for performance would be a missed opportunity.

GSQL’s Data Definition Language (DDL) shares SQL’s philosophy of defining in the same CREATE statement a persistent data container as well as its type. This is in contrast to typical programming languages which define types as stand-alone entities, separate from their instantiations.

GSQL’s CREATE statements can define vertex containers, edge containers, and graphs consisting of these containers. The attributes for the vertices/edges populating the container are declared using syntax borrowed from SQL’s CREATE TABLE command. In TigerGraph’s model, a graph may contain both directed and undirected edges. The same vertex/edge container may be shared by multiple graphs.

Example 1 (DDL).

Consider the following DDL statements declaring two graphs: LinkedIn and Twitter. Notice how symmetric relationships (such as LinkedIn connections) are modeled as undirected edges, while asymmetric relationships (such as Twitter’s following or posting) correspond to directed edges. Edge types specify the types of source and target vertices, as well as optional edge attributes (see the since and end attributes below).

For vertices, one can declare primary key attributes with the same meaning as in SQL (see the email attribute of Person vertices).

   CREATE VERTEX Person (email STRING PRIMARY KEY, name STRING, dob DATE)
   CREATE VERTEX Tweet (id INT PRIMARY KEY, text STRING, timestamp DATE)
   CREATE DIRECTED EDGE Posts (FROM Person, TO Tweet)
   CREATE UNDIRECTED EDGE Connected (FROM Person, TO Person, since DATE)
   CREATE DIRECTED EDGE Follows (FROM Person, TO Person, since DATE, end DATE)
   CREATE GRAPH LinkedIn (Person, Connected)
   CREATE GRAPH Twitter (Person, Tweet, Posts, Follows)

By default, the primary key of an edge type is the composite key comprising the primary keys of its endpoint vertex types.

Edge Discriminators.  Multiple parallel edges of the same type between the same endpoints are allowed. To distinguish among them, one can declare discriminator attributes which complete the pair of endpoint vertex keys to uniquely identify the edge. This is analogous to the concept of weak entity set discriminators in the Entity-Relationship Model (Chen, 1976). For instance, one could use the dates of employment to discriminate between multiple edges modeling recurring employments of a LinkedIn user at the same company.

   CREATE DIRECTED EDGE Employment (FROM Company, TO Person, start DATE, end DATE)

Reverse Edges.  The GSQL data model includes the concept of edges being inverses of each other, analogous to the notion of inverse relationships from the ODMG ODL standard (Cattell et al., 2000).

Consider a graph of fund transfers between bank accounts, with a directed Debit edge from account A to account B signifying the debiting of A in favor of B (the amount and timestamp would be modeled as edge attributes). The debiting action corresponds to a crediting action in the opposite sense, from B to A. If the application needs to explicitly keep track of both credit and debit vocabulary terms, a natural modeling consists in introducing for each Debit edge a reverse Credit edge for the same endpoints, with both edges sharing the values of the attributes, as in the following example:

   CREATE VERTEX Account (number INT PRIMARY KEY, balance FLOAT, ...)
   CREATE DIRECTED EDGE Debit (FROM Account, TO Account, amount FLOAT, ...)
      WITH REVERSE_EDGE = "Credit"

5. GSQL DML

The guiding principle behind the design of GSQL was to facilitate adoption by SQL programmers while simultaneously flattening the learning curve for novices, especially for adopters of the BSP programming paradigm (Valiant, 2011).

To this end, GSQL’s design starts from SQL, extending its syntax and semantics parsimoniously, i.e. avoiding the introduction of new keywords for concepts that already have an SQL counterpart. We first summarize the key additional primitives before detailing them.

Graph Patterns in the FROM Clause.  GSQL extends SQL’s FROM clause to allow the specification of patterns. Patterns specify constraints on paths in the graph, and they also contain variables, bound to vertices or edges of the matching paths. In the remaining query clauses, these variables are treated just like standard SQL tuple variables.

Accumulators.  The data found along a path matched by a pattern can be collected and aggregated into accumulators. Accumulators support multiple simultaneous aggregations of the same data according to distinct grouping criteria. The aggregation results can be distributed across vertices, to support multi-pass and, in conjunction with loops, even iterative graph algorithms implemented in MPP fashion.

Loops.  GSQL includes control flow primitives, in particular loops, which are essential to support standard iterative graph analytics (e.g. PageRank (Brin et al., 1998), shortest-paths (Gibbons, 1985), weakly connected components (Gibbons, 1985), recommender systems, etc.).

Direction-Aware Regular Path Expressions (DARPEs). 
GSQL’s FROM clause patterns contain path expressions that specify constrained reachability queries in the graph. GSQL path expressions start from the de facto standard of two-way regular path expressions (Calvanese et al., 2000) which is the culmination of a long line of works on graph query languages, including reference languages like WebSQL (Mendelzon et al., 1996), StruQL (Fernandez et al., 1997) and Lorel (Abiteboul et al., 1997). Since two-way regular path expressions were developed for directed graphs, GSQL extends them to support both directed and undirected edges in the same graph. We call the resulting path expressions Direction-Aware Regular Path Expressions (DARPEs).

5.1. Graph Patterns in the FROM Clause

GSQL’s FROM clause extends SQL’s basic FROM clause syntax to also allow atoms of general form

GraphName AS? pattern

where the AS keyword is optional, GraphName is the name of a graph, and ⟨pattern⟩ is a pattern given by a regular path expression with variables.

This is in analogy to standard SQL, in which a FROM clause atom

TableName AS? Alias

specifies a collection (a bag of tuples) to the left of the AS keyword and introduces an alias to the right. This alias can be viewed as a simple pattern that introduces a single tuple variable. In the graph setting, the collection is the graph and the pattern may introduce several variables. We show more complex patterns in Section 5.4 but illustrate first with the following simple-pattern example.

Example 2 (Seamless Querying of Graphs and Relational Tables).

Assume Company ACME maintains a human resource database stored in an RDBMS containing a relational table “Employee”. It also has access to the “LinkedIn” graph from Example 1 containing the professional network of LinkedIn users.

The query in Figure 2 joins relational HR employee data with LinkedIn graph data to find the employees who have made the most LinkedIn connections outside the company since 2016:

    SELECT e.email, e.name, count (outsider)
    FROM   Employee AS e,
           LinkedIn AS Person: p -(Connected: c)- Person: outsider
    WHERE  e.email = p.email and
           outsider.currentCompany NOT LIKE 'ACME' and
           c.since >= 2016
    GROUP BY e.email, e.name
Figure 2. Query for Example 2, Joining Relational Table and Graph

Notice the pattern Person:p -(Connected:c)- Person:outsider to be matched against the “LinkedIn” graph. The pattern variables are “p”, “c” and “outsider”, binding respectively to a “Person” vertex, a “Connected” edge and a “Person” vertex. Once the pattern is matched, its variables can be used just like standard SQL tuple aliases. Notice that neither the WHERE clause nor the SELECT clause syntax discriminate among aliases, regardless of whether they range over tuples, vertices or edges.

The lack of an arrowhead accompanying the edge subpattern -(Connected: c)- requires the matched “Connected” edge to be undirected.

To support cross-graph joins, the FROM clause allows the mention of multiple graphs, analogously to how the standard SQL FROM clause may mention multiple tables.

Example 3 (Cross-Graph and -Table Joins).

Assume we wish to gather information on employees, including how many tweets about their company and how many LinkedIn connections they have. The employee info resides in a relational table “Employee”, the LinkedIn data is in the graph named “LinkedIn” and the tweets are in the graph named “Twitter”. The query is shown in Figure 3. Notice the join across the two graphs and the relational table.

      SELECT e.email, e.name, e.salary, count (other), count (t)
      FROM   Employee AS e,
             LinkedIn AS Person: p -(Connected)- Person: other,
             Twitter AS User: u -(Posts>)- Tweet: t
      WHERE  e.email = p.email and p.email = u.email and
             t.text CONTAINS e.company
      GROUP BY e.email, e.name, e.salary
Figure 3. Query for Example 3, Joining Across Two Graphs

Also notice the arrowhead in the edge subpattern -(Posts>)- ranging over the Twitter graph, which matches only directed edges of type “Posts”, pointing from the “User” vertex to the “Tweet” vertex.

5.2. Accumulators

We next introduce the concept of accumulators, i.e. data containers that store an internal value and take inputs that are aggregated into this internal value using a binary operation. Accumulators support the concise specification of multiple simultaneous aggregations by distinct grouping criteria, and the computation of vertex-stored side effects to support multi-pass and iterative algorithms.

The accumulator abstraction was introduced in the Green-Marl system (Hong et al., 2012) and was adapted as a high-level first-class citizen in GSQL, which distinguishes between two flavors:

  • Vertex accumulators are attached to vertices, with each vertex storing its own local accumulator instance. They are useful in aggregating data encountered during the traversal of path patterns and in storing the result distributed over the visited vertices.

  • Global accumulators have a single instance and are useful in computing global aggregates.

Accumulators are polymorphic, being parameterized by the type V of the internal value, the type I of the inputs, and the binary combiner operation ⊕ : V × I → V.

Accumulators implement two assignment operators. Denoting with Acc.val the internal value of accumulator Acc,

  • Acc = i sets Acc.val to the provided input i;

  • Acc += i aggregates the input i into Acc.val using the combiner, i.e. sets Acc.val to Acc.val ⊕ i.
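As a minimal illustration (a hypothetical fragment of a query body, not drawn from the paper’s examples), the two operators behave as follows for a global accumulator whose combiner is addition:

    SumAccum<int> @@total;   // internal value of type int; the combiner is addition
    @@total = 10;            // "=" overwrites the internal value, which becomes 10
    @@total += 5;            // "+=" aggregates the input via the combiner: 10 + 5 = 15
    @@total += 7;            // 15 + 7 = 22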

For a comprehensive documentation on GSQL accumulators, see the developer’s guide at http://docs.tigergraph.com. Here, we explain accumulators by example.

Example 4 (Multiple Aggregations by Distinct Grouping Criteria).

Consider a graph named “SalesGraph” in which the sale of a product p to a customer c is modeled by a directed “Bought” edge from the “Customer” vertex modeling c to the “Product” vertex modeling p. The number of product units bought, as well as the discount at which they were offered, are recorded as attributes of the edge. The list price of the product is stored as an attribute of the corresponding “Product” vertex.

We wish to simultaneously compute the sales revenue per product from the “toy” category, the toy sales revenue per customer, and the overall total toy sales revenue. (Note that writing this query in standard SQL is cumbersome. It requires performing two GROUP BY operations, one by customer and one by product. Alternatively, one can use the window functions’ OVER (PARTITION BY ...) clause, which can perform the groupings independently, but whose output repeats the customer revenue for each product bought by the customer, and the product revenue for each customer buying the product. Besides yielding an unnecessarily large result, this solution then requires two post-processing SQL queries to separate the two aggregates.)
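For contrast, here is a sketch of the standard-SQL formulation alluded to above, assuming a hypothetical relational encoding with tables Bought(custId, prodId, quantity, percentDiscount) and Product(prodId, name, listPrice, category); it needs one GROUP BY query per grouping criterion (plus a third, ungrouped query for the overall total):

   SELECT   b.prodId, SUM(b.quantity * p.listPrice * (100 - b.percentDiscount)/100.0)
   FROM     Bought b JOIN Product p ON b.prodId = p.prodId
   WHERE    p.category = 'toys'
   GROUP BY b.prodId;

   SELECT   b.custId, SUM(b.quantity * p.listPrice * (100 - b.percentDiscount)/100.0)
   FROM     Bought b JOIN Product p ON b.prodId = p.prodId
   WHERE    p.category = 'toys'
   GROUP BY b.custId;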

We define a vertex accumulator type for each kind of revenue. The revenue for toy product p will be aggregated at the vertex modeling p by the vertex accumulator revenuePerToy, while the revenue for customer c will be aggregated at the vertex modeling c by the vertex accumulator revenuePerCust. The total toy sales revenue will be aggregated in a global accumulator called totalRevenue. With these accumulators, the multi-grouping query is concisely expressible (Figure 4).

          WITH SumAccum<float> @revenuePerToy, @revenuePerCust, @@totalRevenue
          SELECT c
          FROM   SalesGraph AS Customer: c -(Bought>: b)- Product:p
          WHERE  p.category = 'toys'
          ACCUM  float salesPrice = b.quantity * p.listPrice * (100 - b.percentDiscount)/100.0,
                 c.@revenuePerCust += salesPrice,
                 p.@revenuePerToy += salesPrice,
                 @@totalRevenue += salesPrice;
Figure 4. Multi-Aggregating Query for Example 4

Note the definition of the accumulators using the WITH clause in the spirit of standard SQL definitions. Here,
SumAccum<float> denotes the type of accumulators that hold an internal floating point scalar value and aggregate inputs using the floating point addition operation. Accumulator names prefixed by a single @ symbol denote vertex accumulators (one instance per vertex) while accumulator names prefixed by @@ denote a global accumulator (a single shared instance).

Also note the novel ACCUM clause, which specifies the generation of inputs to the accumulators. Its first line introduces a local variable “salesPrice”, whose value depends on attributes found in both the “Bought” edge and the “Product” vertex. This value is aggregated into each accumulator using the “+=” operator. c.@revenuePerCust refers to the vertex accumulator instance located at the vertex denoted by vertex variable c.

Multi-Output SELECT Clause.  GSQL’s accumulators allow the simultaneous specification of multiple aggregations of the same data. To take full advantage of this capability, GSQL complements it with the ability to concisely specify simultaneous outputs into multiple tables for data obtained by the same query body. This can be thought of as evaluating multiple independent SELECT clauses.

Example 5 (Multi-Output SELECT).

While the query in Example 4 outputs the customer vertex ids, in that example we were interested in its side effect of annotating vertices with the aggregated revenue values and of computing the total revenue. If instead we wished to create two tables, one associating customer names with their revenue, and one associating toy names with theirs, we would employ GSQL’s multi-output SELECT clause as follows (preserving the FROM, WHERE and ACCUM clauses of Example 4).

  SELECT c.name, c.@revenuePerCust INTO PerCust;
         p.name, p.@revenuePerToy INTO PerToy

Notice the semicolon, which separates the two simultaneous outputs.

Semantics.  The semantics of GSQL queries can be given in a declarative fashion analogous to SQL semantics: for each distinct match of the FROM clause pattern that satisfies the WHERE clause condition, the ACCUM clause is executed precisely once. After the ACCUM clause executions complete, the multi-output SELECT clause executes each of its semicolon-separated individual fragments independently, as standard SQL clauses. Note that we do not specify the order in which matches are found and consequently the order of ACCUM clause applications. We leave this to the engine implementation to support optimization. The result is well-defined (input-order-invariant) whenever the accumulator’s binary aggregation operation is commutative and associative. This is certainly the case for addition, which is used in Example 4, and for most of GSQL’s built-in accumulators.

Extensible Accumulator Library.  GSQL offers a list of built-in accumulator types. TigerGraph’s experience with the deployment of GSQL has yielded the short list from Section A, that covers most use cases we have encountered in customer engagements. In addition, GSQL allows users to define their own accumulators by implementing a simple C++ interface that declares the binary combiner operation used for aggregation of inputs into the stored value. This leads to an extensible query language, facilitating the development of accumulator libraries.

Accumulator Support for Multi-pass Algorithms.  The scope of the accumulator declaration may cover a sequence of query blocks, in which case the accumulated values computed by a block can be read (and further modified) by subsequent blocks, thus achieving powerful composition effects. These are particularly useful in multi-pass algorithms.

Example 6 (Two-Pass Recommender Query).

Assume we wish to write a simple toy recommendation system for a customer c given as parameter to the query. The recommendations are ranked in the classical manner: each recommended toy’s rank is a weighted sum of the likes by other customers. Each like by another customer o is weighted by the similarity of o to customer c. In this example, similarity is the standard log-cosine similarity (Singhal, 2001), which reflects how many toys two customers like in common: given two customers x and y, their log-cosine similarity is defined as lc(x, y) = log(1 + the number of toys liked by both x and y).

    CREATE QUERY TopKToys (vertex<Customer> c, int k) FOR GRAPH SalesGraph {
      SumAccum<float> @lc, @inCommon, @rank;
      SELECT DISTINCT o INTO OthersWithCommonLikes
      FROM   Customer:c -(Likes>)- Product:t -(<Likes)- Customer:o
      WHERE  o <> c and t.category = 'Toys'
      ACCUM  o.@inCommon += 1
      POST_ACCUM o.@lc = log (1 + o.@inCommon);
      SELECT t.name, t.@rank AS rank INTO Recommended
      FROM   OthersWithCommonLikes:o -(Likes>)- Product:t
      WHERE  t.category = 'Toys' and c <> o
      ACCUM  t.@rank += o.@lc
      ORDER BY t.@rank DESC
      LIMIT  k;
      RETURN Recommended;
    }
Figure 5. Recommender Query for Example 6

The query is shown in Figure 5. The query header declares the name of the query and its parameters (the vertex c of type “Customer”, and the integer k giving the number of desired recommendations). The header also declares the graph for which the query is meant, thus freeing the programmer from repeating the graph name in the FROM clauses. Notice also that the accumulators are not declared in a WITH clause. In such cases, the GSQL convention is that the accumulator scope spans all query blocks. Query TopKToys consists of two blocks.

The first query block computes for each other customer o their log-cosine similarity to customer c, storing it in o’s vertex accumulator @lc. To this end, the ACCUM clause first counts the toys liked in common, by aggregating for each such toy the value 1 into o’s vertex accumulator @inCommon. The POST_ACCUM clause then computes the logarithm and stores it in o’s vertex accumulator @lc.

Next, the second query block computes the rank of each toy t by adding up the similarities of all other customers o who like t. It outputs the top k recommendations into table Recommended, which is returned by the query.

Notice the input-output composition due to the second query block’s FROM clause referring to the set of vertices OthersWithCommonLikes (represented as a single-column table) computed by the first query block. Also notice the side-effect composition due to the second block’s ACCUM clause referring to the @lc vertex accumulators computed by the first block. Finally, notice how the SELECT clause outputs vertex accumulator values (t.@rank) analogously to how it outputs vertex attributes (t.name).

Example 6 introduces the POST_ACCUM clause, which is a convenient way to post-process accumulator values after the ACCUM clause finishes computing their new aggregate value.

5.3. Loops

GSQL includes a while loop primitive, which, when combined with accumulators, supports iterative graph algorithms. We illustrate for the classic PageRank (Brin et al., 1998) algorithm.

Example 7 (PageRank).

Figure 6 shows a GSQL query implementing a simple version of PageRank.

CREATE QUERY PageRank (float maxChange, int maxIteration, float dampingFactor) {
  MaxAccum<float> @@maxDifference;        // max score change in an iteration
  SumAccum<float> @received_score;        // sum of scores received from neighbors
  SumAccum<float> @score = 1;             // initial score for every vertex is 1
  AllV = {Page.*};                        // start with all vertices of type Page
  WHILE @@maxDifference > maxChange LIMIT maxIteration DO
     @@maxDifference = 0;
     S = SELECT      v
         FROM        AllV:v -(LinkTo>)- Page:n
         ACCUM       n.@received_score += v.@score/v.outdegree()
         POST-ACCUM  v.@score = 1-dampingFactor + dampingFactor * v.@received_score,
                     v.@received_score = 0,
                     @@maxDifference += abs(v.@score - v.@score');
  END;
}
Figure 6. PageRank Query for Example 7

Notice the while loop that runs a maximum number of iterations provided as parameter maxIteration. Each vertex is equipped with a @score accumulator that recomputes the rank at each iteration, based on the sum of fractions of the previous-iteration scores of the vertices linking to it (in the FROM clause, v denotes the linking vertex and n its target). v.@score' refers to the value of this accumulator at the previous iteration.

According to the ACCUM clause, at every iteration each vertex v contributes to each of its neighbors n a fraction of v’s current score, namely v’s score divided by v’s outdegree. The score fractions contributed by the neighbors are summed up in the vertex accumulator @received_score.

As per the POST_ACCUM clause, once the sum of score fractions received at a vertex v is computed, it is scaled by dampingFactor and added to the constant 1 - dampingFactor, yielding v’s new score.
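Written as an equation, the update applied to each vertex u in one iteration (with the sum produced by the ACCUM clause, the rescaling by the POST_ACCUM clause, and score'(w) denoting w’s previous-iteration score) is:

   score(u) = (1 - dampingFactor)
            + dampingFactor * SUM over all w with w -(LinkTo>)- u of ( score'(w) / outdegree(w) )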

The loop terminates early if the maximum difference over all vertices between the previous iteration’s score (accessible as v.@score') and the new score (now available in v.@score) is within a threshold given by parameter maxChange. This maximum is computed in the @@maxDifference global accumulator, which receives as inputs the absolute differences computed by the POST_ACCUM clause instantiations for every value of vertex variable v.

5.4. DARPEs

We follow the tradition instituted by a line of classic work on querying graph (a.k.a. semi-structured) data which yielded such reference query languages as WebSQL (Mendelzon et al., 1996), StruQL (Fernandez et al., 1997) and Lorel (Abiteboul et al., 1997). Common to all these languages is a primitive that allows the programmer to specify traversals along paths whose structure is constrained by a regular path expression.

Regular path expressions (RPEs) are regular expressions over the alphabet of edge types. They conform to the context-free grammar

   rpe    ::=  '_'  |  EdgeType  |  rpe '.' rpe  |  rpe '|' rpe  |  '(' rpe ')'  |  rpe '*' bounds?
   bounds ::=  N? '..' N?

where EdgeType and N are terminal symbols representing respectively the name of an edge type and a natural number. The wildcard symbol “_” denotes any edge type, “.” denotes the concatenation of its pattern arguments, and “|” their disjunction. The ‘*’ terminal symbol is the standard Kleene star, specifying several (possibly zero or unboundedly many) repetitions of its RPE argument. The optional bounds can specify a minimum and a maximum number of repetitions (to the left and right of the “..” symbol, respectively).

A path p in the graph is said to satisfy an RPE R if the sequence of edge types read from the source vertex of p to the target vertex of p spells out a word in the language accepted by R, when R is interpreted as a standard regular expression over the alphabet of edge type names.

DARPEs.  Since GSQL’s data model allows for the existence of both directed and undirected edges in the same graph, we refine the RPE formalism, proposing Direction-Aware RPEs (DARPEs). These allow one to also specify the orientation of directed edges in the path. To this end, we extend the alphabet to include for each edge type E the symbols

  • E, denoting a hop along an undirected E-edge,

  • E>, denoting a hop along an outgoing E-edge (from source to target vertex), and

  • <E, denoting a hop along an incoming E-edge (from target to source vertex).

Now the notion of satisfaction of a DARPE by a path extends classical RPE satisfaction in the natural way.

DARPEs enable the free mix of edge directions in regular path expressions. For instance, the pattern

   E1> . (E2> | <E3)* . <E4 . E5

matches paths starting with a hop along an outgoing E1-edge, followed by a sequence of zero or more hops along either outgoing E2-edges or incoming E3-edges, next by a hop along an incoming E4-edge, and finally ending in a hop along an undirected E5-edge.

5.4.1. DARPE Semantics

A well-known semantic issue arises from the tension between RPE expressivity and well-definedness. Regarding expressivity, applications need to sometimes specify reachability in the graph via RPEs comprising unbounded (Kleene) repetitions of a path shape (e.g. to find which target users are influenced by source users on Twitter, we seek the paths connecting users directly or indirectly via a sequence of tweets or retweets). Applications also need to compute various aggregate statistics over the graph, many of which are multiplicity-sensitive (e.g. count, sum, average). Therefore, pattern matches must preserve multiplicities, being interpreted under bag semantics. That is, a pattern S:s -(RPE)- T:t should have as many matches of variables s and t to a given pair of vertices as there are distinct paths from s to t satisfying the RPE. In other words, the count of these paths is the multiplicity of the pair (s, t) in the bag of matches.

The two requirements conflict with well-definedness: when the RPE contains Kleene stars, cycles in the graph can yield an infinity of distinct paths satisfying the RPE (one for each number of times the path wraps around the cycle), thus yielding infinite multiplicities in the query output. Consider for example the pattern
Person: p1 -(Knows*)- Person: p2 in a social network with cycles involving the “Knows” edges.

Legal Paths.  Traditional solutions limit the kind of paths that are considered legal, so as to yield a finite number in any graph. Two popular approaches allow only paths with non-repeated vertices/edges. (Gremlin’s (TinkerPop, 2018) default semantics allows all unrestricted paths, and therefore possibly non-terminating graph traversals, but virtually all the documentation and tutorial examples involving unbounded traversal use non-repeated-vertex semantics, by explicitly invoking a built-in simplePath predicate. By default, Cypher (Technologies, 2018) adopts the non-repeated-edge semantics.) However, under these definitions of path legality the evaluation of RPEs is in general notoriously intractable: even checking the existence of legal paths that satisfy the RPE (without counting them) has worst-case NP-hard data complexity, i.e. in the size of the graph (Mendelzon and Wood, 1995; Libkin et al., 2016). As for the process of counting such paths, it is #P-complete. This worst-case complexity does not scale to large graphs.

In contrast, GSQL adopts the all-shortest-paths legality criterion. That is, among the paths from to satisfying a given DARPE, GSQL considers legal all the shortest ones. Checking existence of a shortest path that satisfies a DARPE, and even counting all such shortest paths is tractable (has polynomial data complexity).

For completeness, we recall here also the semantics adopted by the SparQL standard (Group, 2018) (SparQL is the W3C-standardized query language for RDF graphs): SparQL regular path expressions that are Kleene-starred are interpreted as boolean tests for whether such a path exists, without counting the paths connecting a pair of endpoints. This yields a multiplicity of 1 on the pair of path endpoints, which does not align with our goal of maintaining bag semantics for aggregation purposes.

Example 8 (Contrasting Path Semantics).
Figure 7. Graph for Example 8

To contrast the various path legality flavors, consider the graph in Figure 7, assuming that all edges are typed “E”. Among all paths from source vertex 1 to target vertex 5 that satisfy a Kleene-starred DARPE over E-edges (allowing paths of arbitrary length), there are

  • Infinitely many unrestricted paths, depending on how many times they wrap around the 3-7-8-3 cycle;

  • Three non-repeated-vertex paths (1-2-3-4-5, 1-2-6-4-5, and 1-2-9-10-11-12-4-5);

  • Four non-repeated-edge paths (1-2-3-4-5, 1-2-6-4-5, 1-2-9-10-11-12-4-5, and 1-2-3-7-8-3-4-5);

  • Two shortest paths (1-2-3-4-5 and 1-2-6-4-5).

Therefore, the pattern will return the binding of its endpoint variables to vertices 1 and 5 with multiplicity 3, 4, or 2 under the non-repeated-vertex, non-repeated-edge, and shortest-path legality criteria, respectively. In addition, under SparQL semantics, the multiplicity is 1.

Figure 8. Graph for Example 8

While in this example the shortest paths are a subset of the non-repeated-vertex paths, which in turn are a subset of the non-repeated-edge paths, this inclusion does not hold in general, and the different classes are incomparable. Consider the graph from Figure 8 and a pattern whose DARPE is satisfied by the path 1-2-3-5-6-2-3-4 (which repeats vertices 2 and 3, as well as the edge from 2 to 3) but by no shorter path between the same endpoints: the pattern does not match any path from vertex 1 to vertex 4 under non-repeated-vertex or non-repeated-edge semantics, while it does match this path under shortest-path semantics.

5.5. Updates

GSQL supports vertex-, edge- as well as attribute-level modifications (insertions, deletions and updates), with a syntax inspired by SQL (detailed in the online documentation at https://docs.tigergraph.com/dev/gsql-ref).

6. GSQL Evaluation Complexity

Due to its versatile control flow and user-defined accumulators, GSQL is evidently a Turing-complete language and therefore it does not make sense to discuss query evaluation complexity for the unrestricted language.

However, there are natural restrictions for which such considerations are meaningful, also allowing comparisons to other languages: if we rule out loops, accumulators and negation, but allow DARPEs, we are left essentially with a language that corresponds to the class of aggregating conjunctive two-way regular path queries (Wood, 2012).

The evaluation of queries in this restricted GSQL language fragment has polynomial data complexity. This is due to the shortest-path semantics and to the fact that, in this GSQL fragment, one need not enumerate the shortest paths as it suffices to count them in order to maintain the multiplicities of bindings. While enumerating paths (even only shortest) may lead to exponential result size in the size of the input graph, counting shortest paths is tractable (i.e. has polynomial data complexity).

Alternate Designs Causing Intractability.  If we change the path legality criterion (to the defaults from languages such as Gremlin and Cypher), tractability is no longer given. An additional obstacle to tractability is the primitive of binding a variable to the entire path (supported in both Gremlin and Cypher but not in GSQL), which may lead to results of size exponential in the input graph size, even under shortest-paths semantics.

Emulating the Intractable (and Any Other) Path Legality Variants.  GSQL accumulators and loops constitute a powerful combination that can implement both flavors of the intractable path semantics (non-repeated vertex or edge legality), and beyond, as syntactic sugar. That is, without requiring further extensions to GSQL.

To this end, the programmer can split the pattern into a sequence of query blocks, one for each single hop in the path, explicitly materializing legal paths in vertex accumulators whose contents is transferred from source to target of the hop. In the process, the paths materialized in the source vertex accumulator are extended with the current hop, and the extensions are stored in the target’s vertex accumulator.

For instance, a query block with the FROM clause pattern S:s -(E.F)- T:t is implemented by a sequence of two blocks, one with FROM pattern S:s-(E)- _:x, followed by one with FROM pattern _:x -(F)- T:t. Similar transformations can eventually translate arbitrarily complex DARPEs into sequences of query blocks whose patterns specify single hops (while loops are needed to implement Kleene repetitions).
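As an illustration, the following hypothetical sketch (with placeholder graph and type names G, S, E, F, T, and omitting the path-materializing accumulators discussed next) decomposes the two-hop pattern into two single-hop query blocks chained through an intermediate table:

    CREATE QUERY TwoHopDecomposed () FOR GRAPH G {
      SELECT x INTO Intermediate
      FROM   S:s -(E)- _:x;              // first hop of the original pattern S:s -(E.F)- T:t
      SELECT t INTO Result
      FROM   Intermediate:x -(F)- T:t;   // second hop, seeded by the first block's output
      RETURN Result;
    }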

For each single-hop block, whenever a pattern S:s -(E)- T:t is matched, the paths leading to the vertex denoted by s are kept in a vertex accumulator, say s.@paths. These paths are extended with an additional hop to the vertex denoted by t, the extensions are filtered by the desired legality criterion, and the qualifying extensions are added to t.@paths.

Note that this translation can be performed automatically, thus allowing programmers to specify the desired evaluation semantics on a per-query basis and even on a per-DARPE basis within the same query. The current GSQL release does not include this automatic translation, but future releases might do so given sufficient customer demand (which is yet to materialize). Absent such demand, we believe that there is value in making programmers work harder (and therefore think harder whether they really want) to specify intractable semantics.

7. BSP Interpretation of GSQL

GSQL is a high-level language that admits a semantics that is compatible with SQL and hence declarative and agnostic of the computation model. This presentation appeals to SQL experts as well as to novice users who wish to abstract away computation model details.

However, GSQL was designed from the get-go for full compatibility with the classical programming paradigms for BSP programming embraced by developers of graph analytics and more generally, NoSQL Map/Reduce jobs. We give here alternate interpretations to GSQL, which allow developers to preserve their Map/Reduce or graph BSP mentality while gaining the benefits of high-level specification when writing GSQL queries.

7.1. Think-Like-A-Vertex/Edge Interpretation

The TigerGraph graph can be viewed both as a data model and a computation model. As a computation model, the programming abstraction offered by TigerGraph integrates and extends the two classical graph programming paradigms of think-like-a-vertex (a la Pregel (Malewicz et al., 2010), GraphLab (Low et al., 2012) and Giraph (Giraph, [n. d.])) and think-like-an-edge (PowerGraph (Gonzalez et al., 2012)). A key difference between TigerGraph’s computation model and that of the above-mentioned systems is that these allow the programmer to specify only one function that uniformly aggregates messages/inputs received at a vertex, while in GSQL one can declare an unbounded number of accumulators of different types.

Conceptually, each vertex or edge acts as a parallel unit of storage and computation simultaneously, with the graph constituting a massively parallel computational mesh. Each vertex and edge can be associated with a compute function, specified by the ACCUM clause (vertex compute functions can also be specified by the POST_ACCUM clause).

The compute function is instantiated for each edge/vertex, and the instantiations execute in parallel, generating accumulator inputs (acc-inputs for short). We refer to this computation as the acc-input generation phase. During this phase, the compute function instantiations work under snapshot semantics: the effect of incorporating generated inputs into accumulator values is not visible to the individual executions until all of them finish. This guarantees that all function instantiations can execute in parallel, starting from the same accumulator snapshot, without interference as there are no dependencies between them.

Once the acc-input generation phase completes, the aggregation phase ensues. Here, inputs are aggregated into their accumulators using the appropriate binary combiners. Note that vertex accumulators located at different vertices can work in parallel, without interfering with each other.

This semantics requires synchronous execution, with the aggregation phase waiting at a synchronization barrier until the acc-input generation phase completes (which in turn waits at a synchronization barrier until the preceding query block’s aggregation phase completes, if such a block exists). It is therefore an instantiation of the Bulk-Synchronous-Parallel processing model (Valiant, 2011).

7.2. Map/Reduce Interpretation

GSQL also admits an interpretation that appeals to NoSQL practitioners who are not specialized on graphs and are trained on the classical Map/Reduce paradigm.

This interpretation is more evident on a normal form of GSQL queries, in which the FROM clause contains a single pattern that specifies a one-hop DARPE: S:s -(E:e)- T:t, with E a single edge type or a disjunction thereof. It is easy to see that all GSQL queries can be normalized in this way (though such normalization may involve the introduction of accumulators to implement the flow of data along multi-hop traversals, and of loops to implement Kleene repetitions). Once normalized, all GSQL queries can be interpreted as follows.

In this interpretation, the FROM, WHERE and ACCUM clauses together specify an edge map (EM) function, which is mapped over all edges in the graph. For those edges that satisfy the FROM clause pattern and the WHERE condition, the EM function evaluates the ACCUM clause and outputs a set of key-value pairs, where the key identifies an accumulator and the value an acc-input. The sending to accumulator A of all inputs destined for A corresponds to the shuffle phase in the classical map/reduce paradigm, with A playing the role of a reducer. Since for vertex accumulators the aggregation happens at the vertices, we refer to this phase as the vertex reduce (VR) function.

Each GSQL query block thus specifies a variation of Map/Reduce jobs which we call EM/VR jobs. As opposed to standard map/reduce jobs, which define a single map/reduce function pair, a GSQL EM/VR job can specify several reducer types at once by defining multiple accumulators.

8. Conclusions

TigerGraph’s design is pervaded by MPP-awareness in all aspects, ranging from (i) the native storage with its customized partitioning, compression and layout schemes to (ii) the execution engine architected as a multi-server cluster that minimizes cross-boundary communication and exploits multi-threading within each server, to (iii) the low-level graph BSP programming abstractions and to (iv) the high-level query language admitting BSP interpretations.

This design yields noteworthy scale-up and scale-out performance whose experimental evaluation is reported in Appendix B for a benchmark that we made available on GitHub for full reproducibility (https://github.com/tigergraph/ecosys/tree/benchmark/benchmark/tigergraph). We used this benchmark also for comparison with other leading graph database systems
(ArangoDB (ArangoDB, 2018), Azure CosmosDB (Microsoft, 2018), JanusGraph (Foundation, 2018), Neo4j (Technologies, 2018), Amazon Neptune (Amazon, [n. d.])), against which TigerGraph compares favorably, as detailed in an online report (https://www.tigergraph.com/benchmark/). The scripts for these systems are also published on GitHub (https://github.com/tigergraph/ecosys/tree/benchmark/benchmark).

GSQL represents a sweet spot in the trade-off between abstraction level and expressivity: it is sufficiently high-level to allow declarative SQL-style programming, yet sufficiently expressive to specify sophisticated iterative graph algorithms and configurable DARPE semantics. These are traditionally coded in general-purpose languages like C++ and Java and available only as built-in library functions in other graph query languages such as Gremlin and Cypher, with the drawback that advanced programming expertise is required for customization.

The GSQL query language shows that the choice between declarative SQL-style and NoSQL-style programming over graph data is a false choice, as the two are eminently compatible. GSQL also shows a way to unify the querying of relational tabular and graph data.

GSQL is still evolving, in response to our experience with customer deployment. We are also responding to the experiences of the graph developer community at large, as TigerGraph is a participant in current ANSI standardization working groups for graph query languages and graph query extensions for SQL.


  • Abiteboul et al. (1997) Serge Abiteboul, Dallan Quass, Jason McHugh, Jennifer Widom, and Janet Wiener. 1997. The Lorel Query Language for Semistructured Data. Int. J. on Digital Libraries 1, 1 (1997), 68–88.
  • Amazon ([n. d.]) Amazon. [n. d.]. Amazon Neptune. https://aws.amazon.com/neptune/.
  • Angles et al. (2018) Renzo Angles, Marcelo Arenas, Pablo Barceló, Peter A. Boncz, George H. L. Fletcher, Claudio Gutierrez, Tobias Lindaaker, Marcus Paradies, Stefan Plantikow, Juan F. Sequeda, Oskar van Rest, and Hannes Voigt. 2018. G-CORE: A Core for Future Graph Query Languages. In Proceedings of the 2018 International Conference on Management of Data, SIGMOD Conference 2018, Houston, TX, USA, June 10-15, 2018. 1421–1432. https://doi.org/10.1145/3183713.3190654
  • ArangoDB (2018) ArangoDB. 2018. ArangoDB. https://www.arangodb.com/.
  • Benchmark (2018) The Graph Database Benchmark. 2018. TigerGraph. https://github.com/tigergraph/ecosys/tree/benchmark/benchmark/tigergraph.
  • Brin et al. (1998) Sergey Brin, Rajeev Motwani, Lawrence Page, and Terry Winograd. 1998. What can you do with a Web in your Pocket? IEEE Data Eng. Bull. 21, 2 (1998), 37–47. http://sites.computer.org/debull/98june/webbase.ps
  • Calvanese et al. (2000) Diego Calvanese, Giuseppe De Giacomo, Maurizio Lenzerini, and Moshe Y. Vardi. 2000. Containment of Conjunctive Regular Path Queries with Inverse. In KR 2000, Principles of Knowledge Representation and Reasoning Proceedings of the Seventh International Conference, Breckenridge, Colorado, USA, April 11-15, 2000. 176–185.
  • Cattell et al. (2000) R. G.G. Cattell, Douglas K. Barry, Mark Berler, Jeff Eastman, David Jordan, Craig Russell, Olaf Schadow, Torsten Stanienda, and Fernando Velez (Eds.). 2000. The Object Data Management Standard: ODMG 3.0. Morgan Kaufmann.
  • Chen (1976) Peter Chen. 1976. The Entity-Relationship Model - Toward a Unified View of Data. ACM Transactions on Database Systems 1, 1 (March 1976), 9–36.
  • Enterprise (2018) DataStax Enterprise. 2018. DataStax. https://www.datastax.com/.
  • Fernandez et al. (1997) Mary F. Fernandez, Daniela Florescu, Alon Y. Levy, and Dan Suciu. 1997. A Query Language for a Web-Site Management System. ACM SIGMOD Record 26, 3 (1997), 4–11.
  • Foundation (2018) The Linux Foundation. 2018. JanusGraph. http://janusgraph.org/.
  • Gibbons (1985) A. Gibbons. 1985. Algorithmic Graph Theory. Cambridge University Press.
  • Giraph ([n. d.]) Apache Giraph. [n. d.]. Apache Giraph. https://giraph.apache.org/.
  • Gonzalez et al. (2012) Joseph E. Gonzalez, Yucheng Low, Haijie Gu, Danny Bickson, and Carlos Guestrin. 2012. PowerGraph: Distributed Graph-Parallel Computation on Natural Graphs. In USENIX OSDI.
  • Group (2018) W3C SparQL Working Group. 2018. SparQL.
  • Hong et al. (2012) Sungpack Hong, Hassan Chafi, Eric Sedlar, and Kunle Olukotun. 2012. Green-Marl: a DSL for easy and efficient graph analysis. In Proceedings of the 17th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2012, London, UK, March 3-7, 2012. 349–362. https://doi.org/10.1145/2150976.2151013
  • Hopcroft et al. (2003) John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. 2003. Introduction to automata theory, languages, and computation - international edition (2. ed). Addison-Wesley.
  • IBM ([n. d.]) IBM. [n. d.]. Compose for JanusGraph. https://www.ibm.com/cloud/compose/janusgraph.
  • Libkin et al. (2016) Leonid Libkin, Wim Martens, and Domagoj Vrgoc. 2016. Querying Graphs with Data. J. ACM 63, 2 (2016), 14:1–14:53. https://doi.org/10.1145/2850413
  • Low et al. (2012) Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, and Joseph M. Hellerstein. 2012. Distributed GraphLab: A Framework for Machine Learning in the Cloud. PVLDB 5, 8 (2012), 716–727.
  • Malewicz et al. (2010) Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, and Grzegorz Czajkowski. 2010. Pregel: A System for Large-Scale Graph Processing. In SIGMOD’10.
  • Mendelzon et al. (1996) Alberto O. Mendelzon, George A. Mihaila, and Tova Milo. 1996. Querying the World Wide Web. In PDIS. 80–91.
  • Mendelzon and Wood (1995) A. O. Mendelzon and P. T. Wood. 1995. Finding regular simple paths in graph databases. SIAM J. Comput. 24, 6 (December 1995), 1235–1258.
  • Microsoft (2018) Microsoft. 2018. Azure Cosmos DB. https://azure.microsoft.com/en-us/services/cosmos-db/.
  • Roth and Horn (1993) Mark A. Roth and Scott J. Van Horn. 1993. Database compression. ACM SIGMOD Record 22, 3 (Sept 1993), 31–39.
  • Singhal (2001) Amit Singhal. 2001. Modern Information Retrieval: A Brief Overview. IEEE Data Eng. Bull. 24, 4 (2001), 35–43. http://sites.computer.org/debull/A01DEC-CD.pdf
  • Technologies (2018) Neo Technologies. 2018. Neo4j. https://www.neo4j.com/.
  • TinkerPop (2018) Apache TinkerPop. 2018. The Gremlin Graph Traversal Machine and Language. https://tinkerpop.apache.org/gremlin.html.
  • Valiant (2011) Leslie G. Valiant. 2011. A bridging model for multi-core computing. J. Comput. Syst. Sci. 77, 1 (2011), 154–166. https://doi.org/10.1016/j.jcss.2010.06.012
  • Wood (2012) Peter Wood. 2012. Query Languages for Graph Databases. ACM SIGMOD Record 41, 1 (March 2012).

Appendix A GSQL’s Main Built-In Accumulator Types

GSQL comes with a list of pre-defined accumulators, some of which we detail here. For details on GSQL’s accumulators and the additional supported types, see the online documentation at https://docs.tigergraph.com/dev/gsql-ref.

SumAccum<N>, where N is a numeric type. This accumulator holds an internal value of type N, accepts inputs of type N and aggregates them into the internal value using addition.

MinAccum<O>, where O is an ordered type. It computes the minimum value of its inputs of type O.

MaxAccum<O>, as above, swapping max for min aggregation.

AvgAccum<N>, where N is a numeric type. This accumulator computes the average of its inputs of type N. It is implemented in an order-invariant way by internally maintaining both the sum and the count of the inputs seen so far.

OrAccum, which aggregates its boolean inputs using logical disjunction.

AndAccum, which aggregates its boolean inputs using logical conjunction.

MapAccum<K,V> stores an internal value of map type, where K is the type of keys and V the type of values. V can itself be an accumulator type, thus specifying how to aggregate values mapped to the same key.

HeapAccum<T>(capacity, field_1 [ASC|DESC], field_2 [ASC|DESC], …, field_n [ASC|DESC]) implements a priority queue where T is a tuple type whose fields include field_1 through field_n, each of ordered type, capacity is the integer size of the priority queue, and the remaining arguments specify a lexicographic order for sorting the tuples in the priority queue (each field may be used in ascending or descending order).
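The following sketch shows how several of these accumulator types might be declared and updated within one query block. The graph, vertex and edge types (Social, Person, Knows) and the attributes (age, city) are hypothetical and serve only to illustrate the syntax.

  CREATE QUERY acc_demo(VERTEX<Person> p) FOR GRAPH Social {
    SumAccum<INT> @@edgeCnt;                   // global: number of traversed edges
    MinAccum<INT> @minNbrAge;                  // per-vertex: youngest neighbor age seen
    MapAccum<STRING, SumAccum<INT>> @@byCity;  // neighbors counted per city

    Start = {p};
    Nbrs  = SELECT t
            FROM Start:s -(Knows:e)- Person:t
            ACCUM @@edgeCnt += 1,
                  s.@minNbrAge += t.age,
                  @@byCity += (t.city -> 1);

    PRINT @@edgeCnt, @@byCity;
  }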

Appendix B Benchmark

To evaluate TigerGraph experimentally, we designed a benchmark for MPP graph databases that allows us to explore the multi-core/single-machine setting as well as the scale-out performance for compute and storage obtained from a cluster of machines.

In this section, we examine the data loading, query performance, and their associated resource usage for TigerGraph. For results on running the same experiments for other graph database systems, see (Benchmark, 2018).

The experiments test the performance of data loading and querying.

Data Loading.  To study loading performance, we measure

  • Loading time;

  • Storage size of loaded data; and

  • Peak CPU and memory usage on data loading.

The data loading test will help us understand the loading speed, TigerGraph’s data storage compression effect, and the loading resource requirements.

Querying.  We study performance for the following representative queries over the schema available on GitHub.

  • Q1. K-hop neighbor count (code available in the GitHub benchmark repository (Benchmark, 2018)). We measure the query response time and throughput.

  • Q2. Weakly-Connected Components (code in the same repository). We measure the query response time.

  • Q3. PageRank (code in the same repository). We measure the query response time for 10 iterations.

  • Peak CPU and memory usage for each of the above queries.

The query test will reveal the performance of the interactive query workload (Q1) and the analytic query workload (Q2 and Q3). All queries are first benchmarked on a single EC2 machine; Q1 is then used to test TigerGraph’s scale-up capability across EC2 R4 family instances, and Q3 to test its scale-out capability on clusters of various sizes.

With these tests, we were able to show the following properties of TigerGraph.

  • High loading performance: >100 GB/machine/hour.

  • Deep-link traversal capability: >10-hop traversals on a billion-edge real social graph.

  • Linear scale-up query performance: the query throughput linearly increases with the number of cores on a single machine.

  • Linear scale-out query performance: the analytic query response time linearly decreases with the number of machines.

  • Low and constant memory consumption and consistently high CPU utilization.

b.1. Experimental Setup

Name       Description                                                                        Vertices   Edges
graph500   Synthetic Kronecker graph (http://graph500.org)                                    2.4M       67M
twitter    Twitter user-follower directed graph (http://an.kaist.ac.kr/traces/WWW2010.html)   41.6M      1.47B
Table 1. Datasets
Instance     vCPUs   Memory   Storage   OS
r4.2xlarge   8       61G      200G      Ubuntu 14.04
r4.4xlarge   16      122G     200G      Ubuntu 14.04
r4.8xlarge   32      244G     200G      Ubuntu 14.04
Table 2. Cloud hardware and OS

Datasets. The experiments use the two data sets described in Table 1: one synthetic and one real. For each graph, the raw data are formatted as a single tab-separated edge list. Each row contains two columns, representing the source and the target vertex id, respectively. Vertices do not have attributes, so there is no need for a separate vertex list.

Software. For the single-machine experiment, we used the freely available TigerGraph Developer Edition 2.1.4. For the multi-machine experiment, we used TigerGraph Enterprise Edition 2.1.6. All queries are written in GSQL. The graph database engine is written in C++ and can be run on Linux/Unix or container environments. For resource profiling, we used Tsar (https://github.com/alibaba/tsar).

Hardware. We ran the single-machine experiment on an Amazon EC2 r4.8xlarge instance type. For the single machine scale-up experiment, we used r4.2xlarge, r4.4xlarge and r4.8xlarge instances. The multi-machine experiment used r4.2xlarge instances to form different-sized clusters.

b.2. Data Loading Test

Methodology. For both data sets, we used GSQL DDL to create a graph schema containing one vertex type and one edge type. The edge type is a directed edge connecting the vertex type to itself. A declarative loading job was written in the GSQL loading language. There is no pre-processing (e.g., extracting unique vertex ids) or post-processing (e.g., index building).
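As a rough sketch of what such a schema and loading job look like (the vertex, edge, graph and job names below are ours; the exact benchmark scripts are published on GitHub (Benchmark, 2018)):

  CREATE VERTEX MyNode (PRIMARY_ID id UINT)
  CREATE DIRECTED EDGE MyLink (FROM MyNode, TO MyNode)
  CREATE GRAPH MyGraph (MyNode, MyLink)

  CREATE LOADING JOB load_edges FOR GRAPH MyGraph {
    DEFINE FILENAME f;
    // Each input line holds a tab-separated (source id, target id) pair.
    LOAD f
      TO VERTEX MyNode VALUES ($0),
      TO VERTEX MyNode VALUES ($1),
      TO EDGE MyLink VALUES ($0, $1)
      USING SEPARATOR="\t", HEADER="false";
  }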

Name Raw Size TigerGraph Size Duration
graph500 967M 482M 56s
twitter 24,375M 9,500M 813s
Table 3. Loading Results

Results Discussion. TigerGraph loaded the twitter data at a speed of 100 GB/hour/machine (Table 3). TigerGraph automatically encodes and compresses the raw data to less than half its original size: Twitter (2.57X compression) and graph500 (2X compression). The size of the loaded data is an important consideration for system cost and performance. All else being equal, a compactly stored database can store more data on a given machine and has faster access times because it gets more cache and memory page hits. The measured peak memory usage for loading was 2.5% for graph500 and 7.08% for Twitter, while peak CPU usage was 48.4% for graph500 and 60.3% for Twitter.

b.3. Querying Test

b.3.1. Q1

A. Single-machine k-hop test.  The k-hop-path neighbor count query, which asks for the total count of the vertices reachable from a starting vertex via a k-length simple path, is a good stress test for graph traversal performance.

Methodology.  For each data set, we count all k-hop-path endpoint vertices for 300 fixed random seeds, sequentially. By “fixed random seed” we mean that we make a one-time random selection of N vertices from the graph and save this list as a repeatable input for our tests. We measure the average query response time for k = 1, 2, 3, 6, 9, and 12.
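A minimal GSQL sketch of such a k-hop counting query (assuming a single directed edge type named Follow and a graph named Test; this is not the exact benchmark code, which is published in the repository) could look as follows:

  CREATE QUERY khop_count(VERTEX seed, INT k) FOR GRAPH Test {
    OrAccum @visited;          // marks vertices already reached
    SumAccum<INT> @@reached;   // counts vertices reached over all hops

    Frontier = {seed};
    Frontier = SELECT s FROM Frontier:s ACCUM s.@visited += true;

    FOREACH i IN RANGE[1, k] DO
      Frontier = SELECT t
                 FROM Frontier:s -(Follow>:e)- :t
                 WHERE t.@visited == false
                 ACCUM t.@visited += true;
      @@reached += Frontier.size();
    END;

    PRINT @@reached;
  }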

Results Discussion.  For both data sets, TigerGraph can answer deep-link k-hop queries, as shown in Tables 4 and 5. For graph500, it can answer 12-hop queries in circa 4 seconds. For the billion-edge Twitter graph, it can answer 12-hop queries in under 3 minutes. We note the significance of this accomplishment by pointing out that, starting from 6 hops and above, the average number of k-hop-path neighbors per query is around 35 million. In contrast, we have benchmarked 5 other top commercial-grade property graph databases on the same tests, and all of them started failing on some 3-hop queries, while failing on all 6-hop queries (Benchmark, 2018). Also, note that the peak memory usage stays constant regardless of k. The constant memory footprint enables TigerGraph to follow links to unlimited depth and perform what we term “deep-link analytics”. CPU utilization increases with the hop count. This is expected, as each hop can discover new neighbors at an exponential growth rate, thus leading to more CPU usage.

K-hop   Avg resp. time   Avg neighbor count   CPU   MEM
1 5.95ms 5128 4.3% 3.8%
2 69.94ms 488,723 72.3% 3.8%
3 409.89ms 1,358,948 84.7% 3.7%
6 1.69s 1,524,521 88.1% 3.7%
9 2.98s 1,524,972 87.9% 3.7%
12 4.25s 1,524,300 89.0% 3.7%
Table 4. Graph500 - K-hop Query on r4.8xlarge
K-hop   Avg resp. time   Avg neighbor count   CPU   MEM
1 22.60ms 106,362 9.8% 5.6%
2 442.70ms 3,245,538 78.6% 5.7%
3 6.64s 18,835,570 99.9% 5.8%
6 56.12s 34,949,492 100% 7.6%
9 109.34s 35,016,028 100% 7.7%
12 163.00s 35,016,133 100% 7.7%
Table 5. Twitter - K-hop Query on r4.8xlarge Instance

B. Single-machine k-hop scale-up test.  In this test, we study TigerGraph’s scale-up ability, i.e., the ability to increase performance with an increasing number of cores on a single machine.

Methodology. The same query workload was tested on machines with different numbers of cores: for a given k and data set, 22 client threads keep sending the 300 k-hop queries to a TigerGraph server concurrently. When a query responds, the client thread responsible for that query moves on to send the next unprocessed query. We tested the same k-hop query workloads on three different machines of the R4 family: r4.2xlarge (8 vCPUs), r4.4xlarge (16 vCPUs), and r4.8xlarge (32 vCPUs).

Results Discussion. As shown by Figure 9, TigerGraph’s 3-hop query throughput scales up linearly with the number of cores: the 300 queries on the Twitter data set take 5774.3s on an 8-core machine, 2725.3s on a 16-core machine, and 1416.1s on a 32-core machine. These machines belong to the same EC2 R4 instance family, all having the same memory bandwidth (DDR4 memory; 32K L1, 256K L2, and 46M L3 caches) and the same CPU model (dual-socket Intel Xeon E5 Broadwell processors at 2.3 GHz).

           WCC                       10-iter PageRank
Data       Time     CPU     Mem      Time      CPU     Mem
G500       3.1s     81.0%   2.2%     12.5s     89.5%   2.3%
Twitter    74.1s    100%    7.7%     265.4s    100%    5.6%
Table 6. Analytic Queries

b.3.2. Q2 and Q3

Full-graph queries examine the entire graph and compute results which describe the characteristics of the graph.

Methodology.  We select two full-graph queries, namely weakly connected component labeling and PageRank. A weakly connected component (WCC) is a maximal set of vertices (and their connecting edges) that can reach one another when the direction of directed edges is ignored. The WCC query finds and labels all the WCCs in a graph. This query requires that every vertex and every edge be traversed. PageRank is an iterative algorithm which traverses every edge during every iteration and computes a score for each vertex. After several iterations, the scores converge to steady-state values. For our experiment, we run 10 iterations. Both algorithms are implemented in GSQL.
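For reference, here is a condensed GSQL sketch of the iterative PageRank computation (the edge type Follow and graph name Test are placeholders; the exact benchmark query is in the GitHub repository (Benchmark, 2018)):

  CREATE QUERY pagerank_iter(FLOAT damping, INT iters) FOR GRAPH Test {
    SumAccum<FLOAT> @received;    // rank mass received in the current iteration
    SumAccum<FLOAT> @score = 1;   // current PageRank score, initialized to 1

    V = {ANY};
    FOREACH i IN RANGE[1, iters] DO
      V = SELECT s
          FROM V:s -(Follow>:e)- :t
          ACCUM t.@received += s.@score / s.outdegree()
          POST-ACCUM s.@score = (1 - damping) + damping * s.@received,
                     s.@received = 0;
    END;
  }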

Results Discussion. TigerGraph executes these analytic queries quickly and with a modest memory footprint. Based on Table 6, for the Twitter data set (whose stored size is 9.5G), the peak WCC memory consumption is 7.7% of 244G, which is about 18G, and the peak PageRank memory consumption is 5.6% of 244G, which is about 13.6G. Most of the time, the 32 cores’ total utilization is above 80%, showing that TigerGraph exploits multi-core capabilities efficiently.

b.3.3. Q3 Scale Out

Figure 9. Scale Up 3-hop Query Throughput On Twitter Data By Adding More Cores.
Figure 10. Scale Out 10-iteration PageRank Query Response Time On Twitter Data By Adding More Machines.

All previous tests ran in a single-machine setting. This test looks at how TigerGraph’s performance scales as the number of compute servers increases.

Methodology.  For this test, we used a more economical Amazon EC2 instance type (r4.2xlarge: 8 vCPUs, 61GiB memory, and 200GB attached GP2 SSD). When the Twitter dataset (compressed to 9.5 GB by TigerGraph) is distributed across 1 to 8 machines, 61GiB of memory and 8 cores per machine are more than enough. For larger graphs or for higher query throughput, more cores may help; TigerGraph provides settings to tune memory and compute resource utilization. Also, to run on a cluster, we switched from the TigerGraph Developer Edition (v2.1.4) to the Enterprise Edition (v2.1.6). We used the Twitter dataset and ran the PageRank query for 10 iterations. We repeated this three times and averaged the query times. We repeated the tests for clusters containing 1, 2, 4 and 8 machines. For each cluster, the Twitter graph was partitioned into equally-sized segments across all the machines being used.

Results Discussion.  PageRank is an iterative algorithm which traverses every edge during every iteration. This means there is much communication between the machines, with information being sent from one partition to another. Despite this communication overhead, TigerGraph’s Native MPP Graph database architecture still succeeds in achieving a 6.7x speedup with 8 machines (Figure 10), showing good linear scale-out performance.

b.4. Reproducibility

All of the files needed to reproduce the tests (datasets, queries, scripts, input parameters, result files, and general instructions) are available on GitHub (Benchmark, 2018).

Appendix C Formal Syntax

In the following grammar, terminal symbols (keywords and punctuation) are written literally and are not further defined when their meaning is self-evident; nonterminals are defined with → and alternatives are separated by |.

c.1. Declarations

decl → accType gAccName (= expr)?
     | accType vAccName (= expr)?
     | baseType var (= expr)?
gAccName → @@Id
vAccName → @Id
accType → SetAccum<baseType>
        | BagAccum<baseType>
        | HeapAccum<tupleType>(capacity, (Id dir?)+)
        | OrAccum | AndAccum
        | BitwiseOrAccum | BitwiseAndAccum
        | MaxAccum<orderedType> | MinAccum<orderedType>
        | SumAccum<numType | string>
        | AvgAccum<numType> | ListAccum<type>
        | ArrayAccum<type> dimension+ | MapAccum<baseType, type>
        | GroupByAccum<baseType Id (, baseType Id)*, accType>
baseType → orderedType | boolean
         | datetime | edge (<edgeType>)?
         | tupleType
orderedType → numType | string
            | vertex (<vertexType>)?
numType → int | uint | float | double
capacity → NumConst | paramName
paramName → Id
tupleType → Tuple<baseType Id (, baseType Id)*>
dir → ASC | DESC
dimension → [expr?]
type → baseType | accType

c.2. DARPEs

darpe → edgeType
      | edgeType> | <edgeType
      | darpe* bounds? | (darpe)
      | darpe (. darpe)+ | darpe (| darpe)+
edgeType → Id
bounds → NumConst | NumConst..
       | ..NumConst | NumConst..NumConst

c.3. Patterns

  pathPattern → vTest (: var)?
              | pathPattern -(darpe (: var)?)- vTest (: var)?
  pattern → pathPattern (, pathPattern)*
  vTest → _ | vertexType (| vertexType)*
  var → Id
  vertexType → Id

c.4. Atoms

atom → relAtom | graphAtom
relAtom → tableName AS? var
tableName → Id
graphAtom → (graphName AS?)? pattern
graphName → Id

c.5. FROM Clause

fromClause → FROM atom (, atom)*

c.6. Terms

term → constant
     | var | var.attribName
     | gAccName | gAccName'
     | var.vAccName | var.vAccName'
     | var.type
constant → NumConst | StringConst
         | DateTimeConst | true | false
attribName → Id

c.7. Expressions

  expr → term
       | (expr) | - expr
       | expr arithmOp expr | not expr
       | expr logicalOp expr | expr setOp expr
       | expr between expr and expr | expr not? in expr
       | expr like expr | expr is not? null
       | fnName (exprs?)
       | case (when condition then expr)+ (else expr)? end
       | case expr (when constant then expr)+ (else expr)? end
       | [exprs?] | (exprs?)
       | exprs -> exprs      // MapAccum input
       | expr arrayIndex+    // ArrayAccum access
  arithmOp → * | / | + | - | &
  logicalOp → and | or
  setOp → intersect | union | minus
  exprs → expr (, exprs)*
  arrayIndex → [expr]

c.8. WHERE Clause

  whereClause → WHERE condition
  condition → expr

c.9. ACCUM Clause

  accClause → ACCUM stmts
  stmts → stmt (, stmt)*
  stmt → varAssignStmt
       | vAccUpdateStmt | gAccUpdateStmt
       | forStmt | caseStmt
       | ifStmt | whileStmt
  varAssignStmt → baseType? var = expr
  vAccUpdateStmt → var.vAccName = expr
                 | var.vAccName += expr
  gAccUpdateStmt → gAccName = expr | gAccName += expr
  forStmt → foreach var in expr do stmts end
          | foreach (var (, var)*) in expr do stmts end
          | foreach var in range [expr, expr] do stmts end
  caseStmt → case (when condition then stmts)+ (else stmts)? end
           | case expr (when constant then stmts)+ (else stmts)? end
  ifStmt → if condition then stmts (else stmts)? end
  whileStmt → while condition limit expr do body end
  body → bodyStmt (, bodyStmt)*
  bodyStmt → stmt | continue

c.10. POST_ACCUM Clause

  pAccClause → POST_ACCUM stmts

c.11. SELECT Clause

  selectClause → SELECT outTable (; outTable)*
  outTable → DISTINCT? col (, col)* INTO tableName
  col → expr (AS colName)?
  tableName → Id
  colName → Id

c.12. GROUP BY Clause

  groupByClause → GROUP BY exprs (; exprs)*

c.13. HAVING Clause

  havingClause → HAVING condition (; condition)*

c.14. ORDER BY Clause

  orderByClause → ORDER BY oExprs (; oExprs)*
  oExprs → oExpr (, oExpr)*
  oExpr → expr dir?

c.15. LIMIT Clause

  limitClause → LIMIT expr (; expr)*

c.16. Query Block Statements

  queryBlock → selectClause
               fromClause whereClause?
               accClause? pAccClause?
               groupByClause? havingClause?
               orderByClause? limitClause?

c.17. Query

  query → CREATE QUERY Id (params?)
            (FOR GRAPH graphName)? {
              decl*
              qStmt*
              (RETURN expr)?
            }
  params → param (, param)*
  param → paramType paramName
  paramType → baseType
            | set<baseType> | bag<baseType>
            | map<baseType, baseType>
  qStmt → stmt | queryBlock

Appendix D GSQL Formal Semantics

GSQL expresses queries over standard SQL tables and over graphs in which both directed and undirected edges may coexist, and whose vertices and edges carry data (attribute name-value maps).

The core of a GSQL query is the SELECT-FROM-WHERE block modeled after SQL, with the FROM clause specifying a pattern to be matched against the graphs and the tables. The pattern contains vertex, edge and tuple variables and each match induces a variable binding for them. The WHERE and SELECT clauses treat these variables as in SQL. GSQL supports an additional ACCUM clause that is used to update accumulators.

d.1. The Graph Data Model

In TigerGraph’s data model, graphs allow both directed and undirected edges. Both vertices and edges can carry data (in the form of attribute name-value maps). Both vertices and edges are typed.

Let VID denote a countable set of vertex ids, EID a countable set of edge ids disjoint from VID, Att a countable set of attribute names, VT a countable set of vertex type names, and ET a countable set of edge type names.

Let Dom denote an infinite domain (set of values), which comprises

  • all numeric values,

  • all string values,

  • the boolean constants true and false,

  • all datetime values,

  • the vertex ids in VID,

  • the edge ids in EID,

  • all sets of values,

  • all bags of values,

  • all lists of values, and

  • all maps (sets of key-value pairs) over these.

A graph is a tuple G = (V, E, σ, τ_v, τ_e, δ), where

  • V ⊆ VID is a finite set of vertices.

  • E ⊆ EID is a finite set of edges.

  • σ is a function that associates with an edge e ∈ E its endpoint vertices. If e is directed, σ(e) is a singleton set containing a (source, target) vertex pair. If e is undirected, σ(e) is a set of two pairs, corresponding to both possible orderings of the endpoints.

  • τ_v : V → VT is a function that associates a type name to each vertex.

  • τ_e : E → ET is a function that associates a type name to each edge.

  • δ is a partial function that associates domain values to vertex/edge attributes (identified by the vertex/edge id and the attribute name).

d.2. Contexts

GSQL queries are composed of multiple statements. A statement may refer to intermediate results provided by preceding statements (e.g. global variables, temporary tables, accumulators). In addition, since GSQL queries can be parameterized just like SQL views, statements may refer to parameters whose value is provided by the initial query call.

A statement must therefore be evaluated in a context which provides the values for the names referenced by the statement. We model a context as a map from the names to the values of parameters/global variables/temporary tables/accumulators, etc.

Given a context map χ and a name n, χ(n) denotes the value associated to n in χ. (In the remainder of the presentation we assume that the query has passed all appropriate semantic and type checks, and therefore χ(n) is defined for every name n we use.)

dom(χ) denotes the domain of context χ, i.e. the set of names which have an associated value in χ (we say that these names are defined in χ).

Overriding Context Extension

When a new variable is introduced by a statement operating within a context χ, the context needs to be extended with the new variable’s name and value.

χ[n ↦ v] denotes a new context obtained by modifying a copy of χ to associate name n with value v (overwriting any pre-existing entry for n).

Given contexts χ1 and χ2, we say that χ2 overrides χ1, denoted χ1 ◁ χ2 and defined as:

    (χ1 ◁ χ2)(n) = χ2(n) if n ∈ dom(χ2), and χ1(n) otherwise.

Consistent Contexts

We call two contexts χ1 and χ2 consistent if they agree on every name in the intersection of their domains. That is, for each n ∈ dom(χ1) ∩ dom(χ2), χ1(n) = χ2(n).

Merged Contexts

For consistent contexts χ1 and χ2, we can define χ1 ∪ χ2, which denotes the merged context over the union of the domains of χ1 and χ2:

    (χ1 ∪ χ2)(n) = χ1(n) if n ∈ dom(χ1), and χ2(n) otherwise.

d.3. Accumulators

A GSQL query can declare accumulators whose names come from GAcc, a countable set of global accumulator names, and from VAcc, a disjoint countable set of vertex accumulator names.

Accumulators are data types that store an internal value and take inputs that are aggregated into this internal value using a binary operation. GSQL distinguishes between two accumulator flavors:

  • Vertex accumulators are attached to vertices, with each vertex storing its own local accumulator instance.

  • Global accumulators have a single instance.

Accumulators are polymorphic, being parameterized by the type S of the stored internal value, the type I of the inputs, and the binary combiner operation ⊕ : S × I → S.

Accumulators implement two assignment operators. Denoting with acc.val the internal value of accumulator instance acc,

  • acc = i sets acc.val to the provided input i;

  • acc += i aggregates the input i into acc.val using the combiner, i.e. sets acc.val to acc.val ⊕ i.

Each accumulator instance has a pre-defined default for the internal value.

When an accumulator instance is referenced in a GSQL expression, it evaluates to the internally stored value acc.val. Therefore, the context must associate the internally stored value to the instance of global accumulators (identified by name) and of vertex accumulators (identified by name and vertex).

Specific Accumulator Types

We revisit the accumulator types listed in Section A.

SumAccum<N> is the type of accumulators where the internal value and the inputs have numeric type N (or type string): their default value is 0 (respectively, the empty string), and the combiner operation is arithmetic addition (respectively, string concatenation).

MinAccum<O> is the type of accumulators where the internal value and input have ordered type O (numeric, datetime, string, vertex) and the combiner operation is the binary minimum function. The default values are the (architecture-dependent) minimum numeric value, the default date, the empty string, and undefined, respectively. Analogously for MaxAccum<O>.

AvgAccum<N> stores as internal value a pair consisting of the sum of inputs seen so far, and their count. The combiner adds the input to the running sum and increments the count. The default value is 0.0 (double precision).

AndAccum stores an internal boolean value (default true) and takes boolean inputs, combining them using logical conjunction. Analogously for OrAccum, which defaults to false and uses logical disjunction as combiner.

MapAccum<K,V> stores an internal value that is a map m, where K is the type of m’s keys and V the type of m’s values. V can itself be an accumulator type, specifying how to aggregate values mapped by m to the same key. The default internal value is the empty map. An input is a key-value pair (k, v). The combiner works as follows: if the internal map m does not have an entry involving key k, m is extended to associate k with v. If k is already defined in m and V is not an accumulator type, m is modified to associate k to v, overwriting k’s former entry. If V is an accumulator type, then m(k) is an accumulator instance; in that case m is modified by replacing m(k) with the accumulator obtained by combining v into m(k) using V’s combiner.
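As an illustration (our example): if a MapAccum<string, SumAccum<int>> instance currently holds the map {(“a”, 1)} and receives the inputs (“a”, 2) and (“b”, 5), its internal map becomes {(“a”, 3), (“b”, 5)}: the value for the existing key “a” is combined by the SumAccum combiner (addition), while the new key “b” is simply inserted.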

d.4. Declarations

The semantics of a declaration is a function from contexts to contexts.

When they are created, accumulator instances are initialized by setting their stored internal value to a default that is defined with the accumulator type. Alternatively, they can be initialized by explicitly setting this default value using an assignment:

A declaration of the form accType @A = expr declares a vertex accumulator named @A of type accType, all of whose instances are initialized with the result of evaluating the expression expr in the current context. The effect of this declaration is to create the initialized accumulator instances and extend the current context appropriately. Note that a vertex accumulator instance is identified by the accumulator name and the vertex hosting the instance. We model this by having the context associate the vertex accumulator name with a map from vertices to instances.

Similarly, a declaration of the form accType @@A = expr declares a global accumulator named @@A of type accType, whose single instance is initialized with the result of evaluating expr. This instance is identified in the context simply by the accumulator name.

Finally, global variable declarations (baseType var = expr) also extend the context, associating the variable name with the value of the initializing expression.

d.5. DARPE Semantics

DARPEs specify a set of paths in the graph, formalized as follows.


A path p in graph G is a sequence

    p = v_0, e_1, v_1, e_2, v_2, ..., e_n, v_n

where n ≥ 0 and, for each 1 ≤ i ≤ n,

  • v_i ∈ V (for 0 ≤ i ≤ n), and

  • e_i ∈ E, and

  • v_{i-1} and v_i are the endpoints of edge e_i regardless of e_i’s orientation: (v_{i-1}, v_i) ∈ σ(e_i) or (v_i, v_{i-1}) ∈ σ(e_i).

Path Length

We call n the length of p and denote it with len(p). Note that when len(p) = 0 we have p = v_0.

Path Source and Target

We call v_0 the source and v_n the target of path p, denoted src(p) and tgt(p), respectively. When len(p) = 0, src(p) = tgt(p) = v_0.

Path Hop

For each 1 ≤ i ≤ n, we call the triple (v_{i-1}, e_i, v_i) the i-th hop of path p.

Path Label

We define the label of hop i in p, denoted λ_i(p), as follows:

    λ_i(p) = τ_e(e_i)    if e_i is undirected,
    λ_i(p) = τ_e(e_i)>   if e_i is directed and traversed in its natural direction, i.e. (v_{i-1}, v_i) ∈ σ(e_i),
    λ_i(p) = <τ_e(e_i)   if e_i is directed and traversed against its direction, i.e. (v_i, v_{i-1}) ∈ σ(e_i),

where τ_e(e_i) denotes the type name of edge e_i, τ_e(e_i)> denotes a new symbol obtained by concatenating τ_e(e_i) with >, and analogously for <τ_e(e_i).

We call the label of p, denoted λ(p), the word obtained by concatenating the hop labels of p:

    λ(p) = λ_1(p) λ_2(p) ... λ_n(p),

where, as usual (Hopcroft et al., 2003), the empty concatenation (for a path of length 0) is ε, the empty word.

DARPE Satisfaction

We denote with

    Σ = ET ∪ { t> | t ∈ ET } ∪ { <t | t ∈ ET }

the set of symbols obtained by treating each edge type as a symbol, as well as creating new symbols by concatenating each edge type t with >, as well as < with t.

We say that path p satisfies DARPE D, denoted p ⊨ D, if λ(p) is a word in the language accepted by D, L(D), when D is viewed as a regular expression over the alphabet Σ (Hopcroft et al., 2003):

    p ⊨ D  if and only if  λ(p) ∈ L(D).

We say that DARPE D matches path p (and that p is a match for D) whenever p is a shortest path that satisfies D:

    p is a match for D  if and only if  p ⊨ D and there is no path p' with src(p') = src(p), tgt(p') = tgt(p), p' ⊨ D and len(p') < len(p).

We denote with M_G(D) the set of matches of DARPE D in graph G.
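For example (our illustration), on a graph with directed edge types Follows and Likes, a path that traverses a Follows edge in its natural direction and then a Likes edge against its direction has label Follows> <Likes, and therefore satisfies the DARPE Follows> . <Likes; it is a match for that DARPE if no shorter path between the same endpoints satisfies it.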

d.6. Pattern Semantics

Patterns consist of DARPEs and variables. The former specify a set of paths in the graph; the latter are bound to vertices/edges occurring on these paths.

A pattern P specifies a function from a graph G and a context χ to

  • a set of paths in the graph, each called a match of P, and

  • a family of bindings for P’s variables, one for each match p. Here, β_p denotes the binding induced by match p.

Temporary Tables and Vertex Sets

To formalize pattern semantics, we note that some GSQL query statements may construct temporary tables that can be referred to by subsequent statements. Therefore, among others, the context maps the names to the extents (contents) of temporary tables. Since we can model a set of vertices as a single-column, duplicate-free table containing vertex ids, we refer to such tables as vertex sets.

V-Test Match

Consider a graph G and a context χ. Given a v-test T, a match for T is a vertex v (a path of length 0) such that v belongs to χ(T) (if T is a vertex set name defined in χ), or v is a vertex of type T (otherwise). We denote the set of all matches of T against G in context χ with M_G^χ(T).

Variable Bindings

Given a graph G and a tuple of variables x̄, a binding for x̄ in G is a function β from the variables in x̄ to vertices or edges in G. Notice that a variable binding (binding for short) is a particular kind of context, hence all context-specific definitions and operators apply. In particular, the notion of consistent bindings coincides with that of consistent contexts.

Binding Tables

We refer to a bag of variable bindings as a binding table.

No-hop Path Pattern Match

Given a graph G and a context χ, we say that path p is a match for the no-hop pattern T:v (and we say that p matches T:v) if p ∈ M_G^χ(T). Note that p is a path of length 0, i.e. a vertex u. The match induces a binding of vertex variable v to u, β_p = {v ↦ u}. (Note that both T and v are optional. If T is missing, then it is trivially satisfied by all vertices. If v is missing, then the induced binding is the empty map. These conventions apply for the remainder of the presentation and are not repeated explicitly.)

One-hop Path Pattern Match

Recall that in a one-hop pattern

    S:s -(E:e)- T:t

E is a disjunction of direction-adorned edge types, S and T are v-tests, s and t are vertex variables, and e is an edge variable. We say that a (single-edge) path p = v_0, e_1, v_1 is a match for the pattern (and that p matches it) if v_0 ∈ M_G^χ(S), v_1 ∈ M_G^χ(T), and p ⊨ E. The binding induced by this match, denoted β_p, is {s ↦ v_0, e ↦ e_1, t ↦ v_1}.

Multi-hop Single-DARPE Path Pattern Match

Given a DARPE D, a path p is a match for the multi-hop path pattern S:s -(D)- T:t (and p matches the pattern) if src(p) ∈ M_G^χ(S), tgt(p) ∈ M_G^χ(T), and p ∈ M_G(D). The match induces a binding β_p = {s ↦ src(p), t ↦ tgt(p)}.

Multi-DARPE Path Pattern Match

Given DARPEs D_1, ..., D_n, a match for path pattern