Classic game-theoretic formulations of the control synthesis problem rely on two-player zero-sum games on graphs, where the system is opposed to an antagonistic environment. In this context, the game arena is a graph whose vertices are owned either by one player (the system) or by the antagonist player (the environment). The two players move a pebble along the vertices of the graph, starting from an initial position. Whenever the pebble is on a vertex belonging to one of the players, that player decides where to move the pebble next, according to his strategy. The infinite path followed by the pebble is called a play and represents one possible behavior of the system. The winning objective for the system player (a set of plays) encodes exactly the acceptable behaviors of the system. Therefore, the goal of the system player is to ensure with his strategy—the synthesized controller—that the outcome of the game is an acceptable behavior of the system, whatever the strategy played by his adversary.
Quantitative games, where quantitative objectives are defined on weighted game arenas, provide natural tools for designing faithful models of embedded controllers, since they allow one to handle explicitly the quantitative constraints imposed by the environment, the lack of resources, or the targeted parameters of operability. In the late seventies, traditional game theory developed for economics defined a number of nowadays classic quantitative objectives, such as meanpayoff (MPG) and discounted-payoff [1, 26], that have recently been extensively investigated for the specification and design of reactive systems . In turn, the problem of controller synthesis with resource constraints has inspired new quantitative objectives and quantitative games, such as the so-called energy games in [10, 4, 7] and their variants (see, e.g., [12, 13, 16, 5, 25]). The latter turn out to be of broad interest, having applications in computer-aided synthesis [12, 13, 8], real-time systems [6, 5], as well as economics, due to their connection with meanpayoff and discounted-sum objectives .
In energy games, edges are fitted with integer weights modeling rewards or costs. The objective of player is to maintain the sum of the weights (called the energy level) always positive along the play, given a fixed initial credit of energy. Energy games were introduced in [10, 4], where they were also proven memoryless determined: namely, each vertex is winning either for player or for player , and memoryless strategies suffice. Deciding whether a vertex is winning for player in an energy game is equivalent to the corresponding problem in meanpayoff games, and this equivalence has provided faster pseudo-polynomial algorithms for MPGs [7, 14, 15]. The above decision problems notoriously lie in the complexity class NP ∩ coNP (and even UP ∩ coUP), while finding polynomial-time procedures for them is a long-standing open problem [26, 1]. The minimum credit problem on energy games subsumes the corresponding decision problem and asks the following: to determine, for each vertex of an energy game , whether is winning for player and, if so, the minimum credit needed to stay alive along each play starting from . Such a problem can also be solved in pseudo-polynomial time [26, 7]. Recently, parallel architectures like Graphics Processing Units (GPUs) have been successfully used to accelerate many irregular, low-arithmetic-intensity applications such as graph-traversal-based algorithms, in which the control flow and memory access patterns are data-dependent [9, 3]. Motivated by the large instances that naturally arise from the specification, design, and control of reactive systems, in this work we investigate the use of the massive data-parallel computation capabilities supported by modern GPUs for solving the initial credit problem on energy games. Also, to alleviate workload unbalancing among threads, we propose a suitable data-thread mapping technique which allows us to solve traditional Energy Games instances efficiently.
The contributions of the paper are manifold:
we provide a parallel implementation, exploiting traditional multi-core architectures, of the state-of-the-art initial credit procedure for energy games in ;
we develop a CUDA implementation which relies on a traditional vertex-parallelism approach and on a more suitable variant based on warp-centric parallelism;
we report experimental results where we compare the performance achieved by the above mentioned implementations and a completely sequential one.
After reviewing some minimal preliminary notions (Section 2), the theoretical results on Energy Games (EG) relevant for this paper are recalled in Section 3. Sections 4-5 describe our parallel solutions. The results of the experimentation activity and a comparison between the solvers are outlined in Section 6. We finally discuss related works in Section 7 and draw our conclusion in Section 8.
A weighted graph is a tuple , where is a set of vertices, is a set of edges, and is a weight function assigning an integer weight to each edge. We assume that weighted graphs are total, i.e. for all , there exists such that . Given a set of vertices in a weighted graph , we denote by the set of vertices having a successor in , i.e. , and by the set of successors of vertices in , i.e. .
A (finite) path in is a nonempty sequence of vertices (resp. ) such that for all (resp. ). The length of a finite path is the number of vertices in , denoted . Given a (finite) path and an integer (resp. ), we denote by the prefix of up to and by the vertex . A cycle in is a finite path such that and . We say that a cycle in a weighted graph is negative (resp. nonnegative) if the sum of its edge weights is less than (resp. not less than ). Given , a cycle is said to be reachable from in if there exists a path in such that and . A path is acyclic if for all .
A game arena is a tuple where is a weighted graph and is a partition of into the set of player- vertices and the set of player- vertices. An infinite game on is played for infinitely many rounds by two players moving a pebble along the edges of the game arena . In the first round, the pebble is on some vertex . In each round, if the pebble is on a vertex (), then player chooses an edge and the next round starts with the pebble on . A play in the arena is an infinite path in . An objective for player is a set : the play is said to be winning for player if . In this paper, we restrict our attention to zero-sum games, i.e. games where the two players are antagonists: therefore, the objective of player is . A game is a tuple , where is a game arena and is the objective of player . Given a game , the players play according to strategies to ensure a play that accomplishes their objective.
A strategy for player () is a function , such that for all finite paths with , we have . We denote by () the set of strategies for player . A strategy for player is memoryless if for all sequences and such that . We denote by the set of memoryless strategies of player . A play is consistent with a strategy for player if for all positions such that . Given an initial vertex , the outcome of two strategies and in is the (unique) play that starts in and is consistent with both and . Given a memoryless strategy for player in the game , we denote by the weighted graph obtained by removing from all edges such that and .
3 Energy Games
In this section, we introduce energy games [10, 4, 7], which are the main object of study in the rest of this paper. An energy game is a game over the arena , where the goal of player is to construct an infinite play such that for some initial credit , it holds that:
The quantity in (1) is called the energy level of the play prefix , given the initial credit . Conversely, player aims at building a play such that for any initial credit , there exists a prefix of such that the energy level of is negative. Formally, energy games are defined as follows:
Definition 1 (Energy Games)
An energy game (EG) is a game , where and is given by:
A vertex is winning for player , if there exists an initial credit and a winning strategy for player from for credit . In the sequel, we denote by the set of winning vertices for player .
Energy games are memoryless determined , i.e. for all , either is winning for player , or is winning for player , and memoryless strategies are sufficient. Using the memoryless determinacy of energy games, one can easily prove the following result, characterizing the winning strategies for player in an EG.
Lemma 2 ()
Let be an EG. For all vertices , and for all memoryless strategies for player , the strategy is winning from iff all cycles reachable from in the weighted graph are nonnegative.
Consider the energy game depicted in Figure 1, where round vertices are owned by player , and square vertices are owned by player . The set of winning vertices for player is . In fact, the memoryless strategy for player , where , ensures that any cycle in a play consistent with —against any (memoryless) strategy for player —is nonnegative. Therefore, provided a suitable initial credit of energy, each play consistent with will enjoy a nonnegative energy level along its run. Given , the minimum initial credit that player needs to survive along a play starting from is given by , where . As a further example, if the edge is deleted from the energy game in Figure 1, then player does not have any winning strategy from , but only from the set of vertices with initial credit of energy .
The next definition introduces the initial credit problem.
Definition 4 (Initial Credit Problem)
Given an energy game , the initial credit problem on asks to determine, for each vertex , the following:
if is winning for player , i.e. if .
in case , the minimum initial credit such that there is a winning strategy for player in .
The decision problem for an energy game asks to solve only the first of the two items above, i.e. to partition into . The decision problem on energy games is equivalent to the decision problem on so-called meanpayoff games , games on graphs originally introduced by game theorists within the economic community, where the objective of player is to minimize the long-run average weight of plays. Several algorithms exist to solve the decision problem on meanpayoff games (cf.  for a survey of the available algorithms): indeed, it is worth noticing that the best pseudo-polynomial meanpayoff algorithm is based on its reduction to energy games [7, 14, 15]. Ad-hoc procedures are instead necessary to solve the initial credit problem on energy games, which is specific to energy objectives. The latter problem was solved in  with a pseudo-polynomial procedure having complexity , where (resp. ) is the number of vertices (resp. edges) in the game arena and is the maximum weight labeling an edge. Energy games were algorithmically studied also in , where the authors provide a polynomial algorithm for solving the initial credit problem on EG with special weight structures. In particular, the authors of  show that solving EG where all the cycles are either ‘good’ or significantly ‘bad’ (for instance, graphs where all the negative cycles have weight less than , where is the maximum weight in the graph) can be done in polynomial time.
In the rest of this paper, we will show the design of a CUDA-based parallel EG algorithm based on the procedure for the EG initial credit problem defined in  (which is briefly described in the next subsection). The latter allows us to exploit the computational power offered by modern GPUs.
3.1 Computing the Minimum Initial Credit of Energy on the CPU
In this subsection, we briefly describe the sequential algorithm in  to solve the initial credit problem on energy games. The procedure in  is based on the notion of a so-called energy progress measure, which is recalled in Definition 5 and relies on the following notation.
Let be the total order on , where if and only if either or . Let be the operator such that, for each and :
Roughly speaking, the local conditions on the nodes of an energy game imposed by an energy progress measure (cf. Definition 5) guarantee that the following property on holds: for each node in , if , then player has a strategy to ensure that the energy level along each play compatible with is not negative, provided the initial credit .
Definition 5 (Energy Progress Measure (EPM) )
A function is an energy progress measure for the EG iff the following conditions hold:
if , then for some
if , then for all
For a game , let be the set of functions , and consider the partial order , defined as iff for all , . The authors of  proved that admits a least energy progress measure w.r.t. , satisfying the following properties:
for each node ,
for each node , , where:
Given an EG , the initial energy credit algorithm in  computes exactly the least energy progress measure for :
More precisely, such an algorithm initializes to the constant function and relies on the following -monotone lifting operator to update , until a least fixpoint is reached.
Definition 6 (Lifting Operator )
Given , the lifting operator is defined by where:
4 An OpenMP implementation
In order to obtain an immediate way to parallelize the computation of the minimum initial credit on EG , let us observe that each application of the lift operation in Definition 6 never decreases the value of for any vertex . Hence, processing all elements of in parallel is a sound procedure. Moreover, as motivated in the previous subsection, a bounded number of lift operations suffices to determine a solution. Consequently, a simple way to parallelize the computation of the EPM consists in applying the lift operation in parallel to each vertex of the graph and iterating this step until either a fixpoint or the theoretical bound on the number of iterations is reached. Algorithm 2 presents the skeleton of the resulting algorithm, implemented exploiting OpenMP. In particular, the loops starting at lines 1 (performing the initialization of ) and 8 (performing the lift step), respectively, are executed in parallel by distributing the computation among the available OpenMP threads. The while-loop in lines 5–10 iterates until a termination condition is met. We experimented with this implementation using 1, 2, 4, and 8 threads, always mapped to different CPUs (see Section 6).
5 A CUDA-Based Solver
This section describes the main design choices made in implementing a CUDA-based parallel solution to the EG initial credit problem. As regards data structures, the adjacency matrix of the input EG is represented in device memory by exploiting the standard Compressed Sparse Row (CSR) format, usually employed to store sparse matrices. The progress measure to be computed is stored in an array of elements.
By analyzing the pseudo-code of the sequential Algorithm 1, one plainly identifies tasks that could be executed in parallel. The simplest is the initialization of the set of active nodes (those whose least energy progress measure needs to be lifted) and the initialization of the least progress measure (lines 1-3 in Algorithm 3). The set is represented by an array of (at most) elements. A specific kernel function has been defined for its initialization. In particular, a 1-to-1 mapping (vertex-parallelism ) assigns each node to one thread. Each thread determines whether the corresponding node has to be inserted in or not (line 3).
The core of the sequential algorithm is in lines 7–17, where elements are extracted from , one at a time, and their progress measures are lifted. As mentioned before, processing all elements of in parallel is a sound procedure. Therefore, a specific kernel function has been designed to compute, in parallel, the new value of for each in . In doing this, all elements of the set have to be considered. To better exploit the massive parallelism supported by GPUs, each node in is assigned to a set of threads of the same warp. Such threads process in parallel all elements of and jointly compute the value of . Information between such threads is exchanged through warp-shuffle operations, which are enabled because all the threads involved always belong to the same warp. Acting in this manner reduces the number of accesses to global and shared memory, which, in turn, speeds up the overall computation. Indeed, as mentioned earlier, by means of shuffle operations data are moved directly between the threads' registers instead of being communicated through global/shared memory operations. The value can be heuristically chosen as (a fraction of) the average degree of the nodes in .
Thanks to the use of the CSR format, all members of are stored in consecutive locations of the device memory. This optimizes the time needed by the threads to access the initially needed data. The first of the threads stores the new value of , once the interaction between the threads is completed.
Consider now that, by Definition 6, the computation of the lifting operator involves the evaluation of either a operation or a operation over a set of values, depending on the player controlling the active node. A further optimization is applied in order to minimize divergence between threads of the same warp. The set is sorted so that all nodes in (resp. ) correspond to consecutive rows of the adjacency matrix of the EG. Consequently, in all warps (but at most one) all threads always execute the same sequence of instructions, namely, all of them compute the (resp. ) operation.
Once the progress measure of a node has been updated, the set of its predecessors has to be considered in order to compute the new set of active nodes. This task is also performed in parallel, by splitting the workload among the threads. The set of threads that computed processes each node in and determines whether it has to be inserted in . Notice that in this phase of the computation the same element might be inserted in the new for different reasons, since it may be a predecessor of several processed active nodes. Repeated insertion of the same element in is avoided by marking each inserted node (a suitable vector of flags is used for this purpose).
Similarly to what is done in computing , in order to optimize the access patterns used to retrieve the needed data, the elements of are stored in consecutive memory locations. This is achieved by adopting a redundant representation of : more specifically, the adjacency matrix of the EG is also represented in device memory using the Compressed Sparse Column (CSC) format. This representation is easily computed by transposing the corresponding CSR representation, through standard functions provided by the cuSPARSE library.
With the differences described so far, the overall structure of the resulting CUDA implementation essentially reflects that of the sequential Algorithm 1. The computation starts on the CPU by reading and parsing a text file specifying the input arena. The EG is then transferred to the device memory and a conversion from CSR to CSC is executed by the device. Then the CPU controls the computation by calling the device functions described earlier. First the initialization of the data is performed. Then, the device function which improves the progress measure is repeatedly called until an empty set of active nodes is obtained (this corresponds to the while-loop in Algorithm 1). We experimented with different choices for the value of (the case clearly corresponds to vertex-parallelism, while for we have warp-centric parallelism). Finally, the result is transferred back to the host memory and output.
6 Experimental Results
Numerical experiments have been performed on a server equipped with an Intel Xeon E5-2640 v3 and four Nvidia K80 GPUs. The code has been generated using the GNU C compiler version 4.8.2 and the CUDA C compiler version 7.5.
A sequential solver, named “CPU EG1” and following the pseudo-code listed in Algorithm 1, has been implemented in C. In order to allow a fair comparison with the GPU-based solution, the very same graph representation (using both the CSR and CSC formats) employed in the CUDA implementation has been adopted in the sequential solver. We denote by “CPU EG2”, “CPU EG4”, and “CPU EG8” the codes that implement Algorithm 2 with 2, 4, and 8 threads, respectively.
The codenames “GPU-v” and “GPU-w” denote the codes implemented for GPUs based on vertex parallelism and warp parallelism, respectively. As a source for our benchmarks, we consider the suite of game arenas in . In particular,  provides a large database of games (over instances) that originate from different verification problems and are notable, for experimental purposes, in terms of their diversity and application coverage. Table 1 provides references to the filenames of the exact instances used in the experimentation, as well as a succinct description of the characteristics of the graphs. Such instances encode equivalence checking (-) and model-checking (-) problems into qualitative games with parity objectives . Standard conversions from qualitative games to quantitative games with meanpayoff and energy objectives  have been used in  to generate the final data set.
In the present section, we show our experimental results. First, we compared the performance of CPU-EG over the data set while increasing the number of threads (strong scaling). Due to slow convergence, CPU-EG is not able to solve some instances within a given time-out (which we set to 900 seconds). In Figure 2 we only show the most representative results; please refer to Table 1 for a detailed analysis. In general, the experiments show good scalability between 2 and 4 threads; beyond that, the threads do not have enough work to do. For some instances (e.g., M33), CPU-EG shows better scalability, since it takes advantage of the parallelism that a more complex structure exhibits. Other instances, like M28, on the contrary, do not benefit significantly from multi-core architectures.
In the second set of experiments, we compare the performance (time-to-solution) of GPU-v and GPU-w. Furthermore, we also show the time of CPU-EG8 as a baseline. In detail, Figure 3 shows the performance over the “equivalence checking” data set, whereas Figure 4 is related to the “model checking” instances. Generally, CPU-EG8 is faster on “easy” instances, where the algorithm converges in a few iterations. The identification of “easy” instances is hard to do a priori since, as we mentioned, convergence strongly depends on the weights and the structure of the graphs. By analyzing the final performance, we can say that the GPU solvers are up to x faster than CPU-EG8 (36x faster than CPU-EG1). Concerning the comparison between GPU-v and GPU-w, we do not observe a significant difference in performance, except for a small number of instances. On average, GPU-w achieves slightly better performance, up to a factor of x.
As a final comment, the results of this initial experimentation seem to witness the advantages offered even by a plain parallelization of Algorithm 1. Although our results are promising, a deeper investigation has to be conducted in order to identify those classes of EG (if any) on which a specific approach achieves the best performance. Again, it seems reasonable that particular topologies of the underlying game graph may reduce the gap between the sequential and the parallel algorithms. On the other hand, various optimizations and refinements can be introduced in the parallel solver. Among the ones under consideration, we mention here the possibility of partitioning the given arena w.r.t. the strongly connected components of the game graph and processing them in parallel, imposing a topological order among the components.
7 Related Works
Similar approaches exist for other kinds of games used in the context of computer-aided design and formal verification. For instance, the parallelization of Meanpayoff Games has been dealt with in  and in : whereas the former targets a common multi-core machine rather than a GPU, the latter proposes an OpenCL implementation suitable to run on AMD devices. A proposal concerning Parity Games, also based on OpenCL, has been described in .
Several solutions have been proposed to reduce workload unbalancing among threads and alleviate irregular memory accesses. Jia et al.  evaluated two different data-thread mapping techniques, vertex-parallel and edge-parallel. Due to the differences in out-degree among vertices in scale-free networks, the vertex-parallel technique suffers from load imbalance among threads. The edge-parallel approach solves that problem by assigning edges to threads during the frontier expansion. However, it is not suitable for graphs with a low average degree, nor for dense graphs . Furthermore, edge-based parallelism requires more memory and atomic operations [19, 24], especially for Energy Games instances, where an atomic min can be required. McLaughlin and Bader  discussed two hybrid methods for the selection of the parallelization strategy. Sarıyüce et al.  introduced the vertex virtualization technique, based on a relabeling of the data structure (e.g., CSR, Compressed Sparse Row). The technique replaces a high-degree vertex v with virtual vertices having at most neighbors. Vertex virtualization is not very effective for graphs with a low average degree. Typical Energy Games instances are characterized by a low average out-degree (cf. Table 1); therefore, vertex-based parallelism is more suitable for such instances. Other efficient data-thread mapping techniques, like active-edge parallelism [2, 3] or other warp-centric strategies , seem not to be very effective for Energy Games instances, where the average degree is rather low.
8 Concluding Remarks
To the best of our knowledge, we have presented the first GPU-based implementation of a solver for Energy Games. We investigated the possibility of implementing a solver for the initial credit problem on Energy Games capable of exploiting the computational power offered by modern Graphics Processing Units. We illustrated how a first prototype relying on the SIMT model of parallelism adopted within the CUDA framework can be plainly obtained by parallelizing the different steps of a sequential algorithm. The proposed CUDA-based solver exhibits good performance and demonstrates the viability of the approach when compared against its sequential and multi-core CPU counterparts. However, a detailed analysis of the topology of the graph is still required in order to design an efficient data-thread mapping technique on GPUs. Further, a number of improvements and heuristics can be applied to our current implementation, involving for example a static analysis of the input instance aimed at customizing the configuration parameters used to launch the CUDA kernels, or at taking advantage of the topological structure of the graph. These are challenging themes for future work.
-  K. Apt and E. Gradel. Lectures in Game Theory for Computer Scientists. Cambridge University Press, USA, 2011.
-  M. Bernaschi, G. Carbone, E. Mastrostefano, and F. Vella. Solutions to the st-connectivity problem using a gpu-based distributed bfs. Journal of Parallel and Distributed Computing, 76:145 – 153, 2015. Special Issue on Architecture and Algorithms for Irregular Applications.
-  M. Bernaschi, G. Carbone, and F. Vella. Scalable betweenness centrality on multi-gpu systems. In Proc. of CF’16, pages 29–36, USA, 2016. ACM.
-  P. Bouyer, U. Fahrenberg, K. G. Larsen, N. Markey, and J. Srba. Resource interfaces. In Proc. of FORMATS, volume 5215 of LNCS, pages 33–47. Springer, 2008.
-  P. Bouyer, N. Markey, M. Randour, K. Larsen, and S. Laursen. Average-energy games. In Proc. of GandALF, pages 1–15, 2015.
-  R. Brenguier, F. Cassez, and J.-F. Raskin. Energy and mean-payoff timed games. In Proc. of HSCC’14, pages 283–292, New York, NY, USA, 2014. ACM.
-  L. Brim, J. Chaloupka, L. Doyen, R. Gentilini, and J. Raskin. Faster algorithms for mean-payoff games. Formal Methods in System Design, 38(2):97–118, 2011.
-  V. Bruyère. Computer aided synthesis: a game theoretic approach. CoRR, 2017.
-  M. Burtscher, R. Nasre, and K. Pingali. A quantitative study of irregular programs on gpus. In Workload Characterization (IISWC), 2012 IEEE Int. Symposium on, pages 141–151. IEEE, 2012.
-  A. Chakrabarti, L. de Alfaro, T. Henzinger, and M. Stoelinga. Resource interfaces. In Proc. of EMSOFT:Embedded Software, volume 2855 of LNCS, pages 117–133. Springer, 2003.
-  J. Chaloupka. Parallel algorithms for mean-payoff games. In A. Fiat and P. Sanders, editors, Proc. of ESA 2009, volume 5757 of LNCS, pages 599–610, 2009.
-  K. Chatterjee and L. Doyen. Energy parity games. In Proc. of ICALP, volume 6199 of LNCS, pages 599–610. Springer, 2010.
-  K. Chatterjee, L. Doyen, T. Henzinger, and J.-F. Raskin. Generalized mean-payoff and energy games. In Proc. of FSTTCS, volume 8 of LIPIcs, pages 505–516, 2010.
-  K. Chatterjee, M. Henzinger, S. Krinninger, and D. Nanongkai. Polynomial-time algorithms for energy games with special weight structures. Algorithmica, 70(3), 2014.
-  C. Comin and R. Rizzi. Improved pseudo-polynomial bound for the value problem and optimal strategy synthesis in mean payoff games. Algorithmica, 77(4):995–1021, 2017.
-  U. Fahrenberg, L. Juhl, K. Larsen, and J. Srba. Energy games in multiweighted automata. In Proc. of ICTAC, volume 6916 of LNCS, pages 95–115. Springer, 2011.
-  P. Hoffmann and M. Luttenberger. Solving parity games on the GPU. In D. Van Hung and M. Ogawa, editors, Proc. of ATVA 2013, volume 8172 of LNCS, pages 455–459, 2013.
-  S. Hong, S. K. Kim, T. Oguntebi, and K. Olukotun. Accelerating cuda graph algorithms at maximum warp. In Proceedings of the 16th ACM Symposium on Principles and Practice of Parallel Programming, PPoPP ’11, pages 267–276, New York, NY, USA, 2011. ACM.
-  Y. Jia, V. Lu, J. Hoberock, M. Garland, and J. Hart. Edge vs. node parallelism for graph centrality metrics. GPU Computing Gems: Jade Edition, pages 15–28, 2011.
-  J. Keiren. Benchmarks for parity games. In M. Dastani and M. Sirjani, editors, Proc. of FSEN 2015. Revised Selected Papers, volume 9392 of LNCS, pages 127–142, 2015.
-  A. McLaughlin and D. Bader. Scalable and high performance betweenness centrality on the gpu. In Proc. of the Int. Conference for High Performance Computing, Networking, Storage and Analysis, pages 572–583. IEEE Press, 2014.
-  P. Meyer and M. Luttenberger. Solving mean-payoff games on the GPU. In C. Artho, A. Legay, and D. Peled, editors, Proc. of ATVA 2016, volume 9938 of LNCS, 2016.
-  A. Sarıyüce, K. Kaya, E. Saule, and U. Çatalyürek. Betweenness centrality on gpus and heterogeneous architectures. In Proc. of GPGPU-6, pages 76–85, New York, NY, USA, 2013. ACM.
-  A. Sarıyüce, E. Saule, K. Kaya, and Ü. Çatalyürek. Regularizing graph centrality computations. J. of Parallel and Distributed Computing, 76(0):106 – 119, 2015. Special Issue on Architecture and Algorithms for Irregular Applications.
-  Y. Velner, K. Chatterjee, L. Doyen, T. Henzinger, A. Rabinovich, and J. Raskin. The complexity of multi-mean-payoff and multi-energy games. Inf. Comput., 241:177–196, 2015.
-  U. Zwick and M. Paterson. The complexity of mean payoff games on graphs. TCS, 158(2):343–359, 1996.