Graph based Data Dependence Identifier for Parallelization of Programs

02/18/2021
by   Kavya Alluru, et al.

Automatic parallelization improves the performance of a serial program by automatically converting it into a parallel program. It typically works in three phases: checking for data dependencies in the input program, performing transformations, and generating the parallel code for the target machine. Although automatic parallelization is beneficial, it is not performed as part of the compilation process because of the time complexity of the data dependence tests and transformation techniques. Data dependencies arise from the memory accesses required to execute the instructions of a program. In a program, memory is allocated for variables such as scalars, arrays, and pointers, and at present different techniques are used to identify data dependencies for each of these variable types. In this paper, we propose a graph-based Data Dependence Identifier (DDI) that can identify, in polynomial time, all types of data dependencies arising in all types of variables. In our DDI model, a program is represented as a graph in order to identify its data dependencies. Although many graphical representations of programs exist, our way of representing a program as a graph differs from them. Using our DDI model, one can also perform basic transformations such as dead code elimination, constant propagation, and induction variable detection.
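The abstract does not describe the DDI construction itself, so the following is only a minimal sketch of graph-based dependence detection over a toy straight-line program. The names (Stmt, build_dependence_graph) and the read/write-set formulation are illustrative assumptions, not the paper's algorithm; the paper additionally covers arrays and pointers, which this sketch ignores.

```python
# Minimal sketch (assumption, not the paper's DDI): statements are graph nodes,
# and an edge i -> j is added when a later statement j depends on statement i.
from dataclasses import dataclass, field

@dataclass
class Stmt:
    sid: int                                   # statement id (program order)
    writes: set = field(default_factory=set)   # variables written by the statement
    reads: set = field(default_factory=set)    # variables read by the statement

def build_dependence_graph(stmts):
    """Return edges (i, j, kind) for flow, anti, and output dependencies."""
    edges = []
    for i, a in enumerate(stmts):
        for b in stmts[i + 1:]:
            if a.writes & b.reads:
                edges.append((a.sid, b.sid, "flow"))    # write then read
            if a.reads & b.writes:
                edges.append((a.sid, b.sid, "anti"))    # read then write
            if a.writes & b.writes:
                edges.append((a.sid, b.sid, "output"))  # write then write
    return edges

# Toy program:
#   S1: x = a + b
#   S2: y = x * 2
#   S3: x = c - 1
prog = [
    Stmt(1, writes={"x"}, reads={"a", "b"}),
    Stmt(2, writes={"y"}, reads={"x"}),
    Stmt(3, writes={"x"}, reads={"c"}),
]
print(build_dependence_graph(prog))
# [(1, 2, 'flow'), (1, 3, 'output'), (2, 3, 'anti')]
```

This pairwise scan is quadratic in the number of statements, in line with the abstract's polynomial-time claim, but it handles only scalar variables in straight-line code; control flow, arrays, and pointer aliasing are exactly what the full DDI model is proposed to address.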


