Related research:
- libhclooc: Software Library Facilitating Out-of-core Implementations of Accelerator Kernels on Hybrid Computing Platforms
- AN5D: Automated Stencil Framework for High-Degree Temporal Blocking on GPUs
- GraphCage: Cache Aware Graph Processing on GPUs
- High Performance Computing with FPGAs and OpenCL
- Cooperative Kernels: GPU Multitasking for Blocking Algorithms (Extended Version)
- Revisiting Huffman Coding: Toward Extreme Performance on Modern GPU Architectures
- Performance Models for Data Transfers: A Case Study with Molecular Chemistry Kernels
A Versatile Software Systolic Execution Model for GPU Memory-Bound Kernels
This paper proposes a versatile high-performance execution model, inspired by systolic arrays, for memory-bound regular kernels running on CUDA-enabled GPUs. We formulate a systolic model that shifts partial sums between threads using CUDA warp primitives. We also employ the register files as a cache resource so that the entire model operates efficiently. We demonstrate the effectiveness and versatility of the proposed model on a wide variety of stencil kernels that appear commonly in HPC, as well as on convolution kernels (increasingly important in deep learning workloads). Our algorithm outperforms the top reported state-of-the-art stencil implementations, including implementations with sophisticated temporal and spatial blocking techniques, on the two most recent Nvidia GPU architectures: Volta (Tesla V100) and Pascal (Tesla P100). For 2D convolution with general filter sizes and shapes, our algorithm is on average 2.5x faster than Nvidia's NPP library on V100 and P100 GPUs.
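The core idea in the abstract, shifting partial sums between neighboring lanes via warp primitives instead of staging data in shared memory, can be illustrated with a small CPU-side simulation. The sketch below is not the authors' CUDA implementation; the `shfl_up`/`shfl_down` helpers, the warp size of 32, and the 3-point weighted stencil are illustrative assumptions modeling the behavior of CUDA's warp-shuffle intrinsics.

```python
# CPU-side sketch of the warp-level "shift" idea: each lane keeps one value
# in a register, and neighbor values arrive via shuffle-style shifts rather
# than shared-memory loads. Names and parameters here are illustrative only.

WARP = 32  # assumed warp size, matching CUDA hardware

def shfl_up(vals, delta):
    # Lane i receives the value held by lane i - delta;
    # lanes near the edge keep their own value (like __shfl_up_sync).
    return [vals[i - delta] if i - delta >= 0 else vals[i]
            for i in range(len(vals))]

def shfl_down(vals, delta):
    # Lane i receives the value held by lane i + delta.
    n = len(vals)
    return [vals[i + delta] if i + delta < n else vals[i]
            for i in range(n)]

def stencil_3pt(vals, w_left=0.25, w_center=0.5, w_right=0.25):
    """3-point weighted stencil: neighbors are fetched by shifting the
    register-resident values across lanes, then blended into a partial sum."""
    left = shfl_up(vals, 1)    # value from lane i-1
    right = shfl_down(vals, 1) # value from lane i+1
    return [w_left * l + w_center * c + w_right * r
            for l, c, r in zip(left, vals, right)]

# One "warp" of linearly increasing data: interior lanes reproduce
# their own value, since the stencil weights average symmetric neighbors.
data = [float(i) for i in range(WARP)]
out = stencil_3pt(data)
print(out[1])  # 0.25*0 + 0.5*1 + 0.25*2 = 1.0
```

On a real GPU the same data movement would use `__shfl_up_sync`/`__shfl_down_sync` so that each value stays in registers for its lifetime, which is what makes the model attractive for memory-bound kernels.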