Using PHAST to port Caffe library: First experiences and lessons learned

05/26/2020
by Eduardo Gómez, et al.

Performance has always been a hot topic in computing, but the viable ways to achieve it have taken different forms throughout computing history. Today, technological limits have pushed the adoption of increasingly parallel multi-core and many-core architectures, and even of highly specialized hardware (Domain-Specific Architectures, or DSAs) built to solve very specific problems. In this context, one major problem is how to develop software once and run it seamlessly on multiple accelerator architectures. The ideal is a single programming model that can automatically target code to different kinds of parallel architectures, allowing device-specific tuning with minimal, if any, changes to the source code, in pursuit of performance portability. A comprehensive solution to this problem is still lacking. In this work, we present the use of the PHAST Library, which allows users to write code once, at a high level of abstraction and thus with high productivity, and to target different parallel devices automatically by changing only the compilation process. As a case study, we have worked on porting the well-known deep-learning framework Caffe. The framework has been split into different parts, some of which have been ported, obtaining a working, straightforward implementation that can run on both CPUs and GPUs. We conclude by discussing the lessons learned during the porting process and by analyzing the obtained performance, with a view to completing the port and extending it in future work.
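The central idea, a single high-level source whose target backend is chosen at compile time, can be illustrated with a minimal sketch. The sketch below does not use the actual PHAST API; it is a hypothetical analogy built on standard C++17 parallel algorithms, where a made-up compile flag (USE_PARALLEL) switches an element-wise kernel, loosely reminiscent of a Caffe layer's forward pass, between sequential and parallel execution without touching the source.

// Hypothetical illustration (not the PHAST API): the loop body is written once,
// and the execution backend is selected at compile time via -DUSE_PARALLEL,
// analogous to how PHAST retargets one source to different devices by changing
// the compilation process.
#include <algorithm>
#include <execution>
#include <iostream>
#include <vector>

#ifdef USE_PARALLEL
  #define EXEC_POLICY std::execution::par_unseq   // multi-core / vectorized path
#else
  #define EXEC_POLICY std::execution::seq         // plain sequential path
#endif

// Simple element-wise kernel, similar in spirit to a ReLU forward pass
// over a blob of activations.
void relu_inplace(std::vector<float>& data) {
    std::for_each(EXEC_POLICY, data.begin(), data.end(),
                  [](float& x) { if (x < 0.0f) x = 0.0f; });
}

int main() {
    std::vector<float> blob{-1.0f, 2.0f, -3.0f, 4.0f};
    relu_inplace(blob);
    for (float x : blob) std::cout << x << ' ';
    std::cout << '\n';   // prints: 0 2 0 4
    return 0;
}

In PHAST the same principle is applied across CPU and GPU backends: the high-level source stays unchanged, and the build configuration determines which parallel device the code runs on.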


