GPU PaaS Computation Model in Aneka Cloud Computing Environment

08/20/2018
by Shashikant Ilager, et al.

Due to the surge in the volume of data generated and the rapid advancement of Artificial Intelligence (AI) techniques such as machine learning and deep learning, traditional computing models have become inadequate for processing enormous volumes of data and the complex application logic needed to extract intrinsic information. Computing accelerators such as Graphics Processing Units (GPUs) have become the de facto SIMD computing systems for many big data and machine learning applications. At the same time, the traditional computing model has gradually shifted from conventional ownership-based computing to the subscription-based cloud computing model. However, the lack of programming models and frameworks for seamlessly developing cloud-native applications that utilize both CPU and GPU resources in the cloud has become a bottleneck for rapid application development. To support this demand for simultaneous heterogeneous resource usage, new programming models and frameworks are needed to manage the underlying resources effectively. Aneka has emerged as a popular PaaS computing model for developing cloud applications using multiple programming models, such as Thread, Task, and MapReduce, within a single container on the .NET platform. Since Aneka addresses MIMD application development using CPU-based resources, while GPU programming models such as CUDA target SIMD application development, this chapter discusses a GPU PaaS computing model for Aneka Clouds that enables rapid cloud application development on the .NET platform. Popular open-source GPU libraries are utilized and integrated into the existing Aneka task programming model, and the scheduling policies are extended to automatically identify GPU machines and schedule the respective tasks accordingly. A case study on image processing, built using the Aneka PaaS SDKs and the CUDA library, is presented to demonstrate the system.
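
To make the task programming model concrete, the sketch below shows how an image-processing operation might be packaged as an Aneka task. It is a minimal, hypothetical example assuming Aneka's .NET task model (a serializable class implementing ITask with an Execute() method); the class and field names are illustrative, and a CPU fallback stands in for the CUDA kernel launch, since the chapter's actual GPU wrapper library and bindings are not reproduced here.

```csharp
using System;
using Aneka.Tasks; // Aneka task programming model; ITask is assumed to expose Execute()

// Hypothetical GPU-targetable task: converts an RGB image tile to grayscale.
[Serializable]
public class GrayscaleTask : ITask
{
    public byte[] Rgb;   // interleaved R,G,B bytes, shipped to the worker with the task
    public byte[] Gray;  // result, returned to the client when the task completes

    public void Execute()
    {
        Gray = new byte[Rgb.Length / 3];
        // In the GPU-enabled variant this loop would be replaced by a call into an
        // open-source CUDA wrapper that launches a grayscale kernel on the device;
        // the plain CPU loop below keeps the sketch self-contained.
        for (int i = 0; i < Gray.Length; i++)
        {
            int j = i * 3;
            Gray[i] = (byte)(0.299 * Rgb[j] + 0.587 * Rgb[j + 1] + 0.114 * Rgb[j + 2]);
        }
    }
}
```

In the extended model the abstract describes, such a task would carry an indication that it requires a GPU, so that the modified scheduling policy can route it to a worker node with a CUDA-capable device while ordinary tasks continue to run on CPU-only nodes.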


