Accelerator-level Parallelism

07/02/2019
by Mark D. Hill, et al.

Future applications demand more performance, but technology advances have been faltering. A promising approach to further improve computer system performance under energy constraints is to employ hardware accelerators. Already today, mobile systems concurrently employ multiple accelerators in what we call accelerator-level parallelism (ALP). To spread the benefits of ALP more broadly, we charge computer scientists to develop the science needed to best achieve the performance and cost goals of ALP hardware and software.
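The notion of multiple accelerators working concurrently on one workload can be illustrated with a minimal sketch. This is purely hypothetical: the functions `run_on_gpu`, `run_on_npu`, and `run_on_dsp` stand in for offload calls to dedicated hardware blocks on a mobile SoC, and threads stand in for the accelerators themselves.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical offload stubs: on a real mobile SoC these would dispatch
# work to dedicated hardware blocks (GPU, NPU, DSP, ISP, ...).
def run_on_gpu(frame):
    return f"gpu({frame})"

def run_on_npu(frame):
    return f"npu({frame})"

def run_on_dsp(frame):
    return f"dsp({frame})"

def process_frame_with_alp(frame):
    # Accelerator-level parallelism: several accelerators operate
    # concurrently on (parts of) the same workload.
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(f, frame)
                   for f in (run_on_gpu, run_on_npu, run_on_dsp)]
        # Results are collected in submission order.
        return [fut.result() for fut in futures]

print(process_frame_with_alp("frame0"))
# → ['gpu(frame0)', 'npu(frame0)', 'dsp(frame0)']
```

The sketch shows only the concurrency pattern; the open questions the authors pose concern how to schedule, share, and program such accelerators efficiently.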


