Evaluating the Performance of NVIDIA's A100 Ampere GPU for Sparse Linear Algebra Computations

08/19/2020
by   Yuhsiang Mike Tsai, et al.

GPU accelerators have become an important backbone for scientific high-performance computing, and the performance gains obtained from adopting new GPU hardware are significant. In this paper we take a first look at NVIDIA's newest server-line GPU, the A100, part of the Ampere generation. Specifically, we evaluate its performance on the sparse linear algebra operations that form the backbone of many scientific applications, and quantify the improvements over its predecessor.
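The central kernel in most sparse linear algebra benchmarks of this kind is the sparse matrix-vector product (SpMV), usually applied to matrices stored in a compressed format such as CSR. As a minimal sketch of what such a kernel computes (the 3x3 matrix below is illustrative, not taken from the paper):

```python
# Minimal CSR sparse matrix-vector product: y = A @ x.
# CSR stores only the nonzeros (vals), their column indices (col_idx),
# and per-row offsets into those arrays (row_ptr).
def csr_spmv(row_ptr, col_idx, vals, x):
    y = []
    for row in range(len(row_ptr) - 1):
        acc = 0.0
        # Iterate over the nonzeros of this row only.
        for nz in range(row_ptr[row], row_ptr[row + 1]):
            acc += vals[nz] * x[col_idx[nz]]
        y.append(acc)
    return y

# Illustrative matrix A = [[2, 0, 1],
#                          [0, 3, 0],
#                          [4, 0, 5]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
vals = [2.0, 1.0, 3.0, 4.0, 5.0]
print(csr_spmv(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

GPU libraries parallelize the outer row loop (and often the inner reduction) across threads, which is why memory bandwidth and load balancing across irregular row lengths dominate SpMV performance on hardware like the A100.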


Related research

- Ginkgo: A Modern Linear Operator Algebra Framework for High Performance Computing (06/30/2020). In this paper, we present Ginkgo, a modern C++ math library for scientif...
- Choice of Parallelism: Multi-GPU Driven Pipeline for Huge Academic Backbone Network (06/24/2021). Science Information Network (SINET) is a Japanese academic backbone netw...
- A Foray into Efficient Mapping of Algorithms to Hardware Platforms on Heterogeneous Systems (05/15/2016). Heterogeneous computing can potentially offer significant performance an...
- Supporting CUDA for an extended RISC-V GPU architecture (09/02/2021). With the rapid development of scientific computation, more and more rese...
- A GPU-accelerated adaptive FSAI preconditioner for massively parallel simulations (10/27/2020). The solution of linear systems of equations is a central task in a numbe...
- AlSub: Fully Parallel and Modular Subdivision (09/17/2018). In recent years, mesh subdivision, the process of forging smooth free-f...
- The performances of R GPU implementations of the GMRES method (02/06/2018). Although the performance of commodity computers has improved drastically...
