Evaluating the Performance of NVIDIA's A100 Ampere GPU for Sparse Linear Algebra Computations

08/19/2020
by Yuhsiang Mike Tsai et al.

GPU accelerators have become an important backbone for scientific high performance computing, and the performance advances obtained from adopting new GPU hardware are significant. In this paper we take a first look at NVIDIA's newest server-line GPU, the A100, part of the Ampere generation. Specifically, we assess its performance for sparse linear algebra operations that form the backbone of many scientific applications, and quantify the performance improvements over its predecessor.
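Sparse linear algebra benchmarks of this kind typically center on sparse matrix-vector products (SpMV), whose performance is memory-bandwidth bound. As a hedged illustration of the bandwidth accounting such studies rely on, here is a minimal CPU-side sketch using SciPy; this is not the paper's GPU benchmark, and the function name, matrix sizes, and the CSR traffic model (8-byte values, 4-byte indices) are assumptions for illustration only:

```python
import time
import numpy as np
import scipy.sparse as sp

def spmv_throughput(n=50_000, nnz_per_row=10, reps=20):
    """Time y = A @ x for a random CSR matrix and estimate memory throughput.

    Assumes the standard CSR traffic model: values and column indices are
    read once, plus the row-pointer array, the input vector x, and the
    output vector y.
    """
    rng = np.random.default_rng(0)
    density = nnz_per_row / n
    A = sp.random(n, n, density=density, format="csr",
                  random_state=0, dtype=np.float64)
    x = rng.standard_normal(n)

    y = A @ x  # warm-up run so timing excludes one-time setup costs
    t0 = time.perf_counter()
    for _ in range(reps):
        y = A @ x
    dt = (time.perf_counter() - t0) / reps

    # values (8 B) + column indices (4 B) per nonzero; row pointers; x; y
    bytes_moved = A.nnz * (8 + 4) + (n + 1) * 4 + 2 * n * 8
    return y, bytes_moved / dt / 1e9  # GB/s

if __name__ == "__main__":
    y, gbs = spmv_throughput()
    print(f"estimated SpMV throughput: {gbs:.2f} GB/s")
```

Comparing such a bandwidth estimate against the hardware's peak memory bandwidth (roughly 1.5 TB/s HBM2 on the A100 versus about 900 GB/s on the V100) is the usual way to judge how efficiently an SpMV kernel uses a given GPU.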



Related Research

- Ginkgo: A Modern Linear Operator Algebra Framework for High Performance Computing (06/30/2020)
  In this paper, we present Ginkgo, a modern C++ math library for scientif...

- A Foray into Efficient Mapping of Algorithms to Hardware Platforms on Heterogeneous Systems (05/15/2016)
  Heterogeneous computing can potentially offer significant performance an...

- Choice of Parallelism: Multi-GPU Driven Pipeline for Huge Academic Backbone Network (06/24/2021)
  Science Information Network (SINET) is a Japanese academic backbone netw...

- Supporting CUDA for an extended RISC-V GPU architecture (09/02/2021)
  With the rapid development of scientific computation, more and more rese...

- A GPU-accelerated adaptive FSAI preconditioner for massively parallel simulations (10/27/2020)
  The solution of linear systems of equations is a central task in a numbe...

- AlSub: Fully Parallel and Modular Subdivision (09/17/2018)
  In recent years, mesh subdivision---the process of forging smooth free-f...

- Parallel Performance of ARM ThunderX2 for Atomistic Simulation Algorithms (07/20/2020)
  Atomistic simulation drives scientific advances in modern material scien...