DGEMM performance is data-dependent

12/11/2019
by Tom Cornebize, et al.

The DGEMM function is a widely used implementation of the matrix product. While the asymptotic complexity of the algorithm depends only on the sizes of the matrices, we show that the performance is significantly impacted by the matrices' content. Our experiments suggest that this may be due to bit flips in the CPU causing an energy consumption overhead.
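The claim is easy to probe with a quick experiment. The sketch below is a minimal, hypothetical benchmark, not the authors' protocol: it assumes NumPy is linked against a BLAS library so that a float64 matrix product dispatches to DGEMM, and it simply compares wall-clock times for input matrices of identical size but different content.

import time
import numpy as np

def time_dgemm(a, b, repetitions=10):
    # Best-of-n wall-clock time of a float64 matrix product (dispatches to DGEMM).
    best = float("inf")
    for _ in range(repetitions):
        start = time.perf_counter()
        np.dot(a, b)
        best = min(best, time.perf_counter() - start)
    return best

n = 4096
rng = np.random.default_rng(42)
b = rng.random((n, n))

# Hypothetical inputs: same dimensions, different content.
inputs = {
    "zeros": np.zeros((n, n)),
    "ones": np.ones((n, n)),
    "uniform": rng.random((n, n)),
}
for name, a in inputs.items():
    print(f"{name:>8}: {time_dgemm(a, b):.4f} s")

On a machine exhibiting the effect described above, the timings (and, with suitable tooling such as RAPL energy counters, the energy consumption) would differ across these inputs even though every call performs the same number of floating-point operations.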
