Dissecting FLOPs along input dimensions for GreenAI cost estimations

07/26/2021
by   Andrea Asperti, et al.

The term GreenAI refers to an approach to Deep Learning that is more mindful of the ecological impact and computational efficiency of its methods. The promoters of GreenAI suggested the use of Floating Point Operations (FLOPs) as a measure of the computational cost of Neural Networks; however, this measure does not correlate well with the energy consumption of hardware equipped with massively parallel processing units such as GPUs or TPUs. In this article, we propose a simple refinement of the formula used to compute the floating point operations of convolutional layers, called α-FLOPs, which explains and corrects the discrepancy traditionally observed between FLOPs and actual execution cost across different layers, yielding estimates closer to reality. The notion of α-FLOPs relies on the crucial insight that, in the case of inputs with multiple dimensions, there is no reason to expect the speedup offered by parallelism to be uniform along all axes.
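To make the idea concrete, the following sketch contrasts the standard FLOP count for a 2D convolution with an illustrative α-FLOPs variant. The exponent-based damping of the spatial axes shown here is a hypothetical simplification of the paper's refinement, not its exact formula; it only illustrates the core insight that parallel hardware absorbs growth along some input dimensions more cheaply than along others.

```python
def conv2d_flops(h_out, w_out, c_in, c_out, k_h, k_w):
    # Standard FLOP count for a 2D convolutional layer:
    # each output element requires c_in * k_h * k_w multiply-accumulates.
    macs = h_out * w_out * c_out * c_in * k_h * k_w
    return 2 * macs  # count multiplications and additions separately


def alpha_flops(h_out, w_out, c_in, c_out, k_h, k_w, alpha=0.5):
    # Illustrative alpha-FLOPs sketch (hypothetical, not the paper's formula):
    # dampen the spatial contribution with an exponent alpha < 1 to reflect
    # that GPU/TPU parallelism speeds up computation along the spatial axes
    # more effectively than along the channel axes.
    spatial = (h_out * w_out) ** alpha
    return 2 * spatial * c_out * c_in * k_h * k_w
```

With alpha=1 the two counts coincide; with alpha<1 the spatial dimensions contribute sublinearly, so two layers with identical raw FLOPs but different shapes receive different cost estimates.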


Related research

08/04/2021 · BEANNA: A Binary-Enabled Architecture for Neural Network Acceleration
Modern hardware design trends have shifted towards specialized hardware ...

02/17/2021 · NEAT: A Framework for Automated Exploration of Floating Point Approximations
Much recent research is devoted to exploring tradeoffs between computati...

10/15/2020 · FPRaker: A Processing Element For Accelerating Neural Network Training
We present FPRaker, a processing element for composing training accelera...

08/07/2018 · Rethinking Numerical Representations for Deep Neural Networks
With ever-increasing computational demand for deep learning, it is criti...

04/28/2022 · FPIRM: Floating-point Processing in Racetrack Memories
Convolutional neural networks (CNN) have become a ubiquitous algorithm w...

05/08/2020 · Measuring the Algorithmic Efficiency of Neural Networks
Three factors drive the advance of AI: algorithmic innovation, data, and...

07/23/2020 · PareCO: Pareto-aware Channel Optimization for Slimmable Neural Networks
Slimmable neural networks provide a flexible trade-off front between pre...
