Trends in Energy Estimates for Computing in AI/Machine Learning Accelerators, Supercomputers, and Compute-Intensive Applications

10/12/2022
by   Sadasivan Shankar, et al.

We examine the computational energy requirements of different systems, driven by the geometrical scaling law and by the increasing use of Artificial Intelligence or Machine Learning (AI-ML) over the last decade. With more science and technology applications based on data-driven discovery, machine learning methods, especially deep neural networks, have become widely used. To enable such applications, both hardware accelerators and advanced AI-ML methods have driven the introduction of new architectures, system designs, algorithms, and software. Our analysis of energy trends indicates three important observations: 1) energy efficiency due to geometrical scaling is slowing down; 2) energy efficiency at the bit level does not translate into efficiency at the instruction level or at the system level for a variety of systems, especially for large-scale AI-ML accelerators and supercomputers; 3) at the application level, general-purpose AI-ML methods can be computationally energy intensive, offsetting the gains in energy from geometrical scaling and special-purpose accelerators. Further, our analysis provides specific pointers for integrating energy efficiency with performance analysis, to enable high-performance and sustainable computing in the future.

