
Doing Moore with Less -- Leapfrogging Moore's Law with Inexactness for Supercomputing

by Sven Leyffer, et al.
Argonne National Laboratory

Energy and power consumption are major limitations to continued scaling of computing systems. Inexactness, where the quality of the solution can be traded for energy savings, has been proposed as an approach to overcoming those limitations. In the past, however, inexactness necessitated highly customized or specialized hardware. The current evolution of commercial off-the-shelf (COTS) processors facilitates the use of lower-precision arithmetic in ways that reduce energy consumption. We study these new opportunities in this paper, using the example of an inexact Newton algorithm for solving nonlinear equations. Moreover, we have begun developing a set of techniques we call reinvestment that, paradoxically, use reduced precision to improve the quality of the computed result: they do so by reinvesting the energy saved by reduced precision.
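To make the idea concrete, the inexact Newton approach mentioned in the abstract can be sketched as follows: the inner linear solve is carried out in reduced precision (here, NumPy's `float32` stands in for the paper's energy-saving low-precision arithmetic), while the residual check that governs convergence stays in double precision. This is a minimal illustrative sketch, not the authors' implementation; the test problem and all function names are assumptions for illustration.

```python
import numpy as np

def inexact_newton(f, jac, x0, tol=1e-8, max_iter=50):
    """Newton iteration with a reduced-precision inner solve.

    The Jacobian system is solved in float32 (a stand-in for
    low-precision COTS arithmetic); residuals are evaluated in
    float64, so the outer loop still converges to double-precision
    tolerance as long as the inexact steps remain descent steps.
    """
    x = np.asarray(x0, dtype=np.float64)
    for _ in range(max_iter):
        r = f(x)                      # residual in double precision
        if np.linalg.norm(r) < tol:
            break
        # Reduced-precision inner solve: cast J and r down to float32.
        J32 = jac(x).astype(np.float32)
        r32 = r.astype(np.float32)
        dx = np.linalg.solve(J32, r32).astype(np.float64)
        x = x - dx
    return x

# Example (assumed for illustration): intersect the unit circle with
# the line x0 = x1; the root is (1/sqrt(2), 1/sqrt(2)).
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = inexact_newton(f, jac, np.array([1.0, 0.0]))
```

Because the step error introduced by the float32 solve is relative to the (shrinking) step size, the iteration still contracts; only the per-step arithmetic cost, and hence energy, is reduced.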



