NEAT: A Framework for Automated Exploration of Floating Point Approximations

02/17/2021
by Saeid Barati et al.

Much recent research is devoted to exploring tradeoffs between computational accuracy and energy efficiency at different levels of the system stack. Approximation at the floating point unit (FPU) saves energy by simply reducing the number of computed floating point bits in return for a loss of accuracy. The main challenge, however, is finding the most energy-efficient approximation for various applications with minimal effort. To address this issue, we propose NEAT: a Pin-based tool that helps users automatically explore the accuracy-energy tradeoff space induced by various floating point implementations. NEAT helps programmers explore the effects of simultaneously using multiple floating point implementations to achieve the lowest energy consumption under an accuracy constraint, or vice versa. NEAT accepts one or more user-defined floating point implementations and programmable placement rules for where/when to apply them. NEAT then automatically replaces floating point operations with different implementations based on the user-specified rules at runtime and explores the resulting tradeoff space to find the best use of approximate floating point implementations for precision tuning throughout the program. We evaluate NEAT by enforcing combinations of 24/53 different floating point implementations with three sets of placement rules on a wide range of benchmarks. We find that heuristic precision tuning at the function level provides up to 22% more energy savings than applying a single implementation to the whole application for the same accuracy loss. NEAT is also applicable to neural networks, where it finds the optimal precision level for each layer given an accuracy target for the model.
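The page itself contains no code, but as a rough illustration of the two ingredients the abstract describes, a user-defined floating point implementation and a function-level placement rule, the sketch below may help. It is written in C++ under our own assumptions; the function names, the placement map, and the bit-masking approach are hypothetical and are not NEAT's actual API. It approximates a double by truncating its mantissa to a chosen width and picks that width per function name.

    // Hypothetical sketch, not NEAT's API: approximate a double by keeping only
    // the top `bits` of its 52-bit mantissa, and choose `bits` per function name
    // as a crude stand-in for a function-level placement rule.
    #include <cstdint>
    #include <cstring>
    #include <cstdio>
    #include <string>
    #include <map>

    // Clear the low (52 - bits) mantissa bits of an IEEE-754 double.
    double truncate_mantissa(double x, int bits) {
        uint64_t u;
        std::memcpy(&u, &x, sizeof u);                      // bit-level view of x
        if (bits < 52) {
            uint64_t mask = ~((uint64_t{1} << (52 - bits)) - 1);
            u &= mask;                                      // drop low mantissa bits
        }
        std::memcpy(&x, &u, sizeof x);
        return x;
    }

    // Hypothetical placement rule: mantissa width to use inside each function.
    std::map<std::string, int> placement = {
        {"hot_loop", 10},    // aggressive approximation
        {"reduction", 23},   // roughly single-precision mantissa
        {"default", 52}      // full double precision
    };

    // An approximate multiply, as a user-defined implementation might define it:
    // compute at full precision, then discard mantissa bits per the active rule.
    double approx_mul(double a, double b, const std::string& fn) {
        auto it = placement.find(fn);
        int bits = (it != placement.end()) ? it->second : placement["default"];
        return truncate_mantissa(a * b, bits);
    }

    int main() {
        double a = 3.141592653589793, b = 2.718281828459045;
        std::printf("hot_loop : %.15f\n", approx_mul(a, b, "hot_loop"));
        std::printf("reduction: %.15f\n", approx_mul(a, b, "reduction"));
        std::printf("exact    : %.15f\n", a * b);
        return 0;
    }

NEAT itself injects such replacements at runtime through binary instrumentation rather than requiring source changes; this sketch only mirrors the effect of dropping mantissa bits under a per-function rule.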


Related research

Profile-Driven Automated Mixed Precision (06/01/2016)
We present a scheme to automatically set the precision of floating point...

A Study of the Floating-Point Tuning Behaviour on the N-body Problem (07/31/2021)
In this article, we apply a new methodology for precision tuning to the ...

FFT Convolutions are Faster than Winograd on Modern CPUs, Here is Why (09/20/2018)
Winograd-based convolution has quickly gained traction as a preferred ap...

Custom-Precision Mathematical Library Explorations for Code Profiling and Optimization (05/06/2020)
The typical processors used for scientific computing have fixed-width da...

Dissecting FLOPs along input dimensions for GreenAI cost estimations (07/26/2021)
The term GreenAI refers to a novel approach to Deep Learning, that is mo...

Regularizing Activation Distribution for Training Binarized Deep Networks (04/04/2019)
Binarized Neural Networks (BNNs) can significantly reduce the inference ...

FPIRM: Floating-point Processing in Racetrack Memories (04/28/2022)
Convolutional neural networks (CNN) have become a ubiquitous algorithm w...
