Training Neural Networks for Execution on Approximate Hardware

04/08/2023
by Tianmu Li, et al.

Approximate computing methods have shown great potential for deep learning. Because they reduce hardware costs, these methods are especially well suited to inference on battery-operated devices constrained by tight power budgets. However, approximate computing has not reached its full potential, largely because training methods for approximate hardware remain underexplored. In this work, we discuss training methods for approximate hardware, demonstrate why training must be specialized for such hardware, and propose methods that speed up the training process by up to 18X.
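The abstract does not specify the training method, but one common way to specialize training for approximate hardware is to expose the forward pass to the hardware's arithmetic error while updating a full-precision copy of the weights (a straight-through-estimator scheme). The sketch below is illustrative only: it uses uniform low-bit quantization as a stand-in error model, and the function names (`quantize`, `train_noise_aware`) and all hyperparameters are assumptions, not the paper's actual setup.

```python
import numpy as np

def quantize(a, bits=4):
    # Uniform symmetric quantization: a simple stand-in for the error
    # an approximate arithmetic unit would introduce (assumed model).
    s = float(np.max(np.abs(a)))
    if s == 0.0:
        return a
    step = s / (2 ** (bits - 1) - 1)
    return np.round(a / step) * step

def train_noise_aware(X, y, epochs=200, lr=0.5, bits=4, seed=0):
    """Hardware-aware training sketch: the forward pass sees the
    quantized ("approximate") weights, while the gradient is applied
    to the full-precision copy (straight-through estimator)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        wq = quantize(w, bits)            # forward uses approximate weights
        p = 1.0 / (1.0 + np.exp(-X @ wq)) # logistic forward pass
        grad = X.T @ (p - y) / len(y)     # gradient passed straight through
        w -= lr * grad                    # update full-precision weights
    return quantize(w, bits)              # deploy the approximate weights

# Toy linearly separable data to exercise the loop.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X @ np.array([2.0, -1.0, 0.5]) > 0).astype(float)
w_deploy = train_noise_aware(X, y)
acc = np.mean(((X @ w_deploy) > 0) == (y > 0.5))
```

Because the deployed weights are quantized both during training and at the end, the model learns parameters that remain accurate under the approximation, rather than degrading when precision is dropped only after training.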


Related research

04/29/2020 · Insights on Training Neural Networks for QUBO Tasks
Current hardware limitations restrict the potential when solving quadrat...

10/29/2020 · Toward Lattice QCD On Billion Core Approximate Computers
We present evidence of the feasibility of using billion core approximate...

03/07/2017 · Using Approximate Computing for the Calculation of Inverse Matrix p-th Roots
Approximate computing has shown to provide new ways to improve performan...

04/13/2017 · ApproxDBN: Approximate Computing for Discriminative Deep Belief Networks
Probabilistic generative neural networks are useful for many application...

05/21/2018 · AxTrain: Hardware-Oriented Neural Network Training for Approximate Inference
The intrinsic error tolerance of neural network (NN) makes approximate c...

03/11/2019 · AX-DBN: An Approximate Computing Framework for the Design of Low-Power Discriminative Deep Belief Networks
The power budget for embedded hardware implementations of Deep Learning ...

03/18/2022 · LeHDC: Learning-Based Hyperdimensional Computing Classifier
Thanks to the tiny storage and efficient execution, hyperdimensional Com...
