PocketNN: Integer-only Training and Inference of Neural Networks via Direct Feedback Alignment and Pocket Activations in Pure C++

01/08/2022
by Jaewoo Song et al.

Standard deep learning algorithms are implemented using floating-point real numbers. This presents an obstacle for implementing them on low-end devices which may not have dedicated floating-point units (FPUs). As a result, researchers in TinyML have considered machine learning algorithms that can train and run a deep neural network (DNN) on a low-end device using only integer operations. In this paper we propose PocketNN, a light and self-contained proof-of-concept framework in pure C++ for the training and inference of DNNs using only integers. Unlike other approaches, PocketNN operates directly on integers without requiring any explicit quantization algorithms or customized fixed-point formats. This is made possible by pocket activations, a family of activation functions devised for integer-only DNNs, and an emerging DNN training algorithm called direct feedback alignment (DFA). Unlike standard backpropagation (BP), DFA trains each layer independently, thus avoiding the integer overflow that is a key problem when using BP with integer-only operations. We used PocketNN to train some DNNs on two well-known datasets, MNIST and Fashion-MNIST. Our experiments show that the DNNs trained with PocketNN achieved 96.98% and 87.7% accuracies on the MNIST and Fashion-MNIST datasets, respectively. These accuracies are very close to those of equivalent DNNs trained using BP with floating-point real number operations; the accuracy degradations were just 1.02%p and 2.09%p, respectively. Finally, PocketNN has high compatibility and portability for low-end devices, as it is open source and implemented in pure C++ without any dependencies.
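To make the integer-only training idea concrete, the following is a minimal C++ sketch of one fully connected layer that combines a saturating, piecewise-linear "pocket-style" activation with a DFA-style update through a fixed random feedback matrix. This is not the PocketNN API: every name and constant here (pocket_act, kActMax, the shift-based rescaling and learning rate) is an assumption chosen for the illustration, and the real framework defines its own pocket activations and training loop.

// Minimal integer-only sketch (illustrative, not the PocketNN API):
// one fully connected layer with a saturating "pocket-style" activation
// and a DFA-style update through a fixed random feedback matrix.
// The constants (saturation bound, shift-based rescaling/learning rate)
// are assumptions made for this demo.
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

constexpr int32_t kActMax = 127; // assumed saturation bound of the activation
constexpr int     kShift  = 8;   // assumed power-of-two rescaling after matmuls

// Saturating, piecewise-linear activation: integer in, integer out.
int32_t pocket_act(int32_t x) {
    if (x >  kActMax) return  kActMax;
    if (x < -kActMax) return -kActMax;
    return x;
}

struct IntLayer {
    int in, out, errDim;
    std::vector<int32_t> W; // trainable weights, row-major [out x in]
    std::vector<int32_t> B; // fixed random DFA feedback matrix [out x errDim]

    IntLayer(int in_, int out_, int errDim_, std::mt19937& rng)
        : in(in_), out(out_), errDim(errDim_), W(out_ * in_), B(out_ * errDim_) {
        std::uniform_int_distribution<int32_t> d(-8, 8);
        for (auto& w : W) w = d(rng);
        for (auto& b : B) b = d(rng); // feedback weights are never updated
    }

    // Integer forward pass: 64-bit accumulation, shift rescale, saturate.
    std::vector<int32_t> forward(const std::vector<int32_t>& x) const {
        std::vector<int32_t> y(out);
        for (int o = 0; o < out; ++o) {
            int64_t acc = 0;
            for (int i = 0; i < in; ++i) acc += int64_t(W[o * in + i]) * x[i];
            y[o] = pocket_act(int32_t(acc >> kShift));
        }
        return y;
    }

    // DFA-style update: project the network's output error through this
    // layer's own fixed random matrix B instead of backpropagating through
    // later layers, then apply an integer outer-product update.
    void dfa_update(const std::vector<int32_t>& x,
                    const std::vector<int32_t>& err, int lrShift) {
        for (int o = 0; o < out; ++o) {
            int64_t acc = 0;
            for (int e = 0; e < errDim; ++e)
                acc += int64_t(B[o * errDim + e]) * err[e];
            const int32_t delta = int32_t(acc >> kShift);
            for (int i = 0; i < in; ++i)
                W[o * in + i] -= int32_t((int64_t(delta) * x[i]) >> lrShift);
        }
    }
};

int main() {
    std::mt19937 rng(42);
    IntLayer hidden(4, 3, 2, rng);              // 4 inputs, 3 units, error dim 2
    std::vector<int32_t> x   = {10, -5, 7, 3};  // toy integer input
    std::vector<int32_t> err = {2, -1};         // toy output-layer error
    std::vector<int32_t> h   = hidden.forward(x);
    hidden.dfa_update(x, err, /*lrShift=*/6);   // lrShift acts as an integer learning rate
    for (int32_t v : h) std::cout << v << ' ';
    std::cout << '\n';
    return 0;
}

The 64-bit accumulator followed by a power-of-two right shift is one common way to keep integer matrix multiplications from overflowing, and DFA's layer-local error projection is what lets each layer be updated without propagating gradients, and their growing magnitudes, backwards through the whole network.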


Related research

10/23/2018 - Deep Neural Network inference with reduced word length
Deep neural networks (DNN) are powerful models for many pattern recognit...

07/30/2019 - Deep Learning Training on the Edge with Low-Precision Posits
Recently, the posit numerical format has shown promise for DNN data repr...

09/28/2020 - NITI: Training Integer Neural Networks Using Integer-only Arithmetic
While integer arithmetic has been widely adopted for improved performanc...

05/12/2022 - Adaptive Block Floating-Point for Analog Deep Learning Hardware
Analog mixed-signal (AMS) devices promise faster, more energy-efficient ...

11/17/2018 - Stacking-Based Deep Neural Network: Deep Analytic Network for Pattern Classification
Stacking-based deep neural network (S-DNN) is aggregated with pluralitie...

06/25/2021 - Nonlinear Acoustic Echo Cancellation with Deep Learning
We propose a nonlinear acoustic echo cancellation system, which aims to ...

01/06/2019 - Efficient Convolutional Neural Network Training with Direct Feedback Alignment
There were many algorithms to substitute the back-propagation (BP) in th...
