
PocketNN: Integer-only Training and Inference of Neural Networks via Direct Feedback Alignment and Pocket Activations in Pure C++

by Jaewoo Song, et al.
The Hong Kong University of Science and Technology

Standard deep learning algorithms are implemented using floating-point real numbers. This presents an obstacle for implementing them on low-end devices which may not have dedicated floating-point units (FPUs). As a result, researchers in TinyML have considered machine learning algorithms that can train and run a deep neural network (DNN) on a low-end device using integer operations only. In this paper we propose PocketNN, a light and self-contained proof-of-concept framework in pure C++ for the training and inference of DNNs using only integers. Unlike other approaches, PocketNN directly operates on integers without requiring any explicit quantization algorithms or customized fixed-point formats. This was made possible by pocket activations, a family of activation functions devised for integer-only DNNs, and an emerging DNN training algorithm called direct feedback alignment (DFA). Unlike standard backpropagation (BP), DFA trains each layer independently, thus avoiding integer overflow, which is a key problem when using BP with integer-only operations. We used PocketNN to train some DNNs on two well-known datasets, MNIST and Fashion-MNIST. Our experiments show that the DNNs trained with our PocketNN achieved 96.98% and 87.7% accuracies on the MNIST and Fashion-MNIST datasets, respectively. These accuracies are very close to those of equivalent DNNs trained using BP with floating-point real number operations: the accuracy degradations were only 1.02%p and 2.09%p, respectively. Finally, our PocketNN has high compatibility and portability for low-end devices as it is open source and implemented in pure C++ without any dependencies.
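To make the two key ideas concrete, here is a minimal C++ sketch of (a) a clipped integer activation and (b) a DFA-style integer weight update. This is an illustrative assumption, not the paper's actual implementation: the paper's pocket activations are a specific family of integer activation functions, which we stand in for with a simple clipped ramp, and `dfa_update` shows only the single-weight case, replacing a floating-point learning rate with an arithmetic right shift.

```cpp
#include <algorithm>
#include <cstdint>

// Sketch of an integer-only activation. The paper's "pocket activations"
// are a purpose-built family; this clipped ramp into the int8 range
// [0, 127] is our simplifying stand-in.
inline int32_t pocket_act(int32_t x) {
    return std::clamp(x, int32_t{0}, int32_t{127});
}

// DFA-style integer update for one weight. The output error e is projected
// back through a FIXED random feedback weight b (not the transpose of the
// forward weights, as BP would use), multiplied by the layer input x, and
// scaled down with a right shift instead of a floating-point learning rate.
// Because each layer uses its own fixed feedback, layers update
// independently and intermediate products stay small, avoiding the
// overflow that chained BP multiplications would cause.
inline int32_t dfa_update(int32_t w, int32_t b, int32_t e, int32_t x,
                          int shift) {
    int32_t grad = (b * e * x) >> shift;  // integer "gradient" estimate
    return w - grad;
}
```

For example, `dfa_update(10, 2, 4, 8, 4)` computes `(2*4*8) >> 4 = 4` and returns `6`; no floating-point unit is touched at any point.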


