Density Encoding Enables Resource-Efficient Randomly Connected Neural Networks

09/19/2019
by Denis Kleyko, et al.

The deployment of machine learning algorithms on resource-constrained edge devices is an important challenge from both theoretical and applied points of view. In this article, we focus on resource-efficient randomly connected neural networks known as Random Vector Functional Link (RVFL) networks, since their simple design and extremely fast training time make them attractive for many applied classification tasks. We propose representing input features via the density-based encoding known from the area of stochastic computing, and using the binding and bundling operations from the area of hyperdimensional computing to obtain the activations of the hidden neurons. Using a collection of 121 real-world datasets from the UCI Machine Learning Repository, we empirically show that the proposed approach achieves higher average accuracy than the conventional RVFL. We also demonstrate that the readout matrix can be represented using only integers in a limited range with minimal loss in accuracy. In this case, the proposed approach operates only on small n-bit integers, which results in a computationally efficient architecture. Finally, through hardware FPGA implementations, we show that such an approach consumes approximately eleven times less energy than the conventional RVFL.
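The abstract's pipeline can be sketched in a few lines: density (thermometer) encoding of each scalar feature, binding with a random binary key via XOR, and bundling by summation clipped to a small integer range. This is a minimal illustrative sketch, not the paper's exact implementation; the dimension `n`, the clipping threshold `kappa`, and the function names are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch of the encoding described in the abstract.
# `n` (code length) and `kappa` (clipping threshold) are assumed values.
rng = np.random.default_rng(42)

def density_encode(x, n):
    """Density-based (thermometer) encoding: for x in [0, 1],
    the first round(x * n) of the n bits are set to one."""
    code = np.zeros(n, dtype=np.int8)
    code[: int(round(x * n))] = 1
    return code

def hidden_activations(features, n=128, kappa=3):
    """Bind each encoded feature to a random binary key (XOR),
    bundle by summation, and clip so that all arithmetic stays
    within a small integer range."""
    keys = rng.integers(0, 2, size=(len(features), n), dtype=np.int8)
    bound = [density_encode(x, n) ^ k for x, k in zip(features, keys)]
    bundled = np.sum(bound, axis=0, dtype=np.int32)  # bundling by summation
    return np.clip(bundled, -kappa, kappa)           # limited integer range
```

For example, `hidden_activations([0.1, 0.7, 0.9], n=64)` yields a 64-dimensional vector of small integers that can serve as hidden-layer activations; the readout layer would then be trained on such vectors.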

research
06/17/2021
Generalized Learning Vector Quantization for Classification in Randomized Neural Networks and Hyperdimensional Computing
Machine learning algorithms deployed on edge devices must meet certain r...

research
12/22/2020
FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations
Binary neural networks (BNNs) have 1-bit weights and activations. Such n...

research
11/03/2022
Hardware/Software co-design with ADC-Less In-memory Computing Hardware for Spiking Neural Networks
Spiking Neural Networks (SNNs) are bio-plausible models that hold great ...

research
09/01/2016
Ternary Neural Networks for Resource-Efficient AI Applications
The computation and storage requirements for Deep Neural Networks (DNNs)...

research
10/27/2021
Binarized ResNet: Enabling Automatic Modulation Classification at the resource-constrained Edge
In this paper, we propose a ResNet based neural architecture to solve th...

research
11/18/2017
MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks
We present MorphNet, an approach to automate the design of neural networ...
