On Effects of Compression with Hyperdimensional Computing in Distributed Randomized Neural Networks

06/17/2021
by Antonello Rosato, et al.

A shift away from the prevalent supervised learning techniques is foreseeable in the near future: from complex, computationally expensive algorithms to more flexible and elementary training schemes. The strong revitalization of randomized algorithms can be framed within this shift. We recently proposed a model for distributed classification based on randomized neural networks and hyperdimensional computing, which accounts for the cost of information exchange between agents by using compression. Compression is important because it addresses the communication bottleneck; however, the original approach is rigid in how compression is applied. In this work, we therefore propose a more flexible approach to compression and compare it against conventional compression algorithms, dimensionality reduction, and quantization techniques.
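
The abstract compares conventional compression, dimensionality reduction, and quantization for the model vectors that agents exchange. Below is a minimal, hypothetical sketch (not the authors' implementation; the function names, bit width, and shared projection seed are assumptions) illustrating two such baselines applied to a readout-weight vector before transmission: uniform scalar quantization and a shared random projection.

```python
# Minimal sketch of two compression baselines for agent-to-agent exchange.
# All names and parameters here are illustrative, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(1000)  # stand-in for a randomized NN's readout weights

def quantize_uniform(x, bits=4):
    """Uniform scalar quantization to 2**bits levels over the observed range."""
    levels = 2 ** bits
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (levels - 1)
    codes = np.round((x - lo) / step)     # integer codes an agent would transmit
    return codes * step + lo              # dequantized reconstruction at the receiver

def random_projection(x, out_dim=200, seed=42):
    """Project x to a lower dimension with a random matrix generated from a seed
    shared by all agents (a simple dimensionality-reduction stand-in)."""
    proj_rng = np.random.default_rng(seed)
    P = proj_rng.standard_normal((out_dim, x.size)) / np.sqrt(out_dim)
    return P @ x

print("4-bit quantization error:", np.linalg.norm(weights - quantize_uniform(weights)))
print("projected vector length:", random_projection(weights).size)
```

In a distributed setting, each agent would send only the integer codes (or the low-dimensional projection) instead of the full weight vector, trading reconstruction accuracy against communication cost.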


Related research

06/02/2021  Hyperdimensional Computing for Efficient Distributed Classification with Randomized Neural Networks
01/26/2019  Distributed Learning with Compressed Gradient Differences
05/18/2018  Neural Network Compression using Transform Coding and Clustering
06/07/2021  Smoothness-Aware Quantization Techniques
09/11/2023  Data efficiency, dimensionality reduction, and the generalized symmetric information bottleneck
02/21/2018  3LC: Lightweight and Effective Traffic Compression for Distributed Machine Learning
05/28/2022  ByteComp: Revisiting Gradient Compression in Distributed Training
