Streaming Batch Eigenupdates for Hardware Neuromorphic Networks

03/05/2019
by Brian D. Hoskins, et al.

Neuromorphic networks based on nanodevices, such as metal oxide memristors, phase change memories, and flash memory cells, have generated considerable interest for their increased energy efficiency and density compared to graphics processing units (GPUs) and central processing units (CPUs). Although training can be accelerated immensely by exploiting the fact that its time complexity does not scale with network size, this acceleration is limited by the space complexity of stochastic gradient descent, which grows quadratically with network size. The main objective of this work is to reduce this space complexity by using low-rank approximations of stochastic gradient descent. This low spatial complexity, combined with streaming methods, allows for significant reductions in memory and compute overhead, opening the door to improvements in the area, time, and energy efficiency of training. We refer to this algorithm, and the architecture that implements it, as the streaming batch eigenupdate (SBE) approach.
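For a crossbar-style weight matrix, the batch gradient is a sum of outer products of layer inputs and backpropagated errors, which is why storing it scales quadratically with layer width even though its rank is bounded by the batch size. The sketch below shows one way a streaming rank-k factorization of that accumulated update could be maintained; it uses an incremental truncated SVD purely for illustration, since the abstract does not spell out the paper's exact eigenupdate procedure, and all names (stream_rank_k_gradient, k, etc.) are hypothetical.

    # Minimal sketch, assuming the batch gradient has the form
    # G = sum_i delta_i x_i^T and is kept in factored form U diag(S) V^T.
    import numpy as np

    def stream_rank_k_gradient(samples, n_out, n_in, k):
        """Maintain rank-k factors (U, S, V) approximating sum_i delta_i x_i^T.

        samples: iterable of (delta, x) pairs with delta in R^n_out, x in R^n_in.
        Storage is O(k * (n_out + n_in)) instead of O(n_out * n_in).
        """
        U = np.zeros((n_out, k))
        S = np.zeros(k)
        V = np.zeros((n_in, k))

        for delta, x in samples:
            # Append the new rank-1 term to the current factors ...
            U_aug = np.column_stack([U * S, delta])   # n_out x (k+1)
            V_aug = np.column_stack([V, x])           # n_in  x (k+1)
            # ... then re-truncate to rank k via a small core SVD.
            Qu, Ru = np.linalg.qr(U_aug)
            Qv, Rv = np.linalg.qr(V_aug)
            Uc, S_new, Vct = np.linalg.svd(Ru @ Rv.T)
            U = Qu @ Uc[:, :k]
            S = S_new[:k]
            V = Qv @ Vct.T[:, :k]

        # Low-rank weight update: W -= lr * (U * S) @ V.T
        return U, S, V

In this factored form only about k * (n_out + n_in) values are stored rather than n_out * n_in, which is the kind of memory reduction the abstract describes; the update could then be applied to a device array as k rank-1 (outer-product) programming steps rather than element by element.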

