Design Space Exploration of Dense and Sparse Mapping Schemes for RRAM Architectures

01/18/2022
by Corey Lammie, et al.

The impact of device- and circuit-level effects in mixed-signal Resistive Random Access Memory (RRAM) accelerators typically manifests as performance degradation of Deep Learning (DL) algorithms, but the degree of impact varies based on algorithmic features. These include network architecture, capacity, weight distribution, and the type of inter-layer connections. Techniques are continuously emerging to efficiently train sparse neural networks, which may exhibit activation sparsity and be subject to quantization and memristive noise. In this paper, we present an extended Design Space Exploration (DSE) methodology to quantify the benefits and limitations of dense and sparse mapping schemes for a variety of network architectures. While sparse connectivity reduces power consumption and is often optimized for extracting localized features, its performance on tiled RRAM arrays may be more susceptible to noise than that of dense mapping schemes, owing to under-parameterization. Moreover, using the CIFAR-10 dataset, we present a case study quantifying and formalizing the trade-offs between typical non-idealities introduced into 1-Transistor-1-Resistor (1T1R) tiled memristive architectures and the size of modular crossbar tiles.
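
To make the tiled mapping and noise-injection mechanics concrete, the following is a minimal NumPy sketch (not the paper's simulator) that partitions a weight matrix onto fixed-size crossbar tiles, perturbs each programmed cell with Gaussian conductance noise, and compares the resulting matrix-vector multiplication error for a dense mapping against a magnitude-pruned sparse one. The tile size, noise magnitude, and 80% sparsity level are illustrative assumptions, not the paper's DSE parameters.

import numpy as np

rng = np.random.default_rng(0)

def tile_and_perturb(w, tile=64, sigma=0.05):
    """Partition w onto tile x tile crossbar tiles and add Gaussian
    noise proportional to each programmed conductance."""
    out = w.copy()
    rows, cols = w.shape
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            block = out[r:r + tile, c:c + tile]
            # 1T1R cells: noise here scales with the programmed value,
            # so pruned (zero) cells contribute no perturbation.
            block += sigma * np.abs(block) * rng.standard_normal(block.shape)
    return out

w_dense = rng.standard_normal((256, 256))

# Magnitude pruning to 80% sparsity, as a stand-in for a sparse mapping.
threshold = np.quantile(np.abs(w_dense), 0.8)
w_sparse = np.where(np.abs(w_dense) >= threshold, w_dense, 0.0)

x = rng.standard_normal(256)
for name, w in (("dense", w_dense), ("sparse", w_sparse)):
    y_ideal = w @ x
    y_noisy = tile_and_perturb(w) @ x
    rel_err = np.linalg.norm(y_noisy - y_ideal) / np.linalg.norm(y_ideal)
    print(f"{name:6s} mapping: relative MVM error = {rel_err:.4f}")

Note that this sketch only illustrates per-tile noise injection on a single matrix-vector product; the dense-versus-sparse susceptibility the abstract describes emerges at the network level (e.g., in the CIFAR-10 case study), where under-parameterized sparse models have less redundancy to absorb such perturbations.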

Related research

07/27/2021 - Griffin: Rethinking Sparse Optimization for Deep Learning Architectures
This paper examines the design space trade-offs of DNN accelerators aim...

02/04/2021 - Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training
In this paper, we introduce a new perspective on training deep neural ne...

02/25/2020 - TxSim: Modeling Training of Deep Neural Networks on Resistive Crossbar Systems
Resistive crossbars have attracted significant interest in the design of...

12/27/2021 - Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks
In principle, sparse neural networks should be significantly more effici...

06/04/2020 - Counting Cards: Exploiting Weight and Variance Distributions for Robust Compute In-Memory
Compute in-memory (CIM) is a promising technique that minimizes data tra...

09/07/2022 - Architecture-Algorithmic Trade-offs in Multi-path Channel Estimation for mmWave Systems
5G mmWave massive MIMO systems are likely to be deployed in dense urban ...

04/13/2021 - The Impact of Activation Sparsity on Overfitting in Convolutional Neural Networks
Overfitting is one of the fundamental challenges when training convoluti...
