A Deep Neural Network Deployment Based on Resistive Memory Accelerator Simulation

04/22/2023
by Tejaswanth Reddy Maram, et al.

The objective of this study is to illustrate the process of training a Deep Neural Network (DNN) in a Resistive RAM (ReRAM) crossbar-based simulation environment using CrossSim, an Application Programming Interface (API) developed for this purpose. The CrossSim API simulates neural networks while accounting for the factors that degrade solution accuracy when training on non-linear and noisy ReRAM devices. ReRAM-based neural cores that serve as memory accelerators for digital cores on a chip can significantly reduce energy consumption by minimizing data transfers between the processor and SRAM or DRAM. CrossSim uses lookup tables derived from experimental measurements of real fabricated ReRAM devices to digitally reproduce the noisy weight updates applied to the neural network. The CrossSim directory includes eight device configurations, made of various materials and operating at different temperatures. This study analyzes the results of training a neural network on the Breast Cancer Wisconsin (Diagnostic) dataset using CrossSim, plotting the inner-core weight updates and the average training and validation losses to compare the outcomes across all the devices.
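The lookup-table idea described above can be illustrated with a minimal NumPy sketch. This is not the CrossSim API: the table values, the noise magnitude, and the conductance bounds below are invented for illustration, standing in for the experimentally derived device data the abstract refers to. The sketch shows how a requested weight change is mapped through a device response table (non-linear) and then perturbed by write noise (stochastic), which is the mechanism by which such simulators degrade ideal gradient updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical device response table (a real simulator would load this from
# measured data): for each requested conductance change, the mean change the
# device actually realises. The quadratic term models a non-linear response.
requested = np.linspace(-0.1, 0.1, 21)
realised_mean = 0.8 * requested + 0.5 * requested**2

WRITE_NOISE_STD = 0.02   # assumed per-write noise level
G_MIN, G_MAX = -1.0, 1.0  # assumed conductance (weight) bounds


def noisy_update(weights, delta):
    """Apply a requested weight update filtered through the device model.

    The ideal update `delta` is replaced by the table-interpolated mean
    response plus Gaussian write noise, then clipped to the device range.
    """
    mean = np.interp(delta, requested, realised_mean)
    noise = rng.normal(0.0, WRITE_NOISE_STD, size=weights.shape)
    return np.clip(weights + mean + noise, G_MIN, G_MAX)


# Example: a small weight matrix receiving a uniform requested update.
w = np.zeros((4, 3))
w_new = noisy_update(w, np.full((4, 3), 0.05))
```

Plugging `noisy_update` into the weight-update step of an otherwise ordinary training loop is enough to reproduce the qualitative effect the study examines: the realized updates deviate from the ideal ones, and the deviation accumulates differently for each device characteristic.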


