NAX: Co-Designing Neural Network and Hardware Architecture for Memristive Xbar based Computing Systems

06/23/2021
by Shubham Negi, et al.

In-Memory Computing (IMC) hardware based on Memristive Crossbar Arrays (MCAs) is gaining popularity for accelerating Deep Neural Networks (DNNs), since it alleviates the "memory wall" problem of von Neumann architectures. For DNNs mapped to such hardware, both the hardware efficiency (energy, latency, and area) and the application accuracy (accounting for device and circuit non-idealities) depend jointly on network parameters, such as kernel size and depth, and on hardware architecture parameters, such as crossbar size. Co-optimizing network and hardware parameters, however, presents a challenging search space in which different kernel sizes map to varying crossbar sizes. To that effect, we propose NAX, an efficient neural architecture search engine that co-designs the neural network and the IMC-based hardware architecture. NAX explores this search space to determine the kernel size and corresponding crossbar size for each DNN layer, achieving an optimal trade-off between hardware efficiency and application accuracy. Our results show that the networks found by NAX have heterogeneous crossbar sizes across layers, and achieve optimal hardware efficiency and accuracy under crossbar non-idealities. On CIFAR-10 and Tiny ImageNet, our models achieve 0.8× the energy-delay-area product (EDAP) of baseline ResNet-20 and ResNet-18 models, respectively.
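The core idea described above — picking a kernel size and a crossbar size per layer under an accuracy/efficiency trade-off — can be illustrated with a toy search. Everything below (the candidate sets, the proxy scoring functions, and the additive objective) is an illustrative assumption for exposition only, not NAX's actual search method or cost model:

```python
# Toy sketch of a joint kernel-size / crossbar-size search space.
# All numbers and scoring functions are illustrative placeholders,
# NOT the trained estimators or NAS formulation used by NAX.
from itertools import product

KERNEL_SIZES = [1, 3, 5]      # hypothetical candidate conv kernel sizes
XBAR_SIZES = [32, 64, 128]    # hypothetical candidate crossbar dimensions

def proxy_accuracy(kernel, xbar):
    # Placeholder: larger kernels help accuracy slightly; larger crossbars
    # incur a small penalty standing in for analog non-idealities.
    return 0.9 + 0.01 * kernel - 0.0001 * xbar

def proxy_edap(kernel, xbar):
    # Placeholder hardware cost: grows with kernel area, shrinks as more
    # weights are packed into a single crossbar access.
    return (kernel ** 2) / xbar

def search(num_layers, alpha=0.5):
    """Pick the best (kernel, crossbar) pair for each layer.

    A real co-search must evaluate whole networks; this toy objective
    decomposes layer by layer, so exhaustive per-layer scoring suffices.
    """
    best = []
    for _ in range(num_layers):
        choice = max(
            product(KERNEL_SIZES, XBAR_SIZES),
            key=lambda c: proxy_accuracy(*c) - alpha * proxy_edap(*c),
        )
        best.append(choice)
    return best

print(search(num_layers=4))
```

With this particular toy objective every layer picks the same pair; the heterogeneous per-layer configurations NAX reports arise because, in the real problem, layer cost and accuracy contributions differ across the network.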


Related research

XPert: Peripheral Circuit Neural Architecture Co-search for Area and Energy-efficient Xbar-based Computing (03/30/2023)
The hardware-efficiency and accuracy of Deep Neural Networks (DNNs) impl...

Device-Circuit-Architecture Co-Exploration for Computing-in-Memory Neural Accelerators (10/31/2019)
Co-exploration of neural architectures and hardware design is promising ...

SIAM: Chiplet-based Scalable In-Memory Acceleration with Mesh for Deep Neural Networks (08/14/2021)
In-memory computing (IMC) on a monolithic chip for deep learning faces d...

Algorithm and Hardware Co-design for Reconfigurable CNN Accelerator (11/24/2021)
Recent advances in algorithm-hardware co-design for deep neural networks...

SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems (04/10/2022)
We design deep neural networks (DNNs) and corresponding networks' splitt...

Implementation of a Binary Neural Network on a Passive Array of Magnetic Tunnel Junctions (12/16/2021)
The increasing scale of neural networks and their growing application sp...

SALSA: Simulated Annealing based Loop-Ordering Scheduler for DNN Accelerators (04/20/2023)
To meet the growing need for computational power for DNNs, multiple spec...
