SIAM: Chiplet-based Scalable In-Memory Acceleration with Mesh for Deep Neural Networks

08/14/2021
by Gokul Krishnan, et al.

In-memory computing (IMC) on a monolithic chip for deep learning faces severe challenges in area, yield, and on-chip interconnection cost due to ever-increasing model sizes. 2.5D integration, or chiplet-based architecture, interconnects multiple small chips (i.e., chiplets) to form a large computing system, presenting a feasible solution beyond monolithic IMC architectures for accelerating large deep learning models. This paper presents a new benchmarking simulator, SIAM, to evaluate the performance of chiplet-based IMC architectures and explore the potential of such a paradigm shift in IMC architecture design. SIAM integrates device, circuit, architecture, network-on-chip (NoC), network-on-package (NoP), and DRAM access models to realize an end-to-end system. SIAM is scalable in its support of a wide range of deep neural networks (DNNs), customizable to various network structures and configurations, and capable of efficient design space exploration. We demonstrate the flexibility, scalability, and simulation speed of SIAM by benchmarking different state-of-the-art DNNs on the CIFAR-10, CIFAR-100, and ImageNet datasets. We further calibrate the simulation results against a published silicon result, SIMBA. The chiplet-based IMC architecture obtained through SIAM shows 130× and 72× improvements in energy efficiency for ResNet-50 on the ImageNet dataset compared to the NVIDIA V100 and T4 GPUs, respectively.
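To make the abstract's component list concrete, the sketch below shows the general shape of the estimate such a simulator composes: DNN weights are partitioned across IMC chiplets, and per-component energy models (crossbar compute, NoC, NoP, DRAM) are summed into an end-to-end figure. This is a hypothetical illustration, not SIAM's actual API; all class names, traffic models, and parameter values are assumed placeholders.

```python
# Hypothetical sketch of an end-to-end energy estimate for a chiplet-based
# IMC accelerator. All names and numbers are illustrative, not SIAM's.
from dataclasses import dataclass

@dataclass
class Chiplet:
    crossbars: int    # IMC crossbar arrays per chiplet
    xbar_rows: int    # rows per crossbar (one weight cell per crosspoint)
    xbar_cols: int    # columns per crossbar

    @property
    def weight_capacity(self) -> int:
        """Number of DNN weights one chiplet can hold in its crossbars."""
        return self.crossbars * self.xbar_rows * self.xbar_cols

def chiplets_needed(layer_weights, chiplet):
    """Map each layer's weights onto whole chiplets (no splitting a layer
    across chiplets in this toy mapping)."""
    cap = chiplet.weight_capacity
    return sum(-(-w // cap) for w in layer_weights)  # ceiling division

def total_energy_pj(layer_weights, chiplet,
                    e_mac=0.5,       # pJ per in-crossbar MAC (assumed)
                    e_noc_hop=1.0,   # pJ per on-chip NoC packet-hop (assumed)
                    e_nop_hop=10.0,  # pJ per package-level NoP hop (assumed)
                    e_dram=100.0):   # pJ per DRAM access (assumed)
    """Sum compute, on-chip, on-package, and DRAM energy for one inference."""
    n_chiplets = chiplets_needed(layer_weights, chiplet)
    macs = sum(layer_weights)            # one MAC per stored weight per input
    noc_packets = macs // 100            # toy on-chip traffic model
    nop_packets = n_chiplets * 1000      # toy chiplet-to-chiplet traffic model
    dram_accesses = len(layer_weights) * 10  # toy activation-spill model
    return (macs * e_mac + noc_packets * e_noc_hop
            + nop_packets * e_nop_hop + dram_accesses * e_dram)
```

The design point the paper explores falls out of loops over such a model: sweeping chiplet size, crossbar dimensions, and NoP topology and comparing the resulting energy/latency estimates, with the component models calibrated against measured silicon.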


