System and Design Technology Co-optimization of SOT-MRAM for High-Performance AI Accelerator Memory System

03/22/2023
by Kaniz Mishty, et al.

SoCs are now designed with their own AI accelerator segment to accommodate the ever-increasing demand of Deep Learning (DL) applications. With powerful Multiply-and-Accumulate (MAC) engines for matrix multiplications, these accelerators deliver high compute throughput. However, because of limited memory resources (i.e., bandwidth and capacity), they fail to achieve optimum system performance during large-batch training and inference. In this work, we propose a memory system with high on-chip capacity and bandwidth to shift AI accelerators from being memory-bound to achieving system-level peak performance. We develop the memory system with Design Technology Co-Optimization (DTCO)-enabled customized SOT-MRAM as large on-chip memory through System Technology Co-Optimization (STCO) and detailed characterization of the DL workloads. We evaluate our workload-aware memory system on Computer Vision (CV) and Natural Language Processing (NLP) benchmarks and observe significant PPA improvement over an SRAM-based memory system in both inference and training modes. Our workload-aware memory system achieves 8X energy and 9X latency improvement on CV benchmarks in training and 8X energy and 4.5X latency improvement on NLP benchmarks in training, while consuming only around 50% of the SRAM area at iso-capacity.
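As a rough illustration of the memory-bound versus compute-bound argument above, the roofline-style sketch below (Python) estimates attainable throughput from arithmetic intensity and memory bandwidth. The accelerator figures, GEMM shapes, and helper functions are illustrative assumptions, not values or methods taken from the paper.

```python
# Roofline-style estimate: attainable throughput is capped either by the MAC
# array's compute peak or by (memory bandwidth x arithmetic intensity).
# All hardware numbers and GEMM shapes below are made up for illustration.

def attainable_tflops(peak_tflops, mem_bw_gb_s, flop_per_byte):
    """Return min(compute roof, bandwidth roof) in TFLOP/s."""
    bw_roof_tflops = mem_bw_gb_s * flop_per_byte / 1000.0  # GB/s * FLOP/B = GFLOP/s
    return min(peak_tflops, bw_roof_tflops)

def gemm_intensity(m, n, k, bytes_per_elem=2):
    """Arithmetic intensity of an MxNxK GEMM in FLOP per byte (fp16 operands)."""
    flops = 2 * m * n * k                                   # one multiply + one add per MAC
    bytes_moved = (m * k + k * n + m * n) * bytes_per_elem  # read A and B, write C (no reuse assumed)
    return flops / bytes_moved

PEAK_TFLOPS = 100.0   # hypothetical MAC-array peak
MEM_BW_GB_S = 256.0   # hypothetical memory bandwidth

for m, n, k in [(64, 64, 64), (1024, 1024, 1024), (8192, 8192, 8192)]:
    ai = gemm_intensity(m, n, k)
    perf = attainable_tflops(PEAK_TFLOPS, MEM_BW_GB_S, ai)
    bound = "memory-bound" if perf < PEAK_TFLOPS else "compute-bound"
    print(f"GEMM {m}x{n}x{k}: {ai:7.1f} FLOP/B -> {perf:6.1f} TFLOP/s ({bound})")
```

With these assumed numbers, small or poorly reused GEMMs fall under the bandwidth roof rather than the compute roof; raising on-chip capacity and bandwidth, as the proposed SOT-MRAM memory system does, lifts that roof toward the accelerator's peak.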



Related research:

04/06/2021 - Designing Efficient and High-performance AI Accelerators with Customized STT-MRAM
In this paper, we demonstrate the design of efficient and high-performan...

09/28/2020 - Breaking the Memory Wall for AI Chip with a New Dimension
Recent advancements in deep learning have led to the widespread adoption...

11/30/2020 - Accelerating Bandwidth-Bound Deep Learning Inference with Main-Memory Accelerators
DL inference queries play an important role in diverse internet services...

08/09/2020 - SEALing Neural Network Models in Secure Deep Learning Accelerators
Deep learning (DL) accelerators are increasingly deployed on edge device...

02/18/2019 - Beyond the Memory Wall: A Case for Memory-centric HPC System for Deep Learning
As the models and the datasets to train deep learning (DL) models scale,...

06/08/2020 - Yield Loss Reduction and Test of AI and Deep Learning Accelerators
With data-driven analytics becoming mainstream, the global demand for de...

03/26/2021 - RCT: Resource Constrained Training for Edge AI
Neural networks training on edge terminals is essential for edge AI comp...
