Design space exploration of Ferroelectric FET based Processing-in-Memory DNN Accelerator

08/12/2019, by Insik Yoon et al.

In this letter, we quantify the impact of device limitations on the classification accuracy of an artificial neural network whose synaptic weights are implemented in a Ferroelectric FET (FeFET) based in-memory processing architecture. We explore a design space consisting of the resolution of the analog-to-digital converter, the number of bits per FeFET cell, and the neural network depth. We show how the system architecture, training models, and overparametrization can address some of the device limitations.
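
Two of the design-space axes named in the abstract, ADC resolution and bits per FeFET cell, can be illustrated with a small numerical sketch. The Python snippet below is not the authors' simulator; it is a minimal example, under assumed uniform-quantization behavior and illustrative parameter ranges, of how one might sweep those two knobs and measure the error they introduce into an in-memory matrix-vector product, the core operation whose fidelity ultimately drives classification accuracy. All function names and numbers are hypothetical.

# A minimal sketch (not the authors' simulator) of sweeping two design-space axes:
# ADC resolution and bits per FeFET cell. Quantization model and ranges are assumptions.
import numpy as np

def quantize_uniform(x, bits):
    """Uniform mid-rise quantizer over the symmetric range [-|x|_max, |x|_max]."""
    levels = 2 ** bits
    x_max = np.abs(x).max() + 1e-12
    codes = np.clip(np.floor((x + x_max) / (2 * x_max) * levels), 0, levels - 1)
    return (codes + 0.5) * (2 * x_max / levels) - x_max

def crossbar_matvec(w, x, bits_per_cell, adc_bits):
    """Matrix-vector product with cell-limited weights and ADC-limited column sums."""
    w_q = quantize_uniform(w, bits_per_cell)   # weights stored in multi-level FeFET cells
    sums = w_q @ x                             # analog accumulation along the bit lines
    return quantize_uniform(sums, adc_bits)    # column sums read out through the ADC

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(128, 256))            # one layer's weight matrix (illustrative size)
    x = rng.normal(size=256)                   # one input activation vector
    ideal = w @ x
    for bits_per_cell in (1, 2, 4):
        for adc_bits in (4, 6, 8):
            approx = crossbar_matvec(w, x, bits_per_cell, adc_bits)
            err = np.linalg.norm(ideal - approx) / np.linalg.norm(ideal)
            print(f"cell bits={bits_per_cell}, ADC bits={adc_bits}: rel. error {err:.3f}")

The third axis from the abstract, network depth, would in practice be explored by applying such quantized products layer by layer and measuring end-to-end classification accuracy rather than per-layer error.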

Related research:

On the Accuracy of Analog Neural Network Inference Accelerators (09/03/2021)
Specialized accelerators have recently garnered attention as a method to...

RAELLA: Reforming the Arithmetic for Efficient, Low-Resolution, and Low-Loss Analog PIM: No Retraining Required! (04/17/2023)
Processing-In-Memory (PIM) accelerators have the potential to efficientl...

Low Power Artificial Neural Network Architecture (04/03/2019)
Recent artificial neural network architectures improve performance and p...

Instant-NeRF: Instant On-Device Neural Radiance Field Training via Algorithm-Accelerator Co-Designed Near-Memory Processing (05/09/2023)
Instant on-device Neural Radiance Fields (NeRFs) are in growing demand f...

NeuMMU: Architectural Support for Efficient Address Translations in Neural Processing Units (11/15/2019)
To satisfy the compute and memory demands of deep neural networks, neura...

Application-driven Design Exploration for Dense Ferroelectric Embedded Non-volatile Memories (06/18/2021)
The memory wall bottleneck is a key challenge across many data-intensive...