ConvPIM: Evaluating Digital Processing-in-Memory through Convolutional Neural Network Acceleration

05/06/2023
by Orian Leitersdorf, et al.

Processing-in-memory (PIM) architectures are emerging to reduce data movement in data-intensive applications. These architectures seek to exploit the same physical devices for both information storage and logic, thereby minimizing the required data transfer and utilizing the full internal memory bandwidth. Whereas analog PIM utilizes the inherent connectivity of crossbar arrays for approximate matrix-vector multiplication in the analog domain, digital PIM architectures enable bitwise logic operations with massive parallelism across columns of data within memory arrays. Several recent works have extended the computational capabilities of digital PIM architectures towards full-precision (single-precision floating-point) acceleration of convolutional neural networks (CNNs); yet, they lack a comprehensive comparison to GPUs. In this paper, we examine the potential of digital PIM for CNN acceleration through an updated quantitative comparison with GPUs, supplemented with an analysis of the overall limitations of digital PIM. We begin by investigating the different PIM architectures from a theoretical perspective to understand the underlying performance limitations and improvements compared to state-of-the-art hardware. We then uncover the tradeoffs between the different strategies through a series of benchmarks ranging from memory-bound vectored arithmetic to CNN acceleration. We conclude with insights into the general performance of digital PIM architectures for different data-intensive applications.
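The column-parallel bitwise model mentioned above can be illustrated with a minimal sketch (not taken from the paper; all names here are hypothetical). Each memory row holds one N-bit word, and a single logic operation acts on one bit position of *all* rows simultaneously, so arithmetic is bit-serial across columns but massively parallel across rows:

```python
import numpy as np

def pim_add(a_bits, b_bits):
    """Bit-serial ripple-carry addition, modeling digital PIM.

    a_bits, b_bits: (rows, N) arrays of 0/1 bits, LSB in column 0.
    Each loop iteration models one column-wide PIM logic cycle:
    sequential over the N bit positions, parallel over all rows.
    """
    rows, n = a_bits.shape
    out = np.zeros((rows, n), dtype=np.uint8)
    carry = np.zeros(rows, dtype=np.uint8)
    for i in range(n):                      # N sequential cycles...
        a, b = a_bits[:, i], b_bits[:, i]
        out[:, i] = a ^ b ^ carry           # ...each applied to every row at once
        carry = (a & b) | (carry & (a ^ b))
    return out

def to_bits(x, n):
    """Unpack integers into an LSB-first (rows, n) bit array."""
    return ((x[:, None] >> np.arange(n)) & 1).astype(np.uint8)

def from_bits(bits):
    """Pack an LSB-first bit array back into integers."""
    return (bits.astype(np.int64) << np.arange(bits.shape[1])).sum(axis=1)

x = np.array([3, 10, 255])
y = np.array([5, 7, 1])
s = from_bits(pim_add(to_bits(x, 9), to_bits(y, 9)))
# s -> [8, 17, 256]
```

The sketch highlights the tradeoff the paper benchmarks: latency scales with the bit-width N (here, 9 cycles per addition), while throughput scales with the number of rows operated on in parallel.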


