Analog, In-memory Compute Architectures for Artificial Intelligence

01/13/2023
by Patrick Bowen, et al.

This paper presents an analysis of the fundamental limits on energy efficiency in both digital and analog in-memory computing architectures, and compares their performance to single-instruction, single-data (scalar) machines, specifically in the context of machine inference. The analysis focuses on how efficiency scales with the size, arithmetic intensity, and bit precision of the computation to be performed. It is shown that analog, in-memory computing architectures can approach arbitrarily high energy efficiency as both the problem size and the processor size scale.
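To see intuitively why in-memory architectures benefit from scale, consider arithmetic intensity (operations per unit of data moved) for an N×N matrix-vector multiply, the core kernel of machine inference. The sketch below is a toy model, not taken from the paper: it assumes a conventional machine must fetch every weight from memory per multiply, while an in-memory array keeps the weights resident so only the input and output vectors move.

```python
# Toy arithmetic-intensity model (illustrative assumption, not the paper's
# derivation) for an N x N matrix-vector multiply.

def intensity_von_neumann(n: int) -> float:
    """Weights are fetched from memory on every pass."""
    ops = 2 * n * n           # one multiply and one add per weight
    traffic = n * n + 2 * n   # weight fetches + input vector + output vector
    return ops / traffic

def intensity_in_memory(n: int) -> float:
    """Weights stay resident in the compute array; only vectors move."""
    ops = 2 * n * n           # same arithmetic work
    traffic = 2 * n           # input vector in, output vector out
    return ops / traffic

for n in (64, 1024, 16384):
    print(n, round(intensity_von_neumann(n), 2), round(intensity_in_memory(n), 1))
```

Under this model the conventional machine's intensity saturates near 2 ops per word regardless of N, while the in-memory array's intensity grows linearly with N, consistent with the abstract's claim that efficiency improves without bound as problem and processor size scale together.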
