Mixed-Signal Charge-Domain Acceleration of Deep Neural Networks through Interleaved Bit-Partitioned Arithmetic

06/27/2019
by Soroush Ghodrati, et al.

The low-power potential of mixed-signal design makes it an alluring option for accelerating Deep Neural Networks (DNNs). However, mixed-signal circuitry suffers from a limited range for information encoding, susceptibility to noise, and Analog-to-Digital (A/D) conversion overheads. This paper addresses these challenges by leveraging the insight that a vector dot-product (the basic operation in DNNs) can be bit-partitioned into groups of spatially parallel low-bitwidth operations and interleaved across multiple elements of the vectors. As such, the building blocks of our accelerator become groups of wide, yet low-bitwidth, multiply-accumulate units that operate in the analog domain and share a single A/D converter. The low-bitwidth operation tackles the encoding-range limitation and facilitates noise mitigation. Moreover, we utilize a switched-capacitor design for our bit-level reformulation of DNN operations. The proposed switched-capacitor circuitry performs the group multiplications in the charge domain and accumulates the results of the group in its capacitors over multiple cycles. The capacitive accumulation, combined with wide bit-partitioned operations, alleviates the need for A/D conversion per operation. With this mathematical reformulation and its switched-capacitor implementation, we define a 3D-stacked microarchitecture, dubbed BIHIWE.
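The bit-partitioned reformulation can be illustrated in software. The sketch below (a functional model only, not the paper's circuit) splits each 8-bit operand into 2-bit slices and computes the dot product as a sum of wide, low-bitwidth group accumulations; each group accumulates across all vector elements before a single shift-and-add, mirroring how one A/D conversion is shared per group. The function names and the 8-bit/2-bit split are illustrative assumptions.

```python
def bit_slices(x, width=8, part=2):
    # Split each element of x into little-endian `part`-bit slices.
    mask = (1 << part) - 1
    return [[(v >> (part * j)) & mask for v in x]
            for j in range(width // part)]

def bitpartitioned_dot(a, b, width=8, part=2):
    # Dot product recomposed from low-bitwidth partial dot products:
    # sum_i a_i*b_i = sum_{j,k} 2^(part*(j+k)) * sum_i A_j[i]*B_k[i]
    A, B = bit_slices(a, width, part), bit_slices(b, width, part)
    total = 0
    for j, aj in enumerate(A):
        for k, bk in enumerate(B):
            # Wide, low-bitwidth MAC group: every product here is only
            # (part x part) bits; accumulation spans the whole vector
            # before a single shift-and-add (one "conversion" per group).
            group = sum(x * y for x, y in zip(aj, bk))
            total += group << (part * (j + k))
    return total
```

The key property is that the expensive wide-range operation (the shift-and-add of `group`) happens once per slice pair, not once per multiply, which is what lets many analog MAC units share one A/D converter.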


Related research:
- Reliability-Aware Deployment of DNNs on In-Memory Analog Computing Architectures (10/02/2022)
- A Blueprint for Precise and Fault-Tolerant Analog Neural Networks (09/19/2023)
- Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks (12/05/2017)
- A Charge Domain P-8T SRAM Compute-In-Memory with Low-Cost DAC/ADC Operation for 4-bit Input Processing (11/29/2022)
- Fuse and Mix: MACAM-Enabled Analog Activation for Energy-Efficient Neural Acceleration (08/17/2022)
- Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment (09/18/2019)
- Improving Noise Tolerance of Mixed-Signal Neural Networks (04/02/2019)
