
Mixed-Signal Charge-Domain Acceleration of Deep Neural Networks through Interleaved Bit-Partitioned Arithmetic

06/27/2019
by Soroush Ghodrati, et al.
Google
Qualcomm
Microsoft
Georgia Institute of Technology
University of California, San Diego

The low-power potential of mixed-signal design makes it an alluring option for accelerating Deep Neural Networks (DNNs). However, mixed-signal circuitry suffers from a limited range for information encoding, susceptibility to noise, and Analog-to-Digital (A/D) conversion overheads. This paper aims to address these challenges by offering and leveraging the insight that a vector dot-product (the basic operation in DNNs) can be bit-partitioned into groups of spatially parallel low-bitwidth operations, interleaved across multiple elements of the vectors. As such, the building blocks of our accelerator become groups of wide, yet low-bitwidth, multiply-accumulate units that operate in the analog domain and share a single A/D converter. The low-bitwidth operation tackles the encoding-range limitation and facilitates noise mitigation. Moreover, we utilize a switched-capacitor design for our bit-level reformulation of DNN operations. The proposed switched-capacitor circuitry performs the group multiplications in the charge domain and accumulates the results of the group in its capacitors over multiple cycles. The capacitive accumulation, combined with wide bit-partitioned operations, alleviates the need for A/D conversion per operation. With this mathematical reformulation and its switched-capacitor implementation, we define a 3D-stacked microarchitecture, dubbed BIHIWE.
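To make the bit-partitioned reformulation concrete, the following is a minimal NumPy sketch (not BIHIWE's circuitry or software, and the function and parameter names are illustrative). It assumes 8-bit unsigned operands split into four 2-bit partitions: each pair of activation/weight partitions forms a wide, low-bitwidth multiply-accumulate group, whose sum the accelerator would accumulate in the charge domain and digitize once, with the shift-and-add recombination performed digitally afterward.

```python
import numpy as np

def bit_partition(x, slice_bits=2, num_slices=4):
    """Split each unsigned integer in x into num_slices slices of slice_bits bits,
    least-significant slice first."""
    mask = (1 << slice_bits) - 1
    return [(x >> (slice_bits * j)) & mask for j in range(num_slices)]

def bit_partitioned_dot(a, w, slice_bits=2, num_slices=4):
    """Compute dot(a, w) as a sum of shifted, wide, low-bitwidth MAC groups.
    Each (j, k) slice pair is one group; in hardware its sum would be formed
    in the analog (charge) domain and share a single A/D conversion."""
    a_slices = bit_partition(a, slice_bits, num_slices)
    w_slices = bit_partition(w, slice_bits, num_slices)
    total = 0
    for j in range(num_slices):
        for k in range(num_slices):
            group_sum = int(np.dot(a_slices[j], w_slices[k]))  # wide, low-bitwidth MAC group
            total += group_sum << (slice_bits * (j + k))       # digital shift-and-add recombination
    return total

# Sanity check against a full-precision dot product (hypothetical test data).
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=64, dtype=np.int64)
w = rng.integers(0, 256, size=64, dtype=np.int64)
assert bit_partitioned_dot(a, w) == int(np.dot(a, w))
```

Summing the shifted group results reproduces the full-precision dot product exactly, which is the property that lets the low-bitwidth analog groups stand in for wide digital multipliers while amortizing A/D conversion across many operations.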
