SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices

09/21/2023
by   Zhengang Li, et al.

Adiabatic Quantum-Flux-Parametron (AQFP) is a superconducting logic family with extremely high energy efficiency. Because AQFP devices use the polarity of current to denote logic '0' and '1', they serve as excellent carriers for binary neural network (BNN) computation. Although recent research has made initial strides toward an AQFP-based BNN accelerator, several critical challenges remain, preventing the design from being a comprehensive solution. In this paper, we propose SupeRBNN, an AQFP-based randomized BNN acceleration framework that leverages software-hardware co-optimization to make AQFP devices a practical solution for BNN acceleration. Specifically, we investigate the randomized behavior of AQFP devices, analyze the impact of crossbar size on current attenuation, and map the resulting current amplitudes to values suitable for BNN computation. To tackle the accumulation problem and improve overall hardware performance, we propose a stochastic computing-based accumulation module and a circuit optimization method based on clocking scheme adjustment. We validate the SupeRBNN framework across various datasets and network architectures, comparing it with implementations based on different technologies, including CMOS, ReRAM, and superconducting RSFQ/ERSFQ. Experimental results demonstrate that our design achieves approximately 7.8x10^4 times higher energy efficiency than the ReRAM-based BNN framework while maintaining a similar level of model accuracy. Furthermore, compared with superconductor-based counterparts, our framework demonstrates at least two orders of magnitude higher energy efficiency.
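The abstract does not give implementation details, but the core operation it refers to is a binarized dot product over +1/-1 values, with the sum formed by a stochastic computing-style accumulator rather than an exact adder. The sketch below is a minimal, purely illustrative Python model of that idea; all function names and parameters are hypothetical and are not taken from the paper or its hardware design.

```python
# Illustrative sketch (not the paper's implementation): a binary dot product
# in the +1/-1 domain, evaluated two ways -- exact accumulation and a
# stochastic-computing-style accumulation that averages a random sample of
# the elementwise products. All names and parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    """Map real values to {-1, +1}, mirroring the two current polarities."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def exact_bnn_dot(w, a):
    """Exact binary dot product: sum of elementwise +1/-1 products."""
    return int(np.sum(w * a))

def stochastic_accumulate(w, a, stream_len=1024):
    """Approximate the same dot product by averaging a random sample of the
    elementwise products and rescaling -- a toy stand-in for a
    stochastic-computing accumulator."""
    products = (w * a).astype(np.float64)          # each entry is +1 or -1
    idx = rng.integers(0, len(products), stream_len)
    return float(products[idx].mean() * len(products))

w = binarize(rng.standard_normal(256))
a = binarize(rng.standard_normal(256))
print("exact:     ", exact_bnn_dot(w, a))
print("stochastic:", round(stochastic_accumulate(w, a), 1))
```

With a longer sample stream the stochastic estimate converges toward the exact sum; the trade-off between stream length and accuracy is the kind of randomized behavior the framework has to account for.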


