
Training of mixed-signal optical convolutional neural network with reduced quantization level

08/20/2020
by   Joseph Ulseth, et al.

Mixed-signal artificial neural networks (ANNs) that employ analog matrix-multiplication accelerators can achieve higher speed and improved power efficiency. Although analog computing is known to be susceptible to noise and device imperfections, various analog computing paradigms have been considered promising solutions to the growing computing demand of machine learning applications, thanks to the robustness of ANNs. This robustness has been explored in low-precision, fixed-point ANN models, which have proven successful at compressing ANN model size on digital computers. However, these promising results and network training algorithms cannot be easily migrated to analog accelerators. The reason is that digital computers typically carry intermediate results at higher bit width even when the inputs and weights of each ANN layer are of low bit width, whereas analog intermediate results have low precision, analogous to digital signals with a reduced quantization level. Here we report a training method for mixed-signal ANNs whose analog signals carry two types of errors: random noise and deterministic errors (distortions). The results showed that mixed-signal ANNs trained with our proposed method can achieve an equivalent classification accuracy with noise levels up to 50%. We have demonstrated this training method on a mixed-signal optical convolutional neural network based on diffractive optics.
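The key distinction the abstract draws is that in an analog accelerator the *output* of a matrix multiplication is itself low precision, not just the inputs and weights. A minimal numpy sketch of how such a layer can be simulated during training is below; the function name, the use of a Gaussian noise model, and the noise scale tied to the quantization step are all illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def analog_matvec(W, x, n_levels=16, noise_std=0.5, rng=None):
    """Simulate an analog matrix-vector product whose result (not just
    inputs/weights) is low precision: quantize the output to n_levels and
    add Gaussian noise scaled by the quantization step.

    Hypothetical illustration of the two error types in the abstract:
    - deterministic error: quantization to a coarse grid
    - random error: additive noise on the analog signal
    """
    rng = np.random.default_rng(0) if rng is None else rng
    y = W @ x                                   # ideal (infinite-precision) result
    lo, hi = y.min(), y.max()
    step = (hi - lo) / (n_levels - 1) if hi > lo else 1.0
    y_q = lo + np.round((y - lo) / step) * step # deterministic quantization error
    return y_q + rng.normal(0.0, noise_std * step, size=y.shape)  # random noise
```

During noise-aware training, a simulated layer like this would replace the exact matrix product in the forward pass so that the learned weights become tolerant of both error types; with `noise_std=0` the function reduces to pure output quantization.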


04/02/2019

Improving Noise Tolerance of Mixed-Signal Neural Networks

Mixed-signal hardware accelerators for deep learning achieve orders of m...
06/23/2020

Inference with Artificial Neural Networks on the Analog BrainScaleS-2 Hardware

The neuromorphic BrainScaleS-2 ASIC comprises mixed-signal neurons and s...
10/11/2021

C3PU: Cross-Coupling Capacitor Processing Unit Using Analog-Mixed Signal In-Memory Computing for AI Inference

This paper presents a novel cross-coupling capacitor processing unit (C3...
06/06/2019

Training large-scale ANNs on simulated resistive crossbar arrays

Accelerating training of artificial neural networks (ANN) with analog re...
12/15/2021

TAFA: Design Automation of Analog Mixed-Signal FIR Filters Using Time Approximation Architecture

A digital finite impulse response (FIR) filter design is fully synthesiz...
02/12/2021

Dynamic Precision Analog Computing for Neural Networks

Analog electronic and optical computing exhibit tremendous advantages ov...
09/19/2017

An Analog Neural Network Computing Engine using CMOS-Compatible Charge-Trap-Transistor (CTT)

An analog neural network computing engine based on CMOS-compatible charg...