Monolithic Silicon Photonic Architecture for Training Deep Neural Networks with Direct Feedback Alignment

11/12/2021
by Zhimu Guo, et al.

The field of artificial intelligence (AI) has witnessed tremendous growth in recent years; however, some of the most pressing challenges for its continued development are the fundamental bandwidth, energy-efficiency, and speed limitations of electronic computer architectures. There has been growing interest in using photonic processors for neural network inference, but these networks are still trained using standard digital electronics. Here, we propose on-chip training of neural networks enabled by a CMOS-compatible silicon photonic architecture, harnessing the potential for massively parallel, efficient, and fast data operations. Our scheme employs the direct feedback alignment (DFA) training algorithm, which trains neural networks using error feedback rather than error backpropagation, and can operate at speeds of trillions of multiply-accumulate (MAC) operations per second while consuming less than one picojoule per MAC operation. The photonic architecture exploits parallelized matrix-vector multiplications, using arrays of microring resonators to process multi-channel analog signals along single waveguide buses and compute the gradient vector of each neural network layer in situ, which is the most computationally expensive operation of the backward pass. We also experimentally demonstrate training a deep neural network on the MNIST dataset using on-chip MAC operation results. Our approach to efficient, ultra-fast neural network training showcases photonics as a promising platform for executing AI applications.
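For readers unfamiliar with direct feedback alignment, the following minimal NumPy sketch illustrates the core idea the abstract describes: each hidden layer receives the output error projected through a *fixed random* feedback matrix, instead of the error backpropagated through transposed weight matrices. All details below (layer sizes, tanh nonlinearity, learning rate) are illustrative assumptions and are not taken from the paper or its photonic implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected network: 784 -> 256 -> 128 -> 10 (sizes are arbitrary)
sizes = [784, 256, 128, 10]
W = [rng.standard_normal((m, n)) * np.sqrt(1.0 / n)
     for n, m in zip(sizes[:-1], sizes[1:])]
# Fixed random feedback matrices, one per hidden layer, mapping the
# output error straight back to that layer -- the defining feature of DFA
B = [rng.standard_normal((m, sizes[-1])) * np.sqrt(1.0 / sizes[-1])
     for m in sizes[1:-1]]

def softmax(z):
    z = z - z.max()
    ez = np.exp(z)
    return ez / ez.sum()

def tanh_prime(a):
    return 1.0 - np.tanh(a) ** 2

def dfa_step(x, y, lr=0.01):
    """One DFA update on a single example (x: input vector, y: one-hot target)."""
    # Forward pass, keeping pre-activations for the derivative terms
    a, h = [], [x]
    for Wl in W[:-1]:
        a.append(Wl @ h[-1])
        h.append(np.tanh(a[-1]))
    logits = W[-1] @ h[-1]
    e = softmax(logits) - y  # output error (cross-entropy gradient w.r.t. logits)

    # Output layer uses the true error; each hidden layer uses B[l] @ e
    # instead of an error backpropagated through W[l+1].T
    grads = [np.outer(e, h[-1])]
    for l in reversed(range(len(W) - 1)):
        delta = (B[l] @ e) * tanh_prime(a[l])
        grads.insert(0, np.outer(delta, h[l]))
    for Wl, g in zip(W, grads):
        Wl -= lr * g
    return float(np.sum(e * e))
```

Because the feedback matrices `B` are fixed and random, the backward pass for every hidden layer reduces to an independent matrix-vector product on the output error, which is what makes the algorithm amenable to parallel analog hardware such as the microring-resonator arrays described above.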

