TRON: Transformer Neural Network Acceleration with Non-Coherent Silicon Photonics

03/22/2023
by Salma Afifi, et al.

Transformer neural networks are rapidly being integrated into state-of-the-art solutions for natural language processing (NLP) and computer vision. However, the complex structure of these models creates challenges for accelerating their execution on conventional electronic platforms. We propose TRON, the first silicon photonic hardware accelerator for transformer-based models such as BERT and Vision Transformers. Our analysis demonstrates that TRON achieves at least 14x higher throughput and 8x better energy efficiency than state-of-the-art transformer accelerators.
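
To make the acceleration target concrete, the sketch below shows scaled dot-product attention, the core computation in transformer models, in plain NumPy. The two dense matrix products (the Q-K similarity and the weighted sum over V) are the kind of operations a photonic accelerator maps onto optical multiply-accumulate hardware, while nonlinear steps such as the softmax are typically handled by supporting electronics. This is only an illustrative sketch of the workload, not TRON's actual dataflow; all names and dimensions here are chosen for the example.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Core transformer kernel: two dense matrix products around a softmax.
        # The matrix products dominate the compute and are what optical
        # multiply-accumulate units would target.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # (n, n) token-similarity matrix
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # numerically stable row-wise softmax
        return weights @ V                              # weighted sum of value vectors

    # Illustrative usage: one attention head over 8 tokens, model dimension 64.
    rng = np.random.default_rng(0)
    n, d = 8, 64
    Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
    out = scaled_dot_product_attention(Q, K, V)
    print(out.shape)  # (8, 64)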


