An Efficient FPGA-Based Accelerator for Swin Transformer

08/26/2023
by Zhiyang Liu, et al.

Since its introduction, the Swin Transformer has achieved remarkable results in computer vision, sparking demand for dedicated hardware accelerators, particularly ones catering to edge computing. Owing to their flexibility and low power consumption, FPGAs have been widely employed to accelerate the inference of convolutional neural networks (CNNs) and show potential for Transformer-based models. Unlike CNNs, which mainly involve multiply-accumulate (MAC) operations, Transformers also involve non-linear computations such as Layer Normalization (LN), Softmax, and GELU, and these non-linear computations pose challenges for accelerator design. In this paper, we propose an efficient FPGA-based hardware accelerator for the Swin Transformer, focusing on strategies for handling these non-linear calculations and on efficient MAC computation to achieve the best acceleration results. Given that Batch Normalization (BN) can be fused with linear layers during inference, we replaced LN with BN to improve inference efficiency; the modified Swin-T, Swin-S, and Swin-B achieved Top-1 accuracy rates of 80.7%, …, and …, respectively. Furthermore, we employed approximate-computation strategies to design hardware-friendly architectures for the Softmax and GELU computations, and we designed an efficient Matrix Multiplication Unit to handle all linear computations in the Swin Transformer. In conclusion, compared with a CPU (AMD Ryzen 5700X), our accelerator achieved 1.76x, 1.66x, and 1.25x speedups and 20.45x, 18.60x, and 14.63x improvements in energy efficiency (FPS per watt) on the Swin-T, Swin-S, and Swin-B models, respectively. Compared with a GPU (Nvidia RTX 2080 Ti), it achieved 5.05x, 4.42x, and 3.00x improvements in energy efficiency, respectively. To the best of our knowledge, the proposed accelerator is the fastest FPGA-based accelerator for the Swin Transformer.
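
The BN-for-LN substitution works because inference-time BatchNorm is a fixed per-channel affine transform, so it can be folded into the weights and bias of an adjacent linear layer at zero runtime cost. Below is a minimal NumPy sketch of this standard folding; the function name and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_linear_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold inference-time BatchNorm into a preceding linear layer.

    For y = x @ W.T + b followed by BN(y) = gamma*(y - mean)/sqrt(var + eps) + beta,
    BN is affine per output channel, so it absorbs into W and b.
    """
    scale = gamma / np.sqrt(var + eps)      # per-output-channel scale
    W_fused = W * scale[:, None]            # scale each output row of W
    b_fused = (b - mean) * scale + beta     # fold the BN shift into the bias
    return W_fused, b_fused

# Numerical check on random data (illustrative sizes)
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W, b = rng.standard_normal((16, 8)), rng.standard_normal(16)
gamma, beta = rng.standard_normal(16), rng.standard_normal(16)
mean, var = rng.standard_normal(16), rng.random(16) + 0.5

y_ref = gamma * ((x @ W.T + b) - mean) / np.sqrt(var + 1e-5) + beta
W_f, b_f = fuse_linear_bn(W, b, gamma, beta, mean, var)
assert np.allclose(y_ref, x @ W_f.T + b_f)
```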
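
The abstract does not spell out which approximations the Softmax and GELU units use, so the sketch below shows two widely used hardware-friendly forms under that assumption: the tanh-based GELU approximation, and a max-subtracted, base-2 Softmax whose exponent splits into an integer shift plus a small fractional term that suits a lookup table.

```python
import numpy as np

def gelu_tanh(x):
    """Tanh-based GELU approximation; avoids evaluating the exact
    Gaussian CDF (erf), which is costly to implement in hardware."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi)
                                    * (x + 0.044715 * x ** 3)))

def softmax_base2(x, axis=-1):
    """Softmax computed as 2^(z * log2(e)) after max subtraction.

    Max subtraction bounds the exponent to (-inf, 0]; in hardware the
    base-2 exponent splits into an integer part (a bit shift) and a
    fractional part (a small LUT or low-order polynomial).
    """
    z = x - np.max(x, axis=axis, keepdims=True)   # numerical stability
    p = np.exp2(z * np.log2(np.e))                # identical to exp(z)
    return p / np.sum(p, axis=axis, keepdims=True)
```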
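
All of the Swin Transformer's linear work (QKV projections, attention score and value matmuls, MLP layers) reduces to matrix multiplication, which is why one well-utilized Matrix Multiplication Unit can serve the whole network. As a rough software analogue, assuming an arbitrary 16x16 tile size not taken from the paper, an output-stationary tiling like the following mirrors how a fixed-size processing-element array is time-multiplexed over large matrices:

```python
import numpy as np

def tiled_matmul(A, B, tile=16):
    """Output-stationary tiled matrix multiply.

    Each (tile x tile) output block stays in the accumulator while
    tiles of A and B stream through, mirroring how a fixed-size PE
    array on an FPGA is reused across a large matmul.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            acc = np.zeros((min(tile, M - i), min(tile, N - j)), dtype=A.dtype)
            for k in range(0, K, tile):   # stream input tiles, accumulate in place
                acc += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
            C[i:i+tile, j:j+tile] = acc
    return C
```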

