Neural Architecture Search on Efficient Transformers and Beyond

07/28/2022
by Zexiang Liu, et al.

Recently, numerous efficient Transformers have been proposed to reduce the quadratic computational complexity of the standard Transformer caused by Softmax attention. However, most of them simply swap Softmax for an efficient attention mechanism without considering architectures customized for that attention. In this paper, we argue that handcrafted vanilla Transformer architectures designed for Softmax attention may not be suitable for efficient Transformers. To address this issue, we propose a new framework that finds optimal architectures for efficient Transformers with the neural architecture search (NAS) technique. The proposed method is validated on popular machine translation and image classification tasks. We observe that the optimal architecture of the efficient Transformer requires less computation than that of the standard Transformer, but its overall accuracy falls short. This indicates that Softmax attention and efficient attention each have their own strengths, but neither can balance accuracy and efficiency well on its own. This motivates us to mix the two types of attention to reduce the performance imbalance. Besides the search spaces commonly used in existing NAS Transformer approaches, we propose a new search space that allows the NAS algorithm to search attention variants automatically along with the architectures. Extensive experiments on WMT'14 En-De and CIFAR-10 demonstrate that our searched architecture maintains comparable accuracy to the standard Transformer with notably improved computational efficiency.
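
To make the attention-variant search space concrete, below is a minimal sketch in PyTorch. This is not the authors' code: `SoftmaxAttention`, `LinearAttention`, `ATTENTION_CHOICES`, and `sample_architecture` are hypothetical names, and the elu(x) + 1 feature map is an assumed stand-in for whichever efficient attention the paper actually searches over. It only illustrates the idea of letting the NAS algorithm pick an attention type per layer alongside other architectural choices.

```python
import random

import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftmaxAttention(nn.Module):
    """Standard scaled dot-product attention: O(n^2) in sequence length n."""

    def forward(self, q, k, v):
        scale = q.size(-1) ** -0.5
        weights = F.softmax((q @ k.transpose(-2, -1)) * scale, dim=-1)
        return weights @ v


class LinearAttention(nn.Module):
    """Assumed efficient-attention stand-in using the elu(x) + 1 feature map.

    Reordering the products as q @ (k^T v) gives O(n) complexity in n.
    """

    def forward(self, q, k, v):
        q, k = F.elu(q) + 1, F.elu(k) + 1
        kv = k.transpose(-2, -1) @ v  # (batch, d, d)
        z = q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1)  # (batch, n, 1)
        return (q @ kv) / (z + 1e-6)


# Hypothetical search space: a per-layer attention type plus an FFN width ratio.
ATTENTION_CHOICES = {"softmax": SoftmaxAttention, "linear": LinearAttention}


def sample_architecture(num_layers: int, rng: random.Random):
    """Draw one candidate architecture for a NAS controller to evaluate."""
    return [
        {
            "attention": rng.choice(sorted(ATTENTION_CHOICES)),
            "ffn_ratio": rng.choice([2, 4]),
        }
        for _ in range(num_layers)
    ]


if __name__ == "__main__":
    q = k = v = torch.randn(1, 8, 16)  # (batch, seq_len, head_dim)
    print(SoftmaxAttention()(q, k, v).shape)  # torch.Size([1, 8, 16])
    print(LinearAttention()(q, k, v).shape)   # torch.Size([1, 8, 16])
    print(sample_architecture(num_layers=6, rng=random.Random(0)))
```

Under this sketch, a searched architecture can mix the two attention types across layers, which is the mechanism the abstract proposes for reducing the accuracy-efficiency imbalance between Softmax and efficient attention.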

Related research

10/14/2022 · AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers
Neural architecture search (NAS) has demonstrated promising results on i...

07/07/2021 · GLiT: Neural Architecture Search for Global and Local Image Transformer
We introduce the first Neural Architecture Search (NAS) method to find a...

10/02/2022 · DARTFormer: Finding The Best Type Of Attention
Given the wide and ever growing range of different efficient Transformer...

11/25/2022 · MPCViT: Searching for MPC-friendly Vision Transformer with Heterogeneous Attention
Secure multi-party computation (MPC) enables computation directly on enc...

08/22/2023 · TurboViT: Generating Fast Vision Transformers via Generative Architecture Search
Vision transformers have shown unprecedented levels of performance in ta...

02/20/2021 · Towards Accurate and Compact Architectures via Neural Architecture Transformer
Designing effective architectures is one of the key factors behind the s...

03/24/2021 · Finetuning Pretrained Transformers into RNNs
Transformers have outperformed recurrent neural networks (RNNs) in natur...
