FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?

05/23/2022
by   Shikhar Tuli, et al.

The existence of a plethora of language models makes the problem of selecting the best one for a custom task challenging. Most state-of-the-art methods leverage transformer-based models (e.g., BERT) or their variants. Training such models and exploring their hyperparameter space, however, is computationally expensive. Prior work proposes several neural architecture search (NAS) methods that employ performance predictors (e.g., surrogate models) to address this issue; however, their analysis has been limited to homogeneous models that use fixed dimensionality throughout the network, which leads to sub-optimal architectures. To address this limitation, we propose a suite of heterogeneous and flexible models, namely FlexiBERT, whose encoder layers can vary in both their set of operations and their hidden dimensions. For better-posed surrogate modeling in this expanded design space, we propose a new graph-similarity-based embedding scheme. We also propose a novel NAS policy, called BOSHNAS, that leverages this new scheme, Bayesian modeling, and second-order optimization to quickly train a neural surrogate model and use it to converge to the optimal architecture. A comprehensive set of experiments shows that the proposed policy, when applied to the FlexiBERT design space, pushes the performance frontier upwards compared to traditional models. FlexiBERT-Mini, one of our proposed models, has 3% fewer parameters than BERT-Mini and achieves an 8.9% higher GLUE score. A FlexiBERT model that matches the performance of the best homogeneous model is 2.6x smaller. FlexiBERT-Large, another proposed model, achieves state-of-the-art results, outperforming the baseline models by at least 5.7% on the GLUE benchmark.
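To make the idea of a heterogeneous design space concrete, the sketch below shows one way such per-layer configurations could be represented in code. It is an illustrative assumption, not the paper's implementation: the operation names, dimension values, and the EncoderLayerConfig/FlexiConfig classes are hypothetical stand-ins for the kinds of per-layer choices (operation type and hidden dimension) that FlexiBERT allows to vary across encoder layers.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of a heterogeneous encoder configuration in the spirit of
# the FlexiBERT design space: each encoder layer may use a different operation
# type and a different hidden dimension. Names and value ranges are assumptions,
# not the paper's exact search-space definition.

@dataclass
class EncoderLayerConfig:
    operation: str    # e.g., "self_attention" or "convolution" (illustrative)
    hidden_dim: int   # per-layer hidden dimension
    num_heads: int    # attention heads (ignored for non-attention operations)

@dataclass
class FlexiConfig:
    layers: List[EncoderLayerConfig]

    def is_heterogeneous(self) -> bool:
        """True if any layer differs in operation or hidden dimension."""
        first = (self.layers[0].operation, self.layers[0].hidden_dim)
        return any((l.operation, l.hidden_dim) != first for l in self.layers[1:])

# Example: a 4-layer candidate mixing operations and hidden sizes.
candidate = FlexiConfig(layers=[
    EncoderLayerConfig("self_attention", hidden_dim=128, num_heads=2),
    EncoderLayerConfig("self_attention", hidden_dim=256, num_heads=4),
    EncoderLayerConfig("convolution",    hidden_dim=256, num_heads=0),
    EncoderLayerConfig("self_attention", hidden_dim=128, num_heads=2),
])
assert candidate.is_heterogeneous()
```

A homogeneous BERT-style model corresponds to the special case where every layer shares the same operation and hidden dimension; a NAS policy such as BOSHNAS would score many such candidate configurations with a surrogate model rather than training each one from scratch.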


