Dynamic Transformers Provide a False Sense of Efficiency

05/20/2023
by Yiming Chen, et al.

Despite much success in natural language processing (NLP), pre-trained language models typically incur a high computational cost during inference. Multi-exit architectures are a mainstream approach to this issue, trading accuracy for efficiency by allowing inputs to exit the network early. However, whether the savings from early exiting are robust remains unknown. Motivated by this, we first show that directly adapting existing adversarial attacks, which target model accuracy, cannot significantly reduce inference efficiency. To this end, we propose SAME, a simple yet effective slowdown attack framework specially tailored to undermine the efficiency of multi-exit models. By leveraging the design characteristics of multi-exit models, SAME uses all internal predictions to guide adversarial sample generation, rather than only the final prediction. Experiments on the GLUE benchmark show that SAME can effectively diminish the efficiency gain of various multi-exit models by roughly 80%, demonstrating its effectiveness and generalization ability.
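To make the core idea concrete, the sketch below illustrates what a slowdown objective over all internal exits could look like. It is a minimal illustration in the spirit of the abstract, not the authors' released implementation: the function name slowdown_loss, the per-exit logits list, and the placeholder exit heads are all assumptions. The loss pushes every exit's prediction toward the uniform distribution so that no exit becomes confident enough to trigger early termination; its gradient with respect to the input embeddings can then guide adversarial sample generation.

```python
# Hedged sketch (PyTorch) of a slowdown-style objective over all exits.
# Names (slowdown_loss, exit heads, shapes) are illustrative assumptions,
# not the SAME authors' code.
import torch
import torch.nn.functional as F

def slowdown_loss(exit_logits):
    """Aggregate a loss over every internal exit, not just the final one.

    Driving each exit toward the uniform distribution keeps its confidence
    below the early-exit threshold, so the input is forced through more
    layers and inference slows down.
    """
    total = 0.0
    for logits in exit_logits:                      # one tensor per exit head
        log_probs = F.log_softmax(logits, dim=-1)
        uniform = torch.full_like(log_probs, 1.0 / logits.size(-1))
        # KL(uniform || p) is minimized when the exit is maximally uncertain.
        total = total + F.kl_div(log_probs, uniform, reduction="batchmean")
    return total / len(exit_logits)

# Toy usage: gradients of this loss w.r.t. input embeddings can rank
# candidate word substitutions for the adversarial example.
embeddings = torch.randn(2, 16, 32, requires_grad=True)    # (batch, seq, dim)
heads = [torch.nn.Linear(32, 3) for _ in range(4)]          # placeholder exits
fake_exit_logits = [head(embeddings).mean(dim=1) for head in heads]
loss = slowdown_loss(fake_exit_logits)
loss.backward()
print(embeddings.grad.shape)
```

In this sketch, lowering the loss by perturbing the input increases uncertainty at every exit simultaneously, which is the intuition behind attacking all internal predictions rather than only the final one.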


