RoSearch: Search for Robust Student Architectures When Distilling Pre-trained Language Models

06/07/2021
by Xin Guo, et al.

Pre-trained language models achieve outstanding performance on NLP tasks. Various knowledge distillation methods have been proposed to reduce the heavy computation and storage requirements of pre-trained language models. However, from our observations, student models obtained by knowledge distillation suffer from adversarial attacks, which limits their use in security-sensitive scenarios. To overcome these security problems, RoSearch is proposed as a comprehensive framework that searches for student models with better adversarial robustness when performing knowledge distillation. A directed acyclic graph based search space is built, and an evolutionary search strategy is utilized to guide the search. Each searched architecture is trained by knowledge distillation from the pre-trained language model and then evaluated under a robustness-, accuracy- and efficiency-aware metric that serves as its environmental fitness. Experimental results show that RoSearch can improve the robustness of student models on different datasets while keeping a weight compression ratio comparable to existing distillation methods (a 4.6× to 6.5× improvement over the teacher model BERT_BASE) and a low accuracy drop. In addition, we summarize the relationship between student architecture and robustness through statistics over the searched models.
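To make the search loop described above concrete, the following is a minimal, self-contained sketch of an evolutionary architecture search driven by a robustness-, accuracy- and efficiency-aware fitness. It is only an illustration under assumptions of our own: the toy architecture encoding, the mutation rules, the random score proxies, and the fitness weights are hypothetical placeholders, not RoSearch's actual implementation.

```python
import random

# Sketch of an evolutionary search with a robustness-, accuracy- and
# efficiency-aware fitness, in the spirit of the abstract above.
# All encodings, proxies and weights below are illustrative assumptions.

OPS = ["attention", "conv", "ffn"]

def random_architecture():
    # Toy stand-in for a DAG-encoded student architecture.
    return {"layers": random.randint(2, 6),
            "hidden": random.choice([128, 256, 512]),
            "ops": [random.choice(OPS) for _ in range(4)]}

def mutate(arch):
    # Perturb one architectural choice (edge/operation mutation on the DAG).
    child = {"layers": arch["layers"], "hidden": arch["hidden"], "ops": list(arch["ops"])}
    key = random.choice(["layers", "hidden", "ops"])
    if key == "layers":
        child["layers"] = max(2, child["layers"] + random.choice([-1, 1]))
    elif key == "hidden":
        child["hidden"] = random.choice([128, 256, 512])
    else:
        child["ops"][random.randrange(len(child["ops"]))] = random.choice(OPS)
    return child

def fitness(arch):
    # Placeholder proxies: in the actual framework these would come from
    # distilling the candidate from the teacher (e.g. BERT_BASE), then
    # measuring accuracy under adversarial attack, clean accuracy and size.
    robustness = random.uniform(0.3, 0.6)
    accuracy = random.uniform(0.7, 0.9)
    efficiency = 1.0 / (arch["layers"] * arch["hidden"])   # smaller -> better
    return 0.5 * robustness + 0.4 * accuracy + 0.1 * 1000 * efficiency

def evolutionary_search(pop_size=20, generations=50, sample_size=5):
    population = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        parents = random.sample(population, sample_size)   # tournament sampling
        best_parent = max(parents, key=fitness)
        population.append(mutate(best_parent))             # add mutated child
        population.pop(0)                                   # retire the oldest candidate
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolutionary_search())
```

In the actual framework, each candidate would be distilled from the teacher and evaluated under adversarial attack before its fitness is computed; the random proxies here only keep the sketch runnable on its own.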


Related research

11/02/2022 | Gradient Knowledge Distillation for Pre-trained Language Models
Knowledge distillation (KD) is an effective framework to transfer knowle...

11/06/2022 | Robust Lottery Tickets for Pre-trained Language Models
Recent works on Lottery Ticket Hypothesis have shown that pre-trained la...

12/29/2020 | Accelerating Pre-trained Language Models via Calibrated Cascade
Dynamic early exiting aims to accelerate pre-trained language models' (P...

10/22/2021 | How and When Adversarial Robustness Transfers in Knowledge Distillation?
Knowledge distillation (KD) has been widely used in teacher-student trai...

11/20/2022 | AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation
We present a novel adversarial penalized self-knowledge distillation met...

09/15/2021 | EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation
Pre-trained language models have shown remarkable results on various NLP...

10/04/2022 | A Study on the Efficiency and Generalization of Light Hybrid Retrievers
Existing hybrid retrievers which integrate sparse and dense retrievers, ...
