Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding

01/10/2023
by Yunchang Zhu, et al.

Current natural language understanding (NLU) models have been continuously scaling up, both in terms of model size and input context, introducing more hidden and input neurons. While this generally improves performance on average, the extra neurons do not yield a consistent improvement for every instance, because some hidden neurons are redundant and the noise mixed into the input neurons tends to distract the model. To avoid this problem, previous work mainly focuses on extrinsically reducing low-utility neurons through additional post- or pre-processing, such as network pruning and context selection. Beyond that, can we make the model reduce redundant parameters and suppress input noise by intrinsically enhancing the utility of each neuron? If a model utilizes its neurons efficiently, then no matter which neurons are ablated (disabled), the ablated submodel should perform no better than the original full model. Based on this comparison principle between models, we propose a cross-model comparative loss for a broad range of tasks. Comparative loss is essentially a ranking loss on top of the task-specific losses of the full and ablated models, with the expectation that the task-specific loss of the full model is minimal. We demonstrate the universal effectiveness of comparative loss through extensive experiments on 14 datasets from 3 distinct NLU tasks, based on 4 widely used pretrained language models, and find it particularly beneficial for models with few parameters or long inputs.
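The comparison principle above can be sketched in code. The snippet below is a minimal, hedged illustration rather than the paper's exact formulation: it assumes a hinge-style ranking penalty that encourages the full model's task loss to be no larger than that of a single ablated submodel, and the names `comparative_loss`, `margin`, and `alpha` are illustrative choices, not terms from the paper.

```python
import torch
import torch.nn.functional as F


def comparative_loss(full_logits: torch.Tensor,
                     ablated_logits: torch.Tensor,
                     labels: torch.Tensor,
                     margin: float = 0.0,
                     alpha: float = 1.0) -> torch.Tensor:
    """Hinge-style comparative loss over a full model and one ablated submodel.

    A sketch for illustration: the hinge form, `margin`, and `alpha` are
    assumptions, not the paper's exact formulation.
    """
    # Task-specific losses of the full model and of the ablated submodel.
    loss_full = F.cross_entropy(full_logits, labels)
    loss_ablated = F.cross_entropy(ablated_logits, labels)

    # Ranking term: penalize the case where the full model's task loss is
    # larger than the ablated submodel's, i.e. encourage
    # loss_full <= loss_ablated, as the comparison principle expects.
    ranking_term = F.relu(loss_full - loss_ablated + margin)

    # Optimize the task loss plus the weighted comparative penalty.
    return loss_full + alpha * ranking_term
```

In training, `ablated_logits` could come from a second forward pass of the same network with some hidden or input neurons masked out (for example, with a dropout-style mask); if several ablated submodels are compared at once, the single hinge term would naturally generalize to a sum of pairwise ranking terms.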
