Deep Model Compression Also Helps Models Capture Ambiguity

06/12/2023
by Hancheol Park, et al.

Natural language understanding (NLU) tasks contain a non-trivial number of ambiguous samples whose labels are debatable among annotators. NLU models should therefore account for such ambiguity, but they approximate human opinion distributions poorly and tend to produce over-confident predictions. To address this problem, we must consider how to capture exactly the degree of relationship between each sample and its candidate classes. In this work, we propose a novel method based on deep model compression and show how such relationships can be accounted for. We find that more reasonably represented relationships can be discovered in the lower layers, and that validation accuracies converge at these layers, which naturally leads to layer pruning. We also find that distilling the relationship knowledge from a lower layer helps models produce better distributions. Experimental results demonstrate that our method substantially improves ambiguity quantification without gold distribution labels. As positive side effects, our method significantly reduces model size and improves latency, both attractive properties for NLU products.
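The abstract describes pruning layers above the point where per-layer validation accuracies converge. A minimal sketch of such a selection rule, assuming each layer has a classifier probe with a measured held-out accuracy (the function name, tolerance parameter, and convergence criterion are illustrative assumptions, not the paper's exact procedure):

```python
def select_prune_layer(layer_accs, tol=0.01):
    """Return the earliest layer index whose validation accuracy is
    within `tol` of the best layer; all layers above it would be pruned.

    layer_accs: validation accuracy of a classifier probe attached to
    each layer, ordered from the lowest layer to the highest.
    """
    best = max(layer_accs)
    for i, acc in enumerate(layer_accs):
        if acc >= best - tol:
            return i
    return len(layer_accs) - 1  # unreachable: the best layer always qualifies
```

For example, if probes on a six-layer encoder reach accuracies `[0.70, 0.80, 0.85, 0.85, 0.85, 0.85]`, the rule selects layer index 2, so the top three layers could be pruned at a cost of at most `tol` accuracy.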

Related research:

- Embracing Ambiguity: Shifting the Training Target of NLI Models (06/06/2021)
- What do Compressed Large Language Models Forget? Robustness Challenges in Model Compression (10/16/2021)
- The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers (05/02/2023)
- Impossibility of Unambiguous Communication as a Source of Failure in AI Systems (08/02/2020)
- Modeling Label Ambiguity for Neural List-Wise Learning to Rank (07/24/2017)
- Probabilistic Modeling of Semantic Ambiguity for Scene Graph Generation (03/09/2021)
