Do You Have the Right Scissors? Tailoring Pre-trained Language Models via Monte-Carlo Methods

07/13/2020
by Ning Miao, et al.

A common approach is to pre-train a language model on a large corpus and then fine-tune it on task-specific data. In practice, we observe that fine-tuning a pre-trained model on a small dataset can lead to over- and/or under-estimation problems, i.e., the model assigns too much probability mass to some regions of the data space and too little to others. In this paper, we propose MC-Tailor, a novel method that alleviates this issue in text generation tasks by truncating probability mass in over-estimated regions and transferring it to under-estimated ones. Experiments on a variety of text generation datasets show that MC-Tailor consistently and significantly outperforms standard fine-tuning. Our code is available at this url.
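The idea of moving probability mass via Monte-Carlo methods can be pictured with a small sketch. The snippet below is an illustrative toy, not the paper's implementation: it uses plain rejection sampling over a three-token vocabulary, where a fine-tuned "model" distribution p_model over-estimates one token and under-estimates another, and a proposed sample x is accepted with probability p_data(x) / (M * p_model(x)). Accepted samples then follow the target distribution p_data. The distributions p_data and p_model and the bound M are assumptions made up for illustration; MC-Tailor itself works on sequence distributions and estimates the needed density ratio rather than assuming the target density is known.

# Toy rejection-sampling sketch of "truncating and transferring"
# probability mass from over- to under-estimated regions.
# All distributions here are illustrative assumptions, not the paper's method.

import random

# Reference (data) distribution and a fine-tuned model distribution that
# over-estimates "a" and under-estimates "c".
p_data = {"a": 0.2, "b": 0.3, "c": 0.5}   # target distribution (assumed known for the toy)
p_model = {"a": 0.5, "b": 0.3, "c": 0.2}  # proposal: the fine-tuned LM

# Upper bound M on the ratio p_data(x) / p_model(x), required for exact
# rejection sampling; computable directly in this toy setting.
M = max(p_data[x] / p_model[x] for x in p_data)

def sample_model():
    """Draw one sample from the (toy) fine-tuned model distribution."""
    r, acc = random.random(), 0.0
    for x, p in p_model.items():
        acc += p
        if r <= acc:
            return x
    return x  # guard against floating-point rounding

def sample_tailored():
    """Accept a model sample x with probability p_data(x) / (M * p_model(x));
    accepted samples are distributed according to p_data."""
    while True:
        x = sample_model()
        if random.random() < p_data[x] / (M * p_model[x]):
            return x

if __name__ == "__main__":
    n = 100_000
    counts = {x: 0 for x in p_data}
    for _ in range(n):
        counts[sample_tailored()] += 1
    # Empirical frequencies should approach p_data, not p_model.
    print({x: round(c / n, 3) for x, c in counts.items()})

Running the script prints frequencies close to 0.2 / 0.3 / 0.5 (the target) rather than 0.5 / 0.3 / 0.2 (the proposal), which is the sense in which rejection removes excess mass from over-estimated tokens and leaves relatively more on under-estimated ones.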
