Improving Social Meaning Detection with Pragmatic Masking and Surrogate Fine-Tuning

08/01/2021
by Chiyu Zhang, et al.

Masked language models (MLMs) are pretrained with a denoising objective that, while useful, is mismatched with the objective of downstream fine-tuning. We propose pragmatic masking and surrogate fine-tuning as two strategies that exploit social cues to drive pre-trained representations toward a broad set of concepts useful for a wide class of social meaning tasks. To test our methods, we introduce a new benchmark of 15 different Twitter datasets for social meaning detection. Our methods achieve 2.34% F1 over a competitive baseline, while outperforming other transfer learning methods such as multi-task learning and domain-specific language models pretrained on large datasets. With only 5% of training data (severely few-shot), our methods enable an impressive 68.74 average F1, and we observe promising results in a zero-shot setting involving six datasets from three different languages.
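The abstract names two strategies but this page carries no implementation details. As a rough, hypothetical sketch only: pragmatic masking could be realized as an MLM data collator that masks social-cue tokens (hashtags, user mentions, emojis) at a higher rate than ordinary tokens, rather than masking uniformly at random. Everything below, including the function name, the cue regex, and the masking rates, is an assumption for illustration and is not taken from the paper.

```python
import random
import re

# Hypothetical sketch of pragmatic masking (not the authors' code): bias an
# MLM's token masking toward "social cue" tokens such as hashtags, user
# mentions, and emojis, instead of choosing tokens uniformly at random.
CUE_PATTERN = re.compile(
    r"^#\w+$"                                  # hashtags
    r"|^@\w+$"                                 # user mentions
    r"|[\U0001F300-\U0001FAFF\u2600-\u27BF]"   # common emoji ranges
)

def pragmatic_mask(tokens, mask_token="[MASK]", cue_rate=0.50, base_rate=0.15):
    """Mask social-cue tokens at a higher rate than ordinary tokens."""
    masked, labels = [], []
    for tok in tokens:
        rate = cue_rate if CUE_PATTERN.search(tok) else base_rate
        if random.random() < rate:
            masked.append(mask_token)
            labels.append(tok)    # target: the model reconstructs the original
        else:
            masked.append(tok)
            labels.append(None)   # position ignored by the MLM loss
    return masked, labels

tweet = "feeling blessed today #grateful \U0001F64F @bestfriend".split()
print(pragmatic_mask(tweet))
```

Surrogate fine-tuning, the second strategy, is not sketched here; per the abstract, it likewise exploits social cues to steer the pretrained representations toward social meaning concepts before task-specific fine-tuning.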


