SIFTER: A Task-specific Alignment Strategy for Enhancing Sentence Embeddings

06/21/2023
by Chao Yu, et al.

The paradigm of pre-training followed by fine-tuning on downstream tasks has become the mainstream approach in natural language processing. Although pre-trained models generalize well, their performance can still vary significantly across domain-specific tasks, because the data distribution differs from domain to domain. For example, the parts of the sentence 'He married Smt. Dipali Ghosh in 1947 and led a very happy married life' carry different weights for different downstream tasks: for similarity calculation, words such as 'led' and 'life' matter more, whereas for sentiment analysis the word 'happy' is crucial. This indicates that downstream tasks differ in their sensitivity to sentence components. Our starting point is to scale the information in the model and the data according to the specifics of the downstream task, enhancing the parts that carry task-relevant domain information and down-weighting the parts that are irrelevant to it; we call this strategy SIFTER. In our experiments, we use SIFTER to improve SimCSE by constructing positive sample pairs that enhance the sentence stem and remove unimportant components, and by maximizing the similarity among the resulting three sentences. Similarly, SIFTER can improve the gating mechanism of an LSTM by short-circuiting the input gate for important words, so that the model remembers the important parts of the sentence. Our experiments demonstrate that SIFTER outperforms the SimCSE and LSTM baselines.
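The SimCSE-style objective described above can be sketched as a three-view contrastive loss. The snippet below is a minimal illustration, not the authors' released implementation: it assumes an `encoder` that maps a batch of tokenized sentences to pooled embeddings, and that the three views of each sentence are the original, a stem-enhanced copy, and a copy with unimportant components removed. The function names, the temperature value, and the equal-weight pairing of the three views are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.05):
    # In-batch InfoNCE: each anchor's positive is the row-aligned embedding;
    # every other sentence in the batch serves as a negative.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature  # (B, B) cosine similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

def sifter_three_view_loss(encoder, orig, stem_enhanced, reduced):
    # Encode the original sentence and its two SIFTER views, then pull all
    # three embeddings of the same sentence together (one InfoNCE term per pair).
    z_o = encoder(orig)            # (B, d) pooled sentence embeddings
    z_s = encoder(stem_enhanced)
    z_r = encoder(reduced)
    return (info_nce(z_o, z_s) + info_nce(z_o, z_r) + info_nce(z_s, z_r)) / 3.0
```

The LSTM variant would analogously force the input gate toward 1 for tokens identified as important (the "short-circuit" mentioned above); that modification is not shown here.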

Related research

- 09/14/2021  Different Strokes for Different Folks: Investigating Appropriate Further Pre-training Approaches for Diverse Dialogue Tasks
  Loading models pre-trained on the large-scale corpus in the general doma...

- 11/24/2022  Using Selective Masking as a Bridge between Pre-training and Fine-tuning
  Pre-training a language model and then fine-tuning it for downstream tas...

- 08/01/2022  giMLPs: Gate with Inhibition Mechanism in MLPs
  This paper presents a new model architecture, gate with inhibition MLP (...

- 12/22/2020  ActionBert: Leveraging User Actions for Semantic Understanding of User Interfaces
  As mobile devices are becoming ubiquitous, regularly interacting with a ...

- 11/06/2019  SentiLR: Linguistic Knowledge Enhanced Language Representation for Sentiment Analysis
  Most of the existing pre-trained language representation models neglect ...

- 08/04/2020  Taking Notes on the Fly Helps BERT Pre-training
  How to make unsupervised language pre-training more efficient and less r...

- 06/11/2021  Bridging Subword Gaps in Pretrain-Finetune Paradigm for Natural Language Generation
  A well-known limitation in pretrain-finetune paradigm lies in its inflex...
