Exploring Robustness of Prefix Tuning in Noisy Data: A Case Study in Financial Sentiment Analysis

10/26/2022
by Sudhandar Balakrishnan, et al.

The invention of transformer-based models such as BERT, GPT, and RoBERTa has enabled researchers and financial companies to fine-tune these powerful models and use them in different downstream tasks to achieve state-of-the-art performance. Recently, a lightweight alternative to fine-tuning, known as prefix tuning, has been introduced; it requires approximately 0.1% of the original model parameters. This method freezes the model parameters and updates only the prefix to achieve performance comparable to full fine-tuning. Prefix tuning thus enables researchers and financial practitioners to achieve similar results with far fewer trainable parameters. In this paper, we explore the robustness of prefix tuning in the presence of noisy data. Our experiments demonstrate that fine-tuning is more robust to noise than prefix tuning: the latter suffers a significant drop in performance on most corrupted data sets as the noise level increases. Furthermore, prefix tuning exhibits higher variance in F1 scores than fine-tuning under many corruption methods. We therefore advise caution when applying the state-of-the-art prefix tuning method to noisy data.
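As a concrete illustration of the setup described above, the sketch below shows how prefix tuning for a sentiment-classification head might be configured with the Hugging Face PEFT library. The backbone checkpoint, label count, and prefix length are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch of prefix tuning for sentiment classification with PEFT.
# The checkpoint, num_labels, and num_virtual_tokens are assumptions for
# illustration; they are not the paper's reported configuration.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PrefixTuningConfig, TaskType, get_peft_model

base = "bert-base-uncased"  # assumed backbone; any BERT-style encoder works similarly
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(
    base, num_labels=3  # e.g. negative / neutral / positive financial sentiment
)

# Prefix tuning: the backbone stays frozen, and only a small set of continuous
# "virtual token" prefix vectors (plus the classification head) is trained.
peft_config = PrefixTuningConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,  # assumed prefix length
)
model = get_peft_model(model, peft_config)

# Reports the trainable parameter count -- on the order of 0.1% of the
# backbone, which is the appeal of prefix tuning over full fine-tuning.
model.print_trainable_parameters()
```

The noisy-data experiments in the paper would then train this model on corrupted versions of the sentiment corpus; the sketch above only covers the model setup, not the corruption pipeline.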

