A Novel Approach to Train Diverse Types of Language Models for Health Mention Classification of Tweets

04/13/2022
by Pervaiz Iqbal Khan, et al.

Health mention classification deals with detecting diseases in a given text that contains disease words. However, the non-health and figurative use of disease words makes the task challenging. Recently, adversarial training, acting as a means of regularization, has gained popularity in many NLP tasks. In this paper, we propose a novel approach to training language models for health mention classification of tweets that involves adversarial training. We generate adversarial examples by adding Gaussian-noise perturbations to the transformer models' representations of tweets at various layers. Further, we employ a contrastive loss as an additional objective function. We evaluate the proposed method on an extended version of the PHM2017 dataset. Results show that our approach significantly improves classifier performance over the baseline methods. Moreover, our analysis shows that adding noise at earlier layers improves model performance, whereas adding noise at intermediate layers deteriorates it. Finally, adding noise towards the final layers performs better than adding it at the middle layers.
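
As a rough illustration of this setup, the sketch below (PyTorch with the Hugging Face transformers library) injects Gaussian noise into the hidden states of one encoder layer to build a perturbed view of each tweet, then combines a cross-entropy classification loss with a contrastive (NT-Xent-style) loss between the clean and perturbed [CLS] representations. This is a minimal sketch, not the paper's implementation: the backbone model, layer index, noise scale, temperature, and loss weight are all illustrative assumptions.

    import torch
    import torch.nn.functional as F
    from transformers import AutoModel, AutoTokenizer

    # Illustrative backbone; the paper trains several transformer models.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    encoder = AutoModel.from_pretrained("bert-base-uncased")
    classifier = torch.nn.Linear(encoder.config.hidden_size, 2)  # HMC head

    def nt_xent(z1, z2, temperature=0.1):
        # Simplified contrastive (NT-Xent-style) loss: the clean and noisy
        # views of the same tweet are positives, all other pairs negatives.
        z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
        logits = z1 @ z2.t() / temperature     # (batch, batch) similarities
        targets = torch.arange(z1.size(0))     # positives on the diagonal
        return F.cross_entropy(logits, targets)

    def forward_with_noise(batch, noise_layer=2, sigma=0.01):
        # Encode once cleanly, then add Gaussian noise to one layer's
        # hidden states and re-run the remaining layers on the noisy input.
        out = encoder(**batch, output_hidden_states=True)
        clean = out.last_hidden_state[:, 0]    # clean [CLS] representation

        h = out.hidden_states[noise_layer]     # input to layer `noise_layer`
        h = h + sigma * torch.randn_like(h)
        # Additive attention mask in the shape BERT's layers expect.
        mask = batch["attention_mask"][:, None, None, :].to(h.dtype)
        mask = (1.0 - mask) * torch.finfo(h.dtype).min
        for layer in encoder.encoder.layer[noise_layer:]:
            h = layer(h, attention_mask=mask)[0]
        return clean, h[:, 0]                  # noisy [CLS] representation

    batch = tokenizer(["I came down with the flu", "this traffic is a headache"],
                      return_tensors="pt", padding=True)
    labels = torch.tensor([1, 0])              # health mention vs. figurative
    clean, noisy = forward_with_noise(batch)
    loss = F.cross_entropy(classifier(clean), labels) + 0.5 * nt_xent(clean, noisy)
    loss.backward()

Sweeping noise_layer from the embedding output to the final encoder block reproduces the kind of layer-wise comparison summarized above (earlier vs. intermediate vs. final layers).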


