Dual Adversarial Co-Learning for Multi-Domain Text Classification

09/18/2019
by Yuan Wu, et al.

In this paper, we propose a novel dual adversarial co-learning approach for multi-domain text classification (MDTC). The approach learns shared-private networks for feature extraction and deploys dual adversarial regularizations to align features across different domains and between labeled and unlabeled data simultaneously under a discrepancy-based co-learning framework, aiming to improve the classifiers' generalization capacity with the learned features. We conduct experiments on multi-domain sentiment classification datasets. The results show that the proposed approach achieves state-of-the-art MDTC performance.
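One way to read the abstract's architecture is: a shared feature extractor plus one private extractor per domain, a domain discriminator that adversarially aligns the shared features across domains, and two classifiers whose prediction discrepancy on unlabeled data acts as the second (co-learning) regularizer. The PyTorch-style sketch below illustrates that reading only; the module and function names (MDTCModel, mlp, training_losses), layer sizes, and loss weights lam and gamma are illustrative assumptions rather than the paper's exact configuration, and the min-max optimization details (e.g. gradient reversal) are omitted.

```python
# Minimal sketch of a shared-private MDTC model with dual adversarial
# regularizers. All names, sizes, and weights are illustrative assumptions,
# not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim):
    # small two-layer feed-forward block used for all components
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

class MDTCModel(nn.Module):
    def __init__(self, input_dim, feat_dim, num_domains, num_classes):
        super().__init__()
        self.shared = mlp(input_dim, feat_dim)                 # domain-invariant features
        self.private = nn.ModuleList(                          # one private extractor per domain
            [mlp(input_dim, feat_dim) for _ in range(num_domains)])
        self.domain_disc = mlp(feat_dim, num_domains)          # adversary over shared features
        # two classifiers on the concatenated features; their prediction
        # discrepancy drives the co-learning regularizer
        self.cls1 = mlp(2 * feat_dim, num_classes)
        self.cls2 = mlp(2 * feat_dim, num_classes)

    def forward(self, x, d):
        z_s = self.shared(x)
        z_p = self.private[d](x)
        z = torch.cat([z_s, z_p], dim=1)
        return self.cls1(z), self.cls2(z), self.domain_disc(z_s)

def training_losses(model, x_lab, y_lab, x_unlab, d, lam=0.1, gamma=0.1):
    # supervised loss on labeled data from domain d
    p1, p2, dom_logits = model(x_lab, d)
    sup = F.cross_entropy(p1, y_lab) + F.cross_entropy(p2, y_lab)
    # adversarial regularizer 1: shared features should not reveal the domain
    # (the shared extractor would be updated to confuse this term, e.g. via
    # gradient reversal, which is left out of this sketch)
    dom_target = torch.full((x_lab.size(0),), d, dtype=torch.long)
    dom = F.cross_entropy(dom_logits, dom_target)
    # adversarial regularizer 2: align labeled and unlabeled data by reducing
    # the discrepancy between the two classifiers on unlabeled inputs
    q1, q2, _ = model(x_unlab, d)
    disc = (F.softmax(q1, dim=1) - F.softmax(q2, dim=1)).abs().mean()
    return sup + lam * dom + gamma * disc
```

In an actual training loop these terms would be optimized adversarially (the discriminator and classifiers play against the feature extractors); the sketch only exposes the combined loss terms.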

Related research

01/30/2022 · Co-Regularized Adversarial Learning for Multi-Domain Text Classification
Multi-domain text classification (MDTC) aims to leverage all available r...

10/27/2022 · A Curriculum Learning Approach for Multi-domain Text Classification Using Keyword Weight Ranking
Text classification is a very classic NLP task, but it has two prominent...

01/29/2022 · Maximum Batch Frobenius Norm for Multi-Domain Text Classification
Multi-domain text classification (MDTC) has obtained remarkable achievem...

01/31/2021 · Mixup Regularized Adversarial Networks for Multi-Domain Text Classification
Using the shared-private paradigm and adversarial training has significa...

04/26/2022 · A Robust Contrastive Alignment Method For Multi-Domain Text Classification
Multi-domain text classification can automatically classify texts in var...

05/04/2023 · Multi-Domain Learning From Insufficient Annotations
Multi-domain learning (MDL) refers to simultaneously constructing a mode...

02/19/2021 · Conditional Adversarial Networks for Multi-Domain Text Classification
In this paper, we propose conditional adversarial networks (CANs), a fra...
