Conditional Adversarial Networks for Multi-Domain Text Classification

02/19/2021
by Yuan Wu, et al.

In this paper, we propose conditional adversarial networks (CANs) for multi-domain text classification (MDTC), a framework that exploits the relationship between the shared features and the label predictions to make the shared features more discriminative. The proposed CAN introduces a conditional domain discriminator that models the domain variance in the shared feature representations and the class-aware information simultaneously, and adopts entropy conditioning to guarantee the transferability of the shared features. We provide a theoretical analysis of the CAN framework, showing that CAN's objective is equivalent to minimizing the total divergence among the multiple joint distributions of shared features and label predictions. CAN is therefore a theoretically sound adversarial network that discriminates over multiple distributions. Evaluation results on two MDTC benchmarks show that CAN outperforms prior methods. Further experiments demonstrate that CAN generalizes learned knowledge well to unseen domains.
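The abstract names two core components: a conditional domain discriminator that couples the shared features with the class predictions, and entropy conditioning that reweights uncertain examples in the adversarial loss. Below is a minimal PyTorch sketch of how such a setup could be wired together; the multilinear (outer-product) conditioning, the entropy weighting formula, and all module names and sizes are illustrative assumptions, not the authors' exact implementation.

# Minimal sketch of a CAN-style conditional adversarial setup for MDTC.
# All names, sizes, and the outer-product conditioning are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedFeatureExtractor(nn.Module):
    def __init__(self, input_dim=5000, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, 512), nn.ReLU(),
                                 nn.Linear(512, feat_dim), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):
    def __init__(self, feat_dim=128, num_classes=2):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)
    def forward(self, f):
        return self.fc(f)

class ConditionalDomainDiscriminator(nn.Module):
    """Discriminates among domains from the joint of shared features and
    class predictions (outer product), rather than from features alone."""
    def __init__(self, feat_dim=128, num_classes=2, num_domains=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim * num_classes, 256), nn.ReLU(),
                                 nn.Linear(256, num_domains))
    def forward(self, feats, class_probs):
        # Multilinear conditioning: outer product of predictions and features.
        joint = torch.bmm(class_probs.unsqueeze(2), feats.unsqueeze(1))
        return self.net(joint.view(joint.size(0), -1))

def entropy_weights(class_probs, eps=1e-8):
    """Entropy conditioning (assumed form): down-weight uncertain examples so
    hard-to-transfer samples contribute less to the domain loss."""
    entropy = -(class_probs * torch.log(class_probs + eps)).sum(dim=1)
    return 1.0 + torch.exp(-entropy)

# Example discriminator update for one mini-batch drawn from a single domain.
F_s, C, D = SharedFeatureExtractor(), Classifier(), ConditionalDomainDiscriminator()
x = torch.randn(32, 5000)                              # bag-of-words style inputs (assumed)
domain_labels = torch.full((32,), 1, dtype=torch.long) # index of this batch's domain
feats = F_s(x)
probs = F.softmax(C(feats), dim=1)
w = entropy_weights(probs.detach())
domain_logits = D(feats, probs.detach())
d_loss = (w * F.cross_entropy(domain_logits, domain_labels, reduction='none')).mean()

In a full training loop, the shared extractor would be trained adversarially against this discriminator (e.g., via a gradient reversal layer or an alternating min-max objective) together with the per-domain classification losses.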


Related research

- 02/15/2018: Multinomial Adversarial Networks for Multi-Domain Text Classification. Many text classification tasks are known to be highly domain-dependent. ...
- 01/30/2022: Co-Regularized Adversarial Learning for Multi-Domain Text Classification. Multi-domain text classification (MDTC) aims to leverage all available r...
- 01/31/2021: Mixup Regularized Adversarial Networks for Multi-Domain Text Classification. Using the shared-private paradigm and adversarial training has significa...
- 09/18/2019: Dual Adversarial Co-Learning for Multi-Domain Text Classification. In this paper we propose a novel dual adversarial co-learning approach f...
- 02/18/2021: DINO: A Conditional Energy-Based GAN for Domain Translation. Domain translation is the process of transforming data from one domain t...
- 08/07/2021: Learning to Transfer with von Neumann Conditional Divergence. The similarity of feature representations plays a pivotal role in the su...
- 09/16/2019: Discovering Differential Features: Adversarial Learning for Information Credibility Evaluation. A series of deep learning approaches extract a large number of credibili...
