A Label Proportions Estimation Technique for Adversarial Domain Adaptation in Text Classification

03/16/2020 · Zhuohao Chen, et al.

Many text classification tasks are domain-dependent, and various domain adaptation approaches have been proposed to predict unlabeled data in a new domain. Domain-adversarial neural networks (DANN) and their variants have been used widely in recent years and have achieved state-of-the-art results for this problem. However, most of these approaches assume that the label proportions of the source and target domains are similar, which rarely holds in real-world scenarios. Sometimes the label shift is very large and the DANN fails to learn domain-invariant features. In this study, we focus on unsupervised domain adaptation of text classification with label shift and introduce a domain adversarial network with label proportions estimation (DAN-LPE) framework. The DAN-LPE simultaneously trains a domain adversarial net and performs label proportions estimation from the distributions of the predictions. Experiments show that the DAN-LPE achieves a good estimate of the target label distribution and reduces the label shift, improving the classification performance.

1 Introduction

Text classification is one of the most important tasks in natural language processing (NLP). However, many text datasets are unlabelled. Moreover, text data is often domain-dependent, and it is difficult to obtain annotated data for all the domains of interest. To handle this, researchers apply domain adaptation techniques to text classification. The study [blitzer2007biographies] first applied the structural correspondence learning (SCL) algorithm [blitzer2006domain] to cross-domain sentiment classification. In [pan2010cross], the spectral feature alignment (SFA) algorithm was proposed to reduce the gap between domain-specific words of different domains. The study [bollegala2015cross] modeled the cross-domain classification task as an embedding learning problem.

Recently, deep adversarial networks [goodfellow2014generative] have achieved success across many tasks. The domain-adversarial neural network (DANN) structure proposed by [ganin2016domain] outperforms traditional approaches in domain adaptation tasks for sentiment analysis. It implements a domain classifier to learn domain-invariant features. Several studies have extended DANN to different multi-source scenarios [liu2017adversarial, zhao2018multiple, chen2018multinomial]. However, they all assume that the label proportions across the domains remain unchanged, an assumption that is often not met in real-world tasks. Changes in the label distribution are known as prior probability shift or label shift, and they prevent the DANN from learning domain-invariant features. In [zhang2013domain], the authors apply kernel mean matching (KMM) methods to estimate the label shift. Another attempt to quantify the shift is Black Box Shift Estimation (BBSE) [lipton2018detecting, azizzadenesheli2019regularized], which obtains accurate estimates on high-dimensional datasets. Recently, [li2019target] proposed to address the label shift problem by using distribution matching to estimate the label proportions.

In this paper, we implement a domain adversarial network with label proportions estimation (DAN-LPE) framework which learns domain-invariant features and estimates the target label proportions. The proportion estimation uses only the validation confusion matrix and the target label predictions as inputs. We reduce the label shift by re-weighting the source samples in the domain classifier based on this estimate. In the experiments, we compare DAN-LPE with other algorithms and show that it leads in most of the tasks.

2 Problem Setup

Let $\mathcal{D}_S$ and $\mathcal{D}_T$ be the source and target domains defined on $\mathcal{X} \times \mathcal{Y}$, and let $f$ be a classifier. We use $X$ and $Y$ to denote the feature and label variables. The output of the classifier is denoted by $\hat{Y} = f(X)$. We use $p$ and $q$ to indicate the probability density functions of $\mathcal{D}_S$ and $\mathcal{D}_T$. The source and target datasets are represented by $\mathcal{S}$ and $\mathcal{T}$. We split $\mathcal{S}$ into a training set $\mathcal{S}_{tr}$ and a validation set $\mathcal{S}_v$. The prior label distributions of $\mathcal{D}_S$ and $\mathcal{D}_T$ are given by $p(Y)$ and $q(Y)$.

The red box in Fig. 1 shows the DANN structure, consisting of a feature extractor $F$, a text classifier $C$ and a domain classifier $D$. We expect the feature extractor to capture features satisfying $p(F(X)) = q(F(X))$ with the help of $D$, which makes the feature distributions of the source and target domains indistinguishable through back-propagation with gradient reversal. However, the performance of $C$ declines if the prior distributions of $\mathcal{D}_S$ and $\mathcal{D}_T$ differ considerably. To handle this, we implement a prior distribution estimator to estimate the target label proportions and correct the label shift by weighting the samples fed into $D$ according to the updated proportion estimate.

Figure 1: The Structure of DAN-LPE
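For concreteness, the following is a minimal PyTorch-style sketch of the gradient-reversal setup described above. The module names ($F$, $C$, $D$) mirror the text, but the layer sizes and overall architecture are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # No gradient is needed for lambd, hence the trailing None.
        return -ctx.lambd * grad_output, None


class DANN(nn.Module):
    """Feature extractor F, text classifier C and domain classifier D sharing F's output."""

    def __init__(self, input_dim, hidden_dim, num_classes):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.C = nn.Linear(hidden_dim, num_classes)   # predicts the task label
        self.D = nn.Linear(hidden_dim, 2)             # predicts the domain (source vs. target)

    def forward(self, x, lambd=1.0):
        feat = self.F(x)
        class_logits = self.C(feat)
        domain_logits = self.D(GradReverse.apply(feat, lambd))  # gradient reversal before D
        return class_logits, domain_logits
```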

3 Domain Adversarial Network With Label Proportion Estimation

3.1 Moments and Matrices Definition

We first define the validation set $\mathcal{S}_v = \{(x^v_1, y^v_1), \dots, (x^v_{n_v}, y^v_{n_v})\}$, the target set $\mathcal{T} = \{x^t_1, \dots, x^t_{n_t}\}$ and the number of classes $k$.

The moments and matrices of $\hat{Y}$ and $Y$ under $\mathcal{D}_S$ and their plug-in estimates are denoted as follows:

$p_j = p(Y = j), \quad A_{ij} = p(\hat{Y} = i \mid Y = j), \quad \hat{p}_j = \frac{1}{n_v}\sum_{l=1}^{n_v}\mathbb{1}[y^v_l = j], \quad \hat{A}_{ij} = \frac{\sum_{l=1}^{n_v}\mathbb{1}[f(x^v_l) = i,\, y^v_l = j]}{\sum_{l=1}^{n_v}\mathbb{1}[y^v_l = j]}.$

The distributions of the label predictions of $\mathcal{D}_T$ and their plug-in estimates are expressed by

$\mu_i = q(\hat{Y} = i), \qquad \hat{\mu}_i = \frac{1}{n_t}\sum_{l=1}^{n_t}\mathbb{1}[f(x^t_l) = i].$
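As a concrete illustration, here is a small NumPy sketch of these plug-in estimates under the notation reconstructed above; function and argument names are illustrative.

```python
import numpy as np


def plug_in_estimates(y_val, yhat_val, yhat_tgt, k):
    """Plug-in quantities of Section 3.1.

    y_val, yhat_val: true and predicted labels on the validation set S_v
    yhat_tgt:        predicted labels on the target set T
    Returns p_hat (k,), A_hat (k, k) with A_hat[i, j] ~ p(Y_hat = i | Y = j), and mu_hat (k,).
    """
    p_hat = np.bincount(y_val, minlength=k) / len(y_val)
    joint = np.zeros((k, k))
    np.add.at(joint, (yhat_val, y_val), 1.0)              # counts of (predicted, true) label pairs
    A_hat = joint / joint.sum(axis=0, keepdims=True)      # normalize each true-label column
    mu_hat = np.bincount(yhat_tgt, minlength=k) / len(yhat_tgt)
    return p_hat, A_hat, mu_hat
```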

3.2 Label Proportions Estimation

The crucial component of the DAN-LPE is the way of updating the label proportion estimate. We define a probability vector $\tilde{q} = (\tilde{q}_1, \dots, \tilde{q}_k)$ to estimate $q(Y)$. We assume that perfectly domain-invariant features are learnt, i.e. $p(F(X) \mid Y) = q(F(X) \mid Y)$, which implies $p(\hat{Y} \mid Y) = q(\hat{Y} \mid Y)$. When $\tilde{q} = q(Y)$, the equality $\sum_{j=1}^{k} A_{ij}\,\tilde{q}_j = \mu_i$ holds for every $i$; thus we propose the loss function

$L(\tilde{q}) = \sum_{i=1}^{k}\Big(\sum_{j=1}^{k} A_{ij}\,\tilde{q}_j - \mu_i\Big)^2.$    (1)

After replacing the population quantities with their plug-in estimates we get

$\hat{L}(\tilde{q}) = \sum_{i=1}^{k}\Big(\sum_{j=1}^{k} \hat{A}_{ij}\,\tilde{q}_j - \hat{\mu}_i\Big)^2.$    (2)

Now we relate the loss only to the observable data. Writing $\hat{A} = (\hat{A}_{ij})$ and $\hat{\mu} = (\hat{\mu}_1, \dots, \hat{\mu}_k)^{\top}$, which are computed from $\mathcal{S}_v$ and $\mathcal{T}$ alone, we get that

$\hat{L}(\tilde{q}) = \big\|\hat{A}\,\tilde{q} - \hat{\mu}\big\|_2^2.$    (3)

Then we conclude the following implication.

Lemma 3.1

Assume $p(\hat{Y} \mid Y) = q(\hat{Y} \mid Y)$ and $A$ is an invertible matrix; then $L(\tilde{q}) = 0$ is achieved if and only if $\tilde{q} = q(Y)$.

Intuitively, if $A$ were singular (for instance, a classifier that ignores its input), different label proportions could induce the same prediction distribution, and the target proportions would not be identifiable from Equation (1).

By computing the gradient we can derive

$\nabla_{\tilde{q}}\hat{L}(\tilde{q}) = 2\,\hat{A}^{\top}\big(\hat{A}\,\tilde{q} - \hat{\mu}\big).$    (4)

The proposed prior $\tilde{q}$ is updated by gradient descent on the objective in Equation (3). However, since $\tilde{q}$ is constrained by $\sum_{j=1}^{k}\tilde{q}_j = 1$, we apply projected gradient descent

$\tilde{q} \leftarrow \tilde{q} - \gamma\Big(I - \tfrac{1}{k}\mathbf{1}\mathbf{1}^{\top}\Big)\nabla_{\tilde{q}}\hat{L}(\tilde{q}),$    (5)

where $\gamma$ is the learning rate for updating $\tilde{q}$. To avoid negative proportion estimates, we also set a lower bound $\epsilon$ such that $\tilde{q}_j \geq \epsilon$ for every $j$. Once some $\tilde{q}_j < \epsilon$, we have

$\tilde{q}_j \leftarrow \max(\tilde{q}_j, \epsilon), \qquad \tilde{q} \leftarrow \tilde{q} \Big/ \sum_{l=1}^{k}\tilde{q}_l.$    (6)
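As a sanity check on the update rule, here is a minimal NumPy sketch of one iteration of Equations (4)-(6) as reconstructed above; the projection and clipping details follow a simplified reading of the text, and the default step sizes are arbitrary.

```python
import numpy as np


def update_q_tilde(q_tilde, A_hat, mu_hat, gamma=0.01, eps=1e-3):
    """One projected-gradient update of the label proportion estimate."""
    grad = 2.0 * A_hat.T @ (A_hat @ q_tilde - mu_hat)  # gradient of ||A_hat q - mu_hat||^2, Eq. (4)
    grad = grad - grad.mean()                          # keep the step on the sum-to-one hyperplane, Eq. (5)
    q_tilde = q_tilde - gamma * grad
    q_tilde = np.clip(q_tilde, eps, None)              # lower-bound the proportions, Eq. (6)
    return q_tilde / q_tilde.sum()                     # renormalize to a probability vector
```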

We define $L_c$ and $L_d$ as the loss functions of $C$ and $D$. To eliminate the prior shift, we re-weight the samples from $\mathcal{S}$ in $L_d$ based on their labels. Let $w(y) = \tilde{q}_y / p_{tr}(y)$, where $p_{tr}$ is the prior distribution of $\mathcal{S}_{tr}$. For a mini-batch of size $2b$, with instances $\{(x_1, y_1), \dots, (x_b, y_b)\}$ from $\mathcal{S}_{tr}$ and $\{x'_1, \dots, x'_b\}$ from $\mathcal{T}$, the sample weight of $(x_i, y_i)$ is $w(y_i)$. So we compute $L_d$ by

$L_d = \frac{1}{2b}\Big(\sum_{i=1}^{b} w(y_i)\,\ell\big(D(F(x_i)),\, 0\big) + \sum_{i=1}^{b} \ell\big(D(F(x'_i)),\, 1\big)\Big),$    (7)

where $\ell$ denotes the cross-entropy loss and the domain labels are $0$ for the source and $1$ for the target.
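The following PyTorch-style sketch shows one way to realize this re-weighted domain loss; tensor names are illustrative, and `q_tilde` / `p_train` are assumed to be 1-D tensors holding the estimated target and source-training label proportions.

```python
import torch
import torch.nn.functional as F


def domain_loss(src_domain_logits, src_labels, tgt_domain_logits, q_tilde, p_train):
    """Domain-classifier loss with source samples re-weighted by w(y) = q_tilde[y] / p_train[y]."""
    w = (q_tilde / p_train)[src_labels]                                   # per-sample source weights
    src_dom = torch.zeros(src_domain_logits.size(0), dtype=torch.long)    # domain label 0 = source
    tgt_dom = torch.ones(tgt_domain_logits.size(0), dtype=torch.long)     # domain label 1 = target
    src_ce = F.cross_entropy(src_domain_logits, src_dom, reduction="none")
    tgt_ce = F.cross_entropy(tgt_domain_logits, tgt_dom, reduction="none")
    return ((w * src_ce).sum() + tgt_ce.sum()) / (src_dom.numel() + tgt_dom.numel())
```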

Step 1:
Initialization: $\tilde{q} \leftarrow \hat{p}_{tr}$; hyper-parameters $t_0$, $n$, $m$, $\gamma$, $\epsilon$
for $t = 1$ to $T$ do
     Sample a mini-batch training set from $\mathcal{S}_{tr}$ and $\mathcal{T}$
     Fix $\tilde{q}$
     Update $C$ parameters using $L_c$
     Update $F$ parameters using $L_c$ and the reversed gradient of $L_d$
     Update $D$ parameters using $L_d$
     if $t > t_0$ and $t \bmod n = 0$ then
         for $j = 1$ to $m$ do
              Predict the labels of $\mathcal{S}_v$ and $\mathcal{T}$
              Update $\tilde{q}$ by Equations (4)-(6)
Step 2:
Perform DANN with fixed $\tilde{q}$ and the modified loss function of $D$ in Equation (7)
Algorithm 1: Domain Adversarial Network with Label Proportion Estimation

The complete pseudo-code of this learning procedure is given in Algorithm 1. In the first step, it alternately trains a domain adversarial net and performs label proportion estimation to obtain an estimate of the target prior distribution. During this procedure, the label shift effect is gradually reduced by the improving estimate of the label proportions, which helps the model derive better domain-invariant features. Since the label shift still matters in the early epochs, we need a second step that performs a standard DANN with the $\tilde{q}$ obtained in the first step held fixed and the modified loss function in Equation (7).

The hyper-parameters of step 1 in Algorithm 1 are quite flexible. The number of iterations $T$ is suitable as long as the validation loss does not increase noticeably. The role of $t_0$ is to guarantee that $\tilde{q}$ is only updated once a decent model has been trained. We update $\tilde{q}$ every $n$ iterations, which reduces how often the labels of $\mathcal{S}_v$ and $\mathcal{T}$ must be predicted and accelerates the process. The parameter $\gamma$ controls how fast $\tilde{q}$ changes. The DAN-LPE is not very sensitive to these hyper-parameters. When $\tilde{q}$ is fixed as the prior distribution of $\mathcal{S}_{tr}$, step 1 of Algorithm 1 is equivalent to DANN.

4 Experiments

4.1 Experiments on Yelp Data

The Yelp Open Dataset [yelpdata2019] includes 192,609 businesses and 6,685,900 reviews covering more than 20 categories. In each review, a user expresses opinions about a business and gives a rating ranging from 1 to 5. We compute the average review rating of each business and assign a binary label according to whether the average rating is high or low; businesses with intermediate average ratings are filtered out to create a gap between the two classes. We select the data of Financial Services (F), Hotels & Travel (H), Beauty & Spas (B) and Pets (P) for the tasks. Their label distributions, which vary considerably, are shown in Fig. 2a. We sample 2800 businesses from each domain, preserving the label proportions, and predict the class using their reviews. Among the samples of each domain, 10% are split off into the validation set.

For feature extraction, we find 500 shared words by intersecting the 837 most common words of each domain and form the bag-of-words representation of each review from the occurrence of these tokens. The feature vector of a business is obtained by averaging the vectors of its reviews. In the DAN-LPE setting, we implement a standard neural network with two hidden layers of 32 dimensions; the domain classifier $D$ takes the output of the first hidden layer as its input and has another hidden layer of the same size. Dropout with $p = 0.6$ is applied to all the hidden layers.
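A small sketch of this feature-extraction recipe, assuming per-domain lists of raw review strings; the helper names and whitespace tokenization are illustrative simplifications.

```python
from collections import Counter


def top_words(reviews, n=837):
    """The n most common tokens of one domain (simple whitespace tokenization)."""
    counts = Counter(tok for text in reviews for tok in text.lower().split())
    return {tok for tok, _ in counts.most_common(n)}


def business_vector(reviews, vocab):
    """Average the per-review occurrence vectors of one business over the shared vocabulary."""
    vocab = sorted(vocab)
    per_review = [[1.0 if tok in set(text.lower().split()) else 0.0 for tok in vocab]
                  for text in reviews]
    return [sum(col) / len(per_review) for col in zip(*per_review)]


# Shared vocabulary: intersection of each domain's most common words (500 words in the paper).
# vocab = set.intersection(*(top_words(d) for d in (reviews_F, reviews_H, reviews_B, reviews_P)))
```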

We compare DAN-LPE with SVM, DNN and DANN, where the DNN is built from $F$ and $C$ and the DANN from $F$, $C$ and $D$. For DNN, DANN and DAN-LPE, the learning rate is fixed and the mini-batch size is 64. For optimization, the Adam optimizer [kingma2014adam] was applied with an early stopping strategy. In the first step of DAN-LPE, the hyper-parameters $T$, $t_0$, $n$, $m$, $\gamma$ and $\epsilon$ are set to fixed values.

Task     Accuracy                                 Estimation Results
P->Q     SVM      DNN      DANN     DAN-LPE       $d(\tilde{q}, q)$    $d(p, q)$
B->H     0.881    0.882    0.884    0.885         0.11 ± 0.03          0.40
B->F     0.869    0.876    0.883    0.880         0.13 ± 0.06          0.32
P->H     0.842    0.863    0.858    0.864         0.13 ± 0.05          0.47
P->F     0.871    0.879    0.880    0.881         0.21 ± 0.04          0.38
H->B     0.862    0.861    0.858    0.872         0.03 ± 0.01          0.40
H->P     0.871    0.878    0.875    0.878         0.08 ± 0.02          0.47
F->B     0.885    0.879    0.877    0.896         0.07 ± 0.02          0.32
F->P     0.840    0.828    0.826    0.846         0.03 ± 0.01          0.38
B->P     0.884    0.892    0.893    0.893         0.02 ± 0.01          0.07
P->B     0.896    0.907    0.908    0.907         0.08 ± 0.03          0.07
H->F     0.881    0.885    0.883    0.884         0.07 ± 0.03          0.08
F->H     0.846    0.839    0.852    0.848         0.17 ± 0.05          0.08
Table 1: Results of the Yelp experiments.

To evaluate the label proportions estimation, we define $\tilde{q}$ to be the estimate given by DAN-LPE and $p$ and $q$ to be the label proportions of the samples of $\mathcal{S}$ and $\mathcal{T}$, and we report the distance $d(\tilde{q}, q)$ between the estimate and the target proportions together with the original shift $d(p, q)$.

The results are shown in Table 1. In the first eight tasks, $p$ and $q$ differ a lot, and the DANN does not show much improvement over the DNN; in some tasks it even degrades the classification performance. In these experiments, the DAN-LPE shows a significant gain because the label proportions estimate reduces the label shift. In the last four tasks, the label proportions of $\mathcal{S}$ and $\mathcal{T}$ are close and DANN gets the best accuracy in three of them. The DAN-LPE performs comparably with DANN in these tasks, since $\tilde{q}$ does not introduce much additional shift. It is worth mentioning that, given an accurate label shift estimate, we can also improve the classification accuracy by the prior probability adjustment [saerens2002adjusting], re-weighting the class importance in $C$. Here we focus on the behavior of the domain adapter and do not discuss this further.
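For reference, the prior probability adjustment of [saerens2002adjusting] mentioned above rescales the classifier posteriors by the ratio of the target and source priors; with the DAN-LPE estimate $\tilde{q}$ standing in for the unknown target prior, it would take the form

$q(y \mid x) = \dfrac{\frac{\tilde{q}_y}{p(y)}\, p(y \mid x)}{\sum_{y'} \frac{\tilde{q}_{y'}}{p(y')}\, p(y' \mid x)}.$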

Figure 2: Label distributions. (a) Yelp data; (b) behavioral coding data.

4.2 Experiments on Behavioral Coding Data

One application of text classification is behavioral coding in Motivational Interviewing (MI) counseling [miller2012motivational]. Therapist utterances are coded to evaluate the therapist based on the Motivational Interviewing Skill Code (MISC) manual [miller2003manual]. Some of these MISC codes are easily confused with each other, and training a classifier on a subset of these confusable codes helps improve the behavioral coding [chen2019improving]. In this experiment, we classify utterances of Giving Information (GI), simple reflection (RES) and complex reflection (REC) collected from MI sessions on alcohol addiction (A) and drug abuse (D) [atkins2014scaling] and from general psychology conversations (G) at a US university counseling center, with each category containing around 10,000 samples. The label proportions are shown in Fig. 2b. The modules $C$ and $D$ in the DAN-LPE structure are similar to those in the Yelp experiment, with hidden layers of dimension 128 and dropout rate $p = 0.4$. We replace the feature extractor of the Yelp experiment with a word embedding layer followed by a bidirectional LSTM layer and an attention mechanism above the LSTM, implemented as in [yang2016hierarchical]. As shown in Fig. 2b, the data is highly imbalanced, so we evaluate the performance by the average F1 score. In the module $C$ of both step 1 and step 2 of Algorithm 1, we assign each class a weight inversely proportional to its frequency to make the algorithm more robust and to improve the F-score.
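A brief sketch of this inverse-frequency class weighting; names are illustrative, and the resulting weights could, for example, be passed to a weighted cross-entropy loss such as torch.nn.CrossEntropyLoss(weight=...) for $C$.

```python
import numpy as np


def inverse_frequency_weights(labels, k):
    """Per-class weights inversely proportional to class frequency (equal to 1 on a balanced set)."""
    counts = np.bincount(labels, minlength=k).astype(float)
    return counts.sum() / (k * counts)
```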

Task     F-score                         Estimation Results
P->Q     DNN      DANN     DAN-LPE       $d(\tilde{q}, q)$    $d(p, q)$
A->G     0.496    0.492    0.503         0.14 ± 0.03          0.27
G->A     0.489    0.489    0.496         0.10 ± 0.02          0.27
D->G     0.512    0.508    0.522         0.05 ± 0.01          0.24
G->D     0.552    0.556    0.558         0.06 ± 0.02          0.24
A->D     0.627    0.639    0.644         0.13 ± 0.02          0.25
D->A     0.593    0.594    0.590         0.19 ± 0.04          0.25
Table 2: Results of the behavioral coding experiments.

The results in Table 2 show that the DAN-LPE achieves the best overall performance, with the highest F-score in all tasks except the last one, where the estimated proportions do not reduce the label shift much. The DANN only achieves an F-score comparable to that of the DNN and even degrades performance in some tasks. The DNN results suggest that the behavioral coding task is harder than the Yelp experiment, because the number of classes is larger and the behavior codes are human-defined and not orthogonal. Nevertheless, the DAN-LPE shows its robustness and still gives reasonable proportion estimates for the data in the unlabelled domain.

5 Conclusion and Future Work

In this paper, we proposed the DAN-LPE framework to handle label shift in the DANN for unsupervised domain adaptation of text classification. In DAN-LPE, we estimate the target label distribution and learn domain-invariant features simultaneously. We derived the formula to update the label proportions estimate using the validation confusion and the target label predictions. Experiments show that the DAN-LPE robustly estimates the target label distribution and improves text classification. In the future, we plan to apply the DAN-LPE to other tasks such as image classification.

References