Active Sentence Learning by Adversarial Uncertainty Sampling in Discrete Space

04/17/2020 ∙ by Dongyu Ru, et al. ∙ Shanghai Jiao Tong University ∙ ByteDance Inc.

In this paper, we focus on reducing the labeled data size for sentence learning. We argue that real-time uncertainty sampling for active learning is time-consuming, while delayed uncertainty sampling may lead to the ineffective sampling problem. We propose adversarial uncertainty sampling in discrete space, in which sentences are mapped into the encoding space of a popular pre-trained language model. Our proposed approach can work in real time and is more efficient than traditional uncertainty sampling. Experimental results on five datasets show that our proposed approach outperforms strong baselines and achieves better uncertainty sampling effectiveness with acceptable running time.


1 Introduction

Recently, unsupervised neural models pre-trained on language modeling tasks, such as ELMo Peters et al. (2018), OpenAI GPT Radford et al. (2018), and BERT Devlin et al. (2018), have shown impressive improvements in various natural language processing (NLP) tasks. With the massive universal knowledge learned by pre-trained language models such as BERT, we can use less task-specific knowledge to solve downstream tasks; in other words, we may need less labeled data for training. Much recent work has focused on boosting downstream task performance with pre-trained language models (LMs). In contrast, in this paper, we focus on a different question: can we use less labeled data with these models when learning downstream NLP tasks?

Active learning approaches such as uncertainty sampling Lewis and Gale (1994) are a straightforward choice for reducing the labeled data required for training. Uncertainty sampling traverses all unlabeled data to find informative samples; specifically, these informative samples lie near the decision boundary and have larger entropy. However, the traversal is very time-consuming and thus cannot be conducted frequently Settles and Craven (2008). A common choice is to perform the sampling process at fixed intervals, usually after every 10% or 20% of the data has been labeled and trained on Deng et al. (2018).
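To make the cost concrete, below is a minimal sketch of this classical entropy-based pool traversal (in PyTorch; model and unlabeled_loader are hypothetical stand-ins, and the loader is assumed to yield (index, batch) pairs over the entire pool):

import torch

def uncertainty_sample(model, unlabeled_loader, k):
    # Classical uncertainty sampling: score EVERY unlabeled example with one
    # forward pass over the whole pool, then keep the top-k by entropy.
    # The full traversal is what makes each sampling run cost O(N).
    model.eval()
    scores, indices = [], []
    with torch.no_grad():
        for idx, batch in unlabeled_loader:   # iterates over the entire pool
            probs = torch.softmax(model(batch), dim=-1)
            entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
            scores.append(entropy)
            indices.append(idx)
    scores, indices = torch.cat(scores), torch.cat(indices)
    return indices[scores.topk(k).indices]    # the k most uncertain examples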

We argue that uncertainty sampling at such fixed intervals is not necessarily the best choice. Performing uncertainty sampling infrequently may lead to the “ineffective sampling” problem: in the early phase of training, the decision boundary changes quickly, so the uncertain samples selected at one point become less informative after several updates of the model. Ideally, uncertainty sampling should be performed very frequently in the early phase of model training.

In this paper, we propose adversarial uncertainty sampling in discrete space (AUSDS) to address the ineffective sampling problem for active sentence learning, aiming to reduce the labeled data needed for sentence prediction. Here, sentence learning refers to the learning of NLP tasks such as text classification, sequence labeling, etc. We borrow the idea of adversarial attacks Goodfellow et al. (2014); Kurakin et al. (2016) for uncertainty sampling. The intuition is that both uncertainty sampling and adversarial attacks seek uncertain samples near the decision boundary of the current model. Traditional uncertainty sampling finds uncertain samples through a costly traversal of all unlabeled samples (O(N) per sampling run, where N is the size of the unlabeled pool), while adversarial attack algorithms directly find local approximations by computing partial derivatives on the current training batch (O(1) with respect to N per sampling process), which is much more efficient given a large unlabeled dataset and thus allows uncertainty sampling to be performed much more frequently.

Figure 1: Comparison between uncertainty sampling and AUSDS.

However, it is non-trivial to perform adversarial uncertainty sampling for sentence learning: we cannot directly compute adversarial gradients in the sentence space, since that space is discrete. We propose to use a neural encoder to map unlabeled sentences into a continuous space and to perform adversarial attacks there. Specifically, we use a pre-trained LM such as BERT as the encoder, which provides a continuous hidden space for sentence representations. We map every unlabeled sentence into the encoding space and then obtain adversarial data points of these sentences in that space. Since not every data point in the encoding space maps back to an unlabeled sentence, we use the k-nearest neighbor (KNN) algorithm Altman (1992) to find the unlabeled sentences most similar to the adversarial data points (the adversarial samples). Note that KNN search can be very fast on GPUs with open-source implementations; we compare the running time in the experiments.

Fig. 1 shows the difference between uncertainty sampling and AUSDS. In addition, we empirically mix some random samples into the uncertainty samples to alleviate the sampling bias problem noted by Huang et al. (2010). We deploy AUSDS for active sentence learning and conduct experiments on five datasets across two NLP tasks, namely sequence classification and sequence labeling. Experimental results show that AUSDS outperforms random sampling and uncertainty sampling strategies. Further analyses show that AUSDS achieves the best sampling effectiveness while keeping running time linear and comparable to random sampling.

Our contributions are summarized as follows:

  • We propose AUSDS for active sentence learning, which is, to our knowledge, the first to introduce adversarial attacks into sentence uncertainty sampling, alleviating the ineffective sampling problem.

  • We propose to map sentences into the pre-trained LM encoding space, which makes adversarial uncertainty sampling feasible despite the discreteness of the sentence space.

  • Experimental results demonstrate that the AUSDS assisted learning framework outperforms strong baselines in sampling effectiveness with acceptable running time.

2 Related Work

This work focuses on reducing the labeled data size, with the help of pre-trained LMs, for solving sequence learning tasks. The proposed AUSDS approach relates to two research topics: active learning and adversarial attacks.

Figure 2: Overview of the AUSDS-assisted active sentence learning framework. Notations are labeled alongside the corresponding components.

2.1 Active Learning

Active learning algorithms can be categorized into three scenarios: membership query synthesis, stream-based selective sampling, and pool-based active learning Settles (2009). Our work falls under pool-based active learning, which assumes a small set of labeled data and a large pool of unlabeled data Lewis and Gale (1994). To reduce label complexity, the learner starts from the labeled data, selects one or more queries from the unlabeled pool for annotation, learns from the newly labeled data, and repeats.

The pool-based active learning scenario has been studied in many real-world applications, such as text classification Lewis and Gale (1994); Hoi et al. (2006), information extraction Settles and Craven (2008), and image classification Joshi et al. (2009). Among the query strategies of existing active learning approaches, uncertainty sampling Joshi et al. (2009); Lewis and Gale (1994) is the most popular and widely used. Its basic idea is to enumerate the unlabeled samples and compute an uncertainty measure, such as information entropy, for each one. This enumeration and uncertainty computation make the sampling process costly, so it cannot be performed frequently, which induces the ineffective sampling problem.

Some works focus on accelerating the costly uncertainty sampling process. Jain et al. (2010) propose a hashing method that accelerates the sampling process to sub-linear time. Deng et al. (2018) propose training an adversarial discriminator to select informative samples directly, avoiding the rather costly sequence entropy computation. Nevertheless, these approaches remain computationally expensive and cannot be performed frequently, so the ineffective sampling problem persists.

2.2 Adversarial Attack

Adversarial attacks were originally designed to approximate the smallest perturbation of a given latent state that crosses the decision boundary Goodfellow et al. (2014); Kurakin et al. (2016). As machine learning models are often vulnerable to adversarial samples, adversarial attacks serve as an important surrogate for evaluating the robustness of deep learning models before deployment Biggio et al. (2013); Szegedy et al. (2013). Existing adversarial attack approaches can be categorized into three groups: one-step gradient-based approaches Goodfellow et al. (2014); Rozsa et al. (2016), iterative methods Kurakin et al. (2016), and optimization-based methods Szegedy et al. (2013).

Inspired by the shared goal of adversarial attacks and uncertainty sampling, in this paper we treat adversarial attacks not as a threat but as a tool, combining the two approaches to achieve real-time uncertainty sampling. Several works explore related but distinct ideas. Li et al. (2018) introduce active learning strategies into black-box attacks to improve query efficiency. Zhu and Bento (2017) train generative adversarial networks to generate samples that directly minimize the distance to the decision boundary, which falls in the query synthesis scenario, unlike ours. Ducoffe and Precioso (2018) also bring adversarial attacks into active learning by augmenting the training set with adversarial samples of unlabeled data, which differs fundamentally from our work because it operates in a continuous space. None of the works above share our problem setting.

3 Adversarial Uncertainty Sampling in Discrete Space

Input: an unlabeled text corpus U, an oracle O, an initial training set D_0, hyperparameters s and t used to control the frequency of fine-tuning the encoder.
Output: the well-trained model M = (f_enc, f_dec)

1:  Load the pre-trained LM as the encoder f_enc
2:  Build the decoder f_dec, define the latent states H and the encoding space E according to the downstream task
3:  Initialize the accumulated labeled data set D_acc ← D_0
4:  Train M with D_acc
5:  Construct a bidirectional mapper Φ between the unlabeled sequences in U and the latent states in E using the well-trained encoder f_enc
6:  Sample a training batch B from D_acc
7:  Initialize the fine-tuning counter c ← 0
8:  while U is not exhausted do
9:     Train only the decoder f_dec on B, keeping the encoder f_enc fixed
10:    Use the adversarial attack algorithm to generate adversarial data points H* from the loss on B
11:    Find the adversarial samples X* by searching the nearest neighbors of H* in E and mapping them back through Φ
12:    Mix X* with random samples drawn from U at ratio α
13:    Craft the query set by top-k ranking over the information entropy of the mixed samples
14:    Query the oracle O for the labels of the query set
15:    Update the current labeled data set D_cur
16:    Update the accumulated labeled data set D_acc ← D_acc ∪ D_cur
17:    Sample the next training batch B from D_cur and D_acc at ratio β
18:    c ← c + 1
19:    if c mod t = 0 then
20:       Fine-tune M with D_acc for s steps
21:       Update the mapper Φ with the fine-tuned encoder
22:    end if
23:  end while
Algorithm 1 Active Sentence Learning with Adversarial Uncertainty Sampling in Discrete Space

In this section, we introduce adversarial uncertainty sampling in discrete space (AUSDS) together with the AUSDS-assisted active sentence learning framework, since the two are strongly coupled. The learning framework consists of two blocks, a training block and a sampling block, which interact frequently at the batch level to perform real-time effective sampling (Fig. 2.a).

The framework starts from a training batch: the training block encodes the training samples into latent states and generates adversarial data points based on the latent states and the gradients of the loss. Given the adversarial data points, the sampling block finds the adversarial samples by KNN search over the encoding space and generates the next training batch. The procedure is outlined in Algorithm 1; the notation is shown in Fig. 2 alongside the corresponding components. We split the framework into four stages: initialization, training, sampling, and fine-tuning.

3.1 Initialization

The initialization stage corresponds to lines 1–7 in Algorithm 1. As shown in Devlin et al. (2018), the two NLP tasks we consider, sequence classification and sequence labeling, can be solved in an encoder–decoder framework. We first load a pre-trained LM, implemented with BERT Devlin et al. (2018) or ELMo Peters et al. (2018), as our encoder f_enc. Then we build the decoder f_dec and define the latent states H and the encoding space E according to the downstream task. Note that the decoder differs between the two NLP tasks.

Since the sampling approach requires a basic model to provide a prediction of the decision boundary in E, we initialize the accumulated labeled data set D_acc with D_0 and train the basic model M on it. With the defined encoding space E and the well-trained encoder f_enc, we can then construct a bidirectional mapper Φ between the unlabeled sequences and the latent states, which lets us easily track the original textual input of any latent state. Finally, we initialize the training batch B and the fine-tuning counter c for the remaining stages.
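As an illustration, below is a minimal sketch of constructing the mapper Φ. The encoder wrapper, assumed here to map a list of raw sentences to a (batch, hidden) tensor of pooled representations, is an illustrative assumption, not the paper's exact interface:

import torch

@torch.no_grad()
def build_mapper(encoder, sentences, encode_batch=64):
    # Row i of `latents` is the latent state of sentences[i], so any latent
    # point found in the encoding space can be traced back to its text.
    chunks = []
    for i in range(0, len(sentences), encode_batch):
        chunks.append(encoder(sentences[i:i + encode_batch]))
    latents = torch.cat(chunks, dim=0)   # shape (N, hidden)
    return sentences, latents            # index i links text <-> latent state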

3.2 Training

The training stage corresponds to lines 9–10 in Algorithm 1. With the defined decoder and the prepared training batch B, we train the decoder parameters directly with a cross-entropy loss (Fig. 2.b). We fix the encoder here because otherwise we would need to update the mapper Φ whenever the encoder changes, which is costly. Φ is the bidirectional mapper between the unlabeled sequences and the latent states in E described in Algorithm 1; in other words, it is a memory buffer holding the bijection between the sequences and their corresponding latent states under a given encoder. Since the encoder has been pre-trained on a large corpus, fine-tuning it only infrequently does not noticeably hurt the performance of the model. We therefore fine-tune the encoder for s steps after every t steps, where s and t are two hyperparameters.
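A minimal sketch of this freeze/fine-tune pattern in PyTorch follows; the linear stand-ins for the encoder and decoder are assumptions for illustration:

import torch
import torch.nn as nn

encoder = nn.Linear(768, 768)   # stand-in for the pre-trained LM encoder
decoder = nn.Linear(768, 5)     # stand-in task decoder (e.g., 5-way classifier)

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

# Between fine-tuning phases: freeze the encoder and optimize only the
# decoder, so the latent states cached in the mapper remain valid.
set_trainable(encoder, False)
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)

# Every t sampling steps (Algorithm 1, lines 19-21): unfreeze the encoder,
# fine-tune both modules on the accumulated labeled set for s steps, then
# rebuild the mapper with the updated encoder.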

Then, we perform adversarial attacks on the current model using the gradients of the current batch B. The following adversarial attack approaches are considered (a sketch of the one-step variant follows this list):

  • Fast Gradient Value (FGV) Rozsa et al. (2016): a one-step gradient-based approach with high efficiency. The adversarial data points are generated by:

    $h^{*} = h + \epsilon \cdot \nabla_{h} J(h, y)$    (1)

    where $\epsilon$ is a hyperparameter and $J$ is the cross-entropy loss on B.

  • DeepFool Moosavi-Dezfooli et al. (2016): an iterative approach that finds the minimal perturbation sufficient to change the estimated label.

  • C&W Carlini and Wagner (2017): an optimization-based approach, with the optimization problem defined as:

    $\min_{\delta} \; d(h, h + \delta) + \lambda \cdot f(h + \delta)$    (2)

    where $f$ is a manually designed function satisfying $f(h + \delta) \le 0$ if and only if the label of $h + \delta$ is a specific target label, $\lambda$ balances the two terms, and $d$ is a distance measure such as the Minkowski distance.
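To make the one-step variant concrete, below is a minimal sketch of FGV applied in the encoding space (Eq. 1); decoder, latents, and labels are hypothetical stand-ins, and the default eps value is an assumption:

import torch
import torch.nn.functional as F

def fgv_points(decoder, latents, labels, eps=0.05):
    # One-step Fast Gradient Value attack in the encoding space (Eq. 1):
    # h* = h + eps * grad_h J(h, y), using the raw gradient value.
    h = latents.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(decoder(h), labels)   # cross-entropy loss J on the batch
    loss.backward()
    return (h + eps * h.grad).detach()           # adversarial data points h*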

3.3 Sampling

The sampling stage corresponds to lines 11–17 in Algorithm 1. In our sentence learning scenario, the adversarial data points may not map back to actual unlabeled samples, so we perform k-nearest neighbor (KNN) search Altman (1992) to find the unlabeled samples most similar to the generated data points.

We implement the KNN search with Faiss (https://github.com/facebookresearch/faiss) Johnson et al. (2017), an efficient similarity search library for GPUs. The computation cost of KNN search comes from two procedures: constructing the sample mapper Φ and searching for similar latent states. The mapper construction is performed infrequently, as described in Section 3.2, and the search itself is very efficient thanks to Faiss (over 100× faster than generating the adversarial data points). Thus AUSDS can be performed frequently at the batch level.
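A minimal usage sketch of Faiss for this search step is shown below; the random tensors stand in for the mapper's latent states and the attack's outputs, and the dimensions are assumptions:

import faiss
import numpy as np

hidden = 768
latents = np.random.rand(50000, hidden).astype("float32")   # stand-in for the mapper's latent states

index = faiss.IndexFlatL2(hidden)   # exact L2 search; Faiss also offers GPU indexes
index.add(latents)                  # register all unlabeled latent states

adv_points = np.random.rand(32, hidden).astype("float32")   # adversarial data points from the attack
_, neighbor_ids = index.search(adv_points, 1)               # nearest unlabeled latent state per point
# neighbor_ids[i][0] indexes back into the mapper, recovering the sentence text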

After acquiring adversarial samples via KNN search, we mix them with random samples drawn from U at ratio α, where α is a hyperparameter. The motivation for appending random samples is to balance exploration and exploitation, which alleviates the sampling bias problem Huang et al. (2010).

Then we perform top-k ranking over the information entropy of the mixed samples. Since the number of mixed samples is comparable to the batch size, the computation cost is acceptable. The selected samples are then labeled by the oracle O and added to both the current labeled data set D_cur and the accumulated labeled data set D_acc.

Finally, we sample a training batch from D_cur and D_acc at ratio β, where β is a hyperparameter. The training samples in D_cur are all close to the current decision boundary, which can induce sampling bias Huang et al. (2010); we therefore introduce β to balance exploration and exploitation. Sampling bias is discussed in detail in Sec. 4.2.2. A sketch of the query selection step follows.
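Below is a minimal sketch of this query selection step (Algorithm 1, lines 12–14); alpha, k, and the tensor interface are illustrative assumptions:

import torch

def select_queries(decoder, latents, adv_ids, pool_size, k, alpha=0.5):
    # Mix adversarial candidates with random pool indices at ratio alpha,
    # rank the mixture by predictive entropy, and keep the top-k to label.
    n_rand = max(1, int(alpha * len(adv_ids)))
    rand_ids = torch.randint(pool_size, (n_rand,))
    cand_ids = torch.cat([adv_ids, rand_ids]).unique()
    probs = torch.softmax(decoder(latents[cand_ids]), dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return cand_ids[entropy.topk(min(k, len(cand_ids))).indices]

# The next training batch is then drawn from the newly labeled set and the
# accumulated labeled set at ratio beta (Algorithm 1, line 17).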

Dataset Task Sample Size
SST-2 Socher et al. (2013) sequence classification 11.8k sentences, 215k phrases
SST-5 Socher et al. (2013) sequence classification 11.8k sentences, 215k phrases
MRPC Dolan et al. (2004) sequence classification 5,801 sentence pairs
AG News Zhang et al. (2015) sequence classification 12k sentences
CoNLL’03 Sang and De Meulder (2003) sequence labeling 22k sentences, 300k tokens
Table 1: 5 datasets we used for sentence learning experiments, across sequence classification and sequence labeling tasks.

3.4 Fine-tuning

The fine-tuning stage corresponds to lines 18–22 in Algorithm 1. We fine-tune the encoder for s steps after every t steps, as described in Section 3.2. During fine-tuning, both the encoder and the decoder are trained on the accumulated labeled data set D_acc. After fine-tuning, we update the mapper Φ for the subsequent KNN searches. The algorithm terminates when the unlabeled text corpus U is used up.

4 Experiments

We evaluate the AUSDS-assisted active sentence learning framework on sequence classification and sequence labeling tasks. For the oracle labeler, we directly use the labels provided by the datasets. In all experiments, we average the results of 5 runs with different random seeds to reduce the influence of randomness.

Dataset RM US AUSDS(FGV) AUSDS(DeepFool) AUSDS(C&W)
SST-2 0.39 413.97 10.84 10.87 14.68
SST-5 0.47 911.16 17.55 17.57 24.15
MRPC 0.29 28.06 1.95 1.98 2.63
AG News 0.43 616.58 12.19 13.17 16.44
CoNLL’03 0.32 14.31 1.41 – –
Table 2: The average sampling cost (in seconds) per sampling step on the 5 datasets, with BERT as the encoder. Statistics were collected on a Tesla V100 GPU. AUSDS with DeepFool and C&W is omitted on CoNLL’03 because these adversarial attack methods are not suitable for sequence labeling.

4.1 Set-up

Dataset.

We use five datasets for our experiments: the Stanford Sentiment Treebank (SST-2 / SST-5) Socher et al. (2013), the Microsoft Research Paraphrase Corpus (MRPC) Dolan et al. (2004), the AG's News Corpus (AG News) Zhang et al. (2015), and the CoNLL 2003 Named Entity Recognition dataset (CoNLL'03) Sang and De Meulder (2003). Statistics are given in Table 1. The train/development/test split ratios follow the original settings in those papers. We use accuracy as the metric for sequence classification and F1 score for sequence labeling.

Baseline Approaches.

Our aim here is to show that AUSDS achieves better sampling effectiveness within acceptable time. We compare our framework with two common baseline approaches in NLP active learning: random sampling (RM) and entropy-based uncertainty sampling (US). For sequence classification tasks, we use the widely used maximum entropy (ME) Berger et al. (1996) as the uncertainty measure:

$ME(x) = -\sum_{i=1}^{C} P(y_i \mid x) \log P(y_i \mid x)$    (3)

where $C$ is the number of classes. For sequence labeling tasks, we use the total token entropy (TTE) Settles and Craven (2008) as the uncertainty measure:

$TTE(x) = -\sum_{t=1}^{T} \sum_{j=1}^{L} P(y_t = l_j \mid x) \log P(y_t = l_j \mid x)$    (4)

where $T$ is the sequence length and $L$ is the number of labels.
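Both measures reduce to simple tensor operations; a minimal sketch, assuming raw logits as input, is given below:

import torch

def max_entropy(logits):
    # Eq. 3: predictive entropy for sequence classification; logits: (batch, C).
    probs = torch.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

def total_token_entropy(logits):
    # Eq. 4: total token entropy for sequence labeling, summing per-token
    # entropies over the sequence; logits: (batch, T, L).
    probs = torch.softmax(logits, dim=-1)
    token_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)   # (batch, T)
    return token_entropy.sum(dim=-1)                                      # (batch,)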

Implementation Details.

We implement the BERT model based on https://github.com/huggingface/pytorch-pretrained-BERT and the ELMo model based on https://github.com/allenai/allennlp. The model configurations are the same as reported in Devlin et al. (2018); Peters et al. (2018). The implementation of the KNN search is described in Section 3.3. The accumulated labeled data set D_acc is initialized identically for all approaches, with 0.1% of the whole unlabeled data (0.5% for MRPC, since the dataset is relatively small). We will release our code with full configurations for reproducibility after acceptance.

4.2 Main Results

4.2.1 Computational Efficiency

Label Size 2% 4% 6% 8% 10%
SST-2 RM 87.78(.003) 89.85(.004) 89.85(.010) 89.69(.004) 90.26(.008)
US 87.74(.004) 90.25(.006) 90.38(.008) 90.25(.006) 91.27(.007)
AUSDS (FGV) 89.18(.002) 89.88(.008) 89.16(.014) 91.07(.005) 89.95(.003)
AUSDS (DeepFool) 88.74(.004) 90.06(.003) 89.84(.007) 90.74(.006) 91.58(.002)
AUSDS (C&W) 87.97(.003) 89.95(.005) 90.83(.007) 90.12(.003) 91.13(.001)
SST-5 RM 47.57(.007) 50.22(.014) 49.97(.005) 50.32(.011) 51.16(.012)
US 48.16(.004) 50.00(.010) 50.57(.018) 52.02(.004) 49.41(.014)
AUSDS (FGV) 48.19(.003) 50.19(.011) 50.90(.012) 52.13(.004) 49.64(.010)
AUSDS (DeepFool) 47.71(.016) 50.08(.013) 50.22(.007) 50.35(.016) 51.73(.006)
AUSDS (C&W) 47.65(.013) 49.95(.008) 50.27(.006) 49.05(.016) 51.24(.018)
MRPC RM 67.33(.008) 68.31(.006) 68.56(.018) 70.06(.021) 71.15(.020)
US 62.14(.090) 69.34(.005) 69.11(.010) 70.53(.017) 71.49(.016)
AUSDS (FGV) 68.89(.014) 69.30(.023) 70.28(.015) 70.06(.012) 69.30(.019)
AUSDS (DeepFool) 67.92(.009) 68.88(.017) 69.68(.017) 71.69(.014) 71.55(.012)
AUSDS (C&W) 67.91(.014) 68.53(.017) 70.46(.012) 70.49(.012) 68.89(.016)
AG News RM 89.89(.003) 90.89(.002) 91.37(.002) 91.79(.002) 92.21(.002)
US 90.29(.006) 91.59(.007) 92.34(.003) 92.71(.001) 93.01(.001)
AUSDS (FGV) 90.75(.002) 91.55(.002) 92.26(.003) 92.62(.001) 93.16(.001)
AUSDS (DeepFool) 90.67(.004) 91.65(.004) 92.43(.004) 92.66(.004) 93.12(.002)
AUSDS (C&W) 90.24(.002) 91.29(.002) 92.30(.004) 92.90(.002) 93.10(.003)
CoNLL’03 RM 80.42(.002) 83.38(.002) 85.39(.005) 86.78(.005) 87.42(.003)
US 78.12(.002) 81.49(.019) 84.45(.004) 86.73(.008) 87.79(.004)
AUSDS (FGV) 80.65(.006) 83.60(.003) 85.98(.010) 87.10(.004) 87.83(.003)
AUSDS (DeepFool) – – – – –
AUSDS (C&W) – – – – –
Table 3: Convergence results with respect to label size in the training-from-scratch setting, with BERT as the encoder. Label size denotes the ratio of labeled data. Numbers are averages of 5 runs on the test set; the best result for each label size is marked in bold. Sequence classification and sequence labeling tasks are evaluated with accuracy and F1 score, respectively. AUSDS with DeepFool and C&W is omitted on CoNLL’03 because these adversarial attack methods are not suitable for sequence labeling.
Label Size 2% 4% 6% 8% 10%
RM 81.58(.004) 82.90(.006) 83.53(.008) 82.15(.016) 84.40(.006)
US 78.23(.007) 80.34(.003) 81.99(.006) 82.34(.008) 82.21(.004)
AUSDS (FGV) 81.22(.004) 83.25(.001) 84.18(.005) 84.49(.004) 84.62(.009)
AUSDS (DeepFool) 82.37(.003) 83.31(.004) 83.77(.002) 84.68(.001) 84.73(.005)
AUSDS (C&W) 81.27(.006) 84.02(.007) 82.76(.002) 84.40(.002) 83.58(.012)
Table 4: Convergence results with respect to label size in the training-from-scratch setting, with ELMo as the encoder, on SST-2. Label size denotes the ratio of labeled data. The best results for each label size are marked in bold.

AUSDS is computationally more efficient than uncertainty sampling. As described in Section 3, the training block and the sampling block interact frequently at the batch level, so AUSDS can achieve real-time effective sampling. We conduct experiments in the real-time sampling setting, in which the sampling process is performed at the batch level.

Table 2 shows the average sampling cost per sampling step for each approach. Uncertainty sampling can hardly work in the real-time setting because of its costly sampling process; our AUSDS variants are more than 10× faster than common uncertainty sampling, and the larger the unlabeled pool, the more significant the acceleration. Compared with the random sampling baseline, our framework spends slightly more computation time on generating adversarial examples, but it remains fast enough for real-time batch-level sampling. Moreover, the sampling effectiveness results below show that this extra computation pays off, with clear performance gains for the same amount of labeled data.

Figure 3: The margin of outputs on samples selected by different sampling strategies on SST-5, shown as (a) margin during training and (b) margin distribution. The margin denotes the difference between the largest and second-largest output probabilities over classes; the lower the margin, the closer the sample lies to the decision boundary. Fig. (a) shows the average margin at each sampling step during training, with the margins of samples selected by RM and US over the whole unlabeled data plotted as references. Fig. (b) shows the margin distribution of samples selected from sampling step 800 to 1000, where the average uncertainty becomes steady; US is omitted from Fig. (b) for better visualization.

4.2.2 Sampling Effectiveness

AUSDS achieves higher sampling effectiveness than uncertainty sampling, which suffers from the sampling bias problem. Simply training the model to convergence after each sampling step, which we call the continuous training setting, easily induces sampling bias Huang et al. (2010) and does not reflect the informativeness of the selected samples. Sampling bias here denotes the bias of uncertainty-based methods when selecting informative unlabeled examples: in the early phase, the decision boundary is determined by only a small number of labeled examples, and this biased boundary can lead to ineffective selection, i.e., the selected examples may have high uncertainty but not be representative of the whole unlabeled data. The error accumulates and results in poorer final performance. Delayed uncertainty sampling also suffers from this problem because the decision boundary oscillates frequently in the early phase of training.

We therefore propose another training setting, named training from scratch, for the convergence results: we train models from scratch using the labeled data sampled by the different approaches at various label sizes. We argue that this setting is better suited to measuring sampling effectiveness. The results are shown in Table 3; the results on SST-2 with ELMo as the encoder (Table 4) demonstrate the generalization of AUSDS to other pre-trained LM encoding spaces.

Active learning focuses on training with a limited amount of labeled data by selecting the most valuable examples to label; with enough labeled data available, it makes little difference whether active learning is used. We therefore allow at most 10% of the whole training data to be labeled for each sampling approach. With less labeled data, the gap in sampling effectiveness between approaches becomes more apparent.

Our framework consistently outperforms the random sampling baseline because it selects samples that are more informative for identifying the shape of the decision boundary. It also outperforms common uncertainty sampling in most cases at the same label size, because the frequent sampling in our approach alleviates the sampling bias issue. The results on the five standard benchmarks across two NLP tasks show that AUSDS achieves better sampling effectiveness.

To show that AUSDS does not depend heavily on BERT, we conduct experiments on SST-2 with ELMo as the encoder, which has a completely different network structure. The results in Table 4 show that AUSDS still achieves higher sampling effectiveness in this setting, while the original uncertainty sampling suffers from a more severe sampling bias problem. This experiment provides further evidence that our framework generalizes to other pre-trained LM encoding spaces.

4.2.3 Samples Uncertainty

AUSDS indeed selects examples with higher uncertainty. We plot the margins of outputs on samples selected by different sampling strategies on SST-5 in Fig. 3, using the margin as a measure of distance to the decision boundary: a lower margin indicates a position closer to the boundary. As shown in Fig. 3(a), the samples selected by our AUSDS strategies with different attack approaches achieve lower average margins throughout the sampling process. We aggregate the samples selected from step 800 to 1000 to estimate the margin distribution, shown in Fig. 3(b): the margin distributions of our AUSDS strategies lie further to the left, indicating a better ability to capture samples with higher uncertainty. Uncertainty sampling over the whole unlabeled data obtains the most uncertain samples, but it is very time-consuming and is outperformed by AUSDS in the experiments above.
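For reference, the margin statistic plotted in Fig. 3 can be computed as in the following sketch, assuming raw classifier logits as input:

import torch

def margin(logits):
    # Difference between the largest and second-largest class probabilities;
    # a smaller margin means the sample lies closer to the decision boundary.
    probs = torch.softmax(logits, dim=-1)
    top2 = probs.topk(2, dim=-1).values
    return top2[..., 0] - top2[..., 1]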

In short, the sampling speed comparison shows that AUSDS has time efficiency comparable to the random baseline, while uncertainty sampling suffers from expensive computation. The training-from-scratch experiments reveal the existence of sampling bias and demonstrate the ability of AUSDS to alleviate it. The larger performance gains at low label sizes further support our hypothesis, since the sampling bias problem is more severe in the early phase.

5 Conclusion

Uncertainty sampling can be an effective way to reduce the labeled data size for sentence learning, but uncertainty sampling with latency may lead to the ineffective sampling problem. To address this problem, we propose adversarial uncertainty sampling in discrete space for active sentence learning. By introducing adversarial attacks into uncertainty sampling and mapping discrete sentences into a pre-trained LM space, the proposed AUSDS is more efficient than traditional uncertainty sampling. Experimental results on five datasets show that our approach outperforms strong baselines in most cases and achieves better sampling effectiveness.

References

  • N. S. Altman (1992) An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician 46 (3), pp. 175–185. Cited by: §1, §3.3.
  • A. L. Berger, V. J. D. Pietra, and S. A. D. Pietra (1996) A maximum entropy approach to natural language processing. Computational linguistics 22 (1), pp. 39–71. Cited by: §4.1.
  • B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, and F. Roli (2013) Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pp. 387–402. Cited by: §2.2.
  • N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. Cited by: 3rd item.
  • Y. Deng, K. Chen, Y. Shen, and H. Jin (2018) Adversarial active learning for sequences labeling and generation.. In IJCAI, pp. 4012–4018. Cited by: §1, §2.1.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1, §3.1, §4.1.
  • B. Dolan, C. Quirk, and C. Brockett (2004) Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources. In Proceedings of the 20th international conference on Computational Linguistics, pp. 350. Cited by: Table 1, §4.1.
  • M. Ducoffe and F. Precioso (2018) Adversarial active learning for deep networks: a margin based approach. arXiv preprint arXiv:1802.09841. Cited by: §2.2.
  • I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §1, §2.2.
  • S. C. Hoi, R. Jin, and M. R. Lyu (2006) Large-scale text categorization by batch mode active learning. In Proceedings of the 15th international conference on World Wide Web, pp. 633–642. Cited by: §2.1.
  • S. Huang, R. Jin, and Z. Zhou (2010) Active learning by querying informative and representative examples. In Advances in neural information processing systems, pp. 892–900. Cited by: §1, §3.3, §3.3, §4.2.2.
  • P. Jain, S. Vijayanarasimhan, and K. Grauman (2010) Hashing hyperplane queries to near points with applications to large-scale active learning. In Advances in Neural Information Processing Systems, pp. 928–936. Cited by: §2.1.
  • J. Johnson, M. Douze, and H. Jégou (2017) Billion-scale similarity search with gpus. arXiv preprint arXiv:1702.08734. Cited by: §3.3.
  • A. J. Joshi, F. Porikli, and N. Papanikolopoulos (2009) Multi-class active learning for image classification. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2372–2379. Cited by: §2.1.
  • A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533. Cited by: §1, §2.2.
  • D. D. Lewis and W. A. Gale (1994) A sequential algorithm for training text classifiers. In SIGIR’94, pp. 3–12. Cited by: §1, §2.1, §2.1.
  • P. Li, J. Yi, and L. Zhang (2018) Query-efficient black-box attack by active learning. arXiv preprint arXiv:1809.04913. Cited by: §2.2.
  • S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard (2016) DeepFool: a simple and accurate method to fool deep neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: 2nd item.
  • M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer (2018) Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Cited by: §1, §3.1, §4.1.
  • A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever (2018) Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf. Cited by: §1.
  • A. Rozsa, E. M. Rudd, and T. E. Boult (2016) Adversarial diversity and hard positive generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 25–32. Cited by: §2.2, 1st item.
  • E. F. T. K. Sang and F. De Meulder (2003) Introduction to the conll-2003 shared task: language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, Cited by: Table 1, §4.1.
  • B. Settles and M. Craven (2008) An analysis of active learning strategies for sequence labeling tasks. In Proceedings of the conference on empirical methods in natural language processing, pp. 1070–1079. Cited by: §1, §2.1, §4.1.
  • B. Settles (2009) Active learning literature survey. Technical report University of Wisconsin-Madison Department of Computer Sciences. Cited by: §2.1.
  • R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng, and C. Potts (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pp. 1631–1642. Cited by: Table 1, §4.1.
  • C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §2.2.
  • X. Zhang, J. Zhao, and Y. LeCun (2015) Character-level convolutional networks for text classification. In Advances in neural information processing systems, pp. 649–657. Cited by: Table 1, §4.1.
  • J. Zhu and J. Bento (2017) Generative adversarial active learning. arXiv preprint arXiv:1702.07956. Cited by: §2.2.