Towards Zero-Label Language Learning

09/19/2021 · by Zirui Wang, et al. · Google

This paper explores zero-label learning in Natural Language Processing (NLP), whereby no human-annotated data is used anywhere during training and models are trained purely on synthetic data. At the core of our framework is a novel approach for better leveraging the powerful pretrained language models. Specifically, inspired by the recent success of few-shot inference on GPT-3, we present a training data creation procedure named Unsupervised Data Generation (UDG), which leverages few-shot prompts to synthesize high-quality training data without real human annotations. Our method enables zero-label learning as we train task-specific models solely on the synthetic data, yet we achieve better or comparable results from strong baseline models trained on human-labeled data. Furthermore, when mixed with labeled data, our approach serves as a highly effective data augmentation procedure, achieving new state-of-the-art results on the SuperGLUE benchmark.


1 Introduction

It is well known that deep learning models are data-hungry. In natural language processing, language model pre-training has become a successful transfer learning approach that effectively reduces the requirement for task-specific labeled data Devlin et al. (2018); Liu et al. (2019); Yang et al. (2019); Radford et al. (2019); Raffel et al. (2019); Brown et al. (2020). By training on large-scale unsupervised text corpora, bi-directional language models such as BERT and XLNet learn contextualized text representations that can then be fine-tuned on downstream tasks with small amounts of training data, which has pushed the state of the art on a variety of natural language understanding benchmarks.

Model           Setting        SuperGLUE Avg.
Human           -              89.8
Previous SOTA   Supervised     89.3
T5 + UDG        Supervised     90.4
GPT3            Few-Shot       71.8
UDG             Unsupervised   78.1
Table 1: SuperGLUE summary.
Figure 1: Illustration of the UDG framework.

More recently, gigantic language models (GLMs) such as GPT3 Brown et al. (2020) have been shown to be effective few-shot learners. As the unsupervised training corpus and the model size scale up, the model is able to generate answers for an unseen NLP task via few-shot inference, based on a manually crafted input prompt consisting of a task description and a few examples. Although no fine-tuning is involved, the language model performs competitively against fine-tuned baselines on a wide range of tasks; this success suggests a new paradigm of transfer learning in NLP. Yet the gaps between few-shot inference and state-of-the-art fine-tuned methods remain large on many tasks (for example, 17.5 points below the prior state of the art on SuperGLUE, as shown in Table 1), calling for exploration of applications of giant language models beyond few-shot inference.

Inspired by the few-shot capability of GPT3, we shift our focus towards utilizing GLMs for example creation instead of direct inference, and find that language models are also excellent few-shot generators. Similar to the few-shot inference paradigm, we query the model with a prompt containing a few examples and a description of the desired label, and the model generates examples aligned with the label while resembling the given samples. Interestingly, we find that no supervision is required for high-quality data creation, and thus we only need unlabeled examples in our prompts. The dataset created by the model can then be used to fine-tune any off-the-shelf model. This approach can therefore be treated as a zero-label learning procedure, in which no human label is required throughout the whole process. It differs from the unsupervised learning procedure in that the downstream models still need to be trained with synthetic data; however, creating the training examples requires no human labor.

Following this procedure, we are able to establish a system trained using unlabeled training data only, and thus we refer to it as Unsupervised Data Generation (UDG). Experiments show that our unsupervised system performs competitively with strong supervised baselines and achieves new state-of-the-art few-shot learning results on text classification and the SuperGLUE language understanding benchmarks. The synthesized data can further be used for data augmentation purposes. When combined with existing labeled data, we are able to achieve the first super-human SuperGLUE scores. These results suggest that few-shot training data creation is a promising alternative to few-shot inference with powerful language models.

2 Related Work

Data augmentation has traditionally been a popular technique for improving NLP model quality, especially in low-resource regimes Yu et al. (2018); Wei and Zou (2019). While simple heuristics such as token-level modifications have traditionally been applied to diversify training samples, generative data augmentation has more recently gained popularity due to progress in language modeling Anaby-Tavor et al. (2019); Papanikolaou and Pierleoni (2020); Juuti et al. (2020); Lee et al. (2021); Kumar et al. (2021). However, these methods often require labeled examples to fine-tune generative models and heavy post-processing for data cleaning. In contrast, our method generates data in a fully unsupervised manner without fine-tuning the language model, showcasing a new zero-label learning paradigm.

Our approach is also closely related to knowledge retrieval from large language models. These models are known to be good at memorizing facts from training data and capable of performing as open knowledge bases Petroni et al. (2019); Wang et al. (2020); Roberts et al. (2020); Carlini et al. (2021). The high quality of training examples created by our approach is to a large part guaranteed by the model’s strong knowledge retrieval ability, which reduces the chance of erratic hallucinations irrelevant to the provided labels.

Model          Setting        IMDb   Yelp-2  Yelp-5  Amazon-2  Amazon-5  DBpedia  Avg.
XLNet          Supervised     96.80  98.63   72.95   97.89     68.33     99.40    89.00
               Supervised     95.49  98.11   70.68   97.37     65.83     99.36    87.81
UDA            Few-Shot       95.80  97.95   67.92   96.50     62.88     98.91    86.66
Few-shot Inf.  Few-Shot       90.38  88.79   48.75   92.63     44.21     82.46    74.54
UDG            Unsupervised   95.95  98.22   69.05   97.02     64.54     96.47    86.88
 + NLA         Unsupervised   96.29  98.38   69.31   97.24     64.88     99.21    87.55
Table 2: Comparison of methods on text classification datasets (accuracy). Results for XLNet are obtained from Yang et al. (2019), while the remaining baseline and UDA results are from Xie et al. (2019). The best result in the semi-supervised/few-shot setting is bolded, while an underline signifies the overall best.

3 Method

3.1 Background: Few-shot Inference

Given a set of labeled data for a specific downstream task, the most common approach in recent years has been fine-tuning, which updates the weights of a pre-trained model Devlin et al. (2018); Yang et al. (2019); Raffel et al. (2019). While obtaining state-of-the-art performance on a wide range of tasks, fine-tuning requires extra update steps and non-trivial amounts of labeled data for the target task. On the other hand, few-shot inference is a more resource-efficient paradigm exhibited by the latest gigantic language models such as GPT3 Radford et al. (2019); Brown et al. (2020). The idea is to utilize the language model to infer the correct label based on the task description and a few sample input-label pairs. In particular, the input to the model is a handcrafted ordered prompt consisting of a task description T, a small set of K examples {(x_i, y_i)}_{i=1..K}, and the query example x_q, and the model is expected to infer the correct label ŷ as the most probable next text sequence given the input prompt:

  ŷ = argmax_y P_LM(y | T, x_1, y_1, ..., x_K, y_K, x_q)    (1)

Since taking the argmax over all text sequences is intractable, ŷ is usually obtained through greedy decoding or beam search. Using much less task-specific data and no gradient updates, few-shot inference can obtain performance comparable to fine-tuning methods (e.g., GPT3 performs similarly to fine-tuned BERT on SuperGLUE in Table 4). In the extreme, giant language models can also perform one-shot (K=1) or even zero-shot (K=0) inference.
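To make the inference paradigm above concrete, here is a minimal sketch of how such a few-shot prompt is typically assembled before being fed to the language model. The template (the "Input:"/"Label:" markers and the function name) is our own illustrative choice, not the paper's exact format; the paper's actual templates appear in its Appendix C.

```python
def build_inference_prompt(task_desc, examples, query):
    """Assemble a GPT-3-style few-shot inference prompt: a task description T,
    K input-label demonstrations (x_i, y_i), and the query input x_q.
    The model is expected to continue the text with the label y-hat."""
    lines = [task_desc]
    for x, y in examples:
        lines.append(f"Input: {x}\nLabel: {y}")
    # The query is appended with an empty label slot for the model to fill.
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

prompt = build_inference_prompt(
    "Classify the sentiment of each movie review.",
    [("A wonderful, heartfelt film.", "positive"),
     ("Two hours I will never get back.", "negative")],
    "The plot was a mess from start to finish.",
)
```

The returned string ends with an open "Label:" slot, so the model's most probable continuation plays the role of ŷ in Equation (1).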

3.2 Unsupervised Data Generation

Despite these interesting findings, few-shot inference using giant language models still underperforms state-of-the-art fine-tuned models on many tasks. In Table 4, for instance, T5 largely outperforms GPT3 (89.3 vs 71.8) despite being much smaller in model size (11B vs 175B). One potential limitation is that a language model is never explicitly trained to conduct inference directly. Instead, it is trained as a text generator on an unsupervised web corpus where inputs (X) and labels (Y) happen to coexist. Consequently, the few-shot inference method must find a prompt that 'forces' the model to generate, as the next text sequence, something that happens to be the label Y. However, this can be suboptimal, since labels often appear before the inputs in real-world web documents. For example, in sentiment classification of IMDb movie reviews Maas et al. (2011), the actual review text appears after its corresponding rating score. Few-shot inference can therefore force the language model to generate from text distributions that are inconsistent with its training data.

To this end, we propose to utilize language models to perform few-shot generation. Instead of generating and predicting the label Y, we let the model generate the input X instead, decoupling generation from prediction. We aim to formulate input prompts that are more likely to occur naturally in the training corpus. Specifically, the model is queried to generate an input X corresponding to a pseudo label ŷ, given a prompt consisting of a small set of K unlabeled examples and a description of the desired label:

  X ~ P_LM(X | x_1, ..., x_K, Des(ŷ))    (2)

where Des(·) is a task-specific transformation function that maps a label class to a natural language description, as illustrated in Figure 1. Different from few-shot inference, our method only requires unsupervised few-shot examples, a zero-label learning setting. In addition, we use top-k sampling instead of search-based decoding to sample text from the language model. This allows us to generate a synthetic labeled dataset of controllable size. We then train task-specific models utilizing this synthetic dataset, either as standalone training data or as additional auxiliary data. Unlike existing synthetic data generation systems, our method requires no fine-tuning step for the generative model and uses unsupervised data only, and therefore we refer to it as Unsupervised Data Generation to emphasize its resource efficiency. We also wish to emphasize that our intention is not to leverage the language model to perform generative tasks, but simply to take advantage of it to synthesize "labeled" examples for downstream model training.
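The direction-flip relative to few-shot inference can be seen by sketching the UDG prompt builder: the prompt now ends with the label description Des(ŷ), and the model's continuation becomes the generated input X. The template below mimics the IMDb example shown in Figure 1 ("Sample Movie Review: ... / Negative Movie Review:"); the function name and exact wording are our own illustration.

```python
def build_generation_prompt(unlabeled_examples, label_description):
    """Assemble a UDG-style few-shot generation prompt: K unlabeled
    example inputs followed by a natural-language description Des(y-hat)
    of the desired pseudo label. The model's continuation is then treated
    as a new input X carrying that label."""
    lines = [f"Sample Movie Review: {x}" for x in unlabeled_examples]
    # Ending with the label description invites the model to write an
    # input matching that label, rather than to predict a label.
    lines.append(f"{label_description} Movie Review:")
    return "\n\n".join(lines)

prompt = build_generation_prompt(
    ["A wonderful, heartfelt film that I would watch again.",
     "Two hours I will never get back."],
    "Negative",
)
```

Note that no labels for the in-context examples are needed anywhere in the prompt, which is what makes the procedure zero-label.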

4 Experiments

             K=0    K=1    K=4    K=32
IMDb Acc.    64.21  91.34  95.86  96.29
Yelp-2 Acc.  67.34  90.27  98.22  98.38
Amz-5 Acc.   47.35  58.79  62.14  64.88
Table 3: Ablation of the number of examples in each prompt.

4.1 Unsupervised Text Classification

We first apply the proposed UDG method on standard text classification tasks.

Experimental Setups. We use six popular text classification benchmark datasets Maas et al. (2011); Zhang et al. (2015), covering IMDb, Yelp-2, Yelp-5, Amazon-2 and Amazon-5 sentiment classification and DBpedia topic classification. We mainly follow the experimental settings in Xie et al. (2019) and use the corresponding unlabeled data for each task. We apply similar preprocessing steps to clean noisy web texts and truncate the input to 512 subword tokens. For each prompt, we sample unlabeled examples from the unlabeled data, fitting as many examples as allowed by the length of the language model's context window (detailed templates are shown in Figure 1 and Appendix C). This process is repeated a fixed number of times for each label class (1000 times for topic classification). We then utilize the language model to generate one example per prompt, resulting in a synthetic labeled dataset whose size equals the number of prompts. We use an in-house language model, which is a variant of the one in Adiwardana et al. (2020) but trained on larger data. We use top-k sampling with K=40 and temperature=1.0, and only apply basic post-processing to filter out generated examples that are too short or too long.
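The decoding step above (top-k sampling with K=40, temperature 1.0) can be sketched over a single next-token distribution as follows. This is a generic NumPy illustration of top-k sampling, not the in-house model's actual decoder.

```python
import numpy as np

def sample_top_k(logits, k=40, temperature=1.0, rng=None):
    """Top-k sampling: keep only the k highest-scoring tokens,
    renormalize with a softmax, and draw one token index."""
    if rng is None:
        rng = np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature
    top = np.argpartition(logits, -k)[-k:]           # indices of the k largest logits
    probs = np.exp(logits[top] - logits[top].max())  # stable softmax over top-k only
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))
```

Unlike greedy decoding or beam search, repeated calls yield diverse continuations, which is what makes it possible to generate many distinct synthetic examples per label class.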

Once we obtain the generated synthetic dataset, it can be utilized as labeled training data in any task-specific training framework. Here, we choose the state-of-the-art semi-supervised learning framework Unsupervised Data Augmentation (UDA) Xie et al. (2019) as the backbone. We use BERT as our base model (see Appendix B) and follow the training protocol described in the UDA paper to tune our hyper-parameters. In our experiments, we find that some generated examples are noisy, and thus we additionally implement a Noisy Label Annealing (NLA) technique to filter these examples during the training process (see Appendix A for details).

Figure 2: Ablation of number of examples generated per label class.

Results. We compare models trained in fully supervised, semi-supervised/few-shot, and unsupervised settings in Table 2. We first compare few-shot inference using our giant language model with fine-tuned methods. Despite requiring no additional training cost, the few-shot inference paradigm performs significantly worse than supervised or even semi-supervised UDA, which utilizes similar amounts of labeled data. The gap is more evident on multi-way classification tasks such as Yelp-5 or DBpedia, where the model is required to predict complex labels beyond simple answers such as 'True/False'. In contrast, the proposed few-shot generation paradigm obtains strong performance while using less supervision. When combined with NLA, our UDG framework consistently outperforms UDA and few-shot inference on all six tasks, achieving new state-of-the-art few-shot learning results. Moreover, without using any labels, our method outperforms the fully supervised baseline on IMDb and Yelp-2 and is also competitive on the other tasks. Since both UDA and our method rely on the same base model, we expect that using XLNet may further boost our unsupervised performance, which we leave for future work.

               BoolQ  CB         COPA   MultiRC    ReCoRD     RTE   WiC   WSC    Avg.
Human          89.0   95.8/98.9  100.0  81.8/51.9  91.7/91.3  93.6  80.0  100.0  89.8
Sup.           79.0   84.8/90.4  73.8   70.0/24.1  72.0/71.3  71.7  69.6  64.4   71.5
               87.1   90.5/95.2  90.6   84.4/52.5  90.6/90.0  88.2  69.9  89.0   84.6
               91.2   93.9/96.8  94.8   88.1/63.3  94.1/93.4  92.5  76.9  93.8   89.3
               90.4   94.9/97.2  96.8   88.2/63.7  94.5/94.1  93.2  76.4  95.9   89.9
T5 + UDG       91.4   95.8/97.6  98.0   88.3/63.0  94.2/93.5  93.0  77.9  96.6   90.4
Few-Shot       76.4   52.0/75.6  92.0   75.4/30.5  91.1/90.2  69.0  49.4  80.1   71.8
               81.2   79.9/88.8  90.8   74.1/31.7  85.9/85.4  70.8  49.3  88.4   75.4
               80.0   82.3/92.0  85.4   76.2/35.7  86.1/85.5  75.0  53.5  85.6   76.0
UDG Unsup.     81.0   86.2/92.4  80.4   81.1/47.1  82.8/81.8  80.7  67.5  79.5   78.1
Table 4: Comparison of single-model methods on SuperGLUE test scores. Results obtained from the official SuperGLUE leaderboard (https://super.gluebenchmark.com/leaderboard). The best result in the semi-supervised/few-shot setting is underlined, while bold signifies the overall best. Model references: Devlin et al. (2018); Liu et al. (2019); Raffel et al. (2019); Devlin et al. (2018); Brown et al. (2020); Schick and Schütze (2020); Tam et al. (2021).

Analysis. We first examine the effect of data noisiness on model performance. As with other data augmentation methods, few-shot generation using giant language models can produce examples that are misaligned with the desired labels. To reduce the negative impact of these noisy labels, we utilize a simple NLA technique to filter out examples when the task-specific model disagrees with the synthetic label at a high confidence level. As shown in Table 2, NLA robustly improves UDG performance on all tasks, especially those that are sensitive to noise, such as DBpedia.

A crucial difference distinguishing our work from existing data generation methods is that we directly query the pretrained language model without any fine-tuning or supervision. To achieve this, the model needs not only to infer correct knowledge corresponding to the input pseudo label but also to generate text in a style similar to the sampled unsupervised examples. We therefore compare results when the language model uses different numbers of in-context examples in Table 3. The model fails to generate high-quality data when no sample is given, indicating the importance of few-shot generation. On the other hand, including more unsupervised examples does improve the quality of the synthetic dataset, which leads to better performance.

Finally, we evaluate the impact of synthetic data size in Figure 2. Despite a trend of diminishing returns, we find that final performance continues to improve with more generated data, showing that the language model can generate diverse examples. In addition, one key benefit of our method is that we can sample as much data as needed at no additional cost and with no supervision. This is particularly useful for tasks in low-resource domains with limited unsupervised data available.

4.2 Unsupervised Language Understanding

To evaluate the proposed framework in a more challenging and comprehensive setting, we extend it to complex language understanding tasks.

Experimental Setups. We use the SuperGLUE benchmark Wang et al. (2019) for general-purpose language understanding in English, which consists of 8 natural language understanding tasks. Tasks cover textual entailment (CB and RTE), question answering (BoolQ, MultiRC and ReCoRD), common sense reasoning (COPA), word sense disambiguation (WiC), and coreference resolution (WSC). We mainly follow the same generation protocol as described in the previous sections, with some minor changes in prompt templates and data post-processing steps for specific tasks. As before, we use K=32 unlabeled examples and generate using the same language model. For each task, we use all original labeled data as unsupervised examples for training data creation.

For the downstream model, we use T5 Raffel et al. (2019) for fine-tuning on the created data. Different from the released T5 checkpoints that are pretrained on multi-task data, we pretrain our own models on unsupervised Colossal Clean Crawled Corpus (C4) data only and thus the combined framework remains unsupervised. For fair comparison with existing models, we pretrain and then fine-tune a T5-Large model using the created data set. Following Raffel et al. (2019), we use a fine-tuning batch size of 8 with 512 sequence length.

Results. We compare models trained under different settings in Table 4. The GPT3 model Brown et al. (2020) using the few-shot inference method outperforms BERT++ with less supervision and no fine-tuning. However, despite having many more parameters, it performs worse than other fine-tuned fully supervised models and few-shot methods. On the other hand, our unsupervised framework using few-shot generation outperforms all few-shot learning systems without using any labels, and thus achieves new state-of-the-art results on this benchmark for methods that exploit little to no supervision. In particular, our performance gains largely come from the textual entailment tasks (CB and RTE) as well as word sense disambiguation, where GPT3 performs similarly to random guessing. This indicates that language models do contain language knowledge that few-shot inference fails to leverage.

4.3 UDG as Data Augmentation

In the previous sections we only used the created examples as pseudo supervision, to explore the limits of transfer learning using language models. Nonetheless, the synthetic data can also be treated as augmented data and combined with existing labeled data. To this end, we fine-tune the public T5-XXL checkpoint using both labeled and generated data. As shown in Table 4, our method combines well with existing labeled data and brings substantial improvements. This is particularly the case for tasks with small data sizes such as COPA and WSC. Moreover, the combined model outperforms not only prior methods but also the human baseline, for the first time on this important NLP benchmark, setting a new milestone for natural language understanding with machine learning models.

5 Conclusion

In this paper, we propose a "zero-label" training procedure and show that language models are also excellent few-shot example creators, in that they can be used to generate high-quality synthetic data in a fully unsupervised manner. Through this, we demonstrate that NLP models can obtain strong results without any human-annotated labels. Our work illustrates a promising direction for future transfer learning research in NLP.

References

  • D. Adiwardana, M. Luong, D. R. So, J. Hall, N. Fiedel, R. Thoppilan, Z. Yang, A. Kulshreshtha, G. Nemade, Y. Lu, et al. (2020) Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977. Cited by: §4.1.
  • A. Anaby-Tavor, B. Carmeli, E. Goldbraich, A. Kantor, G. Kour, S. Shlomov, N. Tepper, and N. Zwerdling (2019) Not enough data? deep learning to the rescue!. External Links: 1911.03118 Cited by: §2.
  • T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. (2020) Language models are few-shot learners. arXiv preprint arXiv:2005.14165. Cited by: §1, §1, §3.1, §4.2, Table 4.
  • N. Carlini, F. Tramer, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, A. Roberts, T. Brown, D. Song, U. Erlingsson, A. Oprea, and C. Raffel (2021) Extracting training data from large language models. External Links: 2012.07805 Cited by: §2.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1, §3.1, Table 4.
  • M. Juuti, T. Gröndahl, A. Flanagan, and N. Asokan (2020) A little goes a long way: improving toxic language classification despite data scarcity. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online, pp. 2991–3009. External Links: Link, Document Cited by: §2.
  • V. Kumar, A. Choudhary, and E. Cho (2021) Data augmentation using pre-trained transformer models. External Links: 2003.02245 Cited by: §2.
  • K. Lee, K. Guu, L. He, T. Dozat, and H. W. Chung (2021) Neural data augmentation via example extrapolation. External Links: 2102.01335 Cited by: §2.
  • Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) Roberta: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: §1, Table 4.
  • A. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts (2011) Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, pp. 142–150. Cited by: §3.2, §4.1.
  • Y. Papanikolaou and A. Pierleoni (2020) DARE: data augmented relation extraction with gpt-2. External Links: 2004.13845 Cited by: §2.
  • F. Petroni, T. Rocktäschel, P. Lewis, A. Bakhtin, Y. Wu, A. H. Miller, and S. Riedel (2019) Language models as knowledge bases?. External Links: 1909.01066 Cited by: §2.
  • A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019) Language models are unsupervised multitask learners. OpenAI blog 1 (8), pp. 9. Cited by: §1, §3.1.
  • C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu (2019) Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Cited by: Appendix B, §1, §3.1, §4.2, Table 4.
  • A. Roberts, C. Raffel, and N. Shazeer (2020) How much knowledge can you pack into the parameters of a language model?. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, pp. 5418–5426. External Links: Link, Document Cited by: §2.
  • T. Schick and H. Schütze (2020) It’s not just size that matters: small language models are also few-shot learners. arXiv preprint arXiv:2009.07118. Cited by: Table 4.
  • D. Tam, R. R. Menon, M. Bansal, S. Srivastava, and C. Raffel (2021) Improving and simplifying pattern exploiting training. arXiv preprint arXiv:2103.11955. Cited by: Table 4.
  • A. Wang, Y. Pruksachatkun, N. Nangia, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman (2019) Superglue: a stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537. Cited by: §4.2.
  • C. Wang, X. Liu, and D. Song (2020) Language models are open knowledge graphs. External Links: 2010.11967 Cited by: §2.
  • J. W. Wei and K. Zou (2019) EDA: easy data augmentation techniques for boosting performance on text classification tasks. In EMNLP-IJCNLP, K. Inui, J. Jiang, V. Ng, and X. Wan (Eds.), Cited by: §2.
  • Q. Xie, Z. Dai, E. Hovy, M. Luong, and Q. V. Le (2019) Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848. Cited by: Appendix B, Table 2, §4.1, §4.1.
  • Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Q. V. Le (2019) Xlnet: generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Cited by: §1, Table 2, §3.1.
  • A. W. Yu, D. Dohan, M. Luong, R. Zhao, K. Chen, M. Norouzi, and Q. V. Le (2018) QANet: combining local convolution with global self-attention for reading comprehension. In ICLR, Cited by: §2.
  • X. Zhang, J. Zhao, and Y. LeCun (2015) Character-level convolutional networks for text classification. arXiv preprint arXiv:1509.01626. Cited by: §4.1.

Appendix A Noisy Label Annealing

Noisiness is a common issue for synthetic data generation. To mitigate this issue, prior work [CITE] utilizes extensive filtering methods to select clean generated examples. While one key benefit of our method is high-quality synthetic data that requires minimal filtering, we do find some regularization during fine-tuning to be helpful for better performance, especially on tasks sensitive to noise. In particular, we observe that generated examples may be misaligned with the desired label class. Thus, we introduce a new training technique called Noisy Label Annealing (NLA), which gradually filters out noisy training signals as training progresses. Intuitively, we remove a specific training example if our model disagrees with its label with high confidence. Mathematically, at training step t, a given example is considered noisy and removed if (1) the model's predicted probability is higher than a threshold μ_t, and (2) the prediction differs from the synthetic label. We set the initial threshold μ_0 to 0.9 and gradually anneal it towards 1/C, where C is the number of classes. Intuitively, the model is less accurate at the early stage of fine-tuning, so we demand a very high confidence level to filter noise, whereas we can safely decrease the "bar" as the model becomes better trained. We explore different final annealing values in Table 5 and find that a more aggressive strategy often works better.
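The NLA filter described above can be sketched in a few lines of NumPy. The linear annealing schedule and the variable names are our own choices for illustration; the paper specifies only the endpoints (initial threshold 0.9, annealed towards 1/C).

```python
import numpy as np

def nla_threshold(step, total_steps, num_classes, mu0=0.9):
    """Anneal the confidence threshold linearly from mu0 down to 1/C
    over the course of training (linear schedule assumed for illustration)."""
    final = 1.0 / num_classes
    return mu0 + (step / total_steps) * (final - mu0)

def nla_keep_mask(probs, synthetic_labels, step, total_steps, num_classes):
    """Noisy Label Annealing: mark an example as noisy (and drop it) when the
    model's top prediction exceeds the current threshold AND disagrees with
    the synthetic label. Returns a boolean mask of examples to keep."""
    probs = np.asarray(probs)
    preds = probs.argmax(axis=1)          # model's predicted class per example
    conf = probs.max(axis=1)              # model's confidence per example
    thresh = nla_threshold(step, total_steps, num_classes)
    noisy = (conf > thresh) & (preds != np.asarray(synthetic_labels))
    return ~noisy
```

For example, early in training (threshold 0.9) an example whose synthetic label is class 1 but which the model assigns class 0 with probability 0.95 would be dropped, while a low-confidence disagreement would be kept.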

None    0.9→0.8   0.9→0.7   0.9→0.6   0.9→0.5
95.95   96.03     96.08     96.17     96.29
Table 5: Comparison of different final annealing thresholds on IMDb classification. We observe that performance improves as we filter more aggressively.

Appendix B Finetuning Details

For text classification, we mainly follow the experimental setups in Xie et al. (2019). We truncate the input to 512 subwords using BERT's vocabulary, keeping the last tokens. For the fine-tuning process, we search over learning rates in {1e-5, 2e-5, 5e-5} and batch sizes in {32, 64, 128}. We also tune the number of epochs based on the size of the generated data, ranging from 5 to 30. As in Xie et al. (2019), we also fine-tune the BERT model on in-domain unsupervised data prior to the final training stage. For the UDA hyperparameters, we tune the batch size and weight for both unsupervised and generated data, as well as different strategies for Training Signal Annealing (TSA). Note that TSA is orthogonal to our NLA technique, so the two can be applied at the same time. Experiments are conducted on 32 v3 TPUs.
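The hyperparameter search described above (learning rate × batch size) amounts to a small exhaustive grid. A minimal sketch, where `train_and_eval` is a hypothetical placeholder standing in for a full fine-tuning run that returns a dev score:

```python
from itertools import product

def grid_search(train_and_eval):
    """Exhaustively try each (learning rate, batch size) pair from the grid
    in the text and return (best_score, best_lr, best_batch_size)."""
    best = None
    for lr, bs in product([1e-5, 2e-5, 5e-5], [32, 64, 128]):
        score = train_and_eval(lr=lr, bs=bs)
        if best is None or score > best[0]:
            best = (score, lr, bs)
    return best
```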

For tasks in SuperGLUE, we follow the pretraining and finetuning setups in Raffel et al. (2019). For the unsupervised setting, we pretrain our own T5-Large model using the exact same settings and hyperparameters as the original paper, except that we do not utilize the original downstream task data for multi-task pretraining. For all finetuning, we use a batch size of 8 and a sequence length of 512, and use the dev scores to pick the best checkpoint for submission to the test server for final evaluation. For more details, we refer readers to the T5 paper Raffel et al. (2019).

Appendix C Prompts & Generated Examples

C.1 Text Classification

C.1.1 IMDb

Input Prompt: Sample Movie Review: This is the most saccharine romance I ever sat through. The perfect film for an idle housewife in kerchief, housedress, and ostrich-trimmed high-heeled mules to watch in the afternoon, lying on the couch eating bonbons. In fact, bonbons play a prominent role in the movie. The only reason I was able to watch to the end, is that I finally was able to gaze at Keanu Reeves’ dreamy face in almost every scene. In most of his films, he moves too fast to get a good look. The only rapid action in this show is Giancarlo Giannini waving his hands with Latin emotionality - more Italian than Mexican, really.

Negative Movie Review:

Generated Output: This remake of the 1966 French film by François Truffaut and Claude Chabrol is just plain awful. The only positive I can say about the film is that the casting was just about perfect. Otherwise, the movie has no redeeming qualities whatsoever. The plot is a mess and is filled with numerous inconsistencies and continuity flaws throughout. It is as if the director just made it up on the spot and made it up bad!

C.1.2 Yelp-2

Input Prompt: Sample Business Review: I had the most delicious burger ever! I also ordered the truffle fries, which were amazing! The sauce they served with the fries was also very good! Definitely come here!! There was great service and wonderful food!

Positive Business Review:

Generated Output: We had our wedding at The Grand in October of 2011. The Grand and their staff are absolutely wonderful to work with. We had over 100 people at the wedding and the room was packed for the reception. We had a great time and the service, food, and atmosphere were fantastic. I can not say enough good things about working with and having our wedding at The Grand. We highly recommend The Grand for your wedding or special event.

C.1.3 Yelp-5

Input Prompt: Yelp Review Content: I waited FOR 35 MINUTES tonight just for 2 brisket meals to go. They weren’t even hot when I arrived home and I live less than 1 minute away. It was still a fine meal, saved by the amazing cornbread!! But, I have no intention of going back any time soon.

Yelp Review Rating: 3/5 Content:

Generated Output: I came here to eat after the ridiculous long lines died down a bit. I must say, I don’t get the hype about this place and felt a bit disappointed. I suggested coming here with a group of friends. I was pretty excited to eat here and found that nothing impressed me and the price tag could not be justified. Nothing stood out for me that would make me want to come here again. I think the highlight of my meal was the ginseng chicken soup. All other items, I’ve had better at other buffets. I didn’t think their selection was that impressive.

C.1.4 Amazon-2

Input Prompt: Sample Product Review Title: Nice to have Content: My dog loves this bed. I don’t like to have my dog sleep on the floor. I know I spoiled my dog. I put a huge pillow on top of this bed to give her the extra comfort. My dog loves sleeping on something soft. Now I have a happy dog that sleeps comfortably every night. Money well spend.

Negative Product Review Title:

Generated Output: Not worth it Content: I am so very disappointed. I bought this for my granddaughter for Christmas. I have a few concerns, but first and foremost, the box that the doll comes in says that it must be assembled by an adult. The instructions are very confusing. My mom and I put it together for her Christmas Eve. Then we realized that the doll has a small hole in the back of the head that is on the end of the seam. I don’t know where the hole came from, but since she is a collectible, I can’t return it.

C.1.5 Amazon-5

Input Prompt: Product Review Title: Good Book Content: Good book, easy to read, with a lot of suggestions about where read the exposure. Also if the book is more than 150 pages, there are a lot of big photos, everyone with a comment; instead of this I would have prefereed more photos with comment also if they would have been smaller. So, I read this book in too little time. My rate for this book is 3.5.

Product Review Rating: 3/5 Title:

Generated Output: An ok game Content: This game is good, but I wouldn’t buy this game unless you are a star wars fan. The game has lots of levels and it takes quite a while to beat. If you don’t know anything about star wars, you would be lost on the story. There are lots of good things and bad things about this game.

C.1.6 DBpedia

Input Prompt: Wikipedia Article Title: University of Georgia School of Law Article: The University of Georgia School of Law (also referred to as Georgia Law) is a graduate school of the University of Georgia.

Wikipedia Article Category: Educational Institution Title:

Generated Output: Vanung University Article: Vanung University (VNU) is a private university in Zhongli City Taoyuan County Taiwan. It is the first private university in Taiwan to be accredited by AACSB.
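The classification prompts above all share one template: a few unlabeled sample inputs with a task-descriptive preamble, followed by a class-descriptive cue (e.g. "Positive Business Review:") that steers the language model toward generating a new input of the desired class. The following is a minimal sketch of how such a prompt might be assembled; the function name, label map, and template strings are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of class-conditional UDG prompt assembly
# (names and templates are hypothetical, not from the paper's code).

SENTIMENT_DESC = {0: "Negative", 1: "Positive"}


def build_udg_prompt(sample_reviews, target_label):
    """Concatenate unlabeled sample reviews as few-shot context, then
    append a class-descriptive cue so the LM generates a new review
    of the target class as its continuation."""
    parts = [f"Sample Business Review: {r}" for r in sample_reviews]
    parts.append(f"{SENTIMENT_DESC[target_label]} Business Review:")
    return "\n\n".join(parts)


prompt = build_udg_prompt(
    ["I had the most delicious burger ever! Definitely come here!!"],
    target_label=1,
)
# The LM's continuation of this prompt becomes a synthetic
# positive-class training example.
```

The generated continuation, paired with `target_label`, then forms one synthetic (input, label) pair for training the task-specific model.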

C.2 SuperGLUE

C.2.1 BoolQ

Input Prompt: Article: Russian Blue – Russian Blues are plush short-haired, shimmering pale blue-gray cats with emerald green eyes. Guard hairs are distinctly silver-tipped giving the cat a silvery sheen or lustrous appearance. They have been used on a limited basis to create other breeds such as the Havana Brown or alter existing breeds such as the Nebelung. They are being used in Italy as a way to make Oriental Shorthairs healthier and more robust called RUS4OSH in FIFe. Sample statement based on the article: do all russian blue cats have green eyes.

Article: Okinawa Churaumi Aquarium – The Okinawa Churaumi Aquarium (Okinawa Churaumi Suizokukan) is located within the Ocean Expo Park in Okinawa, Japan. It welcomed its 20 millionth visitor on 30 March 2010 and is a member of the Japanese Association of Zoos and Aquariums (JAZA). It was the largest aquarium in the world until it was surpassed by the Georgia Aquarium in 2005. The aquarium has the theme of “Encounter the Okinawan Sea”. False statement based on the article:

Generated Output: is okinawa churaumi aquarium the largest aquarium in the world.

C.2.2 CB

Input Prompt: Input: Jim waited. He waited a long time, and when the young doctor finally came out, it was almost dark. Jim could nonetheless tell by his anxious face that something was wrong. Output: Something was wrong.

Input: She swallowed hard, unsure if she had the nerve to go ahead. The memory of the pain in Tara’s eyes last night decided her. Did he really expect her to believe that Tara was only the housekeeper? Output: Contradicting this,

Generated Output: Tara was only a temporary housekeeper.

C.2.3 COPA

Input Prompt: Input: My body cast a shadow over the grass. Output: The sun was rising.

Input: My computer screen went blank. Connection: Because Output:

Generated Output: The power went out.

C.2.4 MultiRC

Input Prompt: Article: On a day it celebrated 50 years of giving free legal advice, Southern Arizona Legal Aid announced it has received two grants totaling more than 7 million … Question: Who will the Web site benefit? Answer: Help bridge the "digital divide" that keeps the poor from using the Internet as a resource

Article: While this process moved along, diplomacy continued its rounds. Direct pressure on the Taliban had proved unsuccessful … Question:

Generated Output: Which country did President Bill Clinton visit on March 25, 2000? Correct Answer: Pakistan

C.2.5 ReCoRD

Input Prompt: Article: The harrowing stories of women and children locked up for so-called ’moral crimes’ in Afghanistan’s notorious female prison have been revealed after cameras were allowed inside. Mariam has been in Badam Bagh prison for three months after she shot a man who just raped her at gunpoint and then turned the weapon on herself - but she has yet to been charged. Nuria has eight months left to serve of her sentence for trying to divorce her husband. She gave birth in prison to her son and they share a cell together. Scroll down for video Nuria was jailed for trying to divorce her husband. Her son is one of 62 children living at Badam Bagh prison. Keyword: Badam Bagh prison ### Relevant Fact: The women and children are jailed in Badam Bagh prison.

Article: China’s space agency today distributed fresh new pictures of the Moon’s surface from its lunar lander and rover. The Jade Rabbit rover can be seen trundling across the pockmarked surface of Earth’s satellite in pictures released today by the Chinese Academy of Sciences. The successful soft-landing of the Chang’e 3 lander in December made China the third country to complete a lunar rover mission after the United States and Russia. Chasing the horizon: The sky over the Moon appears inky black in this picture showing the Jade Rabbit rover trundling across the lunar surface taken from the Chang’e 3 lander and released today by Chinese scientists. Keyword: China ### Relevant Fact:

Generated Output: Knock off or not, there’s no denying the ambition of the China’s Moon mission, which on December 14 achieved the first lunar soft landing in nearly four decades and aims at a manned mission by 2025.

C.2.6 RTE

Input Prompt: Input: No Weapons of Mass Destruction Found in Iraq Yet. Output: Weapons of Mass Destruction Found in Iraq.

Input: CBS newsman Harry Reasoner is returning to his Iowa hometown to get married Saturday. Output: In other words,

Generated Output: Harry Reasoner is getting married.

C.2.7 WiC

Input Prompt: Word: meet Example: The company agrees to meet the cost of any repairs.

Word: skin Definition 1:

Generated Output: the outer covering of the body of a vertebrate animal. Example: The skin of a snake.

C.2.8 WSC

Input Prompt: Noun: Sam ### Pronoun: he ### Sample Sentence: Sam took French classes from Adam , because he was eager to speak it fluently.

Noun: Ann ### Pronoun: She ### Sentence where She refers to Ann:

Generated Output: It seems that Ann has had a run of bad luck. She never wins at cards.
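In the SuperGLUE examples, the label of each synthetic example is implied by the cue used at generation time (e.g. "In other words," elicits an entailed hypothesis for RTE, while "Contradicting this," elicits a contradiction for CB). A minimal sketch of how a generated continuation might be packaged into a labeled training example is shown below; the function and field names are hypothetical, not taken from the paper.

```python
# Hypothetical post-processing sketch: the label is fixed by the
# generation-time cue, so pairing the premise with the LM output
# yields one synthetic NLI example.


def make_rte_example(premise, generated_hypothesis, label):
    """Package one synthetic entailment pair. `label` reflects the
    prompt cue used during generation ("In other words," implies
    entailment), not a human annotation."""
    return {
        "premise": premise,
        "hypothesis": generated_hypothesis.strip(),
        "label": label,
    }


ex = make_rte_example(
    "CBS newsman Harry Reasoner is returning to his Iowa hometown "
    "to get married Saturday.",
    " Harry Reasoner is getting married.",
    label="entailment",
)
```

Collecting many such pairs across cues for each class yields the fully synthetic training set used for zero-label fine-tuning.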