Latent Universal Task-Specific BERT

05/16/2019
by Alon Rozental, et al.
Amobee, Inc.

This paper describes a language representation model which combines the Bidirectional Encoder Representations from Transformers (BERT) learning mechanism described in Devlin et al. (2018) with a generalization of the Universal Transformer model described in Dehghani et al. (2018). We further improve this model by adding a latent variable that represents the persona and topics of interests of the writer for each training example. We also describe a simple method to improve the usefulness of our language representation for solving problems in a specific domain at the expense of its ability to generalize to other fields. Finally, we release a pre-trained language representation model for social texts that was trained on 100 million tweets.

1 Introduction

Several state-of-the-art results in multiple NLP tasks were obtained recently by relying on pre-trained language models (Peters et al., 2018; Cer et al., 2018). Amongst the most notable of these models is BERT (Devlin et al., 2018), which relies on the Transformer Encoder architecture (Vaswani et al., 2017) and a unique pre-training objective known as the "masked language model" (MLM). The objective of MLM is to predict missing vocabulary tokens based on their context; using this objective was shown to greatly improve the pre-training effectiveness of the model by allowing the self-attention layers of the Transformer to attend to tokens both before and after a missing token. An additional component of the loss function used in BERT is the "next sentence" (NS) prediction loss; the model generates a special classification vector from the text, which supports a classifier that decides whether the text is a single paragraph or a concatenation of two unrelated text sequences (sentences). Another approach that yielded impressive results over the last year focused on improving the self-attention mechanism of language models (Dehghani et al., 2018; So et al., 2019), and these improvements were often shown to produce better results compared to older Transformer-based language models.

In addition to language model architectures, we explored the use of latent variables in modeling. Latent variables have various uses in NLP; a popular example is the Latent Dirichlet Allocation algorithm for topic modeling (Blei et al., 2003), in which the latent variables are the unobserved topics: each document is a mixture of topics and each topic has some characteristic words. We introduce a simplified variation of this use of latent variables: when modeling language, specifically tweets, we assume each document is written by a single unknown author. We model the author classes creating the tweets using latent variables, where each author class represents a distribution over the vocabulary words, thus helping to predict the missing words in a tweet.

We focus on Twitter as a source of short, social text interactions and, specifically, on predicting missing words in tweets. Regarding the use of latent variables, we feel that our assumption of one author per tweet is reasonably justified in this case.

Our original contribution is extending the BERT model with two new mechanisms: first, dynamically calculating the number of iterations over tokens, similar to the Universal Transformer (Dehghani et al., 2018); second, latent variables representing the different "kinds" of tweet authors, represented in the bias terms of the last layer, which increase the accuracy of missing-word predictions. Finally, we present a simple technique for specializing the pre-training for specific tasks.

The paper is organized as follows: Section 2 describes the latent-variable modeling of topics, Section 3 describes the modifications to the BERT architecture, Section 4 presents a method of specializing pre-training for specific tasks, and Sections 5 and 6 describe the experiments and results, respectively. Finally, in Section 7 we review and summarize.

2 Latent Topics

Knowing the topic or context of the text, as well as the interests of the author, can be very helpful in the prediction of missing words. For example, when reading an article about American history, the missing word in the sentence "He was born in <MISSING>" can be most reasonably guessed to be a city in the USA. However, when reading an article about Polish history the previous guess would be extremely unlikely. In language models based on the Transformer architecture (Vaswani et al., 2017), the context of a word is learnt through a self-attention mechanism, where each word queries all other words for relevant details. Such a mechanism requires a computational complexity of $O(n^2)$, where $n$ is the number of words in the text; however, there are relevant features that can be extracted from the text with a few parameters and a lower complexity with regards to the number of words in the text. Specifically, we believe that meaningful insights about the topic of a text can be inferred with an $O(n)$ complexity, and that doing so may reduce the inferential load from the computationally and parameter intensive part of the model.

We suggest a mechanism to learn the topic of a text by extending the final model's bias vector with a matrix of size $V \times L$, where $V$ is the vocabulary size and $L$ is the latent space dimension; in this work we take it to be $L = 8$. The latent space dimension represents the number of possible topics, or user personas. The latent matrix weights are learnt in the following way: we assume each example was written under a single latent category and generate a probability distribution over the $L$ categories for each training example, with the probability defined as:

$$P_l = \mathrm{softmax}_l\Big(\sum_{t \in T} b_{l,t}\Big) \qquad (1)$$

where $P_l$ is the probability of latent category $l$ for the sentence, $b_{l,t}$ is the $l$th bias term for the token $t$, $T$ is the set of unmasked tokens in the sentence, and the softmax normalization is over all latent topics. In order to calculate the MLM loss, we construct a per-example bias vector $\tilde{b}$ with component $\tilde{b}_i$ representing the $i$th missing word in the example:

$$\tilde{b}_i = \sum_{l=1}^{L} P_l \, b_{l,i} \qquad (2)$$

We then continue the loss calculation as described in Devlin et al. (2018).
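
To make this construction concrete, the following minimal PyTorch sketch (our own illustration; the authors' released code is not shown here, and all tensor names, shapes and token ids are assumptions) computes the latent distribution of Equation (1) and the per-example bias of Equation (2) for a single training example.

```python
import torch
import torch.nn.functional as F

# Assumed sizes: V = vocabulary size, L = number of latent categories (8 in this work).
V, L = 30522, 8

# The single output-layer bias of BERT's MLM head is replaced by a V x L matrix:
# one bias value per (token, latent category) pair.
latent_bias = torch.nn.Parameter(torch.zeros(V, L))

def latent_topic_distribution(unmasked_token_ids: torch.Tensor) -> torch.Tensor:
    """Eq. (1): softmax over the L categories of the summed bias terms
    of the unmasked tokens T in the example; returns P with shape (L,)."""
    summed = latent_bias[unmasked_token_ids].sum(dim=0)   # shape (L,)
    return F.softmax(summed, dim=0)

def per_example_bias(p_latent: torch.Tensor) -> torch.Tensor:
    """Eq. (2): bias vector of shape (V,) used when scoring the masked
    positions of this example, weighted by the latent distribution."""
    return latent_bias @ p_latent

# Usage on one example (token ids are placeholders):
unmasked = torch.tensor([2023, 2003, 1037, 28937])
p = latent_topic_distribution(unmasked)   # P_l for l = 1..L
mlm_bias = per_example_bias(p)            # added to the MLM output logits
```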

In order to calculate the NS prediction loss, we first compute the Euclidean distance and KL divergence between the probability distributions over latent categories of the first and second parts of the text (denoted as sentences $A$ and $B$). These numbers are given as additional inputs to the classification token that is used to perform the NS prediction. Adding the distance between the distributions improves NS prediction in cases where it is likely that sentences $A$ and $B$ were written by different authors.
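
As a rough sketch of these two extra features (continuing the notation of the previous snippet; the direction of the KL divergence is our assumption, as the text does not specify it), one could compute:

```python
def ns_distance_features(p_a: torch.Tensor, p_b: torch.Tensor) -> torch.Tensor:
    """Euclidean distance and KL divergence between the latent distributions
    of sentence A and sentence B, concatenated to the classification-token
    representation before the NS prediction layer."""
    euclidean = torch.norm(p_a - p_b)
    # KL(P_A || P_B); kl_div expects log-probabilities as its first argument.
    kl = F.kl_div(p_b.log(), p_a, reduction="sum")
    return torch.stack([euclidean, kl])
```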

When extracting features from text, we extract the latent bias distribution of the text, in addition to the usual word vectors and the classification token vector. In many cases, the different bias vectors will diverge into human-comprehensible categories. In these cases, the bias distribution of the text can yield meaningful insights. To illustrate this, we list in Table 2 the top tweets for each category, i.e., the tweets with the highest $P_l$ for category $l$.

Model                    Parameters   Masked LM accuracy   Next sentence prediction accuracy
Base Model               110.1M       0.507                0.949
Latent Model             110.3M       0.514                0.950
Universal Model          46.3M        0.508                0.951
Latent Universal Model   46.5M        0.507                0.949

Table 1: Results for all models at the end of training. These results were obtained on a validation set of 100,000 tweets.

3 Universal Modification

In order to improve performance and reduce the number of model parameters, we have replaced the Transformer (Vaswani et al., 2017) model with a design similar to the Universal Transformer (Dehghani et al., 2018), where the sequential self-attention blocks of the Transformer Encoder are replaced with a single recurring block.

This model also incorporates an Adaptive Computation Time (ACT) mechanism, similar to Graves (2016), which adjusts the number of times the representation of each position in a sequence is revised. Furthermore, we extended the recurrent part to have three sequential self-attention layers, instead of one. When using this model, an ACT ponder-time regularization loss was added to the overall loss of the model. This loss increases linearly with the number of revisions to each token’s representation. The total number of parameters in this model is 46.3 million, significantly less than the 110.1 million parameters used by the original model.
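
The sketch below illustrates the general shape of such a recurrent encoder: a single block of three self-attention layers applied repeatedly, with a simplified ACT-style halting unit and a ponder cost that is added to the training loss. It is our own simplification (the hyper-parameters, the halting rule and the use of torch.nn.TransformerEncoderLayer are assumptions, and the state-averaging details of full ACT are omitted), not the released implementation.

```python
import torch
import torch.nn as nn

class RecurrentEncoder(nn.Module):
    """Simplified Universal-Transformer-style encoder with ACT halting."""

    def __init__(self, d_model=768, n_heads=12, max_steps=8, threshold=0.99):
        super().__init__()
        # The recurring block: three sequential self-attention layers.
        self.block = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(3)
        ])
        self.halt = nn.Linear(d_model, 1)   # per-position halting probability
        self.max_steps = max_steps
        self.threshold = threshold

    def forward(self, x):                   # x: (batch, seq, d_model)
        batch, seq, _ = x.shape
        halted = torch.zeros(batch, seq, 1, device=x.device)
        ponder = torch.zeros(batch, seq, 1, device=x.device)
        state = x
        for _ in range(self.max_steps):
            still_running = (halted < self.threshold).float()
            halted = halted + torch.sigmoid(self.halt(state)) * still_running
            ponder = ponder + still_running          # each extra revision costs 1
            new_state = state
            for layer in self.block:
                new_state = layer(new_state)
            # Positions that have already halted keep their current representation.
            state = still_running * new_state + (1.0 - still_running) * state
            if bool((halted >= self.threshold).all()):
                break
        # ponder.mean() is the ACT regularization term added to the total loss.
        return state, ponder.mean()
```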

4 Task Specific Preprocessing

In the past, there have been several successful attempts to create "task-specific" word embeddings and classifiers (Tang et al., 2014; Rozental et al., 2018). These embeddings usually outperform similar classifiers that use a general-purpose word embedding. Recently, several state-of-the-art results were achieved by contextual, language-model-based word embeddings (Cer et al., 2018; Peters et al., 2018), outperforming both general and task-specific word embeddings.

We suggest a method of improving contextual word embeddings by adding a class weight to each token in the vocabulary; while training a language model, we multiply each token's loss by a pre-determined parameter. More concretely, we choose a large parameter when predicting words that are known to be relevant for the task, and a small parameter when predicting irrelevant words. For example, emojis are important for emotion classification and various NLP tasks, so we took this parameter to be 2 for emojis. On the other hand, we assigned a small weight to URLs, which are compressed by Twitter into a random sequence of characters, and to Twitter mentions, which often look like @IlovePizza and have very little to do with the text of the tweet.

We found that treating URLs and mentions as any other part of the text introduces a lot of noise to our model, while trying to replace them with unweighted special tokens such as _URL_ results in models that only predict these frequent tokens. Overall, the weighting step helps reduce noise and focus on the features that are important to us.
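
A minimal sketch of this weighting follows (our own illustration: the token-id arguments and the down-weighting value for URL and mention tokens are placeholders, since the exact factor is not given here; only the weight of 2 for emojis is taken from the text).

```python
import torch

def token_loss_weights(vocab_size, emoji_ids, url_id, mention_id,
                       emoji_weight=2.0, noise_weight=0.1):
    """Per-token weights that multiply the MLM cross-entropy loss of each
    target token before averaging over the masked positions."""
    weights = torch.ones(vocab_size)
    weights[list(emoji_ids)] = emoji_weight   # emphasise emojis (weight 2, from the text)
    weights[url_id] = noise_weight            # down-weight the URL token (assumed value)
    weights[mention_id] = noise_weight        # down-weight the mention token (assumed value)
    return weights
```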

5 Experiment

We trained four model variants; we will refer to them as "Base Model", "Latent Model", "Universal Model" and "Latent Universal Model". The Base Model was trained using the pre-training script provided by the original BERT code. The Latent Model was augmented with 8 latent bias layers as described in Section 2. The Universal Model uses the model described in Section 3, and the Latent Universal Model implements both improvements.

All the models were trained for 5 million batches with 20 tweets per batch. The maximum tweet length was set to 96 tokens, and the vocabulary contained the original BERT vocabulary augmented with the 824 most common emoticons on Twitter. In order to have the NS prediction loss, tweets were split into two parts: the first sentence and the rest of the tweet. Emoticons were considered to be sentence splitters for this purpose, but unlike characters such as [. ! ?], an emoticon was considered to be the first character of the new sentence rather than the last character of the previous sentence. All models were trained on a single Tesla V100 GPU.
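
As a rough illustration of this splitting rule (a sketch under our own assumptions; EMOTICONS stands in for the 824 emoticons actually added to the vocabulary, and the fallback for tweets without any splitter is our guess):

```python
EMOTICONS = {":)", ":(", ":D", ":P"}   # placeholder subset of the 824 emoticons

def split_tweet(tweet: str):
    """Split a tweet into a 'first sentence' / 'rest of the tweet' pair for the
    NS prediction objective. Punctuation ends the first part, while an
    emoticon instead starts the second part."""
    tokens = tweet.split()
    for i, tok in enumerate(tokens):
        if tok in EMOTICONS and i > 0:
            return " ".join(tokens[:i]), " ".join(tokens[i:])
        if tok and tok[-1] in ".!?" and i + 1 < len(tokens):
            return " ".join(tokens[:i + 1]), " ".join(tokens[i + 1:])
    # No splitter found: fall back to splitting roughly in the middle.
    mid = max(1, len(tokens) // 2)
    return " ".join(tokens[:mid]), " ".join(tokens[mid:])
```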

6 Results

After training the four aforementioned models, we compared them by examining their accuracy on the pre-training sub-tasks. The "Latent Model" achieved the highest MLM accuracy and is significantly better than the other models (p-value < 0.001). None of the other comparisons yielded significantly different results on either the MLM or the NS prediction task. Notably, the "Universal Model" was able to achieve performance equal to the Base Model while using less than half of the trainable parameters. Accuracy and number of parameters are shown in Table 1.

Another noticeable finding is that the recurrent models (Universal and Latent Universal) become slower over time as they learn to perform more repetitions of the recurrent part of the model. This is a result of having a regularization factor in these models for the number of recurrent repetitions; as the recurrent part gets better, it becomes more worthwhile to "pay" the regularization cost. Interestingly, adding the aforementioned latent bias variables weakens this tendency. While the Universal Model ends up ~25% slower than the corresponding Base Model, the Latent Universal Model is just as fast as the corresponding Latent Model, suggesting that the latent bias variables alleviate some of the inferential load from the self-attention part of the model. For loss over time see Figure 1; for example rate over time see Figure 2.

Figure 1: This figure shows the loss over time for the different models. Losses for the Universal Model and Latent Universal Model also include a small ACT regularization term.
Figure 2: The figure shows the running speed of the different models. The recurrent models become slower over time as they learn to repeat the self-attention step of the model more times, though this tendency is weaker in the presence of latent bias variables.

7 Summary and Conclusions

In this paper we described a system that combines the loss function derived from Devlin et al. (2018) with a recurrent variant of the Transformer architecture, the Universal Transformer. In addition, we modeled independent authors as latent variables by expanding the bias term and modifying the loss. We have shown that the described changes can both improve word-prediction accuracy and reduce the complexity of the model when performing the pre-training phase on social textual data.

The code used to produce the above results and the trained Latent Model can be found at https://s3.amazonaws.com/amobee-research-public/language-model/latent_5M_bert.zip.

References