BERT (Devlin et al., 2018) has demonstrated great power for providing general-purpose vector embeddings of natural language: its representations have served as the basis of many successful deep Natural Language Processing (NLP) models on a variety of tasks (e.g., Liu et al., 2019; Zhang et al., 2019). Recent studies (Coenen et al., 2019; Hewitt and Manning, 2019; Lin et al., 2019; Tenney et al., 2019) have shown that BERT representations carry considerable information about grammatical structure, which, by design, is a deep and general encapsulation of linguistic information. Symbolic computation over structured symbolic representations such as parse trees has long been used to formalize linguistic knowledge. To strengthen the generality of BERT's representations, we propose to import this type of computation into its architecture.
Symbolic linguistic representations support the important distinction between content and form information. The form consists of a structure devoid of content, such as an unlabeled tree: a collection of nodes defined by their structural positions or roles (Newell, 1980), such as root, left-child-of-root, right-child-of-left-child-of-root, etc. In a particular linguistic expression such as "Kim referred to herself during the speech", these purely-structural roles are filled with particular content-bearing symbols, including terminal words like Kim and non-terminal categories like NounPhrase. These role fillers have their own identities, which are preserved as they move from role to role across expressions: Kim retains its referent and its semantic properties whether it fills the subject or the object role in a sentence. Structural roles too maintain their distinguishing properties as their fillers change: the root role dominates the left-child-of-root role regardless of how these roles are filled.
Thus it is natural to ask whether BERT's representations can be usefully factored into content/form, i.e., filler/role, dimensions. To answer this question, we recast it as: can BERT's representations be usefully unpacked into Tensor-Product Representations (TPRs)? A TPR is a collection of constituents, each of which is the binding of a filler to a structural role. Specifically, we let BERT's final-layer vector encoding of each token of an input string be factored explicitly into a filler bound to a role: both the filler and the role are embedded in a continuous vector space, and they are bound together according to the principle defining TPRs: the tensor product. This factorization effectively untangles the fillers from their roles, two dimensions that are fully entangled in the BERT encoding itself. We then see whether disentangling BERT representations into TPRs facilitates their general use in a range of NLP tasks.
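As a concrete illustration of filler/role binding and unbinding in a TPR, here is a minimal numpy sketch; the vectors and dimensions are invented for illustration and are not drawn from the model:

```python
import numpy as np

# Toy TPR: bind two fillers to two orthonormal role vectors and sum.
f_kim  = np.array([1.0, 0.0, 2.0])   # filler vector for "Kim" (hypothetical)
f_self = np.array([0.0, 3.0, 1.0])   # filler vector for "herself" (hypothetical)
r_subj = np.array([1.0, 0.0])        # role vector: subject
r_obj  = np.array([0.0, 1.0])        # role vector: object (orthonormal to r_subj)

# The TPR of the whole structure is a sum of filler (x) role outer products.
T = np.outer(f_kim, r_subj) + np.outer(f_self, r_obj)   # shape (3, 2)

# With linearly independent role vectors, each filler is exactly recoverable
# by "unbinding": contracting the TPR with the corresponding role vector.
recovered_kim = T @ r_subj
assert np.allclose(recovered_kim, f_kim)
```

Because the roles here are orthonormal, unbinding is exact; with merely linearly independent roles, contraction with the dual basis vectors plays the same part.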
Concretely, as illustrated in Figure 1, we create HUBERT by adding a TPR layer on top of BERT; this layer takes the final-layer BERT embedding of each input token and transforms it into the tensor product of a filler embedding-vector and a role embedding-vector. The model learns to separate fillers from roles in an unsupervised fashion, trained end-to-end to perform an NLP task.
If the BERT representations truly are general-purpose for NLP, the TPR re-coding should reflect this generality. In particular, the formal, grammatical knowledge we expect to be carried by the roles should be generally useful across a wide range of downstream tasks. We thus examine transfer learning, asking whether the roles learned in the service of one NLP task can facilitate learning when carried over to another task.
In brief, overall we find in our experiments on the NLP benchmarks of GLUE (Wang et al., 2018) and HANS (McCoy et al., 2019) that HUBERT's recasting of BERT encodings as TPRs does indeed lead to effective knowledge transfer across NLP tasks, while the bare BERT encodings do not. Specifically, after pre-training on the MNLI dataset in GLUE, we observe positive gains ranging from 0.60% to 12.28% when subsequently fine-tuning on the QNLI, QQP, RTE, SST, and SNLI tasks. These gains come from transferring TPR knowledge, in particular the learned roles; transferring just the BERT parameters yields gains ranging from -0.33% to +2.53%.
Additionally, on average, we gain 5.7% improvement on the demanding non-entailment class of the HANS challenge dataset. Thus TPR’s disentangling of fillers from roles, motivated by the nature of symbolic representations, does yield more general deep linguistic representations as measured by cross-task transfer.
2 Related work
Building on the successes of symbolic AI and linguistics since the mid-1950s, there has been a long line of work exploiting symbolic and discrete structures in neural networks since the 1990s. Along with Holographic Reduced Representations (Plate, 1995) and Vector-Symbolic Architectures (Levy and Gayler, 2008), Tensor Product Representations (TPRs) provide the capacity to represent discrete linguistic structure in a continuous, distributed manner, where grammatical form and semantic content can be disentangled (Smolensky, 1990; Smolensky and Legendre, 2006). In Lee et al. (2016), TPR-like representations were used to solve the bAbI tasks (Weston et al., 2016), achieving close to 100% accuracy on all but one of these tasks. Schlag and Schmidhuber (2018) also achieved success on the bAbI tasks, using third-order TPRs to encode and process knowledge-graph triples. In Palangi et al. (2018), a new structured recurrent unit (TPRN) was proposed to learn grammatically-interpretable representations using weak supervision from (context, question, answer) triplets in the SQuAD dataset (Rajpurkar et al., 2016). In Huang et al. (2018), unbinding operations of TPRs were used to perform image captioning. None of this previous work, however, examined the generality of learned linguistic knowledge through transfer learning.
Transfer learning for transformer-based models has been studied recently: Keskar et al. (2019) and Wang et al. (2019) report improvements in accuracy over BERT after training on an intermediate task from GLUE, an approach which has come to be known as Supplementary Training on Intermediate Labeled-data Tasks (STILTs). However, as shown by Phang et al. (2018), the results do not follow a consistent pattern when different corpora are used for fine-tuning BERT, and degraded downstream transfer is often observed. Even for data-rich tasks like QNLI, the baseline results do not improve regardless of the intermediate task and multi-tasking strategy. This calls for new model architectures with better knowledge-transfer capability.
3 Model Description
Applying the TPR scheme to encode the individual words (or sub-word tokens) fed to BERT, we represent each word as the tensor product of a vector embedding its content, its filler (or symbol) aspect, and a vector embedding the structural role it plays in the sentence. Given the results of Palangi et al. (2018), we expect the symbol to capture the semantic contribution of the word while the structural role captures its grammatical role:
Assuming we have $n_S$ symbols with dimension $d_S$ and $n_R$ roles with dimension $d_R$,
$$T_t = s_t \otimes r_t \qquad (1)$$
is the tensor representation for token $t$, where $s_t \in \mathbb{R}^{d_S}$ is the (presumably semantic) symbol representation and $r_t \in \mathbb{R}^{d_R}$ is the (presumably grammatical) role representation for token $t$. $s_t$ may be either the embedding of one symbol or a linear combination of different symbols using a softmax symbol selector, and similarly for $r_t$. In other words, Eq. 1 can also be represented as $T_t = S B_t R^\top$, where $S \in \mathbb{R}^{d_S \times n_S}$ and $R \in \mathbb{R}^{d_R \times n_R}$ are matrices whose columns contain the global symbol and role embeddings, common to all tokens, and either learned from scratch or initialized by transferring from other tasks, as explained in Section 4. $B_t$ is the binding matrix which selects specific roles and symbols (embeddings) from $R$ and $S$ and binds them together. We assume that for a single-word representation the binding matrix is rank 1, so we can decompose it into two separate vectors, one soft-selecting a symbol and the other a role, $B_t = a^s_t (a^r_t)^\top$, and rewrite Eq. 1 as $T_t = (S a^s_t)(R a^r_t)^\top$, where $a^r_t$ and $a^s_t$ can respectively be interpreted as attention weights over different roles (columns of $R$) and symbols (columns of $S$). For each input token $t$, we get its contextual representations of grammatical role ($r_t$) and semantic symbol ($s_t$) by fusing the contextual information from the role and symbol representations of its surrounding tokens.
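The rank-1 factorization just described can be sketched in a few lines of numpy; the sizes and the `softmax` helper below are illustrative assumptions, not the model's actual hyper-parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_S, d_S = 5, 4   # number / dimension of symbols (illustrative sizes)
n_R, d_R = 3, 2   # number / dimension of roles

S = rng.normal(size=(d_S, n_S))   # columns: global symbol embeddings
R = rng.normal(size=(d_R, n_R))   # columns: global role embeddings

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

a_s = softmax(rng.normal(size=n_S))   # soft symbol selector for one token
a_r = softmax(rng.normal(size=n_R))   # soft role selector for one token

s_t = S @ a_s                # token's symbol vector (linear combination)
r_t = R @ a_r                # token's role vector
T_rank1 = np.outer(s_t, r_t) # Eq. (1): s_t (x) r_t

B_t = np.outer(a_s, a_r)     # rank-1 binding matrix
T_factored = S @ B_t @ R.T   # T_t = S B_t R^T

# Both routes produce the same tensor representation.
assert np.allclose(T_rank1, T_factored)
```

The identity holds because S (a_s a_r^T) R^T = (S a_s)(R a_r)^T, which is exactly the outer product of the selected symbol and role vectors.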
We explore two options for mapping the input token from the current time-step, and the tensor representation from the previous time-step, to $s_t$ and $r_t$: a Long Short-Term Memory (LSTM) architecture (Hochreiter and Schmidhuber, 1997) and a one-layer Transformer. Our conclusion based on initial experiments was that the Transformer layer integrates better and combines more homogeneously with the other Transformer layers in BERT, as will be described shortly.
The TPR layer with LSTM architecture works as follows (see also Figure 2, discussed in Sec. 4.3). We calculate the hidden states ($s_t$, $r_t$) and cell states ($c^s_t$, $c^r_t$) for each time-step according to the following equations:
$$s_t,\, c^s_t = \mathrm{LSTM}_S\big(v_t,\ \mathrm{flatten}(s_{t-1} \otimes r_{t-1}),\ c^s_{t-1}\big)$$
$$r_t,\, c^r_t = \mathrm{LSTM}_R\big(v_t,\ \mathrm{flatten}(s_{t-1} \otimes r_{t-1}),\ c^r_{t-1}\big)$$
where $v_t$ is the final-layer BERT embedding of the $t$-th word, and $\mathrm{flatten}(\cdot)$ flattens the input tensor into a vector. Each LSTM's input cell state is the previous step's output cell state. Each LSTM's input hidden state, however, is calculated by binding the previous step's role and symbol vectors.
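A minimal sketch of this recurrence, with a plain single-layer cell standing in for the two LSTMs (the gating machinery is omitted, and the weights and sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d_bert, d_s, d_r = 8, 3, 2          # illustrative sizes

# Simplified recurrent cells standing in for LSTM_S / LSTM_R: each maps
# (BERT token vector, flattened previous TPR) -> new symbol / role vector.
Ws = rng.normal(size=(d_s, d_bert + d_s * d_r)) * 0.1
Wr = rng.normal(size=(d_r, d_bert + d_s * d_r)) * 0.1

def step(v_t, s_prev, r_prev):
    # The recurrent hidden input is the *binding* of the previous step's
    # symbol and role vectors, flattened into a vector.
    h_prev = np.outer(s_prev, r_prev).reshape(-1)
    x = np.concatenate([v_t, h_prev])
    return np.tanh(Ws @ x), np.tanh(Wr @ x)

tokens = rng.normal(size=(5, d_bert))       # 5 BERT token embeddings
s, r = np.zeros(d_s), np.zeros(d_r)
tprs = []
for v in tokens:
    s, r = step(v, s, r)
    tprs.append(np.outer(s, r))             # per-token TPR T_t
assert len(tprs) == 5 and tprs[0].shape == (3, 2)
```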
In the TPR layer with Transformer architecture, we calculate the output representations ($s_t$, $r_t$) using a Transformer Encoder layer:
Each Transformer layer consists of a multi-head attention layer, followed by a residual block (with dropout), a layer normalization block, a feed-forward layer, another residual block (with dropout), and a final layer normalization block. (See Figure 3, discussed in Sec. 4.3, and the original Transformer paper, Vaswani et al. (2017), for more details.)
Given that each word is usually assigned only a few grammatical roles and semantic concepts (ideally one), we enforce an inductive bias using a softmax temperature ($\tau$) to make $a^r_t$ and $a^s_t$ sparse. Note that in the limit of very low temperatures, we end up with one-hot vectors which pick only one filler and one role. (Bias parameters are omitted for simplicity of presentation.)
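The effect of the softmax temperature can be sketched as follows; the logits are invented, and only the temperature behavior is the point:

```python
import numpy as np

def softmax_with_temperature(logits, tau):
    """Softmax over logits / tau; small tau sharpens the distribution."""
    z = np.asarray(logits, dtype=float) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

logits = [2.0, 1.0, 0.5, 0.1]

soft  = softmax_with_temperature(logits, tau=1.0)    # smooth distribution
sharp = softmax_with_temperature(logits, tau=0.05)   # near one-hot

# Low temperature concentrates nearly all mass on the argmax entry,
# approximating a hard, one-hot role/symbol selection.
assert sharp[0] > 0.999
assert soft[0] < 0.6
```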
Here $W^s$ and $W^r$ are the linear-layer weights producing the logits for $a^s_t$ and $a^r_t$. For the final output of the transformer model, we explored different aggregation strategies to construct the final sentence embedding:
$$p = \mathrm{softmax}\big(W^c\, g(T_1, \dots, T_L)\big) \qquad (5)$$
where $p$ is a probability distribution over class labels, $g(T_1, \dots, T_L)$ is the final sentence representation, $W^c$ is the classifier weight matrix, $L$ is the maximum sequence length, and $g$ defines the merging strategy. We experimented with different aggregation strategies: max-pooling, mean-pooling, masking all but the input-initial [CLS] token, and concatenating all tokens and projecting down using a linear layer. In Devlin et al. (2018), the final representation for the [CLS] token is used as the sentence representation. However, during our experiments, we observed better results when concatenating the final embeddings for all tokens and then projecting down to a smaller dimension, as this exposes the classifier to the full span of token information.
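The aggregation strategies can be sketched as below; the weight matrices, dimensions, and the `aggregate` helper are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
L, d, d_out, n_cls = 6, 4, 3, 2      # seq length, hidden dim, etc. (illustrative)
H = rng.normal(size=(L, d))          # final per-token embeddings

W_proj = rng.normal(size=(d_out, L * d)) * 0.1   # down-projection weights
W_cls  = rng.normal(size=(n_cls, d_out)) * 0.1   # classifier weights

def aggregate(H, strategy):
    if strategy == "cls":       # keep only the input-initial [CLS] token
        return H[0]
    if strategy == "mean":      # mean-pool over tokens
        return H.mean(axis=0)
    if strategy == "max":       # max-pool over tokens
        return H.max(axis=0)
    if strategy == "concat":    # concatenate all tokens, project down
        return W_proj @ H.reshape(-1)
    raise ValueError(strategy)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# "concat" exposes the classifier to the full span of token information.
logits = W_cls @ aggregate(H, "concat")
p = softmax(logits)
assert p.shape == (n_cls,) and np.isclose(p.sum(), 1.0)
```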
The formal symmetry between symbols and roles evident in Eq. 1 is broken in two ways.
First, we choose hyper-parameters so that the number of symbols is greater than the number of roles. Thus each role is on average used more often than each symbol, encouraging more general information (such as structural position) to be encoded in the roles, and more specific information (such as word semantics) to be encoded in the symbols. (This effect was evident in the analogous TPR learning model of Palangi et al. (2018).)
Second, to enable the symbol that fills any given role to be exactly recoverable from a TPR in which it appears along with other symbols, the role vectors should be linearly independent: this expresses the intuition that distinct structural roles carry independent information. Fillers, however, are not expected to be independent in this sense, since many fillers may have similar meanings and be quasi-interchangeable. So for the role matrix $R$, but not the filler matrix $S$, we add a regularization term to the training loss which encourages the matrix to be orthogonal:
$$\mathcal{L} = -\sum_{c} \mathbb{1}[c = y] \log p_c + \lambda \left( \|R^\top R - I_{n_R}\|_F^2 + \|R R^\top - I_{d_R}\|_F^2 \right)$$
where $\mathcal{L}$ indicates the loss function, $I_k$ is the identity matrix with $k$ rows and $k$ columns, and $\mathbb{1}$ is the indicator function: it is 1 when class $c$ is the correct class label $y$, and 0 otherwise. Following the practice in Bansal et al. (2018), we use this double soft orthogonality regularization to handle both over-complete and under-complete matrices $R$.
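A minimal numpy sketch of the double soft orthogonality penalty; the function name and the weight value `lam` are illustrative assumptions:

```python
import numpy as np

def double_soft_orthogonality(R, lam=1e-2):
    """lam * (||R^T R - I||_F^2 + ||R R^T - I||_F^2). Penalizing both
    Gram matrices handles over- and under-complete matrices R."""
    d, n = R.shape
    a = R.T @ R - np.eye(n)
    b = R @ R.T - np.eye(d)
    return lam * ((a ** 2).sum() + (b ** 2).sum())

# A square orthogonal matrix incurs (near-)zero penalty...
Q, _ = np.linalg.qr(np.random.default_rng(3).normal(size=(4, 4)))
assert double_soft_orthogonality(Q) < 1e-12

# ...while a rank-deficient matrix is heavily penalized.
assert double_soft_orthogonality(np.ones((4, 4))) > 1.0
```

In training, this penalty would simply be added to the task loss for the role matrix only, leaving the filler matrix unconstrained.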
4 Experiments

We performed extensive experiments to answer the following questions:
Does adding a TPR layer on top of BERT (as in the previous section) impact its performance positively or negatively? We are specifically interested in MNLI for this experiment because it is large-scale compared to other GLUE tasks and is more robust to model noise (i.e., different randomly-initialized models tend to converge to the same final score on this task). This task is also used as the source task during transfer learning. This experiment is mainly a sanity check to verify that the specific TPR decomposition added does not hurt source-task performance.
Does transferring the BERT model's parameters, fine-tuned on one of the GLUE tasks, help the other tasks in the Natural Language Understanding (NLU) benchmarks (Bowman et al., 2015; Wang et al., 2018)? Based on our hypothesis of the advantage of disentangling content from form, the learned symbols and roles should be transferable across natural language tasks. Does transferring the role ($R$) and/or symbol ($S$) embeddings (described in the previous section) improve transfer learning on BERT across the GLUE tasks?
Is the ability to transfer the TPR layer limited to GLUE tasks? Can it be generalized? To answer this question we evaluated our models on a challenging diagnostic dataset outside of GLUE called HANS (McCoy et al., 2019).
4.1 Datasets

In this section, we briefly describe the datasets we use to train and evaluate our model. GLUE is a collection of 9 different NLP tasks that currently serve as a good benchmark for different proposed language models. The tasks can be broadly categorized into single-sentence tasks (e.g., CoLA and SST) and paired-sentence tasks (e.g., MNLI and QQP). In the former setting, the model makes a binary decision on whether a single input satisfies a certain property. For CoLA, the property is grammatical acceptability; for SST, it is positive sentiment.
The 7 other tasks in GLUE are paired-sentence tasks in which the model strives to find a relationship (binary or ternary) between two sentences. QNLI, WNLI, and RTE are inference tasks: given a premise and a hypothesis, the model predicts whether the hypothesis is congruent with the premise (entailment) or not (conflict). Although QNLI and WNLI were not originally designed as inference tasks, they have been recast to have the same configuration as the other NLI tasks. This way, a single classifier can be used to judge whether the right answer is in the hypothesis (for QNLI) or whether a pronoun is replaced with the correct antecedent (for WNLI). MNLI is an additional NLI task in which three classes are used instead of two to represent the relation between two sentences; the third class indicates neutrality, for when the relation is confidently neither entailment nor contradiction. The last three tasks measure sentence similarity. In MRPC the model decides whether two sentences are paraphrases of each other. In QQP, given two questions, the model decides whether they are equivalent, i.e., asking for the same information. All the tasks discussed so far fall under the classification category, where the model produces a probability distribution over the possible class outcomes and the highest-scoring class is selected. STS-B, however, is a regression task where the model produces a real number between 0 and 5, indicating the two sentences' semantic similarity. Since our model is designed only for classification tasks, we skip this dataset.
Table 1: Statistics for each GLUE corpus: task type, single- or paired-sentence, and the number of training, development, and test examples and of labels.
We observed a lot of variance in the accuracy of models trained on WNLI, MRPC, and CoLA. As mentioned in the GLUE FAQ (https://gluebenchmark.com/faq), there are known issues with the WNLI dataset which make many SOTA models perform worse than majority voting. We found that MRPC results are highly dependent on the initial random seed and the order of sentences in the shuffled training data, which is mainly caused by the small number of training samples (Table 1). CoLA is the only task in GLUE which examines grammatical correctness rather than meaning, which makes it harder to benefit from the knowledge learned from other tasks. Its train and test sets are also constructed in an adversarial way, which makes it very challenging. For example, the sentence "Bill pushed Harry off the sofa for hours." is labeled as incorrect in the train split, but the very similar sentence "Bill pushed Harry off the sofa." is labeled as correct in the test split. Hence, we only conduct our experiments on the remaining 5 datasets from GLUE.
We also take advantage of an additional NLI dataset called SNLI. It is distributed in the same format as MNLI and recommended by Wang et al. (2018) to be used in conjunction with MNLI during training. However, in our experiments, we treat this dataset as a separate corpus and report our results on it individually.
To further test the capabilities of our model, we evaluate it on a probing dataset (McCoy et al., 2019). This work introduces three different syntactic heuristics and claims that most SOTA neural NLI models exploit these statistical cues to form their judgments on each example. It shows through extensive experiments that these models obtain very low accuracies on sentences cleverly crafted to defeat models that exploit these heuristics. Lexical overlap, Subsequence, and Constituent are the three categories examined, each containing 10 sub-categories.
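The lexical-overlap heuristic, for instance, can be sketched as a bag-of-words entailment rule; the function and the example sentences are illustrative, not drawn from the HANS generation code:

```python
# HANS-style lexical-overlap heuristic: predict "entailment" whenever every
# hypothesis word appears somewhere in the premise, ignoring word order.
def lexical_overlap_heuristic(premise, hypothesis):
    return set(hypothesis.lower().split()) <= set(premise.lower().split())

# The heuristic fires on a genuine entailment...
assert lexical_overlap_heuristic("the doctor saw the lawyer",
                                 "the doctor saw the lawyer")
# ...but also on a trap where reversed word order reverses the meaning,
# which is exactly the failure mode the HANS examples are built to expose.
assert lexical_overlap_heuristic("the doctor saw the lawyer",
                                 "the lawyer saw the doctor")
```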
We conduct three major experiments in this work: a comparison of architectures on the MNLI dataset, which we then use to study transfer learning between GLUE tasks (Wang et al., 2018), and finally model diagnosis using HANS (McCoy et al., 2019); these are discussed in Sections 4.3, 4.4, and 4.5, respectively.
4.2 Implementation Details
Our implementations are in PyTorch, based on the HuggingFace repository (https://github.com/huggingface/pytorch-pretrained-BERT), a library of state-of-the-art NLP models, and on BERT's original codebase (https://github.com/google-research/bert). In all of our experiments, we used the bert-base-uncased model, which has 12 Transformer Encoder layers with 12 attention heads each and a hidden-layer dimension of 768. BERT's word-piece tokenizer was used to preprocess the sentences. We used Adamax (Kingma and Ba, 2014) as our optimizer, with a linear warm-up schedule for the first 10% of training. In all our experiments we used the same values for the dimensions and numbers of roles and symbols ($d_S$: 32, $d_R$: 32, $n_S$: 50, $n_R$: 35). These parameters were chosen from the best-performing BERT models on MNLI. We used gradient accumulation to speed up training (accumulating the gradients for two consecutive batches and then updating the parameters in one step). Our models were trained with a batch size of 256 distributed over 4 V100 GPUs. Each model was trained for 10 epochs, both on the source task and the target task (for transfer learning experiments).
4.3 Architecture comparison on MNLI
Our experiments are done with four different model architectures. All the models share the general architecture depicted in Figure 1 except for BERT and BERT-LSTM, where the TPR layer is absent. In the figure, the BERT model indicates the pre-trained off-the-shelf BERT base model which has 12 Transformer encoder layers. The aggregation layer computes the final sentence representation (see Eq. 5). The linear classifier is task-specific and is not shared between tasks during transfer learning.
BERT: This is our baseline model which consists of BERT, an aggregation layer on top, and a final linear classifier.
BERT-LSTM: We augment the BERT model by adding a unidirectional LSTM Recurrent layer (Hochreiter and Schmidhuber, 1997; Sutskever et al., 2014) on top. The inputs to the LSTM are token representations encoded by BERT. We then take the final hidden state of the LSTM and feed it into a classifier to get the final predictions. Since this model has an additional layer augmented on top of BERT, it can serve as a baseline for TPR models introduced below.
HUBERT (LSTM): We use two separate LSTM networks to compute the symbol and role representations for each token. Figure 2 shows how the final token embedding ($T_t$) is constructed at each time step: this plays the role of the LSTM hidden state. (In the figures, '×' denotes matrix-vector multiplication.) The results (Table 2) show that this decomposition improves the accuracy on MNLI compared to both the BERT and BERT-LSTM models. Training recurrent models is usually difficult, due to exploding or vanishing gradients, and has been studied for many years (Le et al., 2015; Vorontsov et al., 2017). With the introduction of the gating mechanism in LSTM and GRU cells, this problem was alleviated. In our model, however, we have a tensor-product operation which binds role and symbol vectors. We observed that during training the values comprising these vectors can become very small, and after binding, the final embedding vectors have values of an even smaller order of magnitude. This makes it difficult for the classifier to distinguish between similar but different sentences. Additionally, backpropagation is not effective since the gradients are too small. We avoided this problem by linearly scaling all values by a large number (1K) and making that scaling value trainable so that the model can adjust it for better performance.
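The numerical issue and the scaling fix can be sketched numerically; the magnitudes are illustrative, and in the model the scale is a trainable parameter rather than a constant:

```python
import numpy as np

s = np.full(3, 1e-4)    # symbol vector with very small entries
r = np.full(2, 1e-4)    # role vector with very small entries

# Binding multiplies entries pairwise, so the scale is squared: ~1e-8.
T = np.outer(s, r)
assert np.allclose(T, 1e-8)

# A scalar multiplier (trainable in the model; 1e3 is just the starting
# value here) restores a usable dynamic range for the classifier and
# keeps gradients from vanishing.
scale = 1e3
T_scaled = scale * T
assert np.allclose(T_scaled, 1e-5)
```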
HUBERT (Transformer): In this model, instead of using a recurrent layer, we deploy the power of Transformers (Vaswani et al., 2017) to encode roles and symbols (see Figure 3). This lets us attend to all the tokens when calculating $s_t$ and $r_t$, and thus better capture long-distance dependencies. It also speeds up training, as all embeddings are computed in parallel for each sentence. Furthermore, it naturally avoids the vanishing and exploding gradients problem, by taking advantage of residual blocks (He et al., 2015) to facilitate backpropagation and Layer Normalization (Lei Ba et al., 2016) to prohibit value shifts. It also integrates well with the rest of the BERT model and presents a more homogeneous architecture. (The results reported here correspond to an implementation using an additional Transformer encoder layer on top of the TPR layer; we scale the input values to this layer only. Future versions of the model will omit this layer.)
We first do an architecture comparison study on the four models, each built on BERT (base model). We fine-tune BERT on the MNLI task, which we will then use as our primary source training task for testing transfer learning. We report the final accuracy on the MNLI development set.
Table 2 summarizes the results. Both HUBERT models are able to maintain the same performance as our baseline (BERT). This confirms that adding TPR heads will not degrade the model’s accuracy and can even improve it (in our case when evaluated on MNLI matched development set). Although HUBERT (Transformer) and HUBERT (LSTM) have roughly the same accuracy, we choose HUBERT (Transformer) to perform our transfer learning experiments, since it eliminates the limitations of HUBERT (LSTM) (as discussed above) and reduces the training and inference time significantly (> 4X).
Table 2: MNLI matched development-set accuracy for the four architectures: BERT, BERT-LSTM, HUBERT (LSTM), and HUBERT (Transformer).
4.4 Transfer Learning
We compare the transfer-learning performance of HUBERT (Transformer) against BERT. We follow the same training procedure for each model and compare the final development-set accuracy on the target corpus. The training procedure is as follows. For the Baseline setting, we train three instances of each model on the target corpus and then select the one with the highest accuracy on the target dev set (we vary the random seed and the order in which the training data is sampled for each instance). These results are reported for each model in the Baseline Acc. column of Table 3. For the Fine-tuned setting, in a separate experiment, we first fine-tune one instance of each model on the source corpus and use the updated parameters to initialize a second instance of the same model. The initialized model is then trained and tested on the target corpus. In this setting, we have three subsets of parameters to choose from when transferring values from the source model to the target model: the BERT parameters, the Role embeddings $R$, and the Filler embeddings $S$. Each of these subsets can independently be transferred or not, leading to a total of 7 combinations, excluding the option in which none of them is transferred. We chose the model which has the highest absolute accuracy on the target dev set. These results are reported for each model under Fine-tuned Acc. Note that the transferred parameters are not frozen, but updated during training on the target corpus.
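The seven transfer configurations can be enumerated mechanically; the subset names below simply label the three transferable parameter groups:

```python
from itertools import product

subsets = ["BERT parameters", "Role embeddings R", "Filler embeddings S"]

# Every on/off combination of the three transferable subsets, excluding
# the one where nothing is transferred: 2**3 - 1 = 7 configurations.
configs = [flags for flags in product([True, False], repeat=3) if any(flags)]
assert len(configs) == 7

for flags in configs:
    transferred = [name for name, on in zip(subsets, flags) if on]
```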
MNLI as source: Table 3 summarizes the results for these transfer learning experiments when the source task is MNLI. Gain shows the difference between Fine-tuned model’s accuracy and Baseline’s accuracy. For HUBERT (Transformer), we observe substantial gain across all 5 target corpora after transfer. However, for BERT we have a drop for QNLI, QQP, and SST.
These observations confirm our hypothesis that recasting the BERT encodings as TPRs leads to better generalization across down-stream NLP tasks.
Table 3: Transfer-learning results with MNLI as the source task.

| Model | Target Corpus | Transfer BERT | Transfer Filler | Transfer Role | Baseline Acc. (%) | Fine-tuned Acc. (%) | Gain (%) |
|---|---|---|---|---|---|---|---|
| HUBERT (Transformer) | QNLI | True | True | False | 90.56 | 91.16 | +0.60 |
| HUBERT (Transformer) | QQP | False | False | True | 90.81 | 91.42 | +0.61 |
| HUBERT (Transformer) | RTE | True | True | True | 61.73 | 74.01 | +12.28 |
| HUBERT (Transformer) | SNLI | True | False | True | 90.66 | 91.36 | +0.70 |
| HUBERT (Transformer) | SST | True | False | True | 91.28 | 92.43 | +1.15 |
Almost all tasks benefit from transferring roles except for QNLI. This may be due to the structure of this dataset, as it is a modified version of a question-answering dataset (Rajpurkar et al., 2016) and has been re-designed to be an NLI task. Transferring the filler embeddings helps with only QNLI and RTE. Transferring BERT parameters in conjunction with fillers or roles surprisingly boosts accuracy for QNLI and SST, where we had negative gains for the BERT model, suggesting that TPR decomposition can also improve BERT’s parameter transfer.
QQP as source: The patterns here are quite different, as the source task is now a paraphrase task (instead of inference) and TPR needs to encode a new structure. Again, transferring roles gives positive results, except for RTE. Filler vectors learned from QQP are more transferable than those learned from MNLI and give a boost to all tasks except SNLI. Surprisingly, transferring BERT parameters now hurts the results even when TPR is present. However, for cases in which we also transferred BERT parameters (not shown), the gains were still higher than for BERT, confirming the results obtained when MNLI was the source task. (Baseline results differ slightly from Table 3 due to using a different scaling value for each source task.)
Table 4: Transfer-learning results with QQP as the source task.

| Model | Target Corpus | Transfer BERT | Transfer Filler | Transfer Role | Baseline Acc. (%) | Fine-tuned Acc. (%) | Gain (%) |
|---|---|---|---|---|---|---|---|
| HUBERT (Transformer) | QNLI | False | True | True | 88.32 | 90.55 | +2.23 |
| HUBERT (Transformer) | MNLI | False | True | True | 84.30 | 85.24 | +0.94 |
| HUBERT (Transformer) | RTE | False | True | False | 61.73 | 65.70 | +3.97 |
| HUBERT (Transformer) | SNLI | False | False | True | 90.63 | 91.20 | +0.57 |
| HUBERT (Transformer) | SST | True | True | True | 86.12 | 91.06 | +4.94 |
We also verified that our TPR layer does not hurt performance by comparing the test-set results for HUBERT (Transformer) and BERT, obtained by submitting models to the GLUE evaluation server. The results are presented in Table 5.
Table 5: GLUE test-set accuracies for BERT and HUBERT per source and target corpus, with the transferred parameter subsets (Transfer BERT / Filler / Role) indicated.
4.5 Model Diagnosis
We also evaluated HUBERT (Transformer) on a probing dataset outside of GLUE called HANS (McCoy et al., 2019). Results are presented in Table 6. HANS is a diagnostic dataset that probes various syntactic heuristics which many state-of-the-art models turn out to exploit, and thus they perform poorly on cases that don't follow those heuristics. The three heuristics measured in HANS are as follows: Lexical overlap, where a premise entails any hypothesis built from a subset of words in the premise; Subsequence, where a premise entails any contiguous subsequence of it; and Constituent, where a premise entails all complete subtrees in its parse tree. Our results indicate that TPR models are less prone to adopting these heuristics, resulting in versatile models with better domain adaptation. Following McCoy et al. (2019), we combined the predictions of neutral and contradiction into a non-entailment class, since HANS uses two classes instead of three. Note that no subset of the HANS data is used for training. (We observed high variance in the results on HANS for both BERT and HUBERT: two models that achieve similar scores on the MNLI dev set can have quite different accuracies on HANS. To account for this, we ran our experiments with at least 3 different seeds and report the best scores for each model.)
We observed that our HUBERT (Transformer) model trained on MNLI did not diminish BERT's near-perfect performance on correctly-entailed cases (which follow the heuristics); in fact, it increased the accuracy on the Lexical and Subsequence heuristics. On the problematic Non-Entailment cases, however, BERT outperforms HUBERT (Transformer). Since HUBERT has more parameters than BERT, it can better fit the training data; we thus suspect that HUBERT attends more to the heuristics built into MNLI's design and gets a lower score on sentences that don't follow those heuristics. To examine the knowledge-transfer power of TPR, we additionally fine-tuned each model on SNLI and tested again on HANS. (For HUBERT (Transformer), we transfer only roles and fillers.) On Non-Entailment cases, for the HUBERT model, Lexical accuracy improved drastically: by 61.62% (6,162 examples). Performance on cases violating the Subsequence heuristic improved by 1.44% (144 examples), and performance on those violating the Constituent heuristic improved by 5.4% (540 examples). These improvements on Non-Entailment cases came at the cost of small drops in Entailment accuracy. This pattern of transfer is in stark contrast with the BERT results: although BERT's results on Entailment cases improved, its accuracies on Subsequence and Constituent Non-Entailment cases dropped significantly, showing that BERT fails to integrate new knowledge gained from SNLI with information previously learned from MNLI. HUBERT (Transformer), by contrast, can leverage information from a new source of data efficiently. The huge improvement on the Lexical Non-Entailment cases speaks to the power of TPRs to generate role-specific word embeddings: the Lexical heuristic amounts essentially to performing inference on a bag-of-words representation, where mere lexical overlap between a premise and a hypothesis yields a prediction of entailment.
Table 6: HANS accuracies by heuristic, split by whether the correct label is Entailment or Non-Entailment.

| Model | Acc. (%) | Entailment: Lex. (%) | Entailment: Sub. (%) | Entailment: Const. (%) | Non-Entailment: Lex. (%) | Non-Entailment: Sub. (%) | Non-Entailment: Const. (%) |
|---|---|---|---|---|---|---|---|
| HUBERT (Transformer) + | 63.22 | 95.52 | 99.76 | 99.32 | 70.02 | 3.76 | 10.92 |
5 Conclusion

In this work we showed that BERT cannot effectively transfer its knowledge across NLP tasks, even when the two tasks are fairly closely related. To address this problem, we proposed HUBERT, which adds a decomposition layer on top of BERT that disentangles symbols from their roles in BERT's representations. The HUBERT architecture exploits Tensor-Product Representations, in which each word's representation is constructed by binding together two separated properties: the word's (semantic) content and its structural (grammatical) role. In extensive empirical studies, HUBERT showed consistent improvement in knowledge transfer across various linguistic tasks. HUBERT+ outperformed BERT+ on the challenging HANS diagnostic dataset, which attests to the power of its learned, disentangled structure. The results from this work, along with recent observations reported in Kovaleva et al. (2019), McCoy et al. (2019), Clark et al. (2019), and Michel et al. (2019), call for model designs that better enable synergy between linguistic knowledge obtained from different language tasks.
We would like to thank R. Thomas McCoy from Johns Hopkins University and Alessandro Sordoni from Microsoft Research for sharing and discussing their recent results on HANS, and Xiaodong Liu from Microsoft Research for thoughtful discussions.
- Can we gain more from orthogonality regularizations in training deep CNNs? In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 4266–4276.
- A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.
- What does BERT look at? An analysis of BERT's attention. arXiv preprint.
- Visualizing and measuring the geometry of BERT. arXiv preprint arXiv:1906.02715.
- BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
- Deep residual learning for image recognition. arXiv preprint.
- A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4129–4138.
- Long short-term memory. Neural Computation 9 (8), pp. 1735–1780.
- Tensor product generation networks for deep NLP modeling. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, Volume 1 (Long Papers), pp. 1263–1273.
- Unifying question answering and text classification via span extraction. arXiv preprint arXiv:1904.09286.
- Adam: a method for stochastic optimization. arXiv preprint.
- Revealing the dark secrets of BERT. arXiv preprint.
- A simple way to initialize recurrent networks of rectified linear units. arXiv preprint.
- Reasoning in vector space: an exploratory study of question answering. In 4th International Conference on Learning Representations, ICLR 2016, Conference Track Proceedings.
- Layer normalization. arXiv preprint.
- Vector symbolic architectures: a new building material for artificial general intelligence. In Proceedings of the First AGI Conference, pp. 414–418.
- Open Sesame: getting inside BERT's linguistic knowledge. arXiv preprint arXiv:1906.01698.
- Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 4487–4496.
- RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
- Right for the wrong reasons: diagnosing syntactic heuristics in natural language inference. arXiv preprint arXiv:1902.01007.
- Are sixteen heads really better than one? arXiv preprint.
- Physical symbol systems. Cognitive Science 4 (2), pp. 135–183.
- Question-answering with grammatically-interpretable representations. In AAAI.
- Sentence encoders on STILTs: supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088.
- Holographic reduced representations. IEEE Transactions on Neural Networks 6 (3), pp. 623–641.
- SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
- Learning to reason with third order tensor products. In Advances in Neural Information Processing Systems 31, pp. 9981–9993.
- The harmonic mind: from neural computation to optimality-theoretic grammar, Volume I: Cognitive Architecture. The MIT Press.
- Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence 46 (1), pp. 159–216.
- Sequence to sequence learning with neural networks. arXiv preprint.
- BERT rediscovers the classical NLP pipeline. arXiv preprint arXiv:1905.05950.
- Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
- On orthogonality and learning recurrent networks with long term dependencies. arXiv preprint.
- Can you tell me how to get past Sesame Street? Sentence-level pretraining beyond language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4465–4476.
- GLUE: a multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
- Towards AI-complete question answering: a set of prerequisite toy tasks. In 4th International Conference on Learning Representations, ICLR 2016, Conference Track Proceedings.
- Semantics-aware BERT for language understanding. arXiv preprint arXiv:1909.02209.