Enhancing Language Models with Plug-and-Play Large-Scale Commonsense

09/06/2021 ∙ by Wanyun Cui, et al.

We study how to enhance language models (LMs) with textual commonsense knowledge. Previous work (e.g., KnowBERT) has focused on integrating entity knowledge from knowledge graphs. To introduce external entity embeddings, these models learn to jointly represent the original sentences and the external knowledge by pre-training on a large-scale corpus. However, when switching to textual commonsense, the encoding of commonsense descriptions is heavy, unlike the lightweight entity embeddings. Therefore, pre-training to learn a joint representation of the target sentence and the external commonsense descriptions is unaffordable. On the other hand, since pre-trained LMs that represent the target sentences alone are readily available, is it feasible to introduce commonsense knowledge in downstream tasks by fine-tuning them only? In this paper, we propose a plug-and-play method for large-scale commonsense integration without pre-training. Our method is inspired by the observation that in regular fine-tuning for downstream tasks, where no external knowledge is introduced, the variation in the parameters of the language model is minor. Our method starts from a pre-trained LM that represents the target sentences only (e.g., BERT). We argue that the pre-training for joint representation learning can be avoided if the joint representation has little impact on the parameters of the starting LM. Previous methods such as KnowBERT propose complex modifications to the vanilla LM to introduce external knowledge. Our model, Cook-Transformer (COmmOnsense Knowledge-enhanced Transformer), on the other hand, hardly changes the vanilla LM except for adding a knowledge token in each Transformer layer. In a variety of experiments, Cook-Transformer-based BERT/RoBERTa improve their performance without any further pre-training.


1 Introduction

Although unsupervised language models have achieved great success on many tasks devlin2018bert, they are incapable of learning low-frequency knowledge. In Fig. 1, we find that even if we replace “Kevin was” (left) with “Jim was” (right), BERT devlin2018bert still predicts the masked word as sick, crying, dying, etc. This is because similar texts in its training corpus rarely describe the subject of “comforted”. To improve the model’s ability to generalize to and understand low-frequency knowledge, we propose to incorporate commonsense into language models. In Fig. 1, to make correct predictions, we need to enhance the language model with the corresponding commonsense (the description associated with comforted). It can be seen that commonsense of this kind is more suitably represented in textual form than in structured knowledge graphs. In this paper, we focus on integrating commonsense in text form as in Fig. 1.

Figure 1: The prediction of [MASK] by BERT. BERT cannot distinguish between Jim and Kevin in “Jim comforted Kevin because ...”.

We analogize introducing textual commonsense knowledge to previous studies on introducing entity knowledge from knowledge graphs zhang2019ernie; peters2019knowledge. These methods typically represent each entity via an embedding. They modify the structure of Transformer-based LMs to allow interaction between the target sentence and the corresponding entities, instead of using the original Transformer that represents the target sentence alone. In order to accommodate the joint representation, their models need to be pre-trained on large-scale corpora with objectives like masked language modeling.

However, following these methods to introduce commonsense knowledge would be unaffordable. Unlike discrete symbolic knowledge, which can be represented by independent embeddings, textual knowledge requires more complicated encoders (e.g., BERT). The costs of both the forward pass and backpropagation of text encoders are much higher than those of standalone symbolic knowledge embeddings. This makes pre-training on a large-scale corpus unaffordable.

In this paper, we propose to introduce commonsense knowledge such that existing pre-trained LMs (e.g., BERT) can leverage the newly introduced commonsense knowledge without further pre-training. We denote this setting as integrating plug-and-play commonsense into LMs. We are inspired by an observation about fine-tuning LMs radiya2020fine: the parameters after fine-tuning are very close to those before fine-tuning. This phenomenon is easy to explain, because a pre-trained LM can be fine-tuned for specific tasks in far fewer steps than the number of steps required for pre-training.

Reviewing previous LMs that introduce external entity knowledge, we find that all of them make significant changes to the existing language models. For example, KnowBERT peters2019knowledge, ERNIE zhang2019ernie, and BERT-MK are typical models based on BERT. In order to introduce knowledge, they all propose additional knowledge fusion layers, in which the entity embeddings and the target sentence tokens are combined via self-attention. Undoubtedly, after such drastic model modifications, the parameters of the original pre-trained LM (i.e., BERT) are no longer applicable. Therefore, these models must learn to jointly represent the target sentences and entity embeddings through pre-training.

Based on the phenomenon of minor parameter variation after fine-tuning, we propose to avoid the expensive pre-training. The proposed commonsense-enhanced LM contains an existing LM (BERT or RoBERTa in this paper) to represent the target sentence, which we denote as the meta language model (meta LM). We further posit that if we can introduce external commonsense while reducing the impact on the meta LM, the commonsense-enhanced LM can adapt to the external commonsense more easily without pre-training. Thus, we keep most of the structure of the meta LM and only add a new knowledge token to represent the commonsense knowledge. The commonsense embeddings are introduced through residual connections to this token, so the meta LM is minimally affected. In particular, if the external commonsense embedding is a zero vector, the commonsense-enhanced LM is identical to the meta LM. Compared with other knowledge integration models, our model has two advantages. First, it only adds one knowledge token to the target sentence, which retains the meta LM. Second, our model still allows multiple layers of attention between the commonsense and the meta LM. We refer to this new model as the Cook-Transformer.

Other ways of introducing textual knowledge include continuing pre-training on the textual knowledge gururangan2020don; sun2019finetune and directly using the knowledge as additional textual input (e.g., ExpBERT murty2020expbert). Continuing to pre-train the LM on the corpus of external knowledge requires the external knowledge and the downstream tasks to have similar domains gururangan2020don. However, as can be seen in Fig. 1, the commonsense and the downstream task have an obvious distribution discrepancy, which leads to a decrease in effectiveness. We verify this empirically in Sec 6.3. If we introduce commonsense knowledge via ExpBERT, we face the challenge of scale, because ExpBERT can only handle a small, fixed number of commonsense descriptions. We also verify this empirically in Sec 6.3.

2 Related work

In this section, we compare different ways of introducing knowledge into language models. We divide knowledge introduction methods into (1) continued pre-training gururangan2020don; sun2019finetune and (2) explicit introduction in downstream tasks guu2020realm; murty2020expbert.

Continued pre-training of the language model is effective when the external knowledge is similar to the downstream task gururangan2020don; sun2019finetune. However, commonsense descriptions are out-of-distribution (OOD) with respect to downstream tasks, so continued pre-training is unsuitable for commonsense introduction. We verify this empirically in Sec 6.3.

Introducing explicit knowledge in downstream tasks

We classify knowledge into structured knowledge, plain text, and semi-structured knowledge, depending on its form. The entries of structured knowledge are represented as individual embeddings peters2019knowledge; zhang2019ernie; guan2020knowledge; zhou2018commonsense, while the commonsense descriptions in this paper can be represented more accurately by the contextual information of their word sequences.

Plain text knowledge A typical work on introducing plain-text knowledge is REALM guu2020realm, which contains an expensive knowledge retriever that determines which external knowledge can be introduced. However, the commonsense we use is not plain text: each commonsense description is associated with a verb or verb phrase, so retrieval is much easier. This motivates us to propose new models to represent the commonsense.

Semi-structured commonsense knowledge Note that the commonsense knowledge base used in this paper is semi-structured: each commonsense description is associated with one corresponding verb or verb phrase. In Fig. 1, the candidate descriptions are associated with comforted. With this association, we can find all candidate commonsense descriptions efficiently and precisely. Therefore, we focus on knowledge integration for semi-structured knowledge rather than on knowledge retrieval as in REALM. ExpBERT murty2020expbert also integrates several candidate commonsense descriptions as inputs. However, ExpBERT directly concatenates the representations of these descriptions, so its knowledge integration cannot be scaled up to the large-scale commonsense knowledge base used in this paper. We verify this empirically in Sec 6.3. We therefore need a new representation that makes full use of the contextual information of a large-scale commonsense knowledge base.

3 Problem Setup: Integrate Plug-and-Play Commonsense into Language Models

We consider a text classification task where a text $x$ and its label $y$ are provided for training. Assuming that the commonsense descriptions come from a large-scale commonsense knowledge base (ATOMIC2020 in this paper), we retrieve all relevant commonsense for $x$, denoted as $C_x = \{c_1, \ldots, c_m\}$, where each $c_i$ is a commonsense description. The retrieval process is described in Sec 6.

Our model is based on a meta LM $M_{\theta}$, where $\theta$ denotes its parameters. To avoid any further pre-training, the commonsense-enhanced model is built directly on $M_{\theta}$. As stated in the introduction, we try to minimize the variation of $\theta$ after fine-tuning for specific tasks. To this end, the commonsense-enhanced model does not directly change the structure of $M_{\theta}$. Instead, it introduces commonsense through a transformation of the input. Our new model is defined as:

$\hat{y} = M_{\theta}(x, C_x)$   (1)

That is, the model introduces commonsense knowledge by transforming the input $x$ into the pair $(x, C_x)$, rather than modifying the language model as in KnowBERT. The goal of training is to find the parameters $\theta^{*}$ that minimize the loss over the training examples, given the texts and their candidate commonsense descriptions:

$\theta^{*} = \arg\min_{\theta} \sum_{(x, y)} \mathcal{L}\big(M_{\theta}(x, C_x),\ y\big)$   (2)

where $\mathcal{L}$ is the loss function.
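To make the training objective concrete, the following is a minimal PyTorch sketch of the fine-tuning loop implied by Eq. (2). The `model(text, commonsense)` interface and the `retrieve_commonsense` helper are hypothetical placeholders; only the overall objective follows the paper.

```python
import torch

def fine_tune(model, train_data, retrieve_commonsense, epochs=3, lr=5e-6):
    """Minimal sketch of Eq. (2): fine-tune all parameters end-to-end.
    `model(text, commonsense)` and `retrieve_commonsense` are hypothetical
    placeholders; only the overall objective follows the paper."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for text, label in train_data:
            commonsense = retrieve_commonsense(text)   # candidate set C_x
            logits = model(text, commonsense)          # M_theta(x, C_x), shape (num_classes,)
            loss = loss_fn(logits.unsqueeze(0), torch.tensor([label]))
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```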

4 Cook-Transformer

In this section, we propose the Cook-Transformer, a Transformer-based model that introduces extra commonsense descriptions. We first present the Cook-Transformer at an abstract level in Sec 4.1. Then we elaborate on its two modules, knowledge enhancement and knowledge integration, in Sec 4.2 and Sec 4.3, respectively.

4.1 Framework

In this subsection, we show how the Cook-Transformer works at an abstract level. For a target sentence $x$, the Cook-Transformer takes both $x$ and its candidate commonsense $C_x$ as inputs. To incorporate all the information in $x$ and $C_x$, the Cook-Transformer contains three vanilla Transformers, denoted by $T_{enh}$, $T_{enc}$, and $T_{int}$. $T_{enh}$ is the Transformer of the meta LM, which is used to model the target sentence $x$. To avoid changing the parameters of the meta LM, we only add an additional knowledge token to the target sentence to represent the commonsense knowledge, without modifying the rest of the Transformer structure. We describe how this knowledge token interacts with the other tokens of $x$ and enhances the representation with knowledge in Sec 4.2. Each commonsense description is represented as a knowledge encoding by $T_{enc}$ and then introduced into the knowledge token by $T_{int}$. This knowledge integration module is introduced in Sec 4.3. The framework is shown in Fig. 2.

Figure 2: Cook-Transformer. $T_{enh}$ from the meta LM encodes the target text $x$ with the enhanced commonsense. $T_{enc}$ encodes each individual commonsense description. $T_{int}$ integrates all candidate commonsense descriptions and transfers the knowledge to $T_{enh}$.

4.2 Knowledge Enhancement Module

The knowledge enhancement module represents the target sentence while allowing commonsense to enhance its representation.

Interaction between words and commonsense. We use $T_{enh}$ to model the interaction between the words of the target text $x$. In addition, we introduce a special token to represent the commonsense knowledge, which we denote as the knowledge token $[k]$. $T_{enh}$ encodes all words and the knowledge token together via multi-head attention. Formally, given the word sequence $x = (w_1, \ldots, w_n)$, $T_{enh}$ accepts the sequence of word-piece tokens $([k], w_1, \ldots, w_n)$. We denote the knowledge embedding and the word embeddings produced by the $l$-th layer of $T_{enh}$ as $k^{(l)}$ and $h_1^{(l)}, \ldots, h_n^{(l)}$, respectively. They are computed by a vanilla Transformer layer:

$[k^{(l)}, h_1^{(l)}, \ldots, h_n^{(l)}] = \mathrm{Transformer}\big([k^{(l-1)}, h_1^{(l-1)}, \ldots, h_n^{(l-1)}]\big)$   (3)

where $[k^{(l-1)}, h_1^{(l-1)}, \ldots, h_n^{(l-1)}]$ means appending $k^{(l-1)}$ at the front of the word embeddings, and this sequence is used as the query, key, and value in the multi-head attention.

Knowledge update Compared to the vanilla Transformer, we use an extra update operation that updates the knowledge token by adding the integrated commonsense embedding $u^{(l)}$. This can be formulated as:

$k^{(l)} \leftarrow k^{(l)} + u^{(l)}$   (4)

where $u^{(l)}$ is the embedding of the integrated commonsense computed by the knowledge integration module in Sec 4.3.
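As an illustration, one layer of the knowledge enhancement module could be sketched as follows in PyTorch. This is a simplified sketch under the notation above (prepend the knowledge token, run a vanilla Transformer layer, then apply the residual update of Eq. (4)); it is not the authors' implementation, and the class and argument names are ours.

```python
import torch
import torch.nn as nn

class KnowledgeEnhancedLayer(nn.Module):
    """Sketch of one knowledge enhancement layer: a vanilla Transformer
    encoder layer (Eq. (3)) followed by the knowledge update of Eq. (4)."""

    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

    def forward(self, knowledge_tok, word_embs, integrated_commonsense):
        # knowledge_tok: (B, d); word_embs: (B, n, d); integrated_commonsense: (B, d)
        seq = torch.cat([knowledge_tok.unsqueeze(1), word_embs], dim=1)  # [k; w_1..w_n]
        seq = self.layer(seq)                                            # Eq. (3)
        k, words = seq[:, 0], seq[:, 1:]
        k = k + integrated_commonsense                                   # Eq. (4)
        return k, words
```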

4.3 Knowledge Integration Module

The knowledge integration module encodes all candidate commonsense descriptions and integrates them. We first use $T_{enc}$ to represent each candidate commonsense description. Then, we use $T_{int}$ to integrate all candidate commonsense and transfer the integrated knowledge to the knowledge enhancement module.

Representing a single commonsense description We use a vanilla Transformer as $T_{enc}$ to model each candidate commonsense description. For all the retrieved commonsense $C_x = \{c_1, \ldots, c_m\}$, we compute the embedding of each description by:

$e_i = T_{enc}(c_i), \quad i = 1, \ldots, m$   (5)

Knowledge integration We integrate all candidate commonsense descriptions with $T_{int}$. Since not every candidate commonsense description leads to a high-confidence prediction, as discussed in Sec 1, we need to select relevant commonsense and ignore irrelevant commonsense. The Transformer is well suited to this selection. Specifically, in the query-key-value mechanism of the Transformer, we use the embedding of the knowledge token in $T_{enh}$ as the query of $T_{int}$, and the commonsense embeddings produced by $T_{enc}$ as its keys and values. We then integrate the representations of all the candidate commonsense descriptions based on their similarities to the knowledge token.

$T_{int}$ also uses multi-head attention so that the knowledge token can interact with the candidate commonsense in multiple ways. The output of the multi-head attention is followed by a residual connection and a layer normalization:

$u^{(l)} = \mathrm{LayerNorm}\big(k^{(l)} + \mathrm{MultiHead}(k^{(l)}, E, E)\big)$   (6)

where $E = (e_1, \ldots, e_m)$ denotes the sequence of embeddings of all candidate commonsense descriptions.
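A minimal sketch of the integration step of Eq. (6), assuming the knowledge token embedding is the query and the candidate commonsense embeddings are the keys and values; the class and argument names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class KnowledgeIntegration(nn.Module):
    """Sketch of one integration layer (Eq. (6)): the knowledge token attends
    over all candidate commonsense embeddings via multi-head attention,
    followed by a residual connection and layer normalization."""

    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, knowledge_tok, commonsense_embs):
        # knowledge_tok: (B, d) is the query; commonsense_embs: (B, m, d) are keys/values.
        q = knowledge_tok.unsqueeze(1)
        out, _ = self.attn(q, commonsense_embs, commonsense_embs)
        return self.norm(knowledge_tok + out.squeeze(1))   # u^(l)
```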

Null commonsense Some target texts may have no valid commonsense in the commonsense knowledge base to enhance their representations. Following the setting of REALM guu2020realm, we therefore add a null commonsense description $c_{\varnothing}$ to the candidate set of every target text. Matching the null commonsense indicates that the commonsense knowledge base cannot help enhance the target text.

5 Adaptation to Pre-trained Language Models

In this section, we take BERT as an example to illustrate how we adapt the Cook-Transformer to existing pre-trained language models. In addition to using BERT as the meta language model, we also use the Transformer layers of BERT as $T_{enc}$ to represent the commonsense knowledge more accurately.

We denote the adapted model as Cook-BERT. An important manifestation of the effectiveness of the Transformer structure is its application in large-scale pre-trained models (e.g., BERT, RoBERTa). In order to introduce external knowledge, many other studies conduct pre-training over a large-scale unsupervised corpus peters2019knowledge; xiong2019pretrained. In contrast, the Cook-Transformer can directly adapt to existing pre-trained language models without such pre-training. In other words, when adapting the Cook-Transformer to Cook-BERT, we directly use the parameters of each Transformer layer of BERT to initialize the Cook-Transformer layers of Cook-BERT. This property greatly improves the applicability of Cook-BERT. In the rest of this section, we describe how $T_{enh}$, $T_{enc}$, and $T_{int}$ are adapted in Sec 5.1, and how to fine-tune Cook-BERT in Sec 5.2.

5.1 Layer-by-Layer Connection and Adaptation

As described in Sec 4.1, $T_{enh}$ uses the original pre-trained language model. In addition, to allow $T_{enc}$ to provide more accurate commonsense encodings as well, we use a separate pre-trained language model for it, so that both components have multiple layers. We denote the two pre-trained language models as BERT1 and BERT2. We connect the $T_{enh}$ and $T_{enc}$ layers at each corresponding depth of the two BERTs by a $T_{int}$ layer. Therefore, Cook-BERT makes full use of the multi-layer structure of BERT, while allowing the commonsense in the knowledge token to fully interact with the target text at each layer. The architecture is shown in Fig. 3.

Figure 3: Architecture of Cook-BERT. We only draw the edges that connect to the $l$-th layer.

We adapt the Transformer of BERT1 to $T_{enh}$ in the knowledge enhancement module of the Cook-Transformer. Note that the original BERT input is [CLS] x [SEP] (for a single sentence) or [CLS] x1 [SEP] x2 [SEP] (for a sentence pair). We follow wang2020cross and use a special token as the knowledge token. When tokenizing sentences, we insert this token after the [CLS] token of each given text. In this way, the input tokens become [CLS] [k] x [SEP] or [CLS] [k] x1 [SEP] x2 [SEP], respectively. This simple modification allows us to use the inserted token as the knowledge token in the knowledge enhancement module.
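A possible way to realize this with the HuggingFace tokenizer is sketched below. The token string "[KNOW]" is a placeholder (the paper's actual token name is not shown here), and the model's embedding matrix would need to be resized accordingly.

```python
from transformers import BertTokenizerFast

# "[KNOW]" is a placeholder name for the knowledge token.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
tokenizer.add_special_tokens({"additional_special_tokens": ["[KNOW]"]})
# The model's token embeddings must be resized accordingly, e.g.
# model.resize_token_embeddings(len(tokenizer)).

def encode_with_knowledge_token(text_a, text_b=None):
    # Produces [CLS] [KNOW] x [SEP], or [CLS] [KNOW] x1 [SEP] x2 [SEP] for pairs.
    return tokenizer("[KNOW] " + text_a, text_b, return_tensors="pt")

batch = encode_with_knowledge_token("Jim comforted Kevin because he was sad.")
```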

In addition, each $T_{enh}$ layer contains a knowledge update step so that we can add the integrated commonsense embedding $u^{(l)}$ from the $T_{int}$ of the same layer to the knowledge token.

We adapt each Transformer layer of BERT2 to a $T_{enc}$ layer. The adaptation is straightforward since $T_{enc}$ uses the vanilla Transformer structure. We use the encoding of the [CLS] token of each commonsense description in each corresponding layer as the commonsense representation to enhance the corresponding layer of BERT1.

For each pair of corresponding $T_{enh}$ and $T_{enc}$ layers at the same depth, we use one $T_{int}$ layer to connect them and transfer information from BERT2 to BERT1.

In summary, when adapting to BERT-base with 12 Transformer layers, Cook-BERT contains 12 $T_{enh}$ layers for BERT1, 12 $T_{enc}$ layers for BERT2, and 12 $T_{int}$ layers for layer-wise knowledge integration.
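Putting the pieces together, the layer-by-layer wiring could look roughly like the sketch below, assuming the KnowledgeEnhancedLayer and KnowledgeIntegration classes sketched earlier are in scope and the per-layer commonsense encodings from BERT2 are precomputed. This is an illustration of the wiring, not the released implementation.

```python
import torch.nn as nn

class CookBertWiring(nn.Module):
    """Sketch of the layer-by-layer wiring. BERT1 layers are represented by
    KnowledgeEnhancedLayer and the connections by KnowledgeIntegration (both
    sketched above); the per-layer commonsense encodings from BERT2 are
    assumed to be precomputed and passed in."""

    def __init__(self, n_layers=12, d_model=768, n_heads=12):
        super().__init__()
        self.text_layers = nn.ModuleList(
            [KnowledgeEnhancedLayer(d_model, n_heads) for _ in range(n_layers)])
        self.integrate = nn.ModuleList(
            [KnowledgeIntegration(d_model, n_heads) for _ in range(n_layers)])

    def forward(self, knowledge_tok, word_embs, commonsense_per_layer):
        # commonsense_per_layer: list of (B, m, d) tensors, one per BERT2 layer.
        k, words = knowledge_tok, word_embs
        for text_layer, int_layer, cs in zip(
                self.text_layers, self.integrate, commonsense_per_layer):
            u = int_layer(k, cs)                 # integrated commonsense u^(l)
            k, words = text_layer(k, words, u)   # BERT1 layer + knowledge update
        return k, words
```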

5.2 Parameter Initialization and Model Training

In our implementation, BERT1 and BERT2 have independent parameters. We use the parameters of the pre-trained BERT to initialize both BERT1 and BERT2, while the parameters of the $T_{int}$ layers are randomly initialized. For downstream tasks, we then fine-tune all parameters end-to-end.
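A minimal sketch of this initialization scheme, assuming the illustrative KnowledgeIntegration module from Sec 4.3 above is available:

```python
import torch.nn as nn
from transformers import BertModel

# BERT1 and BERT2 are independent copies of the same pre-trained weights; the
# integration layers (illustrative KnowledgeIntegration modules from above)
# keep their random initialization. All parameters are then fine-tuned end-to-end.
bert1 = BertModel.from_pretrained("bert-base-uncased")   # encodes the target text
bert2 = BertModel.from_pretrained("bert-base-uncased")   # encodes the commonsense
integration = nn.ModuleList(
    [KnowledgeIntegration() for _ in range(bert1.config.num_hidden_layers)])
```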

6 Experiments

We evaluate the effectiveness of our proposed models in three scenarios: cloze-style commonsense reasoning, text classification, and low-resource commonsense settings. All experiments run on a machine with 4 Nvidia Tesla V100 GPUs.

Models We adapt the Cook-Transformer to BERT and RoBERTa, denoted as Cook-BERT and Cook-RoBERTa, respectively. We use the BERT-base and RoBERTa-large models from the HuggingFace Transformers library wolf2020transformers.

Implementation details for candidate knowledge retrieval For a given text $x$, we retrieve candidate commonsense from ATOMIC2020. We use the if-then descriptions in ATOMIC2020 (e.g., Fig. 1). Since these descriptions cover 173k different verb phrases – one of the fundamental elements of language – the retrieval is applicable to a broad range of downstream text understanding tasks.

We use a simple retrieval method: we consider the word segments of the input text $x$ with window size 5. All commonsense descriptions matching one of these segments are regarded as the candidate commonsense descriptions $C_x$.
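The retrieval heuristic can be sketched as follows, assuming a `phrase_to_descriptions` index built offline from ATOMIC2020 (phrase → descriptions); the tokenization and matching details here are our own simplifications.

```python
def retrieve_commonsense(text, phrase_to_descriptions, window=5):
    """Sketch of the window-based retrieval: enumerate word segments of the
    input (up to `window` words) and collect every description indexed by a
    matching phrase. `phrase_to_descriptions` is a hypothetical dict built
    offline from ATOMIC2020 (phrase -> list of descriptions)."""
    words = text.lower().split()
    candidates = []
    for size in range(1, window + 1):
        for start in range(len(words) - size + 1):
            segment = " ".join(words[start:start + size])
            candidates.extend(phrase_to_descriptions.get(segment, []))
    return candidates

# Toy example with a single indexed phrase:
toy_kb = {"comforted": ["<description associated with 'comforted' in ATOMIC2020>"]}
print(retrieve_commonsense("Jim comforted Kevin because he was sad", toy_kb))
```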

6.1 Natural Language Commonsense Reasoning

Datasets We consider the following commonsense reasoning benchmarks: WSC273 levesque2012winograd, PDP morgenstern2016planning, Winogender rudinger2018gender, WinoGrande sakaguchi2019winogrande.

Model details Due to the different implementations of kocijan19acl and sakaguchi2019winogrande, we follow their respective settings to compare with each of them. For kocijan19acl, we conduct the disambiguation task directly through masked language modeling with Cook-BERT. For sakaguchi2019winogrande, we convert the cloze-style problems into multiple-choice classification problems with Cook-RoBERTa. In particular, we replace the target pronoun of a query sentence with each candidate reference and feed the resulting sentences into the language model. We use a single linear layer and a softmax over the encoding of the classification token to compute the probability of each new sentence, and select the sentence with the highest probability as the pronoun disambiguation result.
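A sketch of this candidate-substitution scoring, where `model(text, commonsense)` is a hypothetical Cook-RoBERTa-style scorer returning a single logit per rewritten sentence:

```python
import torch

def disambiguate(model, retrieve, sentence, pronoun, candidates):
    """Sketch of the multiple-choice conversion: substitute each candidate
    reference for the target pronoun, score each rewritten sentence, and pick
    the most probable one."""
    scores = []
    for cand in candidates:
        rewritten = sentence.replace(pronoun, cand, 1)
        scores.append(model(rewritten, retrieve(rewritten)))   # scalar logit
    probs = torch.softmax(torch.stack(scores), dim=0)
    return candidates[int(torch.argmax(probs))]
```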

Hyperparameters of pre-training We follow kocijan19acl and sakaguchi2019winogrande to first pre-train the models for 30 and 3 epochs over WSCR kocijan19acl and WinoGrande sakaguchi2019winogrande, respectively. Then we fine-tune the models on the specific tasks. We use AdamW as the optimizer with a learning rate of 5e-6, selected from a small set of candidate values, and set the batch size to 8.
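As a small configuration sketch of the reported settings (all other arguments are library defaults):

```python
import torch

def build_optimizer(model):
    # Reported settings: AdamW optimizer, learning rate 5e-6; batch size 8 is
    # set in the DataLoader.
    return torch.optim.AdamW(model.parameters(), lr=5e-6)

BATCH_SIZE = 8
```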

Model WSC PDP
KEE liu2016commonsense 52.8 58.3
WKH emami2018generalized 57.1 -
MAS klein2019attention 60.3 68.3
DSSM wang2019unsupervised 63.0 75.0
LM trinh2018simple 63.8 70.0
CSS klein2020contrastive 69.6 90.0
GPT2 radford2019language 70.7 -
BERT-large+WSCR kocijan19acl 71.4 79.2
HNN he2019hybrid 75.1 90.0
Human sakaguchi2019winogrande 96.5 92.5
BERT+WSCR 66.3 85.0
Cook-BERT+WSCR 67.4 86.7
RoB.+WinoGrande 90.1 87.5
Cook-RoB.+WinoGrande 91.6 91.7
Table 1: Results on WSC and PDP. RoB. denotes RoBERTa.
Model WinoGender WinoGrande
BERT+WikiCREM kocijan2019wikicrem 82.1 -
WinoGrande sakaguchi2019winogrande 94.6 79.3
BERT+WSCR 68.2 51.4
Cook-BERT+WSCR 71.5 52.6
RoB.+WinoGrande 94.6 79.3
Cook-RoB.+WinoGrande 96.2 79.6
Table 2: Results on WinoGender and WinoGrande.

Results

We compare our models with state-of-the-art commonsense reasoning models in Table 1 and Table 2. It can be seen that our models outperform the other models in most settings, which verifies the effectiveness of our proposed models for commonsense reasoning.

Ablations In Table 1 and Table 2, we also compare Cook-BERT with BERT. We find that Cook-BERT, with Cook-Transformer layers, effectively improves the accuracy over BERT with vanilla Transformer layers. Similar results hold between Cook-RoBERTa and RoBERTa. This shows that the proposed Cook-Transformer improves pre-trained language models by adapting to them for free, i.e., without retraining on large-scale unsupervised corpora.

6.2 General Text Classification

We use MRPC, CoLA, RTE, STS-B, SST-2, and QNLI from the GLUE benchmark wang2018glue to verify the effectiveness of the proposed models on general text classification tasks. We do not evaluate on MNLI, because our model needs to represent the corresponding commonsense for each sentence, which is too costly for MNLI. We believe this efficiency problem can be addressed by further applying model compression iandola2020squeezebert, but this is beyond the scope of this paper.

It can be seen from Table 3 that Cook-BERT and Cook-RoBERTa outperform their baselines. Cook-BERT shows a larger improvement than Cook-RoBERTa. We think this is because RoBERTa’s pre-training data already contains a commonsense corpus (i.e., STORIES trinh2018simple), so some of the commonsense in ATOMIC2020 has already been learned by RoBERTa.

MRPC CoLA RTE QNLI STS-B SST-2
BERT 86.52/90.66 59.50 71.43 91.20 89.35/88.93 91.97
Cook-BERT 87.50/91.04 58.29 72.20 91.58 89.82/89.46 92.66
RoB. 90.49/93.07 66.84 86.28 93.37 91.83/91.95 95.64
Cook-RoB. 91.91/94.24 66.89 86.28 94.71 92.19/92.36 96.44
Table 3: Results on text classification tasks. Models are evaluated by the dev split from GLUE.

6.3 Comparison of Different Commonsense Introduction Methods

Continued pre-training In the introduction, we mentioned that a typical method of introducing textual knowledge is continued pre-training gururangan2020don; sun2019finetune. However, due to the OOD nature of commonsense, this method can cause catastrophic forgetting. To verify this intuition, in this subsection we compare against a continued pre-training baseline: we first continue pre-training the language model on ATOMIC2020 and then fine-tune it on the target task.

ExpBERT murty2020expbert We also compare with ExpBERT, another model that is able to introduce textual knowledge. In Sec 1, we mentioned that ExpBERT is not applicable to large-scale commonsense knowledge bases because it cannot select related commonsense and ignore unrelated commonsense. To verify this, we use the candidate commonsense descriptions retrieved from ATOMIC2020 as the additional explanations for ExpBERT. ExpBERT concatenates the embeddings of a fixed number of commonsense descriptions, which is inflexible for ATOMIC2020. We therefore fix the number of commonsense descriptions to 48: if a sample has more than 48 candidate descriptions, we randomly select 48 of them; otherwise, we pad with null commonsense. In our experiments, we also apply ExpBERT to RoBERTa liu2019roberta (i.e., ExpRoBERTa).

MRPC CoLA RTE QNLI STS-B SST-2 WSC273
BERT 86.52/90.66 59.50 71.43 91.20 89.35/88.93 91.97 66.30
BERT-continue 83.58/88.81 54.70 62.09 90.24 87.41/87.46 90.25 63.00
ExpBERT 85.78/89.79 58.29 62.82 87.06 84.78/84.67 89.75
Cook-BERT 87.50/91.04 58.29 72.20 91.58 89.82/89.46 92.66 67.40
RoBERTa 90.49/93.07 66.84 86.28 93.37 91.83/91.95 95.64 90.10
RoBERTa-continue 87.01/90.38 61.74 74.01 93.61 89.57/89.66 94.75 87.91
ExpRoBERTa 89.46/92.22 66.90 83.39 93.78 89.81/89.94 96.25
Cook-RoBERTa 91.91/94.24 66.89 86.28 94.71 92.19/92.36 96.44 91.58
Table 4: Comparison of different commonsense introduction approaches. Continued pre-training even hurts effectiveness, whereas using the Cook-Transformer to introduce external knowledge achieves better results than the vanilla Transformer.

We show the results in Table 4. We do not report the results of ExpBERT on WSC273, as ExpBERT cannot solve cloze-style problems. It can be seen that the performance of the language models suffers when we simply continue pre-training them on the commonsense knowledge base. This verifies that continued pre-training on OOD commonsense causes catastrophic forgetting and hurts effectiveness. On the other hand, using the Cook-Transformer to introduce OOD commonsense as extra input significantly improves accuracy. The results also suggest that ExpBERT is not applicable to large-scale commonsense knowledge bases.

6.4 Effect in Low-Resource Commonsense Settings

Since there is a large number of commonsense descriptions in ATOMIC2020, a large portion of them occur only a few times in the training set. In this subsection, we want to verify whether the model can still benefit from these rare descriptions. If so, we take it as evidence that the model uses the contextual information of the commonsense to improve its understanding of the commonsense.

Figure 4: Effect in low-resource commonsense settings with different numbers of training samples $N$ on SST-2. Cook-BERT successfully introduces low-frequency commonsense descriptions to improve effectiveness. Each result is averaged over 5 runs.

To do this, we propose a low-resource commonsense setting: we evaluate the model when the training dataset contains only $N$ samples, so that the commonsense descriptions that appear have low frequency. In order to exclude the influence of other samples, we only use test samples whose candidate commonsense descriptions have already occurred in the training samples. For example, for a given $N$, we randomly select $N$ samples from the training set for training, and use all test samples that contain the commonsense of the training samples for evaluation. We show the results on the SST-2 dataset in Fig. 4. It can be seen that our models still benefit from low-frequency commonsense.
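One possible reading of this protocol can be sketched as follows; the `retrieve` helper and the overlap criterion (any shared candidate description) are our assumptions.

```python
import random

def build_low_resource_split(train_data, test_data, retrieve, n):
    """Sketch of one reading of the protocol: sample n training examples, then
    keep only test examples whose candidate commonsense descriptions overlap
    with those seen in the sampled training examples. `retrieve` returns the
    candidate descriptions of a text."""
    train_subset = random.sample(train_data, n)
    seen = set()
    for text, _ in train_subset:
        seen.update(retrieve(text))
    test_subset = [(text, label) for text, label in test_data
                   if seen & set(retrieve(text))]
    return train_subset, test_subset
```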

6.5 Does Cook-Transformer Provide Interpretability?

In this subsection, we examine whether the integration of candidate commonsense descriptions by the Cook-Transformer is interpretable. To answer this question, we calculate the influence of different commonsense descriptions on the model’s predictions. We follow wu2020perturbed and quantify the influence of a commonsense description $c_i$ as: if $c_i$ is removed from $C_x$, how much does the prediction change? This change is measured by the Euclidean distance between the prediction given $C_x$ and the prediction given $C_x \setminus \{c_i\}$. The greater the change in the prediction, the greater the influence of this commonsense description.
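This leave-one-out influence measure can be sketched as follows, where `model(text, commonsense)` is assumed to return a prediction vector (e.g., class probabilities):

```python
import torch

def commonsense_influence(model, text, commonsense):
    """Sketch of the leave-one-out influence measure: the Euclidean distance
    between the prediction with the full candidate set C_x and the prediction
    with one description c_i removed."""
    with torch.no_grad():
        full = model(text, commonsense)
        scores = []
        for i in range(len(commonsense)):
            reduced = commonsense[:i] + commonsense[i + 1:]
            scores.append(torch.dist(full, model(text, reduced)).item())
    return scores   # one influence score per candidate description
```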

John promised Bill to leave, so an hour later [John] left.
PersonX promises PersonY.
1. As a result, PersonX wants to fulfill his promise.
2. PersonX is seen as truthful.
3. PersonX is seen as trustworthy.
4. Before, PersonX needed to talk to PersonY.
5. Before, PersonX needed to go to PersonY’s house.
Table 5: A case study of top 5 commonsense descriptions.

Through case studies of samples in WSC273, we find that although commonsense descriptions with higher influence are somewhat interpretable to people, the interpretability is not significant. We show some examples in Table 5. We believe this is because some commonsense that is obvious to people has already been learned during pre-training. Therefore, the OOD commonsense that these pre-trained language models need to incorporate for downstream tasks is not always consistent with human understanding.

7 Conclusion

In this paper, we study how to use commonsense to enhance general text representations. We first analyze the challenges brought by the OOD nature of commonsense. Then, we propose the Cook-Transformer to allow commonsense integration and enhancement. In the experiments, we verify the effectiveness of our proposed models in a variety of scenarios, including commonsense reasoning, general text classification, and low-resource commonsense settings. Our models consistently outperform the baselines. We also empirically analyze other properties (e.g., interpretability) of the model.

References