comet-commonsense: code for the ACL 2019 paper “COMET: Commonsense Transformers for Automatic Knowledge Graph Construction” (https://arxiv.org/abs/1906.05317)
We present the first comprehensive study on automatic knowledge base construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017). Contrary to many conventional KBs that store knowledge with canonical templates, commonsense KBs only store loosely structured open-text descriptions of knowledge. We posit that an important step toward automatic commonsense completion is the development of generative models of commonsense knowledge, and propose COMmonsEnse Transformers (COMET) that learn to generate rich and diverse commonsense descriptions in natural language. Despite the challenges of commonsense modeling, our investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs. Empirical results demonstrate that COMET is able to generate novel knowledge that humans rate as high quality, with up to 77.5% (ATOMIC) and 91.7% (ConceptNet) precision at top 1, which approaches human performance for these resources. Our findings suggest that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.
When reading text, humans make commonsense inferences that frame their understanding of the narrative being presented. For machines to achieve this capability, they must be able to acquire relevant and correct commonsense for an unbounded set of situations. In this work, we cast commonsense acquisition as knowledge base construction and investigate whether large-scale language models can effectively learn to generate the knowledge necessary to automatically construct a commonsense knowledge base (KB).
Automatic KB construction is a long-standing goal of artificial intelligence research due to the difficulty of achieving high concept coverage in high-precision curated KBs Lenat (1995); Miller (1995). Previous work has developed models capable of reading and extracting semi-structured text Suchanek et al. (2007); Hoffart et al. (2013); Auer et al. (2007); Bollacker et al. (2008) and unstructured text Dong et al. (2014); Carlson et al. (2010); Nakashole et al. (2011, 2012); Niu (2012) into relational schemas that can be queried for downstream applications. A common thread of these approaches, however, is the focus on encyclopedic knowledge, which lends itself to a well-defined space of entities and relations that can be modeled.
Commonsense knowledge, however, does not cleanly fit into a schema comparing two entities with a known relation, leading current approaches to model “entities" as natural language phrases and relations as any concept that can link them Li et al. (2016); Sap et al. (2019). OpenIE approaches display this property of open text entities and relations Etzioni et al. (2011); Fader et al. (2011); Mausam et al. (2012), but being extractive, they only capture knowledge that is explicitly mentioned in text, limiting their applicability for capturing commonsense knowledge, which is often implicit Gordon and Van Durme (2013).
Meanwhile, recent progress in training deep contextualized language models Peters et al. (2018); Radford et al. (2018); Devlin et al. (2018) provides an opportunity to explore beyond extractive methods as an avenue for commonsense KB construction.
These large-scale language models display impressive performance when their underlying representations are tuned to solve end tasks, achieving state-of-the-art results on a variety of complex problems. In this work, we define the COMmonsEnse Transformer (ℂ𝕆𝕄𝔼𝕋), which constructs commonsense KBs by using existing tuples as a seed set of knowledge on which to train. Using this seed set, a pre-trained language model learns to adapt its learned representations to knowledge generation, and produces novel tuples that are high quality.
We summarize our contributions in this work as follows. First, we develop a generative approach to knowledge base construction. A model must learn to produce new nodes and identify edges between existing nodes by generating phrases that coherently complete an existing seed phrase and relation type (demo available at https://mosaickg.apps.allenai.org/). Second, we develop a framework for using large-scale transformer language models to learn to produce commonsense knowledge tuples (code available at https://github.com/atcbosselut/comet-commonsense). Finally, we perform an empirical study on the quality, novelty, and diversity of the commonsense knowledge produced by our approach for two domains, Atomic and ConceptNet, as well as an efficiency study on the number of seed tuples needed to learn an effective knowledge model. The results indicate that ℂ𝕆𝕄𝔼𝕋 is able to produce high quality tuples as human judges find that 77.5% of generated tuples for Atomic events and 91.7% of generated tuples for ConceptNet relations are correct.
ℂ𝕆𝕄𝔼𝕋 is an adaptation framework for constructing commonsense knowledge bases from language models by training the language model on a seed set of knowledge tuples. These tuples provide ℂ𝕆𝕄𝔼𝕋 with the KB structure and relations that must be learned, and ℂ𝕆𝕄𝔼𝕋 learns to adapt the language model representations learned from pre-training to add novel nodes and edges to the seed knowledge graph.
More specifically, the problem assumes ℂ𝕆𝕄𝔼𝕋 is given a training knowledge base of natural language tuples in $\{s, r, o\}$ format, where $s$ is the phrase subject of the tuple, $r$ is the relation of the tuple, and $o$ is the phrase object of the tuple. For example, a ConceptNet tuple relating to “taking a nap” would be: ($s$ = “take a nap”, $r$ = Causes, $o$ = “have energy”). The task is to generate $o$ given $s$ and $r$ as inputs.
We define $X^{s} = \{x^{s}_{0}, \ldots, x^{s}_{|s|}\}$ as the tokens that make up the subject of the relation, $X^{r} = \{x^{r}_{0}, \ldots, x^{r}_{|r|}\}$ as the tokens that make up the relation of the tuple, and $X^{o} = \{x^{o}_{0}, \ldots, x^{o}_{|o|}\}$ as the tokens that make up the object of the tuple. The embedding for any word $x$ is denoted as $e$.
While ℂ𝕆𝕄𝔼𝕋 is agnostic to the language model with which it is initialized, in this work, we use the transformer language model architecture introduced in Radford et al. (2018) (GPT), which uses multiple transformer blocks of multi-headed scaled dot product attention and fully connected layers to encode input text Vaswani et al. (2017). Figure 2 depicts different components of the GPT architecture and we define each component in more depth below.
As shown in Figure 2(b), each transformer layer contains an architecturally identical transformer block (though with unique trainable parameters) that applies the following transformations to the input to the block:
$$\tilde{g}^{l} = \text{MultiAttn}(h^{l-1}) \tag{1}$$
$$g^{l} = \text{LayerNorm}(\tilde{g}^{l} + h^{l-1}) \tag{2}$$
$$\tilde{h}^{l} = \text{FFN}(g^{l}) \tag{3}$$
$$h^{l} = \text{LayerNorm}(\tilde{h}^{l} + g^{l}) \tag{4}$$
where MultiAttn is a multi-headed self-attention mechanism (defined below), FFN is a two-layer feed-forward network, and LayerNorm represents a layer normalization Ba et al. (2016) operation that is applied to the output of the self-attention and the feedforward network. Note that the inputs to the LayerNorm
operations contain a residual connection that sums the output of and input to the previous operation.
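To make the block structure concrete, here is a minimal PyTorch sketch of Equations 1–4. It uses PyTorch's built-in multi-head attention as a stand-in for the MultiAttn module defined in Equations 5–8 below; the dimensions mirror the GPT configuration used later, but the module is illustrative rather than the released implementation.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Sketch of Eqs. 1-4: a self-attention sublayer and a feed-forward sublayer,
    each followed by a residual connection and layer normalization."""
    def __init__(self, d_model=768, n_heads=12, d_ff=3072, dropout=0.1):
        super().__init__()
        # torch's built-in module stands in for the MultiAttn of Eqs. 5-8
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout,
                                          batch_first=True)
        self.ln1 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                 nn.Linear(d_ff, d_model))
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, h_prev, causal_mask=None):
        # Eq. 1: multi-headed self-attention over the previous layer's outputs
        g_tilde, _ = self.attn(h_prev, h_prev, h_prev, attn_mask=causal_mask)
        g = self.ln1(g_tilde + h_prev)   # Eq. 2: residual + layer norm
        h_tilde = self.ffn(g)            # Eq. 3: two-layer feed-forward network
        return self.ln2(h_tilde + g)     # Eq. 4: residual + layer norm
```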
The multi-headed attention module of each transformer block, shown in Figure 2(a), is identical to the one originally defined by Vaswani et al. (2017). The attention function receives three inputs, a query $Q$, key $K$, and value $V$. The attention is made of multiple heads that each compute a unique scaled dot product attention distribution over $V$ using $Q$ and $K$:
$$\text{Attention}(Q, K, V) = \text{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V \tag{5}$$
where $d_k$ is the dimensionality of the input vectors representing the query, key, and value. For each of the heads, $Q$, $K$, and $V$ are uniquely projected prior to the attention being computed:

$$h_i = \text{Attention}(Q W_i^{Q}, K W_i^{K}, V W_i^{V}) \tag{6}$$
where $h_i$ is the output of a single attention head and $W_i^{Q}$, $W_i^{K}$, and $W_i^{V}$ are head-specific projections for $Q$, $K$, and $V$, respectively. The outputs of the attention heads $h_i$ are then concatenated:
$$\text{MultiH}(Q, K, V) = [h_1; \ldots; h_b]\, W^{O} \tag{7}$$
where $W^{O}$ is an output projection of the concatenated outputs of the $b$ attention heads. As shown in Figure 2(c), we follow Radford et al. (2018) and use the output of the previous layer’s transformer block as the query input for the multi-headed attention of the next block. The keys and values are outputs of the previous layer’s block for all preceding time steps:
$$\text{MultiAttn}(h_t^{l-1}) = \text{MultiH}(h_t^{l-1},\, h_{<t}^{l-1},\, h_{<t}^{l-1}) \tag{8}$$
where $h_{<t}^{l-1}$ is the set of previous layer transformer block outputs for time steps preceding $t$.
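For concreteness, below is a from-scratch sketch of Equations 5–8 as a single masked multi-head self-attention layer. The head count and dimensionality are illustrative, and the module is a sketch under those assumptions, not the released implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadSelfAttention(nn.Module):
    """Sketch of Eqs. 5-8: per-head projections, scaled dot-product attention,
    concatenation of head outputs, and an output projection, with a causal mask
    so that each position attends only to itself and preceding time steps."""
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_k = n_heads, d_model // n_heads
        self.w_q = nn.Linear(d_model, d_model)   # stacks the per-head W_i^Q
        self.w_k = nn.Linear(d_model, d_model)   # stacks the per-head W_i^K
        self.w_v = nn.Linear(d_model, d_model)   # stacks the per-head W_i^V
        self.w_o = nn.Linear(d_model, d_model)   # W^O of Eq. 7

    def forward(self, h_prev):                   # h_prev: (batch, seq, d_model)
        B, T, _ = h_prev.shape
        def split(x):                            # -> (batch, heads, seq, d_k)
            return x.view(B, T, self.n_heads, self.d_k).transpose(1, 2)
        Q, K, V = (split(w(h_prev)) for w in (self.w_q, self.w_k, self.w_v))
        # Eq. 5: scaled dot-product attention scores
        scores = Q @ K.transpose(-2, -1) / math.sqrt(self.d_k)
        # Eq. 8: mask out future positions so keys/values come from steps <= t
        future = torch.triu(torch.ones(T, T, dtype=torch.bool,
                                       device=h_prev.device), diagonal=1)
        scores = scores.masked_fill(future, float("-inf"))
        heads = F.softmax(scores, dim=-1) @ V    # (batch, heads, seq, d_k)
        # Eq. 7: concatenate the heads and apply the output projection
        concat = heads.transpose(1, 2).contiguous().view(B, T, -1)
        return self.w_o(concat)
```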
As input to the model, we represent a knowledge tuple as a concatenated sequence of the words of each item of the tuple:
$$X = \{X^{s}, X^{r}, X^{o}\} \tag{9}$$
Since the transformer (a self-attention model) has no concept of ordering of tokens, a position embedding
is initialized for each absolute position in the sequence (Vaswani et al., 2017). For any input word $x_t \in X$, our encoding of the input is the sum of its word embedding, $e_t$, with a position embedding encoding its absolute position in the sequence $X$:

$$h_t^{0} = e_t + p_t \tag{10}$$

where $p_t$ is the position embedding for time step $t$, and $h^{0}$ is the input to the first transformer layer.
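A minimal sketch of this input construction (Equations 9–10), assuming a toy vocabulary and hypothetical helper names:

```python
import torch
import torch.nn as nn

# Toy vocabulary for illustration; in practice this is the GPT BPE vocabulary
# plus special relation tokens such as <xIntent> or <Causes>.
vocab = {"take": 0, "a": 1, "nap": 2, "<Causes>": 3, "have": 4, "energy": 5}

word_emb = nn.Embedding(len(vocab), 768)   # e_t in Eq. 10
pos_emb = nn.Embedding(512, 768)           # p_t: one embedding per absolute position

def encode_tuple(subj_tokens, rel_tokens, obj_tokens):
    """Eq. 9: concatenate X^s, X^r, X^o; Eq. 10: h_t^0 = e_t + p_t."""
    tokens = subj_tokens + rel_tokens + obj_tokens
    ids = torch.tensor([vocab[t] for t in tokens])
    positions = torch.arange(len(ids))
    return word_emb(ids) + pos_emb(positions)   # input to the first transformer layer

h0 = encode_tuple(["take", "a", "nap"], ["<Causes>"], ["have", "energy"])
print(h0.shape)   # torch.Size([6, 768])
```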
ℂ𝕆𝕄𝔼𝕋 is trained to learn to produce the phrase object $o$ of a knowledge tuple given the tuple’s phrase subject $s$ and relation $r$. More specifically, given the concatenation of the tokens of $X^{s}$ and $X^{r}$, $[X^{s}, X^{r}]$, as input, the model must learn to generate the tokens of $X^{o}$ (see §2.1 for definitions of these variables).
To achieve this goal, ℂ𝕆𝕄𝔼𝕋 is trained to maximize the conditional log-likelihood of predicting the phrase object tokens, $X^{o}$:
$$\mathcal{L} = -\sum_{t=|s|+|r|}^{|s|+|r|+|o|} \log P(x_t \mid x_{<t}) \tag{11}$$
where $|s|$, $|r|$, and $|o|$ are the number of tokens in the subject phrase, relation, and object phrase, respectively. Figure 3 outlines how the tokens in $X^{s}$, $X^{r}$, and $X^{o}$ are organized for different training tasks.
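A sketch of this objective, assuming `logits[t]` already holds the model’s distribution over token t given the preceding tokens (function and variable names are hypothetical):

```python
import torch
import torch.nn.functional as F

def comet_loss(logits, token_ids, len_s, len_r, len_o):
    """Eq. 11: negative log-likelihood of the object-phrase tokens only.
    logits: (seq_len, vocab), where logits[t] is assumed to be the model's
    distribution over token t given the tokens before it.
    token_ids: (seq_len,) ids of the concatenated s, r, o sequence."""
    log_probs = F.log_softmax(logits, dim=-1)
    token_logp = log_probs.gather(1, token_ids.unsqueeze(1)).squeeze(1)
    obj_start = len_s + len_r               # first position of the object phrase
    return -token_logp[obj_start:obj_start + len_o].sum()

# usage (shapes only), with an illustrative vocabulary size of 40000:
# comet_loss(torch.randn(6, 40000), torch.randint(0, 40000, (6,)),
#            len_s=3, len_r=1, len_o=2)
```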
ℂ𝕆𝕄𝔼𝕋 relies on a seed set of knowledge tuples from an existing KB to learn to produce commonsense knowledge. In this work, we use Atomic and ConceptNet as knowledge seed sets, but other commonsense knowledge resources could have been used as well, since ℂ𝕆𝕄𝔼𝕋 is domain-agnostic.
Model | PPL† | BLEU-2 | N/T sro‡ | N/T o | N/U o
---|---|---|---|---|---
9Enc9Dec Sap et al. (2019) | - | 10.01 | 100.00 | 8.61 | 40.77
NearestNeighbor Sap et al. (2019) | - | 6.61 | - | - | -
Event2(In)Volun Sap et al. (2019) | - | 9.67 | 100.00 | 9.52 | 45.06
Event2PersonX/Y Sap et al. (2019) | - | 9.24 | 100.00 | 8.22 | 41.66
Event2Pre/Post Sap et al. (2019) | - | 9.93 | 100.00 | 7.38 | 41.99
ℂ𝕆𝕄𝔼𝕋 (- pretrain) | 15.42 | 13.88 | 100.00 | 7.25 | 45.71
ℂ𝕆𝕄𝔼𝕋 | 11.14 | 15.10 | 100.00 | 9.71 | 51.20

† Sap et al. (2019)’s models were trained with a different vocabulary, so a direct perplexity comparison is not possible.
‡ No test set event appears in the training set, so all full generated tuples must be novel.
Model | oEffect | oReact | oWant | xAttr | xEffect | xIntent | xNeed | xReact | xWant | Avg
---|---|---|---|---|---|---|---|---|---|---
9Enc9Dec Sap et al. (2019) | 22.92 | 32.92 | 35.50 | 52.20 | 47.52 | 51.70 | 48.74 | 63.57 | 51.56 | 45.32 |
Event2(In)voluntary Sap et al. (2019) | 26.46 | 36.04 | 34.70 | 52.58 | 46.76 | 61.32 | 49.82 | 71.22 | 52.44 | 47.93 |
Event2PersonX/Y Sap et al. (2019) | 24.72 | 33.80 | 35.08 | 52.98 | 48.86 | 53.93 | 54.05 | 66.42 | 54.04 | 46.41 |
Event2Pre/Post Sap et al. (2019) | 26.26 | 34.48 | 35.78 | 52.20 | 46.78 | 57.77 | 47.94 | 72.22 | 47.94 | 46.76 |
ℂ𝕆𝕄𝔼𝕋 (- pretrain) | 25.90 | 35.40 | 40.76 | 48.04 | 47.20 | 58.88 | 59.16 | 64.52 | 65.66 | 49.50 |
ℂ𝕆𝕄𝔼𝕋 | 29.02 | 37.68 | 44.48 | 57.48 | 55.50 | 68.32 | 64.24 | 76.18 | 75.16 | 56.45 |
Parameters are initialized to the final language model weights from Radford et al. (2018). Additional special tokens that are added to the vocabulary for fine-tuning (e.g., relation embeddings such as oReact for Atomic and IsA for ConceptNet) are initialized by sampling from the standard normal distribution.
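A sketch of how such special-token embeddings might be appended to the pre-trained embedding matrix (the helper below is hypothetical; the released code performs the equivalent step internally):

```python
import torch
import torch.nn as nn

def extend_embeddings(pretrained_emb: nn.Embedding, num_new_tokens: int) -> nn.Embedding:
    """Keep the pre-trained word vectors and append rows for new special tokens
    (e.g., relation tokens such as oReact or IsA), drawn from a standard normal."""
    old_vocab, dim = pretrained_emb.weight.shape
    extended = nn.Embedding(old_vocab + num_new_tokens, dim)
    with torch.no_grad():
        extended.weight[:old_vocab] = pretrained_emb.weight             # copy GPT weights
        extended.weight[old_vocab:] = torch.randn(num_new_tokens, dim)  # N(0, 1) init
    return extended
```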
Following Radford et al. (2018)’s design of the GPT model, we initialize ℂ𝕆𝕄𝔼𝕋 with 12 layers, 768-dimensional hidden states, and 12 attention heads. We use a dropout rate of 0.1 and use GeLU Hendrycks and Gimpel (2016)
units as activation functions. During training, our batch size is 64. Other dataset-specific hyperparameters are provided in Appendix A.1.

The Atomic dataset (https://homes.cs.washington.edu/~msap/atomic/), released by Sap et al. (2019), contains 877K tuples covering a variety of social commonsense knowledge around specific event prompts (e.g., “X goes to the store”). Specifically, Atomic distills its commonsense in nine dimensions, covering the event’s causes (e.g., “X needs to drive there”), its effects on the agent (e.g., “to get food”) and its effect on other direct (or implied) participants (e.g., “Others will be fed”). More details about Atomic can be found in Appendix D. For our experiments, Atomic events (e.g., “X goes to the store”) are phrase subjects, $s$, the dimension (e.g., xIntent) is the phrase relation, $r$, and the causes/effects (e.g., “to get food”) are phrase objects, $o$. We use the training splits from Sap et al. (2019), resulting in 710k training, 80k development, and 87k test tuples respectively.
Following Sap et al. (2019), we evaluate our method using BLEU-2 as an automatic evaluation metric. We also report the perplexity of the model on its gold generations. The remaining automatic metrics in Table 1 measure the proportion of generated tuples and generated objects which are not in the training set. We report the proportion of all generated tuples that are novel (% N/T sro) and that have a novel object (% N/T o); a new $o$ represents a new node in the knowledge graph. To show that these novel objects are diverse (i.e., the same novel object is not the only one being generated), we also report the number of novel objects as a function of the set of unique objects produced for all test set events (% N/U o).

Finally, we perform a human evaluation using workers from Amazon Mechanical Turk (AMT). Workers are asked to identify whether a model generation of Atomic commonsense adequately completes a plausible tuple of phrase subject, relation, and phrase object. Following the setup of Sap et al. (2019), we evaluate 100 randomly selected events from the test set. For each event and relation type, 10 candidates are generated using beam search and the full beam is evaluated by five different workers. Overall, n=5000 ratings are produced per relation (100 events × 5 workers × 10 candidates). The reported Avg in Table 2 is an average of these scores, yielding n=45000 total ratings for each model. We use Pitman’s test Noreen (1989) with 100k permutations to test for statistical significance. Because 50 different hypotheses are tested (9 relations + the total), the Holm-Bonferroni method Holm (1979) is used to correct significance thresholds. Example events from the development set and their generated phrase objects are available in Table 5.
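A sketch of how these novelty metrics could be computed from sets of training and generated tuples (function and key names are illustrative):

```python
def novelty_metrics(train_tuples, generated_tuples):
    """train_tuples, generated_tuples: iterables of (s, r, o) string tuples.
    Returns the percentage of generated tuples that are novel (N/T sro),
    the percentage with a novel object (N/T o), and the percentage of
    unique novel objects among all unique generated objects (N/U o)."""
    train_set = set(train_tuples)
    train_objects = {o for _, _, o in train_set}
    gen = list(generated_tuples)
    novel_tuples = [t for t in gen if t not in train_set]
    novel_objects = [o for _, _, o in gen if o not in train_objects]
    unique_objects = {o for _, _, o in gen}
    unique_novel_objects = unique_objects - train_objects
    return {
        "N/T sro": 100.0 * len(novel_tuples) / len(gen),
        "N/T o": 100.0 * len(novel_objects) / len(gen),
        "N/U o": 100.0 * len(unique_novel_objects) / len(unique_objects),
    }

# usage:
# novelty_metrics([("s1", "r1", "o1")], [("s1", "r1", "o1"), ("s2", "r1", "o2")])
# -> {'N/T sro': 50.0, 'N/T o': 50.0, 'N/U o': 50.0}
```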
To evaluate how pre-training on a large corpus helps the model learn to produce knowledge, we train a version of ℂ𝕆𝕄𝔼𝕋 that is not initialized with pre-trained weights (ℂ𝕆𝕄𝔼𝕋 (- pretrain)). We also evaluate the data efficiency of our method by training models on different proportions of the training data. Finally, because the ultimate goal of our method is to be able to perform high-quality, diverse knowledge base construction, we explore how various decoding schemes affect the quality of candidate knowledge tuples. We present the effect of the following generation strategies: argmax greedy decoding, beam search with beam sizes b = 2, 5, and 10, and top-k sampling with k = 5 and 10. For each decoding method, we conduct the human evaluation on the number of final candidates produced by each method.
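As an illustration of one of these schemes, here is a sketch of top-k sampling from the model’s next-token distribution (greedy decoding is the k = 1 special case; beam search instead keeps the b highest-scoring partial sequences at every step):

```python
import torch
import torch.nn.functional as F

def sample_next_token(logits, k=10, temperature=1.0):
    """logits: (vocab,) unnormalized scores for the next token.
    Restrict the distribution to the k most probable tokens, then sample."""
    topk_vals, topk_ids = torch.topk(logits / temperature, k)
    probs = F.softmax(topk_vals, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return topk_ids[choice].item()

# greedy decoding would instead take logits.argmax().item();
# beam search keeps the b best partial sequences at every step.
```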
ℂ𝕆𝕄𝔼𝕋 Decoding method | oEffect | oReact | oWant | xAttr | xEffect | xIntent | xNeed | xReact | xWant | Avg |
---|---|---|---|---|---|---|---|---|---|---|
Top-5 random sampling (n=2500 per relation) | 34.60 | 44.04 | 35.56 | 64.56 | 55.68 | 58.84 | 46.68 | 80.96 | 58.52 | 53.27 |
Top-10 random sampling (n=5000 per relation) | 25.20 | 37.42 | 27.34 | 49.20 | 47.34 | 47.06 | 38.24 | 72.60 | 48.10 | 43.61 |
Beam search - 2 beams (n=1000 per relation) | 43.70 | 54.20 | 47.60 | 84.00 | 51.10 | 73.80 | 50.70 | 85.80 | 78.70 | 63.29 |
Beam search - 5 beams (n=2500 per relation) | 37.12 | 45.36 | 42.04 | 63.64 | 61.76 | 63.60 | 57.60 | 78.64 | 68.40 | 57.57 |
Beam search - 10 beams (n=5000 per relation) | 29.02 | 37.68 | 44.48 | 57.48 | 55.50 | 68.32 | 64.24 | 76.18 | 75.16 | 56.45 |
Greedy decoding (n=500 per relation) | 61.20 | 69.80 | 80.00 | 77.00 | 53.00 | 89.60 | 85.60 | 92.20 | 89.40 | 77.53 |
Human validation of gold Atomic | 84.62 | 86.13 | 83.12 | 78.44 | 83.92 | 91.37 | 81.98 | 95.18 | 90.90 | 86.18 |
The BLEU-2 results in Table 1 indicate that ℂ𝕆𝕄𝔼𝕋 exceeds the performance of all baselines, achieving a 51% relative improvement over the top performing model of Sap et al. (2019). More interesting, however, is the result of the human evaluation, where ℂ𝕆𝕄𝔼𝕋 reported a statistically significant relative Avg performance increase of 18% over the top baseline, Event2In(Volun). This performance increase is consistent, as well, with an improvement being observed across every relation type. In addition to these quality improvements, Table 1 shows that ℂ𝕆𝕄𝔼𝕋 also produces more novel tuple objects than the baselines.
% train data | PPL | BLEU-2 | N/T o | N/U o
---|---|---|---|---|
1% train | 23.81 | 5.08 | 7.24 | 49.36 |
10% train | 13.74 | 12.72 | 9.54 | 58.34 |
50% train | 11.82 | 13.97 | 9.32 | 50.37 |
Full (- pretrain) | 15.18 | 13.22 | 7.14 | 44.55 |
Full train | 11.13 | 14.34 | 9.51 | 50.05 |
Significant differences were also observed between the performance of the model whose weights were initialized with the pre-trained parameters from the GPT model of Radford et al. (2018) and a model with the same architecture that was trained from random initialization. This 14% relative improvement in overall human performance confirms that the language representations learned by the GPT model are transferable to generating natural language commonsense knowledge.
In Table 3, we show the effect of different generation policies on knowledge quality. The most interesting result is that using greedy decoding to produce knowledge tuples only results in a 10% relative performance gap compared to a human evaluation of the Atomic test set, showing that the knowledge produced by the model approaches human performance. While producing more total candidates does lower overall performance, quality assessments still hover around 55% for a beam size of 10 (this number is partially low due to the many “none” references in the oEffect, oReact, oWant categories; in any set of 10 candidates, “none” can only be predicted once, which causes most candidates in the beam to be incorrect if “none” is the appropriate answer). This result suggests that ℂ𝕆𝕄𝔼𝕋 could be effective with human evaluators in the loop to confirm the correctness of generated tuples.
Seed Concept | Relation | Generated | Plausible |
---|---|---|---|
X holds out X’s hand to Y | xAttr | helpful | ✓ |
X meets Y eyes | xAttr | intense | ✓ |
X watches Y every ___ | xAttr | observant | ✓ |
X eats red meat | xEffect | gets fat | ✓ |
X makes crafts | xEffect | gets dirty | ✓ |
X turns X’s phone | xEffect | gets a text | |
X pours ___ over Y’s head | oEffect | gets hurt | ✓ |
X takes Y’s head off | oEffect | bleeds | ✓ |
X pisses on Y’s bonfire | oEffect | gets burned | |
X spoils somebody rotten | xIntent | to be mean | |
X gives Y some pills | xIntent | to help | ✓ |
X provides for Y’s needs | xIntent | to be helpful | ✓ |
X explains Y’s reasons | xNeed | to know Y | ✓ |
X fulfils X’s needs | xNeed | to have a plan | ✓ |
X gives Y everything | xNeed | to buy something | ✓ |
X eats pancakes | xReact | satisfied | ✓ |
X makes ___ at work | xReact | proud | ✓ |
X moves house | xReact | happy | ✓ |
X gives birth to the Y | oReact | happy | ✓ |
X gives Y’s friend ___ | oReact | grateful | ✓ |
X goes ___ with friends | oReact | happy | ✓ |
X gets all the supplies | xWant | to make a list | ✓ |
X murders Y’s wife | xWant | to hide the body | ✓ |
X starts shopping | xWant | to go home | ✓ |
X develops Y theory | oWant | to thank X | ✓ |
X offer Y a position | oWant | to accept the job | ✓ |
X takes ___ out for dinner | oWant | to eat | ✓ |
Because not all domains will have large available commonsense KBs on which to train, we explore how varying the amount of training data available for learning affects the quality and novelty of the knowledge that is produced. Our results in Table 4 indicate that even with only 10% of the available training data, the model is still able to produce generations that are coherent, adequate, and novel. Using only 1% of the training data clearly diminishes the quality of the produced generations, with significantly lower observed results across both quality and novelty metrics. Interestingly, we note that training the model without pre-trained weights performs comparably to training with 10% of the seed tuples, quantifying the impact of using pre-trained language representations.
The ConceptNet dataset (https://ttic.uchicago.edu/~kgimpel/commonsense.html), provided by Li et al. (2016), consists of tuples obtained from the Open Mind Common Sense (OMCS) entries in ConceptNet 5 Speer et al. (2017). Tuples are in the standard $(s, r, o)$ form (e.g., take a nap, Causes, have energy). The most confident 1200 tuples were used to create the test set, while the next 1200 tuples were used to create two development sets, which we combine in this work. The 100k version of the training set was used to train models, which contains 34 relation types.
We evaluate our models that generate ConceptNet relations using the following metrics. First, we report the perplexity of the gold relations in the test set (PPL). To evaluate the quality of generated knowledge, we also report the number of generated positive examples in the test set that are scored as correct by the pre-trained Bilinear AVG model developed by Li et al. (2016) (a pre-trained model can be found at https://ttic.uchicago.edu/~kgimpel/comsense_resources/ckbc-demo.tar.gz). For a given $(s, r, o)$ tuple, this model produces a probability for whether the tuple is correct. We threshold scores at 50% probability to identify positive predictions. On the completion task originally proposed in Li et al. (2016), this model achieved 92.5% accuracy on the test set, indicating that it is a strong proxy for automatically evaluating whether a generated tuple is correct. Finally, we report the same novelty metrics as for Atomic: N/T sro and N/T o.

As a baseline, we re-implement the BiLSTM model proposed by Saito et al. (2018) with minor modifications outlined in Appendix A.2. This model is trained to learn to encode knowledge in both directions ($sr \to o$ and $or \to s$) to help augment a knowledge base completion model. It is only evaluated on the $sr \to o$ tuple generation task, however. For posterity, we also include the result from a LSTM model that is only trained on the $sr \to o$ task (LSTM - $s$).
We include the following ablations of our full model. First, we evaluate how pre-training on a large-scale corpus Radford et al. (2018) helps performance by training a comparison model from scratch, denoted ℂ𝕆𝕄𝔼𝕋 (- pretrain) in Table 6. Second, in our main model, we map relation names to natural language (e.g., IsA → “is a”; HasSubevent → “has subevent”) so the model can learn to represent these concepts with language, as opposed to learning a special embedding from scratch for each relation Levy et al. (2017). As an ablation, we train a model without converting relation tokens to natural language (i.e., keeping symbols such as IsA rather than “is a”), which we denote ℂ𝕆𝕄𝔼𝕋 - RelTok.
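A sketch of this relation-to-language conversion: the paper gives IsA → “is a” and HasSubevent → “has subevent” as examples, and the remaining entries below are illustrative guesses rather than the exact mapping used.

```python
# Partial, illustrative mapping from relation symbols to natural language;
# IsA and HasSubevent follow the paper's examples, the other entries are guesses.
# The -RelTok ablation skips this step and keeps the raw symbols.
RELATION_TO_LANGUAGE = {
    "IsA": "is a",
    "HasSubevent": "has subevent",
    "CapableOf": "capable of",
    "AtLocation": "at location",
}

def verbalize_relation(relation: str) -> str:
    """Return the natural-language form if one is defined, else the raw symbol."""
    return RELATION_TO_LANGUAGE.get(relation, relation)

print(verbalize_relation("IsA"))        # "is a"
print(verbalize_relation("SymbolOf"))   # falls back to the raw symbol
```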
Model | PPL | Score | N/T sro | N/T o | Human
---|---|---|---|---|---|
LSTM - $s$ | - | 60.83 | 86.25 | 7.83 | 63.86
CKBG Saito et al. (2018) | - | 57.17 | 86.25 | 8.67 | 53.95 |
ℂ𝕆𝕄𝔼𝕋 (- pretrain) | 8.05 | 89.25 | 36.17 | 6.00 | 83.49 |
ℂ𝕆𝕄𝔼𝕋 - RelTok | 4.39 | 95.17 | 56.42 | 2.62 | 92.11 |
ℂ𝕆𝕄𝔼𝕋 | 4.32 | 95.25 | 59.25 | 3.75 | 91.69 |
Our results indicate that high-quality knowledge can be generated by the model: the low perplexity scores in Table 6 indicate high model confidence in its predictions, while the high classifier score (95.25%) indicates that the KB completion model of Li et al. (2016) scores the generated tuples as correct in most of the cases. While adversarial generations could be responsible for this high score, a human evaluation (following the same design as for Atomic) scores 91.7% of greedily decoded tuples as correct. Randomly selected examples provided in Table 7 also point to the quality of knowledge produced by the model.

In addition to being high quality, the generated tuples from ℂ𝕆𝕄𝔼𝕋 are also novel, with 59.25% of the tuples not being present in the training set, showing that the model is capable of generating new edges between nodes, and even creating new nodes – 3.75% of nodes are novel – to extend the size of the knowledge graph. One shortcoming, however, is that novel generations are sometimes simplified forms of tuples from the training set. In Table 7, for example, the tuple “doctor CapableOf save life” is not present in the training set, but “doctor CapableOf save person life” is. Many tuples, however, are completely novel, such as “bird bone HasProperty fragile” and “driftwood AtLocation beach”, which have no related tuples in the training set.
To explore further, we investigate by how much novel tuples from the development set differ from training set phrase objects for the same $s, r$ using minimum edit distance of phrase objects. We measure the edit distance of the phrase object $o_{dev}$ in the tuple $(s, r, o_{dev})$ to the $o_{trn}$ from the nearest training tuple $(s, r, o_{trn})$. Edit distance is measured using word tokens (excluding stop words) and normalized by the maximum number of words in $o_{dev}$ or $o_{trn}$. The maximum edit distance is one (i.e., entirely different word sequences) and the minimum edit distance is zero (i.e., the same sequence excluding stopwords). Figure 4 shows the percentage of novel development set tuples that have an edit distance from the closest training set tuple of at least the value on the x-axis. Over 75% of the novel tuples have objects that are a normalized edit distance of at least 0.5 from the training phrase objects, indicating that most of the novel phrase objects have significantly different word sequences from their closest analogues in the training set.
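A sketch of this measure: word-level Levenshtein distance between two phrase objects after removing stop words, normalized by the longer phrase’s length (the stop-word list here is a stand-in):

```python
def normalized_edit_distance(obj_a: str, obj_b: str,
                             stopwords=frozenset({"a", "an", "the", "to"})):
    """Word-token edit distance between two phrase objects, excluding stop words,
    normalized by the longer token sequence (0 = identical, 1 = entirely different)."""
    a = [w for w in obj_a.lower().split() if w not in stopwords]
    b = [w for w in obj_b.lower().split() if w not in stopwords]
    if not a and not b:
        return 0.0
    # standard dynamic-programming Levenshtein distance over word tokens
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[len(a)][len(b)] / max(len(a), len(b))

print(normalized_edit_distance("save life", "save person life"))  # ~0.33
```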
Similarly to Atomic, we explore how pre-training ℂ𝕆𝕄𝔼𝕋 on a large language corpus affects its ability to generalize commonsense. This effect is apparent in Table 6, with a clear improvement on automatic and human evaluations by the pretrained ℂ𝕆𝕄𝔼𝕋 over the randomly initialized model. Qualitatively, we observe this effect in Table 7 with the generated example tuple “mango IsA fruit", which is not present in the training set. The only tuple containing the “mango" entity in the training set is “mango UsedFor salsa", which is not informative enough. As confirmation, we observe that the output from ℂ𝕆𝕄𝔼𝕋 (- pretrain) is “mango IsA spice”, which could be a reasonable inference given the information about “mango" in the seed set of knowledge.
While the automatic metrics point to insignificant differences when comparing models with symbol relations and those with natural language relations (Table 6), examples can provide qualitative insights into the benefits of representing relations as language. While the only non-ornithological reference to a “dove" in the ConceptNet training set is “dove CapableOf fly”, our model learns to generalize to produce the tuple “dove SymbolOf purity”. The model that uses symbol relation embeddings only manages to produce the relation “dove SymbolOf submarine”, which seems to relate “submarine" to a more nautical (and unrelated) word sense of “dove".
Seed | Relation | Completion | Plausible |
---|---|---|---|
piece | PartOf | machine | ✓ |
bread | IsA | food | ✓ |
oldsmobile | IsA | car | ✓ |
happiness | IsA | feel | ✓ |
math | IsA | subject | ✓ |
mango | IsA | fruit | ✓ |
maine | IsA | state | ✓ |
planet | AtLocation | space | ✓ |
dust | AtLocation | fridge | |
puzzle | AtLocation | your mind |
college | AtLocation | town | ✓ |
dental chair | AtLocation | dentist | ✓ |
finger | AtLocation | your finger | |
sing | Causes | you feel good | ✓ |
doctor | CapableOf | save life | ✓ |
post office | CapableOf | receive letter | ✓ |
dove | SymbolOf | purity | ✓ |
sun | HasProperty | big | ✓ |
bird bone | HasProperty | fragile | ✓ |
earth | HasA | many plant | ✓ |
yard | UsedFor | play game | ✓ |
get pay | HasPrerequisite | work | ✓ |
print on printer | HasPrerequisite | get printer | ✓ |
play game | HasPrerequisite | have game | ✓ |
live | HasLastSubevent | die | ✓ |
swim | HasSubevent | get wet | ✓ |
sit down | MotivatedByGoal | you be tire | ✓ |
all paper | ReceivesAction | recycle | ✓ |
chair | MadeOf | wood | ✓ |
earth | DefinedAs | planet | ✓ |
Previous work has looked at constructing knowledge bases as relational schemas using expert knowledge Lenat (1995); Bodenreider (2004); Miller (1995), semi-structured text extraction Suchanek et al. (2007); Hoffart et al. (2013); Auer et al. (2007); Bollacker et al. (2008) and unstructured text extraction Dong et al. (2014); Carlson et al. (2010); Nakashole et al. (2011, 2012); Niu (2012). In our work, we focus on construction of commonsense knowledge bases which require the use of open-text events rather than a well-defined relational schema structure. Other work in information extraction can also be applied to knowledge base construction with open-text entities Soderland et al. (2010); Etzioni et al. (2011); Fader et al. (2011); Mausam et al. (2012); Fan et al. (2010); Cui et al. (2018), but these methods typically extract explicitly stated text relations. Conversely, our approach generates new knowledge that is often unstated in text, as commonsense information typically is Gordon and Van Durme (2013).
Existing work on generation of novel commonsense knowledge has also used ConceptNet and Atomic as underlying KBs. Specifically, Li et al. (2016)
proposed a set of neural network models for scoring tuples in ConceptNet. Our work differs from this approach as their models evaluate full tuples rather than learning to generate the phrases to make new nodes in the knowledge graph.
Saito et al. (2018) build upon this work by proposing a joint model for completion and generation of commonsense tuples. Their work, however, focuses on using tuple generation to augment their KB completion model, rather than to increase coverage in commonsense KB construction. Finally, Sap et al. (2019) use LSTM encoder-decoder models to generate commonsense knowledge about social situations. We use transformers and investigate the effect of using pre-trained language representations Radford et al. (2018) to initialize them.

Finally, our work builds on previous work on adapting pre-trained language models for various sequence labeling, classification, and NLI end tasks Radford et al. (2018); Peters et al. (2018); Devlin et al. (2018). Our research investigates how pre-trained language models can be used for large-scale commonsense KB construction by generating new graph nodes and edges between nodes.
We introduce COMmonsense Transformers (ℂ𝕆𝕄𝔼𝕋) for automatic construction of commonsense knowledge bases. ℂ𝕆𝕄𝔼𝕋 is a framework for adapting the weights of language models to learn to produce novel and diverse commonsense knowledge tuples. Empirical results on two commonsense knowledge bases, Atomic and ConceptNet, show that ℂ𝕆𝕄𝔼𝕋 frequently produces novel commonsense knowledge that human evaluators deem to be correct. These positive results point to future work in extending the approach to a variety of other types of knowledge bases, as well as investigating whether ℂ𝕆𝕄𝔼𝕋 can learn to produce OpenIE-style knowledge tuples for arbitrary knowledge seeds.
We thank Thomas Wolf, Ari Holtzman, Chandra Bhagavatula, Peter Clark, Rob Dalton, Ronan Le Bras, Rowan Zellers and Scott Yih for helpful discussions over the course of this project, as well as the anonymous reviewers for their insightful comments. This research was supported in part by NSF (IIS-1524371, IIS-1714566, NRI-1525251), DARPA under the CwC program through the ARO (W911NF-15-1-0543), and Samsung Research. This material is based, in part, upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082.
For Atomic, we use a maximum learning rate of 6.25e-5 with a warmup period of 100 minibatches. After, we decay the learning rate linearly until the end of training. We train for 50k minibatches and use early stopping. We clip gradients when their norm is greater than 1. The remainder of our hyperparameters are the same as in Radford et al. (2018). We use the public HuggingFace implementation of the GPT model as a base for our experiments, available at: https://github.com/huggingface/pytorch-openai-transformer-lm.
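A sketch of this schedule using PyTorch’s LambdaLR (100-step linear warmup followed by linear decay over the 50k Atomic minibatches); the stand-in model and the optimizer choice shown here are illustrative:

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(768, 768)          # stand-in for the full model
optimizer = torch.optim.Adam(model.parameters(), lr=6.25e-5)

warmup_steps, total_steps = 100, 50_000

def lr_lambda(step):
    if step < warmup_steps:                # linear warmup over 100 minibatches
        return step / max(1, warmup_steps)
    # linear decay to zero by the end of training
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

scheduler = LambdaLR(optimizer, lr_lambda)

# inside the training loop:
#   loss.backward()
#   torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # clip when norm > 1
#   optimizer.step(); scheduler.step(); optimizer.zero_grad()
```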
For ConceptNet, we use a maximum learning rate of 1e-5 and a warm-up period of 200 minibatches. The learning rate is decayed linearly until the end of training, which lasts for 100k minibatches. All other hyperparameters are the same as for training on the Atomic corpus.
We train the ConceptNet baseline with a learning rate of 1e-4 for 100k minibatches. Early stopping is used with the validation loss. Similarly to Saito et al. (2018), we use 200-dimension hidden states and 200-dimensional word embeddings. We use a single-layer bidirectional LSTM Hochreiter and Schmidhuber (1997) to encode the first phrase and a single-layer unidirectional LSTM to decode the target phrase. Relation embeddings are concatenated with the word embeddings of the decoder before being input to the decoder LSTM. We set the dropout rate to 0.2 before the output projection layer and after the word embedding layers. We outline the following differences between our re-implementation of the model of Saito et al. (2018) and their original implementation and the reason for the change.
We use GloVe Pennington et al. (2014) embeddings rather than fastText embeddings Bojanowski et al. (2017) to initialize word embeddings. Because the model used 200-dimensional word embeddings, we could not use the pretrained embeddings provided by the fastText group (https://fasttext.cc/). In Saito et al. (2018), the authors described training their fastText embeddings on Wikipedia. With no reference to the precise corpus used, we opted to use GloVe embeddings to initialize the word embeddings of the encoder and decoder instead.
We use the Adam optimizer with a learning rate of 0.0001, rather than SGD with a learning rate of 1.0, because after training both models, we found that the Adam-trained model performed better on development set perplexity. We also do not use weight decay, as this seemed to lower validation performance.
We do not train the generation model jointly with the completion model. We only train an individual generator. The results of Saito et al. (2018) did not show a significant difference in generation performance between the two on the ConceptNet dataset.
We train a second baseline (LSTM - $s$) that does not learn to produce relations in both directions (i.e., $sr \to o$ and $or \to s$). Instead, it only learns parameters that can produce relations in the forward direction ($sr \to o$).
We do not decay the learning rate because it was unclear from the original paper what the exact learning rate schedule was.
We used Amazon Mechanical Turk to get ratings of model output accuracy. We selected seed concepts and relations from the test set and generated completions using each model to create tuples. For Atomic, we selected tuples by choosing all possible relations (9) for each of 100 randomly selected seed concepts (900 total pairs) following the procedure from Sap et al. (2019). For ConceptNet, we used the full test set (1200 total pairs).
For Beam-2/5/10 and top-5/10 sampling generations, we used the model to generate 2, 5, or 10 (respectively) possible completions ($o$) per $(s, r)$ pair. Workers were shown the full set of candidates and asked to select all of the completions $o$ that are valid for the pair. Each set of tuples was rated by 5 workers.
For greedy sampling generations, we used the model to generate one possible completion ($o$) per $(s, r)$ pair. Workers were shown the completed tuple and asked whether it is valid or not. Each tuple was rated by 5 workers.
We measure accuracy as the percentage of distinct worker responses that mark the tuple as valid.
Additional examples can be seen in Figures 5, 6, and 7 that are produced using the demo at https://mosaickg.apps.allenai.org.
In addition to the more naive setups for knowledge graph completion, we explore various multi-task and hierarchical learning setups on top of the taxonomy of commonsense relations given by Sap et al. (2019), which groups relations together along various axes (e.g., related to agent/theme, related to causes/effects, etc.).
For the Atomic corpus, we experiment with multiple multi-task training setups, similar to Sap et al. (2019). First, we train an individual model for each relation type (oReact, oEffect, etc.), which we denote as ℂ𝕆𝕄𝔼𝕋 - 9LM in Table 9. We also experiment with various information-sharing dataset configurations that organize different relations across common dimensions. We outline these dimensions and the makeup of each split in Table 9. For ConceptNet, all models are always trained on all relation types jointly. Results on automatic evaluation metrics are provided in Table 11. Because there did not seem to be significant differences between these performances and that of ℂ𝕆𝕄𝔼𝕋 - Full, we did not run additional experiments on these ablations.
Leveraging the prior knowledge that certain relation types in the Atomic knowledge graph are linked to each other, we explore providing these group identities as additional tokens in the relation. For example, when generating the completion of a xReact relation, the model would receive as input the following meta-tokens: <xReact>, <X>, <Post>, <Involuntary> – thereby providing common context with other relations that are part of the same groupings (e.g., generating a phrase for a xWant relation would receive the <X> and <Post> tokens as input, but not <Involuntary>). Depending on the relation for a particular training example (e.g., xReact), a set of meta-tokens is appended to the relation tokens, providing hierarchical relational information and allowing the model to share information across relation types. We provide a more in-depth description of the category hierarchy training combinations in Table 10. Results on human evaluation metrics are provided in Table 12. Because the model with the hierarchical meta-tokens performed worse than the regular ℂ𝕆𝕄𝔼𝕋, we did not run additional experiments on this ablation.
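A sketch of how these meta-tokens might be attached to a relation’s input tokens, with group memberships taken from Table 10 (helper names are illustrative):

```python
# Group memberships from Table 10 (hierarchy meta-tokens).
META_TOKEN_GROUPS = {
    "<X>":           {"xAttr", "xEffect", "xIntent", "xNeed", "xReact", "xWant"},
    "<Y>":           {"oEffect", "oReact", "oWant"},
    "<Pre>":         {"xIntent", "xNeed"},
    "<Post>":        {"oEffect", "oReact", "oWant", "xEffect", "xReact", "xWant"},
    "<Voluntary>":   {"oWant", "xIntent", "xNeed", "xWant"},
    "<Involuntary>": {"oEffect", "oReact", "xAttr", "xEffect", "xReact"},
}

def relation_with_meta_tokens(relation: str) -> list:
    """Return the relation token followed by every hierarchy meta-token whose
    group contains it, e.g. xReact -> ['<xReact>', '<X>', '<Post>', '<Involuntary>']."""
    metas = [tok for tok, group in META_TOKEN_GROUPS.items() if relation in group]
    return [f"<{relation}>"] + metas

print(relation_with_meta_tokens("xReact"))
```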
Relation | Description | Example completions for the event “Person X puts Person X’s trust in Person Y”
---|---|---
oEffect | The effect the event has on others besides Person X | is considered trustworthy; is believed; gains Person X’s loyalty
oReact | The reaction of others besides Person X to the event | trusted; honored; trustworthy
oWant | What others besides Person X may want to do after the event | work with Person X; partner with Person X; to help Person X
xAttr | How Person X might be described given their part in the event | faithful; hopeful; trusting
xEffect | The effect that the event would have on Person X | gets relieved; stays faithful; Is betrayed
xIntent | The reason why X would cause the event | to be trusting; his or her help/guidance/advice; to be friends
xNeed | What Person X might need to do before the event | to be friends with Person Y; to have heard a lot of good things about Person Y; to get to know Person Y
xReact | The reaction that Person X would have to the event | trusting; safe, not alone; understood
xWant | What Person X may want to do after the event | to rely on Person Y; to go into business with Person Y; to make sure that their heart feeling is right
Organization | Description | Relations
---|---|---
Person X/Y | The training set is split into relations for the subjects of the event (Person X) and relations for other participants in the event | {xAttr, xEffect, xIntent, xNeed, xReact, xWant} and {oEffect, oReact, oWant}
Pre/Post | Event preconditions are jointly trained (i.e., intentions, needs). Event postconditions are jointly trained. | {xIntent, xNeed} and {oEffect, oReact, oWant, xEffect, xReact, xWant}
(In)Volun | Involuntary relations are trained jointly, such as reactions and effects. Voluntary relations are trained jointly, such as needs, wants, and intents. | {oWant, xIntent, xNeed, xWant} and {oEffect, oReact, xAttr, xEffect, xReact}
Full | The training set is made up of all relations and the model is trained jointly on all of them | {oEffect, oReact, oWant, xAttr, xEffect, xIntent, xNeed, xReact, xWant}
Meta-Token | Description | Relations |
---|---|---|
<X> | Appended to relations that describe an attribute of Person X | xAttr, xEffect, xIntent, xNeed, xReact, xWant |
<Y> | Appended to relations that describe an attribute of a participant that is not Person X | oEffect, oReact, oWant
<Pre> | Appended to relations that correspond to pre-conditions of the event | xIntent, xNeed |
<Post> | Appended to relations that correspond to post-conditions of the event | oEffect, oReact, oWant, xEffect, xReact, xWant |
<Voluntary> | Appended to relations that correspond to voluntary dimensions of the situation | oWant, xIntent, xNeed, xWant |
<Involuntary> | Appended to relations that correspond to involuntary dimensions of the situation | oEffect, oReact, xAttr, xEffect, xReact |
Model | PPL | BLEU-2 | N/T sro | N/T o | N/U o
---|---|---|---|---|---|
ℂ𝕆𝕄𝔼𝕋- 9LM | 11.72 | 14.89 | 100.00 | 9.45 | 49.89 |
ℂ𝕆𝕄𝔼𝕋- (In)Volun | 11.38 | 14.99 | 100.00 | 8.60 | 48.36 |
ℂ𝕆𝕄𝔼𝕋- PersonX/Y | 11.30 | 15.21 | 100.00 | 9.12 | 49.59 |
ℂ𝕆𝕄𝔼𝕋- Pre/Post | 11.35 | 14.88 | 100.00 | 9.86 | 51.86 |
ℂ𝕆𝕄𝔼𝕋- Full (- pretrain) | 15.42 | 13.88 | 100.00 | 7.25 | 45.71 |
ℂ𝕆𝕄𝔼𝕋- Full | 11.14 | 15.10 | 100.00 | 9.71 | 51.20 |
ℂ𝕆𝕄𝔼𝕋- Full (+ hierarchy meta-tokens) | 10.98 | 15.27 | 100.00 | 10.03 | 51.97 |
Model | oEffect | oReact | oWant | xAttr | xEffect | xIntent | xNeed | xReact | xWant | Total
---|---|---|---|---|---|---|---|---|---|---
---|---|---|---|---|---|---|---|---|---|---|
ℂ𝕆𝕄𝔼𝕋 | 29.02 | 37.68 | 44.48 | 57.48 | 55.50 | 68.32 | 64.24 | 76.18 | 75.16 | 56.45 |
ℂ𝕆𝕄𝔼𝕋 (+ hierarchy meta-tokens) | 28.46 | 38.96 | 43.64 | 51.90 | 50.84 | 63.00 | 63.98 | 66.20 | 75.82 | 53.64 |