The Winograd Schema Challenge (WSC) was proposed as an AI-complete problem for testing computers' intelligence in common sense reasoning (Levesque et al., 2012; Morgenstern and Ortiz, 2015; Marcus et al., 2016). WSC elegantly embeds common sense reasoning into a simple form: a binary classification on coreference resolution. A typical WSC example was proposed in Winograd (1972): for the sentence "The city councilmen refused the demonstrators a permit because they feared violence." and the corresponding question "Who does the word they refer to?", a computer is expected to find the answer, i.e., the city councilmen and not the demonstrators.
While answering WSC questions is usually very simple for humans, it presents a great challenge for machines. Recently, there have been many different efforts attempting to solve the problem (see Section 2 for a brief survey).
In this paper, we present the new state-of-the-art model for WSC, achieving an accuracy of 71.1%. We demonstrate that the leading result benefits from jointly introducing sentence structures into modelling, utilizing external knowledge learned from cutting-edge pretraining, and fine-tuning on the Rahman-Ng dataset (Rahman and Ng, 2012).
In addition, we conduct detailed analyses. We observe that fine-tuning is critical for achieving this performance, but it helps more on solving the simpler associative problems (Trichelair et al., 2018). Modelling sentence dependency structures, however, consistently helps on the harder non-associative subset of WSC. The analysis also shows that larger fine-tuning datasets yield better performance, suggesting the potential benefit of future work on annotating more Winograd schema sentences, which may help further complement and leverage the pretrained models.
2 Related Work
Some previous efforts on resolving the Winograd Schema problem relied heavily on annotated knowledge, hand-crafted features, and/or rule-based reasoning (Peng et al., 2015; Bailey et al., 2015; Schüller, 2014; Liu et al., 2017b,a). In particular, Rahman and Ng (2012) employed human annotators to build more supervised training data, and their models utilized nearly 70 thousand hand-crafted features, including features derived from queries to the Google Search API.
More recently, Sharma et al. (2015) utilized a semantic parser on the WSC sentences, queried texts through Google Search, and reasoned over the graph produced by the parser. Emami et al. (2018) showed better performance along the same direction. Schüller (2014) formalized a knowledge-graph data structure and a reasoning process based on cognitive linguistics theories. Bailey et al. (2015) introduced a framework for reasoning using expensive annotated knowledge bases as axioms, while Liu et al. (2017b) incorporated several knowledge bases into the training process of skip-gram word embeddings, and the resulting knowledge-enhanced embeddings were used to better score the candidates.
Unlike the above work, we first investigate the effectiveness of cutting-edge pretrained models. We show that they do help achieve state-of-the-art performance, but we observe that they help more on the simpler associative problems. We propose to incorporate sentence structures into the pretraining and fine-tuning framework, which consistently helps on the harder non-associative subset of WSC.
There is considerable literature on unsupervised pretraining, and the typical models proposed in the most recent years include GPT, ELMo, and BERT (Radford et al., 2018; Peters et al., 2018; Devlin et al., 2018), which have achieved impressive performance on a wide variety of tasks. Among these models, we choose BERT (Devlin et al., 2018), which is among the state of the art.
3 Understanding the Roles of Pretraining and Fine-Tuning for WSC
While unsupervised models pretrained on large text corpora have recently achieved impressive performance on various NLP tasks, it remains a fundamental question how such pretrained models learn common sense that helps solve hard common sense reasoning problems. For simple inference problems, BERT has achieved performance comparable to that of humans, e.g., on the recently published SWAG dataset (Zellers et al., 2018). This invites further investigation on the harder WSC problems.
While BERT has been tested on the GLUE benchmark (Wang et al., 2018), there have been no conclusive results on WSC due to data splitting issues (Devlin et al., 2018). This paper is the first to perform a detailed study of the state-of-the-art pretraining-finetuning framework for WSC.
WSC as Next Sentence Prediction
It is reasonable to utilize BERT to learn and encode, in an unsupervised manner, the common sense knowledge that exists in large text corpora. BERT is trained with two objectives, i.e., masked language modelling (LM) and next-sentence prediction (refer to Devlin et al. (2018) for details). To solve the WSC problems, we formulate the WSC sentences in the next-sentence prediction framework; a specific example is presented as follows:
Original WSC sentence: The trophy doesn’t fit into the brown suitcase because it is too large.
Candidate sentence 1: [CLS] The trophy doesn’t fit into the brown suitcase because [SEP] the trophy is too large . [SEP]
Candidate sentence 2: [CLS] The trophy doesn’t fit into the brown suitcase because [SEP] the brown suitcase is too large. [SEP]
In this formulation, we replace the pronoun "it" in the original WSC sentence with the two candidate entities "the trophy" and "the brown suitcase", respectively, to derive two candidate sentences, where a special classification token [CLS] and delimiter tokens [SEP] are inserted following Devlin et al. (2018).
Then we score these two candidate sentences using BERT's pretrained next-sentence prediction. The substitution that results in the more probable candidate sentence is taken as the correct answer.
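To make the scoring step concrete, below is a minimal sketch of how the two candidate sentences could be ranked with a pretrained next-sentence-prediction head. It uses the current Hugging Face transformers API rather than the pytorch-pretrained-BERT package used in our experiments, so the specific classes and the probability convention are assumptions for illustration.

```python
# Sketch only: rank WSC candidate sentences with BERT's next-sentence prediction.
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-large-uncased")
model.eval()

first_part = "The trophy doesn't fit into the brown suitcase because"
candidates = ["the trophy is too large.", "the brown suitcase is too large."]

def nsp_probability(first: str, second: str) -> float:
    """Probability that `second` is a plausible continuation of `first`."""
    inputs = tokenizer(first, second, return_tensors="pt")  # adds [CLS]/[SEP]
    with torch.no_grad():
        logits = model(**inputs).logits  # shape (1, 2); index 0 = "is next sentence"
    return torch.softmax(logits, dim=-1)[0, 0].item()

scores = [nsp_probability(first_part, c) for c in candidates]
answer = candidates[scores.index(max(scores))]  # the more probable substitution wins
print(answer)
```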
Fine-tuning BERT for WSC
As we will discuss later, we find that fine-tuning is critical for WSC. The original WSC dataset has only 273 sentences, and we did not observe any performance gain when using these 273 sentences to fine-tune BERT with 10-fold cross validation. It is therefore interesting to understand whether more Winograd schema sentences would benefit the framework. Fortunately, Rahman and Ng (2012) gathered a dataset consisting of sentences with pronoun resolution problems (available at http://www.hlt.utdallas.edu/~vince/data/emnlp12/). To ensure that there are no overlapping sentences between this dataset and the WSC dataset, we manually inspected the data and removed the sentences that overlap with the WSC set. The remaining sentences, referred to as the Rahman-Ng dataset in the remainder of this paper, are used to fine-tune the pretrained BERT models to develop our state-of-the-art models and to analyze how pretraining and fine-tuning help WSC.
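The overlap removal above was done manually; as a rough illustration, an automated check might normalize the sentences and filter exact matches, along the lines of the hypothetical sketch below (the normalization rule is an assumption, not the procedure we used).

```python
import re

def normalize(sentence: str) -> str:
    """Lowercase and strip punctuation so trivially different copies still match."""
    return re.sub(r"[^a-z0-9 ]", "", sentence.lower()).strip()

def remove_overlaps(rahman_ng_sentences, wsc_sentences):
    """Drop Rahman-Ng sentences that also appear (after normalization) in WSC."""
    wsc_normalized = {normalize(s) for s in wsc_sentences}
    return [s for s in rahman_ng_sentences if normalize(s) not in wsc_normalized]
```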
4 Modelling Sentence Structures
In addition to investigating common sense knowledge learned with pretraining and encoded in the Transformer-based structures, we believe carefully modelling the WSC sentences themselves is important, as humans rely heavily on sentence syntax and semantics to solve the WSC problems. Unlike previous models (Liu et al., 2017b; Sharma et al., 2015), in this paper we explore modelling sentence structures together with the pretraining-finetuning framework, to leverage and combine their complementary strengths.
BERT is mainly built on deep-stacked bidirectional Transformers (Vaswani et al., 2017). A typical Transformer works purely on the attention mechanism and has no explicit notion of word order beyond marking each word with its absolute position embedding. It has been found that RNNs are superior to the full attention network, i.e., the Transformer, at modelling some hierarchical structures in sentences (Tran et al., 2018). In this paper we explicitly incorporate dependency structure into BERT. For many tasks that are sensitive to sentence structures and for which relatively small training data are provided, explicit utilization of sentence structures often helps. We wonder whether WSC can benefit from both sentence structure and pre-learned knowledge jointly, and if so, how.
Adding Dependency into Transformers
Transformers in BERT consist of multiple layers (Vaswani et al., 2017), among which the multi-head self-attention layer serves as the most important component. The self-attention layer of the Transformer is presented as follows; a detailed discussion can be found in Vaswani et al. (2017).
For input hidden states $H$ corresponding to the tokens in the sequence, which can be the output of the previous Transformer layer or the embedding layer, one self-attention head, i.e., $\mathrm{head}_i$, derives its output as follows:

$$Q = H W_i^Q, \qquad K = H W_i^K, \qquad V = H W_i^V$$

where $Q$, $K$, $V$ represent the query, key, and value matrices respectively, and $W_i^Q$, $W_i^K$, $W_i^V$ are three projection matrices.

Then a dot-product attention and softmax-weighted sum is applied to $Q$, $K$, $V$ to derive the output $\mathrm{head}_i$:

$$\mathrm{head}_i = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V$$

where $\sqrt{d_k}$ is the scaling factor. Finally, the outputs from multiple heads are concatenated to form the output of the self-attention layer.
In the original Transformers in BERT, as illustrated above, each word attends to all words in the sequence during self-attention. To add explicit structural constraints, we propose to combine dependency structures with the self-attention in the Transformer.
We propose a Dependency Mask technique to constrain a word to attend only to its head word, child words, and itself. As shown in Figure 1, for the sentence "A cat sits on the desk.", we first derive its dependency tree and then convert it to a simpler dependency mask matrix, denoted as $M$ here. Each row of $M$ represents the relation between the corresponding word and all words in the sentence: the positions indicating its head word, child words, and itself are set to $1$, and all other positions are set to $0$. The dependency mask is then added to the softmax-weighted sum as a standard additive attention mask:

$$\mathrm{head}_i = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}} + M'\right) V, \qquad M'_{jk} = \begin{cases} 0 & \text{if } M_{jk} = 1 \\ -\infty & \text{if } M_{jk} = 0 \end{cases}$$

so that the positions disallowed by $M$ receive zero attention weight.
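For clarity, the following sketch shows how such a dependency mask can be applied inside a single scaled dot-product attention head; the masked_fill trick with a large negative value is the standard way to realize the additive mask above, though the function and variable names here are illustrative rather than our exact implementation.

```python
import math
import torch
import torch.nn.functional as F

def dependency_masked_head(H, W_q, W_k, W_v, M):
    """One self-attention head constrained by a dependency mask.

    H:             (seq_len, d_model) input hidden states
    W_q, W_k, W_v: (d_model, d_k) projection matrices
    M:             (seq_len, seq_len) binary mask; M[i, j] = 1 iff word j is
                   word i's head, one of its children, or word i itself
    """
    Q, K, V = H @ W_q, H @ W_k, H @ W_v
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    # Disallowed positions get -inf so their softmax weight becomes zero.
    scores = scores.masked_fill(M == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ V
```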
We explore two approaches to adding the dependency structures into BERT: the inside and the outside approaches.
Inside: In the inside approach, the dependency mask is added to Transformer layers inside BERT: we mask the attention weights in the first, middle, or last Transformer layers.
Outside: While BERT is powerful, it may be very sensitive to modifications made only in the fine-tuning phase (but not in pretraining), particularly modifications made directly to its internal layers. Motivated by that, we also propose the outside incorporation. Specifically, we define $n$ layers of new Transformers, denoted here as TransformerRNN-$n$, on top of the last Transformer layer of the original BERT model. These new layers share the same set of parameters, and TransformerRNN-$n$ works in a recurrent manner, motivated by the universal Transformer (Dehghani et al., 2018). With this, the dependency mask is added to all layers in TransformerRNN-$n$, as sketched below.
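A rough sketch of the outside approach: a single parameter-shared Transformer encoder layer applied recurrently on top of BERT's final hidden states, with the dependency mask constraining attention at every step. The layer configuration and step count are illustrative assumptions, and PyTorch's built-in encoder layer stands in for the actual TransformerRNN implementation.

```python
import torch
import torch.nn as nn

class TransformerRNN(nn.Module):
    """Outside incorporation (sketch): one shared Transformer layer unrolled
    `n_steps` times over BERT's last hidden states, masked by dependencies."""

    def __init__(self, d_model=1024, n_heads=16, n_steps=2):
        super().__init__()
        # A single layer whose parameters are reused at every recurrent step.
        self.shared_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.n_steps = n_steps

    def forward(self, bert_hidden, dep_mask):
        # PyTorch expects True at positions that must NOT be attended to,
        # so invert the binary dependency mask (1 = allowed).
        attn_mask = dep_mask == 0
        x = bert_hidden
        for _ in range(self.n_steps):
            x = self.shared_layer(x, src_mask=attn_mask)
        return x
```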
5 Evaluation Protocols
Due to the difficulty of acquiring high-quality samples for common-sense reasoning under the Winograd Schema settings, the WSC dataset comprises only 273 test instances (12 new sentences have recently been added). The previous state-of-the-art single-model performance on the full WSC set is around 55% (Trinh and Le, 2018; Emami et al., 2018). Indeed, it has been found that there is more than a 1-in-3 chance of scoring above 55% accuracy with a set of random classifiers (Trichelair et al., 2018), so achieving above-random accuracy on the WSC does not necessarily correspond to success on common sense reasoning. In particular, Trichelair et al. (2018) proposed two new evaluation protocols to alleviate these evaluation problems by subdividing and augmenting the WSC data according to two properties: associativity and switchability.
In this paper we use these protocols to help probe the roles of the pretraining-finetuning framework and our proposed models in WSC.
As one of the design principles (Levesque et al., 2012), WSC sentences should not be resolvable via simple statistics that associate a candidate antecedent with certain components of the sentence. For example, following the formulation above, for the statement "The lions ate the zebras because they are predators" (Rahman and Ng, 2012), we have the two candidate sentences "The lions ate the zebras because lions are predators" and "The lions ate the zebras because zebras are predators". Such sentences can be resolved based on the much stronger association/collocation of lions with predators than of zebras. Trichelair et al. (2018) released a dataset in which the sentences in WSC are manually annotated as either associative or non-associative. Table 1 lists some examples and the sizes of these two subsets.
Table 2: Accuracy of different models on the full WSC set, the associative and non-associative subsets, and the switchable subset (unswitched, switched, and consistent accuracy).

| Model | Full WSC | Assoc. | Non-assoc. | Unswitched | Switched | Consistent |
|---|---|---|---|---|---|---|
| Previous state-of-the-art models | | | | | | |
| Ensemble 14 LMs | 63.7% | 83.8% | 60.6% | 63.4% | 53.4% | 28.0% |
| BERT without fine-tuning | | | | | | |
| BERT-base + dependency | 52.7% | 59.5% | 51.7% | 51.9% | 54.2% | 12.2% |
| BERT-large + dependency | 52.7% | 48.6% | 53.4% | 54.2% | 56.5% | 25.2% |
| BERT with fine-tuning | | | | | | |
| BERT-base + dependency | 67.4% | 78.4% | 65.7% | 66.4% | 61.8% | 53.4% |
| BERT-large + dependency | 71.1% | 81.1% | 69.5% | 74.1% | 72.5% | 66.4% |
A switchable sentence in WSC (Trichelair et al., 2018) is one for which switching the two antecedents neither obscures the sentence nor affects the rationale for making the resolution decision. A typical example from the WSC is as follows:
Original sentence: Paul tried to call George on the phone, but [Paul/George] wasn’t successful.
Switched sentence: George tried to call Paul on the phone, but [Paul/George] wasn’t successful.
When switching the antecedents Paul and George, the correct answer changes from Paul to George as well. A system that correctly resolves both the original and the switched sentence has arguably learned the reasoning better than a system that is confused by the switch. The switchable subset contains 131 instances, which accounts for 47% of the original WSC set (Trichelair et al., 2018). For the evaluation, we report not only the accuracy on the original (unswitched) and switched sentences in the switchable subset, but also the consistent accuracy. Our consistent accuracy is computed as the number of correctly answered pairs (i.e., WSC sentences answered correctly both before and after a switch) divided by the total number of switchable sentences (i.e., 131), to reflect the absolute differences in the systems' performance on the switchable subset.
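Concretely, the consistent accuracy described above can be computed from paired predictions as in the small helper below (variable names are illustrative).

```python
def consistent_accuracy(orig_preds, switched_preds, orig_labels, switched_labels):
    """Fraction of switchable pairs answered correctly both before and after
    switching the antecedents; the denominator is the number of pairs (131)."""
    correct_pairs = sum(
        p == y and ps == ys
        for p, ps, y, ys in zip(orig_preds, switched_preds, orig_labels, switched_labels)
    )
    return correct_pairs / len(orig_labels)
```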
6 Experiment Results and Analysis
6.1 Baselines & Training Details
We use two state-of-the-art systems as our baselines:
Pretrained LMs: These are specially trained and ensembled language models (LMs) (Trinh and Le, 2018), in which the language models are used to score the two sentences obtained by replacing the pronoun with the two candidate entities; the sentence assigned a higher probability is chosen as the answer. In our experiments we use the best single LM and the ensemble of 14 LMs from Trinh and Le (2018).
Knowledge Hunter: A rule-based knowledge-hunting system (Emami et al., 2018) that retrieves relevant texts and reasons over them to resolve the pronoun.
All sentences in the Rahman-Ng dataset are used as the training set to fine-tune the pretrained BERT. Dependency parsing of the sentences is conducted with the spaCy toolkit (https://spacy.io/usage/linguistic-features#section-dependency-parse). Concerning the final dependency mask matrix fed into BERT, for each subword produced by tokenization we simply duplicate its original word's dependency relation vector as its own, and for the special tokens [CLS] and [SEP] we set all elements of their corresponding rows in $M$ to 1, so that they are not constrained by the dependency mask.
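The following sketch illustrates how the word-level dependency mask can be built with spaCy and then expanded to BERT's subword positions; the `word_ids` alignment (mapping each subword to its source word, with None for [CLS]/[SEP]) is an assumption, e.g. as provided by a fast tokenizer.

```python
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")

def word_level_mask(sentence: str) -> np.ndarray:
    """M[i, j] = 1 iff word j is word i's head, one of its children, or i itself."""
    doc = nlp(sentence)
    M = np.zeros((len(doc), len(doc)), dtype=int)
    for tok in doc:
        M[tok.i, tok.i] = 1               # itself
        M[tok.i, tok.head.i] = 1          # head word
        for child in tok.children:        # child words
            M[tok.i, child.i] = 1
    return M

def expand_to_subwords(M: np.ndarray, word_ids) -> np.ndarray:
    """Each subword inherits its source word's row/column; [CLS] and [SEP]
    (word id None) are left unconstrained (all ones)."""
    n = len(word_ids)
    out = np.ones((n, n), dtype=int)
    for i, wi in enumerate(word_ids):
        for j, wj in enumerate(word_ids):
            if wi is not None and wj is not None:
                out[i, j] = M[wi, wj]
    return out
```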
For the hyperparameters during the fine-tuning process, we mostly use the default settings for the learning rate, maximum sequence length, maximum number of training epochs, and dropout rate, with the batch size and warmup rate set separately for BERT-base and BERT-large. We use the PyTorch implementation of BERT with the pretrained model files bert-base-uncased and bert-large-uncased provided by Google (https://github.com/huggingface/pytorch-pretrained-BERT#Fine-tuning-with-BERT-running-the-examples).
6.2 Overall Performance
Table 2 shows the overall performance of different models. The results of the baselines (Single LM, Ensemble 14 LMs, and Knowledge Hunter) are copied from Trichelair et al. (2018), and the consistent accuracies are computed from the consistency scores reported there. Together with these baselines, we present the performance of different models with and without fine-tuning, across the different evaluation protocols and subsets. Note that BERT-base and BERT-large are pretrained on the same corpus but with different model sizes: BERT-base has 110M parameters and BERT-large has 340M. (Note that when we submitted this paper, GPT-2 (https://blog.openai.com/better-language-models/) had just been posted; it is pretrained on much larger text corpora and has many more parameters than BERT-large, but the corresponding code and model had not been fully released.)
We can see that our proposed model, which leverages dependency structures within the fine-tuned BERT-large framework, achieves the best performance on all metrics and across different evaluation protocols. In particular, it achieves the new state-of-the-art accuracy of 71.1% on the full WSC dataset. A detailed analysis of adding dependency information is presented later in this section.
We also observe that the pretraining framework (BERT-large) with proper fine-tuning (using the Rahman-Ng dataset) achieves an accuracy of 68.1%, which already outperforms all previous (baseline) models. Fine-tuning plays a critical role in achieving this performance, and it helps significantly more on the associative subset. We also see that incorporating dependency structures consistently helps on the harder non-associative subset.
Effects of Fine-Tuning
We find that all BERT models without fine-tuning perform worse than all previous state-of-the-art models, but show a significant accuracy improvement on the full WSC set after fine-tuning. This shows that purely pretrained BERT models are not competent on WSC by themselves; fine-tuning, which combines knowledge from the Winograd-style annotation with knowledge from the pretrained models, is critical.
Effects of Adding Dependency
Both the fine-tuned BERT-base and BERT-large models benefit from leveraging dependency structures, achieving a further accuracy improvement on the full WSC set, which shows the effectiveness of incorporating sentence structures into BERT for the WSC problem.
Effects of Pretrained Model Sizes
As BERT-large has a larger model size than BERT-base, it can potentially encode more knowledge during the pretraining process, and its advantages have been shown in many recent research efforts. Specifically for WSC, Table 2 shows that the fine-tuned BERT-large models significantly outperform the corresponding fine-tuned BERT-base models across all metrics and evaluation protocols.
Associative vs. Non-Associative
As discussed before, the associative WSC sentences are likely to be resolved by utilizing statistics such as co-occurrence found in large text corpora, whereas the non-associative WSC sentences are much more challenging.
From the results of the BERT models in Table 2, and as highlighted above, the associative sentences benefit more from the fine-tuning strategy than the non-associative sentences. We attribute this to the capability of the pretraining and fine-tuning framework to capture simple statistics, e.g., to judge that "lions are predators" is more plausible than "zebras are predators", as in the example discussed earlier. However, such statistics seem to be less effective for solving the harder non-associative problems.
With the associative knowledge effectively captured by the pretraining-finetuning mechanism, dependency knowledge can consistently provide further help in solving the harder non-associative problems.
Switchable & Consistency
For the results on the switchable subset of WSC in Table 2, we observe that all models suffer a significant drop in consistent accuracy compared to their unswitched accuracy. Among them, the fine-tuned BERT-large models and Knowledge Hunter are the most stable on the switchable subset, with the smallest decrease from the unswitched accuracy to the consistent accuracy, which demonstrates that the BERT-large models not only outperform all other models but also have robustness comparable to the rule-based Knowledge Hunter.
Table 4: WSC accuracy when the dependency mask is applied to different Transformer layers (inside approach), after fine-tuning on the Rahman-Ng dataset.

| Model | WSC accuracy |
|---|---|
| BERT-base (fine-tuned) | |
| + mask first 5 layers (0-4) | 63.7% |
| + mask middle 5 layers (3-7) | 67.0% |
| + mask last 5 layers (7-11) | 67.4% |
| + mask all 12 layers (0-11) | 63.7% |
| BERT-large (fine-tuned) | |
| + mask first 5 layers (0-4) | 67.8% |
| + mask middle 5 layers (10-14) | 65.6% |
| + mask last 5 layers (19-23) | 71.1% |
| + mask all 24 layers (0-23) | 66.3% |
6.3 Detailed Analysis
Adding Dependency into BERT
As discussed in Section 4, we explore two approaches to incorporating the dependency structure into the pretrained models, i.e., the inside and the outside modelling. Specifically, for the inside method, we show the performance of different ways to add the dependency information: masking the first 5, middle 5, or last 5 layers, or masking all Transformer layers. For the outside approach, we also show results of TransformerRNN-$n$ with different numbers of recurrent layers $n$.
We fine-tune the BERT models on the Rahman-Ng dataset; the results are presented in Table 4. For the inside method, we can see that both BERT-base and BERT-large achieve their best accuracy when masking the last 5 layers, and all other masking settings except masking the middle 5 layers do not improve the performance for either BERT-base or BERT-large. For the outside approach, only one TransformerRNN-$n$ setting benefits BERT-base's performance on WSC, and it is still inferior to the best inside setting. We do not experiment with the outside method on the BERT-large model due to its poor performance on BERT-base and the much larger model size of BERT-large.
The Effects of Fine-Tuning Data Sizes
Our experiments have shown that pretrained BERT models can achieve state-of-the-art performance on WSC when fine-tuned on a relatively small dataset, i.e., the Rahman-Ng dataset. A natural question is how the size of the fine-tuning data affects performance.
We randomly selected different percentages of data from the Rahman-Ng dataset to fine-tune our best "BERT-large + dependency" model and tested the resulting models on the WSC dataset. The results are presented in Table 3, which shows that a larger fine-tuning dataset yields better performance, suggesting the potential benefit of future work on annotating more Winograd schema sentences.
7 Conclusions and Discussion
We report the new state-of-the-art performance, a 71.1% accuracy, on the Winograd Schema Challenge (WSC). The proposed state-of-the-art solver for WSC benefits from jointly modelling sentence structures, utilizing knowledge learned from cutting-edge pretraining models, and performing proper fine-tuning. We conduct detailed analyses, showing that fine-tuning is critical for achieving the performance, but it helps more on the simpler associative problems. Modelling sentence dependency structures, however, consistently helps on the harder non-associative subset of WSC. Analysis also shows that larger fine-tuning datasets yield better performance, suggesting the potential benefit of future work on annotating more Winograd schema sentences. Although this work focuses on exploring distributed representation for WSC, caution should certainly be taken as to whether that by itself will lead to a final solution.
- Bahdanau et al.  Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
- Bailey et al.  Daniel Bailey, Amelia J Harrison, Yuliya Lierler, Vladimir Lifschitz, and Julian Michael. The winograd schema challenge and reasoning about correlation. In 2015 AAAI Spring Symposium Series, 2015.
- Dehghani et al.  Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. Universal transformers. arXiv preprint arXiv:1807.03819, 2018.
- Devlin et al.  Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
- Emami et al.  Ali Emami, Noelia De La Cruz, Adam Trischler, Kaheer Suleman, and Jackie Chi Kit Cheung. A knowledge hunting framework for common sense reasoning. arXiv preprint arXiv:1810.01375, 2018.
- Krizhevsky et al.  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
- Lample et al.  Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043, 2017.
- Levesque et al.  Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2012.
- Liu et al. [2017a] Quan Liu, Hui Jiang, Andrew Evdokimov, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, and Yu Hu. Cause-effect knowledge acquisition and neural association model for solving a set of winograd schema problems. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 2344–2350. AAAI Press, 2017.
- Liu et al. [2017b] Quan Liu, Hui Jiang, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, and Yu Hu. Combing context and commonsense knowledge through neural networks for solving winograd schema problems. In 2017 AAAI Spring Symposium Series, 2017.
- Marcus et al.  Gary Marcus, Francesca Rossi, and Manuela Veloso. Beyond the turing test. Ai Magazine, 37(1):3–4, 2016.
- Mikolov et al.  Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013.
- Mnih et al.  Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
- Morgenstern and Ortiz  Leora Morgenstern and Charles Ortiz. The winograd schema challenge: evaluating progress in commonsense reasoning. In Twenty-Seventh IAAI Conference, 2015.
- Peng et al.  Haoruo Peng, Daniel Khashabi, and Dan Roth. Solving hard coreference problems. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 809–819, 2015.
- Peters et al.  Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237, 2018.
- Radford et al.  Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai-assets/research-covers/languageunsupervised/language understanding paper. pdf, 2018.
- Rahman and Ng  Altaf Rahman and Vincent Ng. Resolving complex cases of definite pronouns: the winograd schema challenge. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 777–789. Association for Computational Linguistics, 2012.
- Schüller  Peter Schüller. Tackling winograd schemas by formalizing relevance theory in knowledge graphs. In Fourteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2014.
- Sharma et al.  Arpit Sharma, Nguyen H Vo, Somak Aditya, and Chitta Baral. Towards addressing the winograd schema challenge—building and using a semantic parser and a knowledge hunting module. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
- Tran et al.  Ke Tran, Arianna Bisazza, and Christof Monz. The importance of being recurrent for modeling hierarchical structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4731–4736, 2018.
- Trichelair et al.  Paul Trichelair, Ali Emami, Jackie Chi Kit Cheung, Adam Trischler, Kaheer Suleman, and Fernando Diaz. On the evaluation of common-sense reasoning in natural language understanding. arXiv preprint arXiv:1811.01778, 2018.
- Trinh and Le  Trieu H Trinh and Quoc V Le. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847, 2018.
- Vaswani et al.  Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
- Wang et al.  Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018.
- Winograd  Terry Winograd. Understanding natural language. Cognitive psychology, 3(1):1–191, 1972.
- Zellers et al.  Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. Swag: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93–104, 2018.