
A Review of Winograd Schema Challenge Datasets and Approaches

The Winograd Schema Challenge is both a commonsense reasoning and natural language understanding challenge, introduced as an alternative to the Turing test. A Winograd schema is a pair of sentences differing in one or two words with a highly ambiguous pronoun, resolved differently in the two sentences, that appears to require commonsense knowledge to be resolved correctly. The examples were designed to be easily solvable by humans but difficult for machines, in principle requiring a deep understanding of the content of the text and the situation it describes. This paper reviews existing Winograd Schema Challenge benchmark datasets and approaches that have been published since its introduction.


1 Introduction

The Winograd Schema Challenge was introduced by Hector Levesque [Levesque et al.2012] both as an alternative to the Turing Test [Turing1950] and as a test of a system’s ability to do commonsense reasoning.

An example of a Winograd schema is the pair of sentences introduced by Terry Winograd [Winograd1972]:
The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.
Question: Who [feared/advocated] violence?
Answer: the city councilmen / the demonstrators

The word they refers to the city councilmen or the demonstrators, depending on whether the word feared or advocated is used in the sentence. To correctly identify the referent, a human would probably need to know a good deal about demonstrators, permits, city councilmen, and demonstrations.

Levesque’s insight was that one can construct many other pairs of sentences that appear to rely on commonsense reasoning and for which sentence structure does not help disambiguate the pronoun. He claimed that a system that could achieve human performance in solving such sentences would have the commonsense knowledge that humans use when doing such disambiguation. Such pairs of sentences have to be constructed with certain properties, including being identical except for one or two “special” words and not being solvable by selectional restrictions. An important constraint was that Winograd schemas be “Google-proof” or non-associative [Trichelair et al.2018], meaning that one cannot use statistical associations to disambiguate the pronouns. As we discuss below, this has proved to be the least achievable constraint, as indicated by the success of the statistical language models described in this survey.

The Winograd Schema Challenge was appealing because the task of pronoun disambiguation is easy and automatic for humans, the evaluation metrics are clear, and the twin-sentence design seemed to rule out structural techniques that could reach the right answer without commonsense reasoning. In the years following its publication, the challenge became a focal point of research for both the commonsense reasoning and natural language processing communities.

A great deal of progress has been made in the last year. In this paper, we review the nature of the test itself, its different benchmark datasets, and the different techniques that have been used to address them.

2 Winograd Schema Challenge Datasets

Several Winograd Schema Challenge datasets have been introduced; for the most part, they can be split into two categories: performance-measuring and diagnostic datasets.

2.1 Original Collection of Winograd Schemas

The first collection of Winograd schemas was published together with the introduction of the Winograd Schema Challenge [Levesque et al.2012] (https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html). The examples were constructed manually by AI experts, with the exact source of each example available. At the time of writing, 285 examples are available; however, the last 12 examples were only added recently. To ensure comparability with earlier models, several authors prefer to report performance on the first 273 examples only. These datasets are usually referred to as Wsc285 and Wsc273, respectively.

Subclasses of the original collection

Trichelair et al. [Trichelair et al.2018] observed that a subset of the sentences in the Wsc273 dataset (13.5%) are conceptually easier than the rest: the correct candidate is commonly associated with the rest of the sentence, while the incorrect candidate is not. An example of such a sentence is

In the storm, the tree fell down and crashed through the roof of my house. Now, I have to get it [repaired/removed].

The roof is commonly associated with being repaired, while the tree is not. Trichelair et al. call these examples associative and the rest non-associative. Moreover, they find that models often perform much better on the associative subset.

Additionally, some sentences of Wsc273 were found to form meaningful examples if the candidates in the sentence are switched. An example of such a sentence is

Bob collapsed on the sidewalk. Soon he saw Carl coming to help. He was very [ill/concerned].

In this sentence, Bob and Carl can be switched to obtain an equivalent example with the opposite answers. Such sentences were named switchable. Trichelair et al. [Trichelair et al.2018] encourage future researchers to additionally report consistency on the switchable subset, i.e., whether a model's predictions flip when the candidates are switched.

The list of associative and switchable examples, together with their switched counterparts, has been made public at https://github.com/ptrichel/How-Reasonable-are-Common-Sense-Reasoning-Tasks.
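To make the consistency criterion concrete, the following is a minimal sketch of how it can be computed; the `predict` interface and the field names are illustrative assumptions, not part of the released data.

```python
def consistency(predict, switchable_examples):
    """predict(sentence, candidates) -> index (0 or 1) of the chosen candidate.

    A model is consistent on a switchable example if swapping the two
    candidates in the sentence also flips its predicted answer.
    """
    consistent = 0
    for ex in switchable_examples:
        original = predict(ex["sentence"], ex["candidates"])
        # The switched sentence mentions the same two candidates in
        # reversed roles, so a consistent model must flip its choice.
        switched = predict(ex["switched_sentence"], ex["candidates"])
        consistent += original != switched
    return consistent / len(switchable_examples)
```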

Winograd Schema Challenge in other languages.

While the inspiration and original design of the challenge were in English, translations into other languages exist. Amsili and Seminck [Amsili and Seminck2017] translated the collection of Winograd schemas into French, and the original Winograd schemas were translated into Portuguese by Melo et al. [Melo et al.2020]. Additionally, translations into Japanese (http://arakilab.media.eng.hokudai.ac.jp/~kabura/collection_katakana.html) and Chinese (https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WSChinese.html) are available on the official webpage of the challenge. The authors of the French and Portuguese translations both report having to make some changes to the content to avoid unintended cues, such as grammatical gender. In the case of Portuguese, some sentences had to be dropped, as no appropriate translation could be found.

2.2 Definite Pronoun Resolution Dataset

The Definite Pronoun Resolution (Dpr) dataset is an easier variation of the Winograd Schema Challenge [Rahman and Ng2012]. The constraints on the Winograd schemas have been relaxed, and several examples in the dataset are not Google-proof. The dataset consists of 1,322 manually constructed training examples and 564 test examples. Some examples in the training set reappear in Wsc273 in a very similar form; these should be removed when training on Dpr and evaluating on Wsc273. This dataset is also referred to as WscR, as named by Opitz and Frank [Opitz and Frank2018].

An expanded version of this dataset, called WinoCoref, has been released by Peng et al. [Peng et al.2015], who further annotate all previously ignored mentions in the sentences (in their work, a mention can be either a pronoun or an entity). In this way, they add a number of mentions to the dataset, many of which are pronouns. Moreover, Peng et al. [Peng et al.2015] argue that accuracy is not an appropriate metric of performance on the WinoCoref dataset and introduce a new one, called AntePre.

They define AntePre as follows: Suppose there are n pronouns in the dataset, and pronoun i has k_i antecedents. We can treat finding the correct candidate for each pronoun as a binary classification for each antecedent-pronoun pair. Let m be the number of correct binary decisions. AntePre is then computed as m / (k_1 + ... + k_n).
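As a worked example of this metric (with illustrative function and variable names, not taken from the original paper):

```python
def antepre(decisions):
    """AntePre = m / (k_1 + ... + k_n), following the definition above.

    decisions: one inner list per pronoun, containing one boolean per
    antecedent-pronoun pair (True = correct binary decision).
    """
    total = sum(len(pairs) for pairs in decisions)    # k_1 + ... + k_n
    correct = sum(sum(pairs) for pairs in decisions)  # m
    return correct / total

# Two pronouns: one with three antecedents (all decided correctly) and
# one with two antecedents (one mistake): AntePre = 4 / 5 = 0.8.
print(antepre([[True, True, True], [True, False]]))
```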

2.3 Pronoun Disambiguation Problem Dataset

The Pronoun Disambiguation Problem (PDP) dataset consists of 122 problems of pronoun disambiguation collected from classic and popular literature, newspapers, and magazines. Because constructing Winograd schemas according to Levesque’s original guidelines was a difficult, manual process, PDPs, which were collected and vetted rather than constructed, were intended to be used as a gateway set before administration of the Winograd Schema Challenge [Morgenstern et al.2016]. Each PDP, once collected, was vetted (and sometimes modified) to ensure that like Winograd schemas, the problems were of the sort that humans use commonsense knowledge to disambiguate, and were “Google-proof.” Although each PDP was vetted to remove examples where sentence structure would help find the answer, there was no “special” word, and thus, unlike Winograd schemas, no guarantee that sentence structure could not be exploited. PDPs were therefore expected to be easier than Winograd schemas.

Example: Do you suppose that Peter is responsible for the captain’s illness? Maybe he bribed the cook to put something in his food.
The referent of his is: (a) Peter or (b) the captain.

62 examples of PDPs were published before the Winograd Schema Challenge was administered (http://commonsensereasoning.org/disambiguation.html), and 60 PDPs were included in the Winograd Schema Challenge that was administered at IJCAI 2016 (https://cs.nyu.edu/faculty/davise/papers/PDPChallenge.xml) [Davis et al.2017]. A corpus of 400 sentences was collected semi-automatically from online text, with less vetting, by Davis and Pan [Davis and Pan2015] (https://cs.nyu.edu/faculty/davise/annotate/corpus.xml).

2.4 Winograd Natural Language Inference Dataset

The Winograd Natural Language Inference (Wnli) dataset is part of the GLUE benchmark [Wang et al.2019b] and is a textual entailment variation of the Winograd Schema Challenge. An example from Wnli is given below; the goal is to determine whether the hypothesis follows from the premise.

Premise: The city councilmen refused the demonstrators a permit because they feared violence.

Hypothesis: The demonstrators feared violence.

Answer: true / false

The dataset consists of 635 training examples, 71 validation examples, and 146 test examples. The training and validation sets overlap heavily with the Wsc273 dataset, while the test examples come from a previously unreleased collection of Winograd schemas. Not all examples in this dataset contain a special word, and such examples therefore do not come in pairs. Kocijan et al. [Kocijan et al.2019b] note that the examples are much easier to approach if they are transformed from textual entailment back into the pronoun resolution format and tackled as such.
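As an illustration, a minimal sketch of that transformation follows; `score_sentence` (any sentence plausibility scorer, e.g., a language-model log-probability) and the recovered candidate list are assumptions made for the example, not part of the dataset.

```python
def entails(premise, pronoun, candidates, hypothesis_candidate,
            score_sentence):
    """Recast a Wnli pair as pronoun resolution: substitute each candidate
    for the ambiguous pronoun and label the hypothesis as entailed iff its
    candidate yields the most plausible sentence."""
    best = max(candidates,
               key=lambda c: score_sentence(premise.replace(pronoun, c, 1)))
    return best == hypothesis_candidate

# For the example above: the premise contains the pronoun "they", the
# candidates are ["the city councilmen", "the demonstrators"], and the
# hypothesis "The demonstrators feared violence." corresponds to the
# second candidate.
```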

The same collection of examples is used in the SuperGLUE benchmark [Wang et al.2019a], formulated as a pronoun resolution problem to begin with. For the purpose of this survey, Wnli and SuperGlue Wsc are considered the same dataset: they consist of the same examples, and all approaches to Wnli described in this paper transform the examples as noted above.

2.5 WinoGender Dataset

Unlike the previous datasets, WinoGender was created as a diagnostic dataset and aims to measure the gender bias of pronoun resolution systems [Rudinger et al.2018]. WinoGender consists of 120 hand-written sentence templates, together with candidates and pronouns that can be inserted into the templates to create valid sentences.

In each sentence, one of the candidates is an occupation, usually one with a highly imbalanced gender ratio (e.g., surgeon). The other candidate is a participant (e.g., patient) or a neutral someone. For each sentence, any of the pronouns he, she, or they can be inserted to create a valid sentence, as the candidates are gender-neutral. Altogether, this gives 720 Winograd schemas. An example from the dataset is

The surgeon operated on the child with great care; [his/her] [tumor/affection] had grown over time.

Note that the gender of the pronoun does not affect the expected answer; however, a biased system that associates the pronoun his with the surgeon is likely to answer one of the variants incorrectly. The aim of this dataset is not to measure model performance, as its data distribution is highly skewed, but to help analyse models for gender bias.
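To illustrate the construction, here is a hedged sketch of template instantiation; the template string and slot names are made up for the example and do not reflect the dataset's actual file format.

```python
TEMPLATE = ("The {occupation} operated on the {participant} with great "
            "care; {pronoun} tumor had grown over time.")

def instantiate(template, occupation, participant):
    # One sentence per pronoun; the expected referent is the same in all
    # three variants, which is what makes the set diagnostic for bias.
    return [template.format(occupation=occupation, participant=participant,
                            pronoun=pronoun)
            for pronoun in ("his", "her", "their")]

for sentence in instantiate(TEMPLATE, "surgeon", "child"):
    print(sentence)
```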

2.6 WinoBias Dataset

WinoBias, created by Zhao et al. [Zhao et al.2018], likewise tries to identify gender bias in pronoun resolution models. WinoBias and WinoGender were created concurrently but independently, despite sharing the same objective. The dataset contains 3,160 sentences, split equally into development and test sets. Each sentence contains two candidates that are selected from a list of jobs with highly imbalanced gender ratios.

Two different templates are used to create Winograd schemas. Type 1 sentences follow a structure that does not give away any syntactic cues. The authors thus estimate these sentences to be more challenging. An example of such a sentence is

The farmer knows the editor because [he/she] [is really famous/likes the book].

Type 2 sentences can be answered based on the structure of the sentence. The authors thus expect the models to perform better. An example of such a sentence is

The accountant met the janitor and wished [her/him] well.

Its “twin pair” has the candidates swapped. As the structure of the sentence gives the answer away, there is no special word.

Moreover, the authors evenly split the whole dataset into pro-stereotypical and anti-stereotypical subsets, depending on whether the gender of the pronoun matches the most common gender of the referent occupation or not. They observe that publicly available models for coreference resolution exhibit a major difference in performance between the pro- and anti-stereotypical subsets of the dataset.

2.7 WinoGrande Dataset

The WinoGrande dataset is a large-scale Winograd Schema Challenge dataset (44k examples) [Sakaguchi et al.2020], collected via crowdsourcing on Amazon Mechanical Turk. To prevent the crowd from creating lexically and stylistically repetitive examples, the workers were primed with a randomly chosen topic from a WikiHow article as a suggestive context. Finally, the authors used an additional crowd of workers to ensure that the sentences are hard but not ambiguous to humans. These measures were taken to ensure that there is no instance-level bias that models could exploit.

However, checking for instance-level cues is often not enough, as models tend to pick up on dataset-level biases. The authors therefore additionally introduce the AfLite adversarial filtering algorithm. They use a fine-tuned RoBERTa language model [Liu et al.2019] to obtain contextualized embeddings for each instance. Using these embeddings, they iteratively train an ensemble of linear classifiers on random subsets of the data and discard the top-ranked instances that were correctly resolved by a large majority of the classifiers. By applying this algorithm iteratively, the authors identify a debiased subset of 12,282 instances, called WinoGrande-debiased, which they split into training, development, and test sets. They also released the unfiltered training set, WinoGrande-all.
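The following is a simplified sketch of AfLite-style filtering under the description above; it assumes precomputed embeddings X and binary labels y, and the hyperparameter values are illustrative rather than those used by Sakaguchi et al.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aflite(X, y, n_classifiers=64, train_frac=0.5, threshold=0.75, cutoff=500):
    """Return indices of instances that survive adversarial filtering."""
    keep = np.arange(len(y))
    while True:
        solved = np.zeros(len(keep))  # times solved when held out
        seen = np.zeros(len(keep))    # times held out
        for _ in range(n_classifiers):
            idx = np.random.permutation(len(keep))
            split = int(train_frac * len(keep))
            train, held = idx[:split], idx[split:]
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[keep[train]], y[keep[train]])
            solved[held] += clf.predict(X[keep[held]]) == y[keep[held]]
            seen[held] += 1
        score = solved / np.maximum(seen, 1)
        # Discard the most predictable instances; stop once none remain
        # above the predictability threshold.
        easy = np.argsort(-score)[:cutoff]
        easy = easy[score[easy] > threshold]
        if len(easy) == 0:
            return keep
        keep = np.delete(keep, easy)
```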

2.8 WinoFlexi Dataset

Similarly to WinoGrande, Isaak and Michael [Isaak and Michael2019] aim to construct a dataset through crowdsourcing. They build their own platform and collect pairs of Winograd schemas. Unlike workers on WinoGrande, workers on WinoFlexi are not presented with any particular topic and are free to pick one on their own. Despite this, the authors find that the collected schemas are of decent quality, achieved through manual supervision between workers.

3 Approaches to Winograd Schema Challenge

At least three different kinds of methods have been used to try to solve the Winograd Schema Challenge. The first class consists of feature-based approaches, which typically extract information such as semantic relations. Additional commonsense knowledge is usually included in the form of explicitly written rules from knowledge bases, web searches, or word co-occurrences. The collected information is then used to make a decision, using rule-based systems, various types of logics, or discrete optimization algorithms. We observe that the extraction of relevant information from the sentence is usually the bottleneck of these approaches. Given the nature of the challenge, even slight noise in the feature collection can make the problem unsolvable.

The second group comprises neural approaches, excluding language-model-based approaches, which we consider separately. Neural-network-based approaches usually read the sentence as a whole, removing the information-extraction bottleneck. To incorporate background knowledge, these networks or their components are typically pre-trained on unstructured data, usually raw text, or on other datasets for coreference resolution. Common approaches in this group take advantage of semantic similarities between word embeddings or use recurrent neural networks to encode the local context. We find this group of approaches to lack reasoning capabilities, as semantic similarity or local context usually does not contain sufficient information to solve Winograd schemas.

The third group includes approaches that make use of large-scale pre-trained language models, trained with deep neural networks, extensively pre-trained on large corpora of text. Some of the approaches then additionally fine-tune the model on Winograd-Schema-Challenge-style data to maximize their performance. Approaches in this group achieve visibly better performance than approaches from the first two groups.

Due to the scattered nature of the results, we decided not to combine them into one large table. Not all methods are evaluated on the same set of examples. Moreover, choices that are not crucial to the idea, such as the choice of word embeddings or of a language model, can significantly affect the results, making direct comparison unfair.

3.1 Feature-based Approaches

This section covers approaches that collect knowledge in the form of explicit rules from knowledge bases and internet search queries, and that use logic-based systems or optimization techniques to deduce the answer. We emphasize that the results of methods that rely on search engines, such as Google, can be irreproducible, as they strongly depend on the search results.

The first model was introduced by Rahman and Ng [Rahman and Ng2012] together with the Dpr dataset. Features based on Google queries, narrative chains, and semantic compatibility were used to rank candidates with an SVM-based ranker; their approach achieved 73.05% accuracy on the Dpr test set. Peng et al. [Peng et al.2015] improved on this result on the same dataset, using integer linear programming and manually constructed schemas to learn conditioning from unstructured text. They additionally evaluated their model on WinoCoref, reporting its performance with the AntePre score.

Sharma et al. [Sharma et al.2015] constructed a general-purpose semantic parser and used it to parse and answer Winograd schemas. The parser extracts relevant information from the sentence and from internet search queries. Answer set programming (ASP) [Gelfond and Lifschitz1988, Baral2003] is then used to define the rules and constructs for reasoning. Due to their focus on specific types of reasoning, the authors only evaluate their approach on the subset of Wsc285 examples where such reasoning is present. As Zhang and Song [Zhang and Song2018] note, the accuracy of this approach varies considerably with the subset of examples chosen for evaluation. Isaak and Michael [Isaak and Michael2019] report that, on WinoFlexi, the system solves some examples correctly, some incorrectly, and makes no decision on the rest. As shown by Sharma [Sharma2019], sentence parsing and knowledge collection are the bottleneck of this process. Sharma [Sharma2019] develops an ASP-based algorithm, called WiSCR, which correctly solves most Wsc285 examples if the input and background knowledge are provided by a human, but far fewer if it uses K-Parser for input parsing and a search engine for knowledge hunting.

Emami et al. [Emami et al.2018] developed the first model to achieve better-than-chance accuracy on the entire Wsc273. Their system is completely rule-based and focuses on high-quality knowledge hunting rather than reasoning, showing the importance of the former. Unlike the neural approaches in later sections, this model is not negatively affected by switching candidates.

Isaak and Michael [Isaak and Michael2016] take a similar approach and use a collection of heuristics and external systems for text processing, information extraction, and reasoning. The final system is evaluated on an older collection of Winograd schemas (https://cs.nyu.edu/faculty/davise/papers/OldSchemas.xml) and on WinoFlexi.

An interesting approach to reasoning was proposed by Fähndrich et al. [Fähndrich et al.2018], who build a graph for each example by combining knowledge about words from several knowledge bases with semantic and syntactic information. They place a collection of markers on the pronoun and iteratively distribute them across the graph according to a manually designed set of rules. The candidate holding the greatest number of markers after a fixed number of steps is considered the answer. The approach is evaluated on the Pdp dataset.

3.2 Neural Approaches

This section contains approaches that rely on neural networks and deep learning but do not use pre-trained language models. Models in this section are usually designed, built, and trained from scratch, while models in the next section are usually built on top of an off-the-shelf pre-trained neural network. We find that several ideas introduced in this section are later adjusted and scaled to language models; see Section 3.3. Note that each work comes with a collection of model-specific architecture designs that are not covered in detail.

Liu et al. [Liu et al.2017a] were the first to use neural networks to approach the challenge. They introduce a neural association model to model causality and automatically construct a large collection of cause-effect pairs that are used to train the model. The model is trained to predict whether the second part of the schema is the consequence of the first. For evaluation, Liu et al. [Liu et al.2017a] manually select the Winograd schemas from the Wsc273 dataset that rely on cause-effect reasoning and report accuracy on this subset only. In subsequent work, Liu et al. [Liu et al.2017b] extend this approach and use it at the Winograd Schema Challenge 2016 [Davis et al.2017]. They develop their own pre-trained word embeddings, whose semantic similarity is intended to correlate with cause-effect pairs, and train the final model on the OntoNotes dataset for coreference resolution [Hovy et al.2006]. This method achieved the winning score of 58.3% on the Pdp dataset at that competition and was also evaluated on Wsc273.

Zhang and Song [Zhang and Song2018] similarly try to augment word embeddings to take advantage of dependencies in the sentence. Unlike Liu et al. [Liu et al.2017b], their model is completely unsupervised and is not trained on any additional labelled data. They modify the Skip-Gram objective for word embedding pre-training to additionally use and predict semantic dependencies, which can then serve as extra information at test time. The approach is tested on a manually selected set of easy Winograd schemas from the Wsc273 dataset. Wang et al. [Wang et al.2019c] go a step further with the unsupervised deep structured semantic model (UDSSM). Instead of augmenting the word embeddings, they train BiLSTM modules to compute contextualized word embeddings. The best-performing ensemble of their models is evaluated on both Pdp and Wsc273.

Opitz and Frank [Opitz and Frank2018] are the first to try to generalize from Dpr to Wsc273 by training on the former and testing on the latter. We note that the authors do not mention removing the overlap between the two datasets. In their approach, they replace the pronoun in question with each of the candidates. They design several BiLSTM-based models and train them to rank the sentence with the correct candidate above the sentence with the incorrect candidate. Their best approaches perform well on Dpr but considerably worse on Wsc273, showing that generalizing from Dpr to Wsc273 is not trivial.

3.3 Language Model Approaches

This section covers the approaches that use neural language models to tackle the Winograd Schema Challenge. Most of them use one or more language models that were trained on a large corpus of text. Several authors use large pre-trained language models, such as BERT [Devlin et al.2019], and have to tailor their approach accordingly. Many works thus focus on better fine-tuning of such language models instead of inventing new architectures.

Trinh and Le [Trinh and Le2018] were the first to use pre-trained language models. Similarly to Opitz and Frank [Opitz and Frank2018], two sentences are created from each example by replacing the pronoun with each of the two candidates. A language model, implemented as an LSTM and pre-trained on a large corpus of text, is used to assign each resulting sentence a probability. An ensemble of such language models was evaluated on the Pdp and Wsc273 datasets. Trichelair et al. [Trichelair et al.2018] have shown that this ensemble is highly inconsistent when candidates are swapped and mainly works well on the associative subset of Wsc273. Radford et al. [Radford et al.2019] apply the same method to evaluate their GPT-2 language model, achieving 70.7% accuracy on Wsc273. Melo et al. [Melo et al.2020] use this method on their Portuguese version of the Winograd Schema Challenge; they use an LSTM-based language model trained on text from the Portuguese Wikipedia, but only achieve chance-level performance.
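A minimal sketch of this full-sentence scoring follows, using an off-the-shelf GPT-2 from the Hugging Face transformers library as a stand-in for the original ensemble of LSTM language models; the naive string replacement is for illustration only (a careful implementation would substitute whole tokens).

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_log_prob(sentence):
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # The returned loss is the mean token cross-entropy, so the total
        # log-probability is -loss times the number of predicted tokens.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

def resolve(sentence, pronoun, candidates):
    # Substitute each candidate for the pronoun and keep the more
    # probable sentence, as described above.
    return max(candidates,
               key=lambda c: sentence_log_prob(sentence.replace(pronoun, c, 1)))

print(resolve("The city councilmen refused the demonstrators a permit "
              "because they feared violence.",
              "they", ["the city councilmen", "the demonstrators"]))
```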

Prakash et al. [Prakash et al.2019] extend this approach with knowledge hunting. They find sentences on the web that describe a similar situation but may be easier to resolve, and assume that the pronoun in a mined sentence refers to the same candidate as in the original. They use the same method as Trinh and Le [Trinh and Le2018] to compute the probability of each candidate for each pronoun. The constraint that the pronoun in the mined sentence and the pronoun in the Winograd schema refer to the same entity is imposed with probabilistic soft logic [Kimmig et al.2012]; that is, all pronouns are resolved to the same candidate in the most probable way. The best model obtained by combining language models and knowledge hunting in this way improves the accuracy on both Wsc273 and Wsc285.

Klein and Nabi [Klein and Nabi2019] analyse the inner attention layers of a pre-trained BERT-base language model [Devlin et al.2019] to find the best referent. They define a maximum attention score, which measures how much the model attends to each candidate across all layers and attention heads. The model is evaluated on both Pdp and Wsc273.
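A rough sketch in the spirit of this idea follows; it sums the attention mass flowing from the pronoun to each candidate across all layers and heads, which only approximates the score defined in the paper, and it aligns spans by their first word piece for brevity.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased",
                                  output_attentions=True).eval()

def attention_mass(sentence, pronoun, candidate):
    enc = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(enc.input_ids[0])
    with torch.no_grad():
        attentions = model(**enc).attentions  # one tensor per layer
    p = tokens.index(pronoun)
    # Locate the candidate by its first word piece; a faithful
    # implementation would align every word piece of the span.
    c = tokens.index(tokenizer.tokenize(candidate)[0])
    return sum(layer[0, :, p, c].sum().item() for layer in attentions)

sentence = ("the city councilmen refused the demonstrators a permit "
            "because they feared violence.")
for candidate in ("councilmen", "demonstrators"):
    print(candidate, attention_mass(sentence, "they", candidate))
```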

Kocijan et al. [Kocijan et al.2019b] adapt the scoring of Trinh and Le [Trinh and Le2018] to masked language models such as BERT [Devlin et al.2019]. They additionally introduce MaskedWiki, an unsupervised pre-training dataset constructed from English Wikipedia by masking repeated occurrences of nouns. When fine-tuned on both MaskedWiki and Dpr, BERT-large achieves strong performance on Wsc273 and Wnli. By transforming the Wnli examples as described in Section 2.4, this was the first model to beat the majority-class baseline on Wnli.
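A hedged sketch of masked-candidate scoring in this style follows: the pronoun is replaced with as many [MASK] tokens as the candidate has word pieces, and the candidate is scored by the mean log-probability the masked language model assigns to those pieces. Aggregation details vary between papers, so this is one plausible variant rather than the authors' exact scoring.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def masked_score(sentence, pronoun, candidate):
    pieces = tokenizer.tokenize(candidate)
    masked = sentence.replace(pronoun, " ".join(["[MASK]"] * len(pieces)), 1)
    enc = tokenizer(masked, return_tensors="pt")
    positions = (enc.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        log_probs = model(**enc).logits[0].log_softmax(-1)
    piece_ids = tokenizer.convert_tokens_to_ids(pieces)
    # Mean log-probability of the candidate's word pieces in the masked slots.
    return sum(log_probs[pos, pid].item()
               for pos, pid in zip(positions, piece_ids)) / len(pieces)

sentence = ("The city councilmen refused the demonstrators a permit "
            "because they feared violence.")
for candidate in ("the city councilmen", "the demonstrators"):
    print(candidate, masked_score(sentence, "they", candidate))
```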

Several authors have used this approach to Wnli as part of the GLUE benchmark [Wang et al.2019b], with the best performance achieved by T5 [Raffel et al.2019]. The improvement over Kocijan et al. [Kocijan et al.2019b] usually comes from more extensive pre-training and from training on the Wnli training set, which Kocijan et al. [Kocijan et al.2019b] did not use due to its overlap with Wsc273.

In subsequent work, Kocijan et al. [Kocijan et al.2019a] introduce a dataset called WikiCREM (2.4M examples), generated in the same way as MaskedWiki but restricted to masking personal names. By pre-training on WikiCREM and fine-tuning on other coreference datasets, they report strong accuracy on Dpr, Wsc273, Wnli, and Pdp.

Ye et al. [Ye et al.2019] introduce an align, mask, and select (AMS) pre-training method for masked language models. They find sentences that contain entities directly connected in the ConceptNet knowledge base [Speer and Havasi2012], mask one of the entities, and train the model to pick it out of a list of similar candidates. They fine-tune the obtained model in the same way as Kocijan et al. [Kocijan et al.2019b], improving accuracy on both Wsc273 and Wnli.

He et al. [He et al.2019] combine the masked token prediction model of Kocijan et al. [Kocijan et al.2019b] with the semantic similarity model of Wang et al. [Wang et al.2019c] to create a hybrid neural network (HNN) model. The combined model is evaluated on Wsc285, Pdp, and Wnli, with the Wnli result achieved using an ensemble.

A different use of the BERT language model was proposed by Ruan et al. [Ruan et al.2019], who take advantage of BERT's next-sentence-prediction feature. In addition to replacing the pronoun with a candidate, they split the sentence into two parts and predict whether the second part semantically follows the first. To improve performance, Ruan et al. [Ruan et al.2019] encode syntactic dependencies by changing some of the attention tensors within BERT, and they train on Dpr. The BERT-large model combined with all of these features achieves their best accuracy on the Wsc273 dataset.
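The following is a minimal sketch of next-sentence-prediction scoring with the stock BertForNextSentencePrediction head, without the authors' syntax-aware attention modifications or Dpr fine-tuning; the split point shown is an illustrative choice.

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased").eval()

def follows_score(first_part, second_part):
    enc = tokenizer(first_part, second_part, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    # Index 0 of the NSP head is the "second segment follows" class.
    return logits.softmax(-1)[0, 0].item()

first = "The city councilmen refused the demonstrators a permit"
for candidate in ("the city councilmen", "the demonstrators"):
    print(candidate,
          follows_score(first, f"because {candidate} feared violence."))
```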

Sakaguchi et al. [Sakaguchi et al.2020] use the same scoring approach when evaluating the RoBERTa baseline for the WinoGrande dataset; however, they do not modify any attention layers, and they train on WinoGrande rather than Dpr. They report 90.1% accuracy on Wsc273, along with strong results on WinoGrande, Pdp, Wnli, and Dpr. To this point, this is the highest performance achieved on the Wsc273 dataset, by a large margin, showing the impact of additional training data. Curiously, they report only a chance-level improvement on the validation set of WinoGrande when training on WinoGrande-debiased. They suspect that the model performed well when trained on the full WinoGrande because it had learned to exploit a systemic bias within the dataset.

4 Conclusion

In this paper, we have reviewed and compared the datasets of Winograd schemas that have been created and the many systems that have been developed to attempt to solve them. Currently, the best of these systems, which exploit deep neural networks and incorporate very large pre-trained transformer models such as BERT or RoBERTa, fine-tuned on task-specific data, achieve accuracy rates of around 90% on Wsc273 and similar datasets.

Levesque et al. [Levesque et al.2012] claimed that because of the use of twin sentences, “clever tricks involving word order or other features of words or groups of words will not work [emphasis added].” This prediction has been falsified, at least as far as the dataset produced with that paper is concerned. The paper did not anticipate the power of neural networks, the rapid advances in language modelling technology that resulted in models like BERT, or the subtlety and complexity of the patterns in words that such technologies would be able to find and apply.

The systems that have succeeded on the Winograd Schema Challenge have succeeded on the pronoun disambiguation task in small passages of text, but they have not demonstrated either the ability to perform other natural language understanding tasks, or common sense. They have not demonstrated the ability to reliably answer simple questions about narrative text [Marcus and Davis2019] or to answer simple questions about everyday situations. Similarly, text generated using even state-of-the-art language modeling systems, such as GPT-2, frequently contains incoherences [Marcus2020].

The commonsense reasoning and natural language understanding communities require new tests, more probing than the Winograd Schema Challenge, but still easy to administer and evaluate. Several tests have been proposed and seem promising. The problem of tracking the progress of a world model in narrative text is discussed by Marcus [Marcus2020]. A number of proposed replacements for the Turing Test [Marcus et al.2016] likewise draw heavily on various forms of commonsense knowledge.

Acknowledgements

This work was supported by the EPSRC studentship OUCS/EPSRC-NPIF/VK/1123106.

References

  • [Amsili and Seminck2017] P. Amsili and O. Seminck. A Google-proof collection of French Winograd schemas. In Proc. 2nd CORBON Workshop, 2017.
  • [Baral2003] C. Baral. Knowledge Representation, Reasoning and Declarative Problem Solving. Cambridge University Press, 2003.
  • [Davis and Pan2015] E. Davis and X. Pan. A corpus of challenging pronoun disambiguation problems, adapted from children's books. https://cs.nyu.edu/faculty/davise/annotate/corpus.pdf, unpublished, 2015.
  • [Davis et al.2017] E. Davis, L. Morgenstern, and C. L. Ortiz. The first Winograd Schema Challenge at IJCAI-16. AI Magazine, 2017.
  • [Devlin et al.2019] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. Proc. NAACL, 2019.
  • [Emami et al.2018] A. Emami, N. De La Cruz, A. Trischler, K. Suleman, and J. C. K. Cheung. A knowledge hunting framework for common sense reasoning. In Proc. EMNLP, Brussels, Belgium, 2018.
  • [Fähndrich et al.2018] J. Fähndrich, S. Weber, and H. Kanthak. A marker passing approach to Winograd schemas. In Semantic Technology. Springer, 2018.
  • [Gelfond and Lifschitz1988] M. Gelfond and V. Lifschitz. The stable model semantics for logic programming. 1988.

  • [He et al.2019] P. He, X. Liu, W. Chen, J. Gao. A hybrid neural network model for commonsense reasoning. Proc. 1st Workshop on Commonsense Inference in NLP, 2019.
  • [Hovy et al.2006] E. Hovy, M. Marcus, M. Palmer, L. Ramshaw, and R. Weischedel. OntoNotes: The 90% solution. In Proc. HLT-NAACL, 2006.
  • [Isaak and Michael2016] N. Isaak and L. Michael. Tackling the Winograd Schema Challenge through machine logical inferences. In Proc. STAIRS, 2016.
  • [Isaak and Michael2019] N. Isaak and L. Michael. WinoFlexi: A crowdsourcing platform for the development of Winograd schemas. In Proc. AI, 2019.
  • [Kimmig et al.2012] A. Kimmig, S. H. Bach, M. Broecheler, B. Huang, L. Getoor. A short introduction to probabilistic soft logic. In Proc. NIPS Workshop on Probabilistic Programming: Foundations and Applications, 2012.
  • [Klein and Nabi2019] T. Klein, M. Nabi. Attention is (not) all you need for commonsense reasoning. ACL, 2019.
  • [Kocijan et al.2019a] V. Kocijan, O.-M. Camburu, A.-M. Cretu, Y. Yordanov, P. Blunsom, and T. Lukasiewicz. WikiCREM: A large unsupervised corpus for coreference resolution. In Proc. EMNLP, 2019.
  • [Kocijan et al.2019b] V. Kocijan, A.-M. Cretu, O.-M. Camburu, Y. Yordanov, T. Lukasiewicz. A surprisingly robust trick for Winograd Schema Challenge. Proc. ACL, 2019.
  • [Levesque et al.2012] H. J. Levesque, E. Davis, L. Morgenstern. The Winograd Schema Challenge. Proc. KR, 2012.
  • [Liu et al.2017a] Q. Liu, H. Jiang, A. Evdokimov, Z.-H. Ling, X. Zhu, S. Wei, and Y. Hu. Cause-effect knowledge acquisition and neural association model for solving a set of Winograd Schema Problems. In Proc. IJCAI, 2017.
  • [Liu et al.2017b] Q. Liu, H. Jiang, Z.-H. Ling, X. Zhu, S. Wei, and Y. Hu. Combining context and commonsense knowledge through neural networks for solving Winograd schema problems. 2017.
  • [Liu et al.2019] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692, 2019.
  • [Marcus2020] G. Marcus. The next decade in AI: Four steps towards robust artificial intelligence. arXiv:2002.06177, 2020.
  • [Marcus and Davis2019] G. Marcus and E. Davis. Rebooting AI. Pantheon Books, 2019.
  • [Marcus et al.2016] G. Marcus, F. Rossi, M. Veloso. Beyond the Turing Test. AI Magazine, 2016.
  • [Melo et al.2020] G. Melo, V. Imaizumi, and F. Cozman. Esquemas de Winograd em português. In Anais do XVI Encontro Nacional de Inteligência Artificial e Computacional, 2020.
  • [Morgenstern et al.2016] L. Morgenstern, E. Davis, and C. Ortiz. Planning, executing, and evaluating the Winograd Schema Challenge. AI Magazine, 2016.
  • [Opitz and Frank2018] J. Opitz and A. Frank. Addressing the Winograd Schema Challenge as a sequence ranking task. In Proc. 1st International Workshop on Language Cognition and Computational Models, ACL, 2018.
  • [Peng et al.2015] H. Peng, D. Khashabi, and D. Roth. Solving hard co-reference problems. In Proc. NAACL, 2015.
  • [Prakash et al.2019] A. Prakash, A. Sharma, A. Mitra, and C. Baral. Combining knowledge hunting and neural language models to solve the Winograd Schema Challenge. In Proc. ACL, 2019.
  • [Radford et al.2019] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. 2019.
  • [Raffel et al.2019] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910.10683, 2019.
  • [Rahman and Ng2012] A. Rahman and V. Ng. Resolving complex cases of definite pronouns: The Winograd Schema Challenge. In Proc. EMNLP, 2012.
  • [Ruan et al.2019] Y.-P. Ruan, X. Zhu, Z.-H. Ling, Z. Shi, Q. Liu, and S. Wei. Exploring unsupervised pretraining and sentence structure modelling for Winograd Schema Challenge. arXiv:1904.09705, 2019.
  • [Rudinger et al.2018] R. Rudinger, J. Naradowsky, B. Leonard, and B. Van Durme. Gender bias in coreference resolution. In Proc. NAACL, 2018.
  • [Sakaguchi et al.2020] K. Sakaguchi, R. Le Bras, C. Bhagavatula, Y. Choi. WINOGRANDE: An adversarial Winograd Schema Challenge at scale. In Proc. AAAI, 2020.
  • [Sharma et al.2015] A. Sharma, N. H. Vo, S. Aditya, and C. Baral. Towards addressing the Winograd Schema Challenge — Building and using a semantic parser and a knowledge hunting module. In Proc. IJCAI, 2015.
  • [Sharma2019] A. Sharma. Using answer set programming for commonsense reasoning in the Winograd Schema Challenge. arXiv:1907.11112, 2019.
  • [Speer and Havasi2012] R. Speer and C. Havasi. Representing general relational knowledge in ConceptNet 5. In Proc. LREC, 2012.
  • [Trichelair et al.2018] P. Trichelair, A. Emami, J. C. K. Cheung, A. Trischler, K. Suleman, and F. Diaz. On the evaluation of common-sense reasoning in natural language understanding. In Proc. NeurIPS Workshop on Critiquing and Correcting Trends in Machine Learning, 2018.
  • [Trinh and Le2018] T. H. Trinh, Q. V. Le. A simple method for commonsense reasoning. arXiv:1806.02847, 2018.
  • [Turing1950] A. M. Turing. Computing machinery and intelligence. Mind, 59(236):433–460, 1950.
  • [Wang et al.2019a] A. Wang, Y. Pruksachatkun, N. Nangia, A. Singh, J. Michael, F. Hill, O. Levy, S. R. Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. arXiv:1905.00537, 2019.
  • [Wang et al.2019b] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proc. ICLR., 2019.
  • [Wang et al.2019c] S. Wang, S. Zhang, Y. Shen, X. Liu, J. Liu, J. Gao, J. Jiang. Unsupervised deep structured semantic models for commonsense reasoning. NAACL, 2019.
  • [Winograd1972] T. Winograd. Understanding Natural Language. Academic Press, 1972.
  • [Ye et al.2019] Z.-X. Ye, Q. Chen, W. Wang, and Z.-H. Ling. Align, mask and select: A simple method for incorporating commonsense knowledge into language representation models. arXiv:1908.06725, 2019.
  • [Zhang and Song2018] H. Zhang, Y. Song. A distributed solution for Winograd Schema Challenge. ICMLC, 2018.
  • [Zhao et al.2018] J. Zhao, T. Wang, M. Yatskar, V. Ordonez, K.-W. Chang. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proc. NAACL, 2018.