Communicating with humans is a long-standing goal in AI, and has been studied in the context of natural language for decades. Many of the key challenges in this task, such as using a shared understanding of the world, commonsense reasoning, and metaphor are, however, not language-specific, but are instead general-purpose tools that humans use when communicating through other modalities as well. For example, understanding what an emoji means in a text conversation requires grasping metaphor (a party emoji is unlikely to be literally suggesting one should put on a party hat), and understanding a sign with a swerving truck requires common-sense reasoning (the intent is to warn of slippery conditions, not to suggest drivers ought to begin swerving themselves). Humans can easily adapt to these different modalities, as well as use visual/symbolic tools (e.g., pointing with a finger, or an arrow in a diagram) that cannot be used in a text-only context. To build and test AIs for this skill, we introduce the first task and large-scale dataset for multimodal communication by creating Iconary, a game of drawing and guessing based on Pictionary, along with a dataset of games with human players, proposing automatic and online game playing metrics, and constructing proficient Iconary AIs.
In Iconary, one player (the Drawer) draws an image for a phrase by arranging icons (including the ability to rotate or change the sizes of icons) on a canvas, and a second player (the Guesser) guesses what phrase the drawing represents. We use icons so we can focus on the high-level semantics of the drawings, and to make the game easier to play online. The Guesser then makes a series of attempts to guess the phrase using only the drawing. If the Guesser is unsuccessful, the Drawer can revise the drawing, and the cycle repeats until time runs out or the Guesser is successful. Figure 1 shows an example of an Iconary game, played between a human player and our AI player.
Iconary combines several key comprehension challenges. First, non-literal imagery: most words in our dataset do not have directly corresponding icons, so players often use visual metaphor (e.g., a school bus and book for ‘textbook’) or reference canonical examples (e.g., a lit and unlit light for ‘turning off’) to convey words. Second, visual similarity: icons can also be composed to draw objects, such as using concentric circles to draw a dartboard. Third, annotations: Drawers often use arrows, circles, or crosses to indicate motion or to guide the interpretation of the image. Fourth, state tracking: players need to remember what drawings/guesses have already been made (e.g., Drawers will often re-draw/augment scenes they could tell confused the Guesser, or use annotations to guide the Guesser’s attention towards missed elements). Fifth, world knowledge: models are tested on words not seen during training.
We present a large dataset for Iconary by having human players play with each other – a collection of 56k games in train, in-domain (Ind) dev and test sets with 5k games, and out-of-domain (Ood) dev and test sets with 1k and 3k games respectively that contain words not seen during training.
Our proposed models, TDrawer and TGuesser, leverage world knowledge in the T5 t5 pre-trained language model and have been carefully adapted to draw and guess words not observed during training. We measure performance using automated metrics, but our main results are shown by having our AIs play games with human players. TDrawer and TGuesser perform remarkably well on the Ind sets (68.3% and 96.0% win rates), but are also able to play impressively with human players on the Ood sets (41.7% and 62.9% win rates), demonstrating their ability to extract and integrate world knowledge for unseen game-play words from language models. Figure 1 shows some interesting games played by our models with human partners on the Ood set.
While our models are capable players, skilled human players outperform them on the Ood sets (a smaller margin of 4.6% at guessing but a sizeable margin of 21.0% at drawing). An error analysis shows that most errors occur for unseen words, particularly verbs, compound words, and examples with complex drawings, such as those requiring fine-grained positional information. Our quantitative and qualitative analysis suggests ample room for future research in this new, rich and complex domain.
2 The Iconary Game and Dataset
2.1 Playing Iconary
Iconary is played using a web user interface (UI). First, the Drawer is shown a short phrase and creates a drawing by selecting icons from a library and arranging them on a canvas. We include 1,205 icons from the Noun Project (https://thenounproject.com) that were chosen to cover a variety of common entities that would be difficult to draw using other icons. Icons can be resized, rotated, and flipped as desired. Once finished, the Drawer passes the turn to the Guesser.
The Guesser is shown the drawing and the phrase with the non-stop words replaced by blanks, and submits a series of guesses to the UI which indicates which words were correct after each guess to allow incremental progress. If the Guesser gives up, control is passed back to the Drawer who can modify their drawing in response to the guesses made so far. This cycle repeats until the phrase is guessed or a 4-minute timeout is reached. The game UI is provided in the appendix.
We collect phrases from two sources (see the appendix for more details). First, we have crowdworkers turn image summaries from imSitu imsitu into short phrases. These summaries are derived from FrameNet framenet and consist of an action with the addition of one or more agents (e.g., people, animals), places (e.g., park, office), or artifacts (e.g., computer, car) filling a variety of verb-specific roles. We base our phrases on these summaries since they contain words that can be depicted visually, i.e., they avoid abstract words like “believing" or “determination" that would be difficult to draw. We collect 41k phrases with 250 unique verbs, 2k other non-stop words, and an average of 5.4 words.
Second, we build out-of-domain (Ood) test phrases that have out-of-vocabulary (OoV) words. To maintain the vocabulary size of our training data, we build these phrases by having in-house annotators modify phrases in the Ind test set rather than holding out phrases with particular words from the imSitu phrases. First, we collect a list of candidate OoV words by gathering unused words from imSitu and a few other sources, and then manually filtering out words that could not plausibly be drawn. The new OoV words are complex and diverse, see Table 1 for a random sample. Second, annotators were given a test phrase and asked to write a new phrase that used one of the new words, at least one of the non-stop words from the original phrase, and otherwise preserve as much of the original phrase as possible. We build 2.8k new Ood phrases with 1.3k new words. Examples of drawings with these words can be found in the appendix.
The imSitu phrases are divided into train, dev and test sets. Additional filtering was done on dev and test to remove ambiguous words, unusual descriptions and grammatical errors (removing about 15%). The Ood phrases were divided into dev and test sets, see Table 2 for statistics.
2.3 Collecting Iconary Games
We gather Iconary games for these phrases by pairing crowdworkers together to play on our UI. Over 900 players played almost 60,000 games (we allowed multiple games to be played for a phrase). Workers qualify by winning a game with another player, and we disqualify workers who have very low win rates during data collection. We also heuristically filter out poor-quality games, such as games with no guesses. Since the Ood games are our main target, we additionally filter out games with players who had played fewer than 15 practice games, or that included a small number of players whose win rates were far lower than average, to ensure high quality.
Table 2 shows statistics for our 5 datasets. Humans have a high success rate for the Ind sets. The Ood phrases prove more challenging, likely because they often use more advanced words that require more skill to draw and guess.
To better understand our dataset, we perform two analyses. First, we manually label occurrences of six non-exclusive drawing strategies in a sample of 200 games from the Ind and Ood dev sets. The results are shown in Figure 2. We observe that most games use complex strategies to represent the phrase, such as composing multiple icons to represent nouns, drawing small scenes for verbs, using annotations, or creatively re-purposing icons. The Ood dataset tends to include less common nouns and verbs, and Drawers adapt to this by using more complex strategies for those phrases.
Second, we study how Drawers revise their drawings when the Guesser is unsuccessful. We label drawing revisions as either edit: re-arranging, removing, or re-sizing icons, or adding arrows or other annotations, add: adding new icons to offer alternative visualizations or to hint at connections the Drawer missed, redraw: deleting and redrawing parts of a scene that confused the guesser. We make these labels exclusive by placing games into the latter-most category that applies across all drawing revisions in a game.
The results, and statistics for the use of multiple drawings, are shown in Table 3. We see that Drawers generally use a balanced mix of our identified strategies and that the more challenging Ood games tend to have more drawings.
[Table 3 column headers: games with >= 2, >= 3, and >= 4 drawings; Edit, Add, and Redraw revision strategies; table values not recovered]
We propose TGuesser and TDrawer to play Iconary. Both models condition on the current game state, meaning the previous drawings, guesses and, for TDrawer, the game phrase, and then generate either text to guess the phrase (for TGuesser), or a sequence of special tokens that encode a drawing (for TDrawer).
Although this involves a visual modality, we propose to use language models for this task because (1) the icon names can be used to understand the drawing and (2) Iconary often requires using world knowledge (e.g., mapping person and thumb icons to ‘hitchhiking’ or milk and ice cream icons to ‘milkshake’) that is known to be captured by these models roberts-etal-2020-much. To do this, we encode the game state as text and apply the T5 t5 language model by treating the task as a text-to-text conditional generation task. Interestingly, we find vision-and-language (V+L) models lxmert; uniter to be less effective, which might be because current V+L models have inferior language-related abilities iki2021effect, or because models trained on photographic images are not well-suited to understand the non-literal imagery found in Iconary.
To encode the game state for the Guesser, we first construct a text description of the most recent drawing. A description of each icon is built from the icon name, optionally prefixed with ‘huge’, ‘large’, ‘small’, or ‘tiny’ based on the icon’s size relative to the other icons, with ‘rotated’ if the icon is rotated, and with ‘flipped’ if the icon is reflected. We handle straight arrows as a special case by encoding them as ‘[left/right/up/down] arrow’ depending on their orientation. The text description is then a list of these icons sorted from left to right. To keep the result compact for complex scenes, such as a forest drawn with many tree icons, if multiple icons have the same text description we only produce that description once and add a number prefix to show the count. We use this simplified encoding scheme because preliminary experiments found that encoding positional information more precisely, or encoding earlier drawings if they exist, did not improve performance when using T5.
Next, we append the text ‘phrase:’ and, for each word in the target phrase, either an underscore or the correct word if it is known (see Figure 3, top). We experimented with encoding previous incorrect guesses but found it unnecessary as long as models are prevented from repeating those guesses during generation.
The target output is the game phrase. During generation, we constrain models to ensure the output contains the right number of words, includes words that are known to be correct from previous guesses, and excludes words that are known to be incorrect. This is non-trivial for wordpiece models; we give the details in the appendix.
3.2 Handling OOV Words
We observe that naively trained models often generate words seen in the training data even when they do not match the drawing. To combat this, we propose several extensions to TGuesser:
Rare Word Boosting: Based on a method from controlled language generation ma-etal-2020-powertransformer; ghosh-etal-2017-affect, we boost the logit score of wordpieces not seen during training. In particular, we add a fixed value (chosen as a hyperparameter) to the log-probabilities of those wordpieces and then re-apply the softmax operator to get updated wordpiece probabilities during generation.
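Rare-word boosting reduces to a simple transformation of the decoder's logits at each generation step. A minimal sketch, where the bonus value and the unseen-wordpiece set are illustrative assumptions:

```python
# Sketch of rare-word boosting: add a fixed bonus to the log-scores of
# wordpieces unseen during fine-tuning, then renormalize with softmax.
import math

def boost_logits(logits, unseen_ids, bonus=2.0):
    """logits: raw log-scores over the wordpiece vocabulary.
    unseen_ids: set of wordpiece ids not observed in training data."""
    boosted = [l + bonus if i in unseen_ids else l
               for i, l in enumerate(logits)]
    # softmax (with max-subtraction for numerical stability)
    m = max(boosted)
    exps = [math.exp(l - m) for l in boosted]
    z = sum(exps)
    return [e / z for e in exps]
```

Because the bonus is added before the softmax, boosting one wordpiece necessarily lowers the probability of all others, which is why the bonus must be tuned as a hyperparameter.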
Fill-in-the-Blank Encoding: Following the T5 pre-training format t5, we encode the phrase using ‘extra_id’ tokens for sequences of unknown words instead of underscores, and train the model to only predict the text that ought to replace those tokens. Figure 3 contains an example. We expect this to better enable the model to leverage pre-trained knowledge of unseen words, and it does provide improvements (see Table 6).
Single-Epoch Training: We find training for only one epoch beneficial on the Ood sets, possibly because further training causes the model to forget words learned during pre-training that are still needed for the Ood test sets, due to catastrophic forgetting french1999catastrophic.
Embed Freezing: The word-piece embeddings are frozen to help ensure the model can effectively use wordpieces that were not in the training data.
The Drawer’s input is the game phrase, marked with asterisks to show which words have already been guessed. The output encodes icons with six special tokens, each drawn from a set of new tokens added to T5’s vocabulary and initialized with random embeddings, one indicating the icon name, and five indicating the quantized x coordinate, y coordinate, scale, rotation and reflection (quantized with 32, 16, 11, 8 and 2 buckets respectively). The full output is a sequence of such icons (see Figure 3). Icons are generated in the order used by the human player (we experimented with other orderings, and found them to be less or equally effective), and we mask the output logits to ensure a valid drawing is produced during generation. We propose two additions to help models adapt to this output format:
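The six-token icon encoding can be sketched as follows. The bucket counts (32, 16, 11, 8, 2) come from the text above; the token-name format and value ranges are illustrative assumptions:

```python
# Sketch of encoding one icon as six special tokens: the icon name plus
# quantized x, y, scale, rotation, and reflection values.
def quantize(value, low, high, buckets):
    """Map a value in [low, high] to a bucket index in [0, buckets - 1]."""
    frac = (value - low) / (high - low)
    return min(buckets - 1, max(0, int(frac * buckets)))

def encode_icon(name, x, y, scale, rotation, flipped):
    """x, y in [0, 1]; scale in [0, 4]; rotation in degrees."""
    return [
        f"<icon_{name}>",
        f"<x_{quantize(x, 0.0, 1.0, 32)}>",
        f"<y_{quantize(y, 0.0, 1.0, 16)}>",
        f"<scale_{quantize(scale, 0.0, 4.0, 11)}>",
        f"<rot_{quantize(rotation, 0.0, 360.0, 8)}>",
        f"<flip_{1 if flipped else 0}>",
    ]
```

A full drawing is then the concatenation of these six-token groups, one per icon, in the order the human player placed them.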
Special Token Initialization: Icon tokens are initialized by averaging the embeddings of the wordpieces of their names, and quantized tokens are initialized with the embedding of numbers (the first x-coordinate special token is initialized with the embedding for ‘1’, the second for ‘2’, etc.). This gives the model some prior knowledge of what the icons are, and a sense of ordering among the quantized tokens wallace-etal-2019-nlp.
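The name-based half of this initialization amounts to averaging existing embeddings. A toy sketch with a hypothetical embedding table:

```python
# Sketch of special-token initialization: an icon token's embedding is the
# mean of the wordpiece embeddings of its name (quantized tokens would
# instead copy the embedding of the corresponding number token).
def init_icon_embedding(name_piece_ids, embeddings):
    """name_piece_ids: wordpiece ids of the icon name.
    embeddings: map from wordpiece id to embedding vector (list of floats)."""
    vecs = [embeddings[i] for i in name_piece_ids]
    dim = len(vecs[0])
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
```

This gives a new token a starting point near the meaning of its name, rather than a random vector.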
Constrained Training: The output masking used during generation is applied during training so the model does not need to learn the output format.
4 Experimental Setup
In this section, we specify our metrics and baselines. We use T5-3B for TGuesser, but T5-Large for TDrawer since it generates longer sequences and therefore uses more memory. Other hyperparameters and training details are in the appendix.
4.1 Human/AI Metrics
The best test of Iconary models is playing with human players. When playing with human players, AI Guessers make up to 5 guesses per drawing, since that is typical for human Guessers. To ensure diverse drawings from AI Drawers, we sample a drawing from the model’s conditional distribution instead of using beam search if beam search yields a drawing with the same icons as a previous drawing (if the sample is still similar to a previous drawing, we use it anyway). Human players use the same UI and are not told whether they are playing a human or an AI.
Evaluation is complicated by the fact that AIs can make more guesses/drawings than human players since they play faster. To control for this, we measure performance after a fixed number of guesses (for Guessers) and a fixed number of drawings (for Drawers). We measure the Win Rate, meaning whether the Guesser correctly guesses the game phrase. We also measure the Soft Win Rate, computed as whether the Guesser guesses the exact phrase for phrases of length 2 or less, misses at most one word for phrases of length 3-5, and misses at most two words for phrases with 6 or more words. For Ood games, the game is only considered a soft win if at least one of the unseen words is guessed, since that is the focus of our evaluation (denoted as Soft Win in tables).
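The soft win criterion above is a small piece of threshold logic, sketched here directly from the description (the argument format is an assumption):

```python
# Sketch of the soft win criterion: missed-word thresholds by phrase length,
# plus the Ood requirement that at least one unseen word is guessed.
def soft_win(phrase, guessed, unseen_words=()):
    """phrase: list of phrase words; guessed: set of correctly guessed words;
    unseen_words: OoV words for Ood games (empty for Ind games)."""
    if unseen_words and not any(w in guessed for w in unseen_words):
        return False  # Ood games must guess at least one unseen word
    missed = sum(1 for w in phrase if w not in guessed)
    if len(phrase) <= 2:
        return missed == 0
    if len(phrase) <= 5:
        return missed <= 1
    return missed <= 2
```

For instance, missing one word of a four-word phrase still counts as a soft win, but missing the sole OoV word in an Ood game does not.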
We do not do AI/AI evaluations since we find AI players can often win with drawings that would not be understandable to human players.
4.2 Automatic Evaluation Metrics
Gathering human/AI games is challenging since it requires human players with experience playing Iconary. To facilitate automatic evaluation, we propose two metrics for both the Guesser and Drawer that can be computed using human/human games.
Win: Whether the Guesser can win from game states in human/human games. The Guesser generates five guesses for each drawing in a game where it is allowed to see the previous drawings, previous guesses made for those drawings by the human player, and its own previous guesses. Any word the model generates that does not appear in guesses for previous drawings is considered guessed. The game is won if all words are guessed. Note this is a pessimistic metric because models do not get second chances to guess words after they are identified by the human Guesser, but we expect it to be a reasonable proxy for success in human/AI games.
Soft Win: As above, except we evaluate the Guesser’s guessed words on the same soft win metric we use for human/AI games.
Icon F1: Treating drawings as bags of icons, we measure the F1 overlap score between human and computer drawings. We only use the initial drawings for each phrase, and we take the maximum F1 over all human drawings if there are multiple human games for a phrase.
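Treating drawings as bags (multisets) of icon names, the metric can be sketched as:

```python
# Sketch of Icon F1: F1 overlap between bags of icons, taking the max over
# all human drawings for the same phrase.
from collections import Counter

def icon_f1(model_icons, human_icons):
    m, h = Counter(model_icons), Counter(human_icons)
    overlap = sum((m & h).values())  # multiset intersection size
    if overlap == 0:
        return 0.0
    precision = overlap / sum(m.values())
    recall = overlap / sum(h.values())
    return 2 * precision * recall / (precision + recall)

def best_icon_f1(model_icons, human_drawings):
    """Max over all human drawings, used when a phrase has multiple games."""
    return max(icon_f1(model_icons, h) for h in human_drawings)
```

Using multisets rather than sets means drawing three trees where humans drew one is penalized on precision.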
Drawing Perplexity: For models that use the same method of encoding the drawing, we compare the perplexity of each human drawing, averaged over all drawings per game, then averaged over all games in the corpus.
We use the following baselines:
TGuesser-Large/TDrawer-Base: Identical models but with smaller versions of T5.
BART Guesser/BART Drawer: Identical models with the BART language model bart. For BART Guesser, we adapt the fill-in-the-blank encoding scheme to generate a copy of the input with the mask tokens replaced, instead of only generating the masked-out tokens, to match BART’s pre-training format.
Transformer Guesser/Transformer Drawer: We train a transformer-based model vaswani2017attention on this task that does not use a pre-trained language model. This model also encodes the drawings as a sequence of special tokens during both decoding and encoding, in which case we find it important to apply a data-augmentation strategy to help the model learn mappings between icons and words they might be used for. See the appendix for details.
TGuesser-IND: TGuesser without the Ood adaptations specified in Section 3.2.
5.1 Human/AI Results
Automatic evaluation metrics on the test sets for TGuesser and our baselines.
Our models and two baselines played 300 games of Iconary with the same crowdworkers used to build our dataset. We evaluate performance on win rate and soft win rate (see Section 4.1). We compare against human/human games, and games with elite human players where either the Guesser (if comparing against an AI Guesser) or Drawer (if comparing against an AI Drawer) is a human player in the top quartile of win rates in human/human games. We ran experiments on all four models simultaneously, assigning workers to models randomly, and using the same set of 300 phrases randomly selected from the Ood test set for each model.
Results are shown in Figure 4 (see the appendix for tables). We cut off games at 20 guesses for Guessers and 4 drawings for Drawers, since that is the most human players can typically accomplish in a game (<1% of human/human games are longer). At 20 guesses, TGuesser has a win rate of 62.9%, which impressively outperforms the average human player by 9 points, but is still 5 points behind elite human players. The gap is larger under the soft win metric, primarily because that metric requires guessing the OoV word, which is unsurprisingly more challenging. There is a large gap between TGuesser and TGuesser-IND, showing our OoV improvements were critical for success.
Drawing is more challenging than guessing. At 4 drawings TDrawer wins 41.7% of games, which is significant given the need to draw OoV words. It also outperforms the Transformer baseline suggesting that using T5 did help for OoV words. Human players, particularly elite players, perform much better, indicating a sizeable opportunity for future research.
We run the same experiment on 300 Ind test phrases using the same pool of annotators, details are in the appendix. We find our models do much better, TGuesser has a win rate of 96.0% and TDrawer has a win rate of 68.3% at 20 guesses and 4 drawings. Human teams on our Ind test and dev sets get 75.9% for both drawing and guessing. These numbers are not directly comparable since our human/human games used different annotators, but they still make it clear TGuesser is better than human players, and TDrawer is more comparable to human players, on the Ind phrases.
5.2 Error Analysis
We manually annotate 100 unsuccessful games for both TDrawer and TGuesser (qualitative examples are in the appendix). For TGuesser, we find 35% of errors were on relatively simple scenes where the model guessed related words but missed the key association. Other errors occur with scenes that used visual similarity (15%), relied on fine-grained positional information (13%), had compound words drawn one part at a time (8%), and other complex scenes (17%). Only 3% of cases did not involve the OoV words, and 8% were clearly deficient drawings.
We find TDrawer fails to draw anything for OoV words in 32% of cases, particularly for verbs, possibly because it has learned some verbs do not need cues beyond the related nouns (e.g., ‘driving’ in ‘person driving a car’). Half the time it draws something related to the OoV words, but that is not sufficient for it to be identified (e.g., ‘money’ for hiring, but without anything to distinguish it from ‘buy’ or ‘sell’). Only 12% of unsuccessful games had non-OoV word drawing errors, and 6% were reasonable drawings.
5.3 Automatic Evaluation Metrics Results
We also evaluate our models with automatic metrics on the test sets. Table 4 shows the Guesser results. We find that using T5-3B (compared to T5-Large) is quite important. Also, consistent with our human/AI results, the Ood optimizations result in a full 15-point gain in performance. The Transformer baseline falls behind the Ind-optimized model, and behind both models on the soft win metric. Its performance is still reasonable, likely because the large training set provides enough examples of humans drawing for it to memorize common drawing strategies for the Ind words. However, the model is unable to learn to predict Ood words (applying OoV boosting for this model only resulted in incoherent output).
Table 5 shows the Drawer results. We find TDrawer benefits somewhat from using a larger language model, and that the Transformer baseline is again effective on Ind data but poor on Ood data. BART Drawer shows better perplexity but significantly worse icon overlap.
We ablate our design choices in more detail using automatic metrics on the dev sets. Table 6 shows the Guesser ablations; we use TGuesser-Large to reduce computational expense. Our improvements are impactful, with up to 10 points gained through OoV boosting. Icon modifiers help Ind but not Ood, which suggests the model struggles to make use of modifiers for unseen words; however, treating the drawing as just a set of icon names clearly harms performance. Fill-in-the-blank encoding is also impactful, suggesting an encoding scheme similar to the pre-training one is effective for Ood generalization. Unsurprisingly, many of these optimizations reduce Ind performance because they increase the usage of OoV words, which never appear in the Ind dev sets. Table 7 shows the Drawer ablations. Our initialization strategy proves to be critical, which suggests it is what allows TDrawer to leverage the T5 parameter initialization even though it does not output natural language. We also get a modest boost from training with the formatting constraints.
Table 7: Drawer ablations on the dev sets.

|                   | Icon F1 (Ind) | Per. (Ind) | Icon F1 (Ood) | Per. (Ood) |
|-------------------|---------------|------------|---------------|------------|
| No Icon Init      | 47.33         | 4.77       | 31.81         | 5.96       |
| No Num. Init      | 57.04         | 4.09       | 38.58         | 5.05       |
| No Icon/Num. Init | 44.85         | 4.84       | 28.49         | 5.99       |
| No Train Const.   | 56.06         | 4.12       | 39.17         | 5.20       |
6 Related Work
There is a long history of using games as a testbed for AI. Traditionally these have been adversarial strategy games like Chess Silver2018AGR, Go Silver2016MasteringTG, and many others Moravck2017DeepStackEA; Vinyals2017StarCraftIA; mnih2013playing. A few cooperative games have been studied, like Codenames kim2019cooperation or Hanabi walton2019, that are similar to Iconary in that they require players to communicate in order to achieve a shared goal. However, those games severely limit the means of communication, whereas Iconary allows a rich variety of communication strategies through the use of drawings, and contains language beyond single words. Pictionary-style guessing with freehand drawings has been explored in sarvadevabhatla2018pictionary; sarvadevabhatla2018game, although they only consider a single-word, single-round setting.
Relating text to visual imagery has also been studied in many forms vqa; nlvr. Generating text that describes visual input, as done in Iconary, has been studied in visual dialog visual_dailog, image captioning chen2015microsoft; flickr, and video description aafaq2019video. Training models to produce images from text has been studied for captions Cho2020XLXMERTPC, image specifications reed2016learning, and dialogue sharma2018chatpainter. Unlike in these works, the drawings in Iconary are not photographic and are constructed to communicate a phrase. As a result, they can be non-literal and deictic, which makes understanding them a significantly different challenge.
Using a pre-trained language model to understand mixed language and visual input has been considered by marasovic-etal-2020-natural, who use features produced by object detectors or other visual understanding systems as input to GPT-2 radford2019language to generate natural language rationales. scialom-etal-2020-bert also show BERT devlin-etal-2019-bert can be trained for Visual Question Generation vqg. Similar strategies can be found in many V+L pre-trained models lxmert; Lu2019ViLBERTPT; Li2020OscarOA. We also find combining high-level visual features with a pre-trained language model is an effective way to generate visually relevant text, although again our focus is on drawings rather than photographs.
Figurative text is well studied leong-etal-2018-report; veale2016metaphor; shutova-etal-2016-black, but non-literal imagery has mostly only been explored in the context of parsing charts or diagrams. This includes food webs mitra2018knowledge, science diagrams kembhavi2016diagram, charts kafle2018dvqa or for geometry problems seo2014diagram. While this can involve related skills like understanding arrows or using icons to represent concepts, diagrams are usually used to convey technical information and therefore are unlikely to use things like visual metaphor, scenes, or icon compositions to signal words.
The back-and-forth of Iconary follows a dialogue structure where the Guesser is seeking information from the Drawer. A similar format can be found in dialogue QA datasets coqa; quac; aliannejadi2019asking, and task-oriented dialogue in general similarly requires understanding the intent of a human communicator young2013pomdp; chen2017survey. Iconary, however, makes this a multimodal process.
We have presented the game Iconary, a large dataset of human/human games, and our proposed Iconary models. This represents the first test for complex multimodal communication between humans and AIs, and is left as an open challenge to the community.
Iconary: A Pictionary-based Game for Testing Multimodal Communication with Drawings and Text
The appendix includes the following sections:
Appendix A Qualitative Results
Appendix B Training Data Characteristics
Figure 3 shows visualizations and statistics for the training dataset used to train TDrawer and TGuesser. This includes the training word cloud, icon set visualization and activity statistics.
Appendix C Games with Out of Vocabulary Words
Appendix D Iconary UI
Figure 5 shows the UI for playing Iconary.
Top shows the Guesser on their first turn of guessing, where they see previous guesses made in the left chatbox, color-coded by whether those guesses were incorrect, correct, or close (judged by word vector similarity). Above that, they see the game time and, to the left, the drawing created by the Drawer. At the bottom, the Guesser can enter new guesses by filling in blanks for each word in the phrase. Bottom shows the Drawer on the second turn of drawing. The left panel shows the guesses made by the Guesser and the middle shows the drawing as before. When it is their turn, the Drawer can click on icons to move, resize, rotate, duplicate, delete, or reflect them. The Drawer can search for icons using text search in the right panel.
Appendix E Constructing Iconary Phrases
In this section, we describe how we build Iconary game phrases in more detail.
e.1 In-Domain Phrases
Our primary source of game phrases is derived from the image summaries from the imSitu dataset imsitu. For each summary, we present crowd workers with the verb, one or more of the associated entities, and ask them to produce a short phrase using those elements. The UI for this task is shown in Figure 6. We use this process to construct about 41k phrases from 23k frames (a frame can produce multiple phrases depending on the subset of entities used). Phrases are on average 5.4 words in length and contain 250 unique verbs and 2,000 other non-stop words.
We hold out 3.5k of these phrases for the Ind test and validation set, ensuring phrases derived from the same imSitu frame are always in the same set. An author of this paper did an additional round of filtering on the test and validation phrases to remove any that contained potentially ambiguous words, described unusual scenes, or contained grammatical errors, leaving 3k phrases for both datasets. The remaining 33k phrases were used for the train set.
e.2 Collecting Out-of-Domain Phrases
We also construct a set of out-of-domain (Ood) test phrases that challenge models to play Iconary with out-of-vocabulary (OoV) words. The imSitu data has a limited vocabulary, and building this set by holding out phrases with particular words from the imSitu phrases would further restrict that vocabulary. Instead, we build phrases by having in-house annotators modify phrases in the Ind test set. We consider two kinds of modifications, verb substitutions, and noun substitutions.
Verb Substitution: We collect a list of verbs from a variety of sources, including the list of visual verbs from zellers-choi-2017-zero, any verbs in imSitu not already used in the training phrases, and the 1000 most frequent verbs that occur in the Google Books corpus googlebooks. This list was manually filtered to a list of 660 verbs that could plausibly be drawn and do not occur in the original phrase set. Annotators were then given a test phrase and asked to write a new phrase that used one of the new verbs, at least one of the nouns from the original phrase, and otherwise preserve as much of the original phrase as possible.
Noun Substitution: We collect a list of nouns by gathering nouns used in the imSitu corpus that had not yet been used in the training data, and a small number of additional nouns from WordNet wordnet that were not already present, and again manually filter them to ensure they are visually representable. In total, we get 4.6k new nouns. Annotators were asked to modify a test phrase by re-using the original verb, substituting in one of the new nouns, and otherwise preserving as much of the original phrases as possible.
In both cases, we make this task easier by building a recommender system that uses the fasttext word vectors fasttext to suggest new noun/verbs that are related to the given phrase. Altogether, we gather 1.5k new noun phrases and 1.5k new verb phrases that use 1.3k new OoV words. We reserve a portion of these (0.4k noun and 0.4k verb phrases) for the Ood dev set.
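A minimal sketch of such a recommender, assuming precomputed word vectors (toy 2-D vectors stand in here for the fastText embeddings used in the paper):

```python
import numpy as np

def suggest_substitutes(phrase_words, candidates, vecs, k=3):
    """Rank candidate substitute words by cosine similarity between each
    candidate's vector and the mean of the phrase-word vectors."""
    def unit(v):
        v = np.asarray(v, dtype=float)
        return v / (np.linalg.norm(v) + 1e-8)
    query = unit(np.mean([unit(vecs[w]) for w in phrase_words], axis=0))
    scored = sorted(candidates, key=lambda c: -float(unit(vecs[c]) @ query))
    return scored[:k]
```

Annotators would still write the final phrase themselves; the recommender only surfaces related nouns/verbs from the curated lists.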
Appendix F Constraining the Guesser Output
In this section, we explain in more detail how we constrain our Guesser wordpiece models to (1) generate the right number of words, (2) always generate known words, and (3) never generate words that are known to be incorrect. The challenge in doing this stems from the fact that these word-level constraints can apply across multiple wordpieces. We implement (1) and (2) by masking tokens during each generation step, specifically:
If the model is generating a known word, we mask out wordpieces that do not exist in that word and do not start a new word.
If the next word is a known word, we mask out any wordpieces that start new words other than that next known word.
If the word is the last word, we mask out tokens that start a new word, but allow EOS. In other cases, we mask out EOS.
This is sufficient to enforce (1) and mostly enforce (2). It is technically possible for the model to only partly generate a known word, or to generate some of its wordpieces out of order, but models rarely do so in practice because the output would usually be nonsense.
For (3), we mask out tokens that would start a new word if the word that has just been generated is known to be incorrect. This ensures the model can still generate the wordpieces ‘run’, ‘er’ even if it has already generated ‘run’ as an incorrect guess. This will sometimes mask out all high-probability continuations (e.g., it is unlikely there will be high-probability wordpieces that do not start a new word after generating the wordpieces for ‘runners’ if ‘runners’ was an incorrect guess), which can force the model into very low-probability generations. To handle this, we use a reasonably large number of beams (20), so other beams can be used when this occurs.
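The three masking rules can be sketched as a single allow-mask computation over the wordpiece vocabulary; this is an illustrative simplification, not the exact implementation:

```python
def token_mask(starts_word, eos_id, known_word_pieces=None,
               next_word_start=None, prev_word_incorrect=False,
               at_last_word=False):
    """Boolean allow-mask over the wordpiece vocabulary for one decode step.

    starts_word[t]      -- True if wordpiece t begins a new word
    eos_id              -- id of the end-of-sequence token
    known_word_pieces   -- pieces of the known word being generated, or None
    next_word_start     -- first piece of the next word, when that word is known
    prev_word_incorrect -- the word just completed matches a wrong prior guess
    at_last_word        -- the slot being generated is the final phrase word
    """
    allow = [True] * len(starts_word)
    for t in range(len(starts_word)):
        if starts_word[t]:
            # (2) if the next word is known, only its first piece may start it
            if next_word_start is not None and t != next_word_start:
                allow[t] = False
            # (3) never close out a word that is a known-incorrect guess
            if prev_word_incorrect:
                allow[t] = False
            # no new words may start after the final slot
            if at_last_word:
                allow[t] = False
        elif known_word_pieces is not None and t not in known_word_pieces:
            # (1) inside a known word, continuations must belong to that word
            allow[t] = False
    allow[eos_id] = at_last_word  # EOS allowed only after the last word
    return allow
```

Masked tokens would have their logits set to negative infinity before the softmax at each beam-search step.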
Empirically, we find >99.7% of guess generations from game states in the Ood dev set for TGuesser follow these three constraints.
Appendix G Training Details
We train our models with Adafactor adafactor with fixed learning rates of 5e-5 for TGuesser and 3e-4 for TDrawer. TGuesser is trained for one epoch as specified in Section 3.2 and TDrawer is trained for two epochs.
BART Guesser and Drawer are trained with Adam adam with a linearly decreasing learning rate. We train the Guesser for 2 epochs with a learning rate of 1e-4, and the Drawer for 3 epochs with a learning rate of 3e-5. Both models linearly warm up the learning rate from zero for the first 10% of the training steps.
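The warmup-then-decay schedule can be sketched as follows (decaying linearly to zero by the final step is an assumption on our part):

```python
def lr_at_step(step, total_steps, peak_lr, warmup_frac=0.1):
    """Linear warmup from zero over the first warmup_frac of training,
    then linear decay from peak_lr back to zero at the final step."""
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    frac_left = (total_steps - step) / max(1, total_steps - warmup_steps)
    return peak_lr * frac_left
```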
In all cases, we use a batch size of 32. The scale of the OoV boosting was chosen between 0 and 4.0 in increments of 0.5 on the Ood dev set; we use 0.0 for TGuesser-IND, 3.5 for the BART Guesser, and 2.0 in all other cases. For generation, we use beam search with 20 beams, using the AllenNLP allennlp implementation.
Appendix H Table of Human/AI Results
In this section, we show Human/AI results in tabular form, as well as the performance of these models when the number of guesses or drawings is unlimited, and our results from the Ind human/AI experiment.
Table 1 shows results for the Guessers, and Table 2 shows results for the Drawers from Figure 4. The AI players show more improvement than human players when allowed to make more than 20 guesses or 4 drawings, but, as stated, this is primarily because human players almost always time out before reaching that point.
Table 3 shows results for the Guessers, and Table 4 shows results for the Drawers on our Ind phrases. Note that human performance for these tables is derived from data in the Ind test and dev sets, which used different annotators than the Ood games and our other human/AI experiments, and is therefore not directly comparable. Nevertheless, it is clear that TGuesser outperforms humans on these phrases, with a win rate close to 100%, showing that the primary challenge for the Guesser is handling unseen words. TGuesser-IND does slightly better, which is not surprising since it was optimized for Ind performance.
TDrawer is only slightly behind humans on the Ind phrases, and the Transformer Drawer is comparable to humans. This strong performance is most likely due to the fact that models can memorize drawing strategies for different words in the training data, and recompose them for new phrases that reuse those words. It is likely the Transformer Drawer is better able to do this because it was trained on the training data for longer, and the data augmentation strategy in Appendix I.3 further guided it towards this approach.
Appendix I Transformer Models
In this section, we describe our Transformer baselines, which use GloVe glove word embeddings but are otherwise trained from scratch on our training data. Both models use a data augmentation strategy that leverages an icon-to-word mapping derived from the training data. Both models use 300-dimensional embeddings and 128-dimensional hidden layers, and all hyperparameters were tuned on the Ind dev set.
The Transformer Drawer works by encoding the game state and then decoding a drawing in a similar format to TDrawer. For this model, the last two drawings are converted into the same special tokens used as the output for TDrawer, which are then embedded with learned embeddings. The game phrase, and the previous guess made by the Guesser if there is one, are also embedded with GloVe word vectors glove. These elements are concatenated into a sequence and encoded using learned positional embeddings and a 3-layer transformer vaswani2017attention. The decoder is another transformer that cross-attends to the encoded input while generating the output drawing. The network is optimized with Adam for 30 epochs.
Unlike TDrawer, the icon ordering for the input and target output is determined by the word-to-icon mapping described in Section I.3; in particular, icons are ordered first by the words they correspond to, and then by the order in which they were drawn. As a result, we are not able to show a comparable perplexity number to TDrawer in Table 3.
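The encoder input described above might be assembled as in the sketch below; the separator token names (`<draw>`, `<phrase>`, `<guess>`) are hypothetical stand-ins for whatever special tokens the model actually uses:

```python
def build_drawer_input(prev_drawings, phrase, prev_guess):
    """Assemble the Transformer Drawer encoder input: special tokens for the
    last two drawings, then the phrase words, then the previous guess (if any).
    Each drawing is assumed to already be quantized into a token list."""
    seq = []
    for drawing in prev_drawings[-2:]:   # only the last two drawings are kept
        seq.append("<draw>")
        seq.extend(drawing)
    seq.append("<phrase>")
    seq.extend(phrase.split())
    if prev_guess:
        seq.append("<guess>")
        seq.extend(prev_guess.split())
    return seq
```

The resulting token sequence would then be embedded (learned embeddings for drawing tokens, GloVe for words) and fed to the 3-layer encoder.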
The Transformer Guesser is also a conditional generation model. The current drawing, and previous drawing if it exists, are embedded as a sequence using the same quantized format as before. A single transformer then encodes these drawings.
The decoder is a transformer that cross-attends to the encoded drawings. Slots in the game phrase that occur after the token currently being generated are filled with the embeddings of the previous guess (or underscores and stopwords if no such guess exists), and we allow the self-attention layer to attend to these future slots. We use a two-layer multi-layer perceptron with 256 hidden states and ReLU activations to predict the output word.
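Filling the future phrase slots might look like the following sketch (the underscore placeholder follows the text; the helper name and interface are ours):

```python
def decoder_inputs(generated, prev_guess, n_slots, placeholder="_"):
    """Inputs for the Guesser decoder at one step: already-generated words
    occupy their slots, and the remaining future slots are filled from the
    previous guess (or placeholders when there is no previous guess), so
    self-attention can look ahead at them."""
    prev = prev_guess if prev_guess else [placeholder] * n_slots
    return generated + prev[len(generated):n_slots]
```

This gives the decoder a view of what the Guesser tried last round while it revises individual words.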
We again constrain the model to make sure it generates the right number of words, and any known words, during beam search, and we select the highest-probability beam that did not produce a word known to be incorrect from previous guesses as the output. This model was trained using Adam adam with one learning rate for ten epochs, and then with a second learning rate for an additional five epochs.
I.3 Data Augmentation
We use data augmentation to boost the performance of both these models (this method did not benefit TGuesser or TDrawer). First, we derive an icon-to-word mapping from the training data using icon/word co-occurrences, by learning icon/word embeddings that are similar for drawings and game phrases found in our data, but dissimilar for drawings paired with random game phrases. Then, for each game, we match icons in drawings for that game to the words in the game phrase that best align with those icons. Finally, we build a pseudo-example by removing some words or constituents from the game phrase and removing the corresponding icons from the drawings. These examples are used as additional training data and are intended to help the models internalize the icon-to-word co-occurrences that occur in the training data.
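The final pseudo-example construction step can be sketched as below, assuming the per-game word-to-icon alignment has already been computed; the random word-dropping policy is an assumption (the paper also mentions removing whole constituents):

```python
import random

def make_pseudo_example(phrase_words, icon_alignment, drop_frac=0.5, seed=0):
    """Build an augmented training example by deleting a random subset of
    phrase words and the icons aligned to them.

    icon_alignment maps each phrase word to the icons matched to it; words
    with no aligned icons simply contribute no icons."""
    rng = random.Random(seed)
    keep = [w for w in phrase_words if rng.random() > drop_frac]
    if not keep:                      # never produce an empty phrase
        keep = phrase_words[:1]
    icons = [ic for w in keep for ic in icon_alignment.get(w, [])]
    return keep, icons
```

Each pseudo-example pairs the shortened phrase with the correspondingly shortened drawing, reinforcing the learned icon/word correspondences.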