Iconary: A Pictionary-Based Game for Testing Multimodal Communication with Drawings and Text

Communicating with humans is challenging for AIs because it requires a shared understanding of the world, complex semantics (e.g., metaphors or analogies), and at times multi-modal gestures (e.g., pointing with a finger, or an arrow in a diagram). We investigate these challenges in the context of Iconary, a collaborative game of drawing and guessing based on Pictionary, which poses a novel challenge for the research community. In Iconary, a Guesser tries to identify a phrase that a Drawer is drawing by composing icons, and the Drawer iteratively revises the drawing to help the Guesser in response. This back-and-forth often uses canonical scenes, visual metaphor, or icon compositions to express challenging words, making it an ideal test for mixing language and visual/symbolic communication in AI. We propose models to play Iconary and train them on over 55,000 games between human players. Our models are skillful players and are able to employ world knowledge in language models to play with words unseen during training. Elite human players outperform our models, particularly at the drawing task, leaving an important gap for future research to address. We release our dataset, code, and evaluation setup as a challenge to the community at http://www.github.com/allenai/iconary.


1 Introduction

Figure 1: Examples of gameplay between human players and our models. Snapshots show the progression (left to right) of two games, with the human player guessing in the top row and drawing in the bottom. Guesses in each round are shown beneath the drawing for that round, and are color-coded (cyan=correctly, magenta=incorrectly guessed word). The first game shows TDrawer drawing ‘origami’ with a sushi icon (presumably to indicate Japan), a turning icon and finally a paper icon once the human has guessed ‘folds’. The second game shows TGuesser correctly guessing ‘apprentice’ by interpreting the icons for baby, adult and knife. The words ‘origami’ and ‘apprentice’ do not appear in the training data for either model. See the appendix for more qualitative results.

Communicating with humans is a long-standing goal in AI, and has been studied in the context of natural language for decades. Many of the key challenges in this task, such as using a shared understanding of the world, commonsense reasoning, and metaphor are, however, not language-specific, but are instead general-purpose tools that humans use when communicating through other modalities as well. For example, understanding what a party-hat emoji means in a text conversation requires grasping metaphor (it is unlikely to be literally suggesting one should put on a party hat), and understanding a road sign showing a swerving truck requires commonsense reasoning (the intent is to warn of slippery conditions, not to suggest drivers ought to begin swerving themselves). Humans can easily adapt to these different modalities, as well as use visual/symbolic tools (e.g., pointing with a finger, or an arrow in a diagram) that cannot be used in a text-only context. To build and test AIs for this skill, we introduce Iconary, a game of drawing and guessing based on Pictionary, along with the first task and large-scale dataset for multimodal communication: we collect games between human players, propose automatic and online game-playing metrics, and construct proficient Iconary AIs.

In Iconary, one player (the Drawer) draws an image for a phrase by arranging icons on a canvas (icons can be rotated, resized, and flipped), and a second player (the Guesser) makes a series of guesses at what phrase the drawing represents, using only the drawing. We use icons so we can focus on the high-level semantics of the drawings, and to make the game easier to play online. If the Guesser is unsuccessful, the Drawer can revise the drawing, and the cycle repeats until time runs out or the Guesser succeeds. Figure 1 shows an example Iconary game played between a human player and our AI player.

Iconary combines several key comprehension challenges. First, non-literal imagery: most words in our dataset do not have directly corresponding icons, so players often use visual metaphor (e.g., a school bus and book for ‘textbook’) or reference canonical examples (e.g., a lit and an unlit light for ‘turning off’) to convey words. Second, visual similarity: icons can also be composed to draw objects, such as using concentric circles to draw a dartboard. Third, annotations: Drawers often use arrows, circles, or crosses to indicate motion or to guide the interpretation of the image. Fourth, state tracking: players need to remember what drawings/guesses have already been made (e.g., Drawers will often re-draw/augment scenes they could tell confused the Guesser, or use annotations to guide the Guesser’s attention towards missed elements). Fifth, world knowledge: models are tested on words not seen during training.

We present a large dataset for Iconary built by having human players play with each other: a training set of 56k games, in-domain (Ind) dev and test sets with roughly 5k games each, and out-of-domain (Ood) dev and test sets with 1k and 3k games respectively that contain words not seen during training.

Our proposed models, TDrawer and TGuesser, leverage the world knowledge in the pre-trained T5 language model [t5] and have been carefully adapted to draw and guess words not observed during training. We measure performance using automated metrics, but our main results come from having our AIs play games with human players. TDrawer and TGuesser perform remarkably well on the Ind sets (68.3% and 96.0% win rates), but are also able to play impressively with human players on the Ood sets (41.7% and 62.9% win rates), demonstrating their ability to extract and integrate world knowledge from language models for gameplay words never seen in training. Figure 1 shows some interesting games played by our models with human partners on the Ood set.

While our models are capable players, skilled human players outperform them on the Ood sets (a smaller margin of 4.6% at guessing but a sizeable margin of 21.0% at drawing). An error analysis shows that most errors occur for unseen words, particularly verbs, compound words, and examples with complex drawings, such as those requiring fine-grained positional information. Our quantitative and qualitative analysis suggests ample room for future research in this new, rich and complex domain.

2 The Iconary Game and Dataset

Figure 2: Examples of different drawing strategies found in our dataset. The proportion of games that use these methods in a sample from the Ind and Ood dev sets are shown on the top right of each panel.

2.1 Playing Iconary

Iconary is played using a web user interface (UI). First, the Drawer is shown a short phrase and creates a drawing by selecting icons from a library and arranging them on a canvas. We include 1,205 icons from the Noun Project (https://thenounproject.com) that were chosen to cover a variety of common entities that would be difficult to draw using other icons. Icons can be resized, rotated, and flipped as desired. Once finished, the Drawer passes the turn to the Guesser.

The Guesser is shown the drawing and the phrase with the non-stop words replaced by blanks, and submits a series of guesses to the UI which indicates which words were correct after each guess to allow incremental progress. If the Guesser gives up, control is passed back to the Drawer who can modify their drawing in response to the guesses made so far. This cycle repeats until the phrase is guessed or a 4-minute timeout is reached. The game UI is provided in the appendix.

2.2 Phrases

magnets doorway honking
swerving nun floss
roasting skidding beverages
dreaming dormitory librarian
charcoal cornfield piloting
rioter stationary winery
bookmarks sampling fireworks
lumber photocopy shipping
unwrapping freezer recycling
motorcyclist tidying waiter
receptionist pharmacist stylus
skewers enchilada graduating
diet guitarist lunchroom
cufflinks padlocks soaking
diploma gunpowder completing
Table 1: A random sample of 45 OoV words present in the Ood dev set; words like ‘graduating’ or ‘bookmarks’ require creativity to draw with icons.

We collect phrases from two sources (see the appendix for more details). First, we have crowdworkers turn image summaries from imSitu [imsitu] into short phrases. These summaries are derived from FrameNet [framenet] and consist of an action with the addition of one or more agents (e.g., people, animals), places (e.g., park, office), or artifacts (e.g., computer, car) filling a variety of verb-specific roles. We base our phrases on these summaries since they contain words that can be depicted visually, i.e., they avoid abstract words like “believing” or “determination” that would be difficult to draw. We collect 41k phrases with 250 unique verbs, 2k other non-stop words, and an average length of 5.4 words.

Second, we build out-of-domain (Ood) test phrases that have out-of-vocabulary (OoV) words. To avoid shrinking the vocabulary of our training data, we build these phrases by having in-house annotators modify phrases in the Ind test set rather than holding out phrases with particular words from the imSitu phrases. First, we collect a list of candidate OoV words by gathering unused words from imSitu and a few other sources, and then manually filtering out words that could not plausibly be drawn. The new OoV words are complex and diverse; see Table 1 for a random sample. Second, annotators were given a test phrase and asked to write a new phrase that used one of the new words and at least one of the non-stop words from the original phrase, and otherwise preserved as much of the original phrase as possible. We build 2.8k new Ood phrases with 1.3k new words. Examples of drawings with these words can be found in the appendix.

The imSitu phrases are divided into train, dev, and test sets. Additional filtering was done on dev and test to remove ambiguous words, unusual descriptions, and grammatical errors (removing about 15%). The Ood phrases were divided into dev and test sets; see Table 2 for statistics.

2.3 Collecting Iconary Games

We gather Iconary games for these phrases by pairing crowdworkers together to play on our UI. Over 900 players played almost 60,000 games (we allowed multiple games to be played for a phrase). Workers qualify by winning a game with another player, and we disqualify workers that have very low win rates during data collection. We also heuristically filter out poor-quality games, such as games with no guesses. Since the Ood games are our main target, we additionally ensure their quality by filtering out games with players who had played fewer than 15 practice games, or that included one of a small number of players whose win rates were far below average.

Table 2 shows statistics for our 5 datasets. Humans have a high success rate for the Ind sets. The Ood phrases prove more challenging, likely because they often use more advanced words that require more skill to draw and guess.

Dataset Games Phrases Win (%) Off-by-One (%)
Train 56k 34k 71.1 83.9
Ind Valid 5.1k 3.1k 75.1 87.5
Ind Test 4.7k 2.9k 76.8 88.3
Ood Valid 1.0k 0.8k 54.4 75.8
Ood Test 3.0k 2.3k 54.1 75.5
Table 2: Dataset statistics. Off-by-one means the Guesser was within one word of the target phrase.

2.4 Analysis

To better understand our dataset, we perform two analyses. First, we manually label occurrences of six non-exclusive drawing strategies in a sample of 200 games from the Ind and Ood dev sets. The results are shown in Figure 2. We observe that most games use complex strategies to represent the phrase, such as composing multiple icons to represent nouns, drawing small scenes for verbs, using annotations, or creatively re-purposing icons. The Ood dataset tends to include less common nouns and verbs, and Drawers adapt to this by using more complex strategies for those phrases.

Second, we study how Drawers revise their drawings when the Guesser is unsuccessful. We label drawing revisions as one of edit (re-arranging, removing, or re-sizing icons, or adding arrows or other annotations), add (adding new icons to offer alternative visualizations or to hint at connections the Drawer missed), or redraw (deleting and redrawing parts of a scene that confused the Guesser). We make these labels exclusive by placing games into the latter-most category that applies across all drawing revisions in a game.

The results, and statistics for the use of multiple drawings, are shown in Table 3. We see that Drawers generally use a balanced mix of our identified strategies and that the more challenging Ood games tend to have more drawings.

Split   Rounds >=2   Rounds >=3   Rounds >=4   Edit   Add   Redraw
Ind 33.3 9.4 1.9 31.5 45.0 23.5
Ood 65.6 23.8 4.5 25.5 38.5 36.0
Table 3: Statistics for multi-drawing games in the Ind and Ood dev sets. The left three numeric columns show the percent of games with different numbers of drawings, the right three show the usage of different re-drawing strategies.

3 Models

We propose TGuesser and TDrawer to play Iconary. Both models condition on the current game state, meaning the previous drawings, guesses and, for TDrawer, the game phrase, and then generate either text to guess the phrase (for TGuesser), or a sequence of special tokens that encode a drawing (for TDrawer).

Although this involves a visual modality, we propose to use language models for this task because (1) the icon names can be used to understand the drawing and (2) Iconary often requires world knowledge (e.g., mapping person and thumb icons to ‘hitchhiking’ or milk and ice cream icons to ‘milkshake’) that is known to be captured by these models [roberts-etal-2020-much]. To do this, we encode the game state as text and apply the T5 language model [t5] by treating the task as a text-to-text conditional generation task. Interestingly, we find vision-and-language (V+L) models [lxmert; uniter] to be less effective, which might be because current V+L models have inferior language-related abilities [iki2021effect], or because models trained on photographic images are not well-suited to understanding the non-literal imagery found in Iconary.

3.1 Guesser

Figure 3: Game state encoding for our models. For each encoding method, the upper text is the input and the lower text is the target output.

To encode the game state for the Guesser, we first construct a text description of the most recent drawing. A description of each icon is built from the icon name, possibly one of the prefixes ‘huge’, ‘large’, ‘small’, or ‘tiny’ based on the icon’s size relative to the other icons, the prefix ‘rotated’ if the icon is rotated, and the prefix ‘flipped’ if the icon is reflected. We handle straight arrows as a special case by encoding them as ‘[left/right/up/down] arrow’ depending on their orientation. The text description is then a list of these icons sorted from left to right. To keep the result compact for complex scenes, such as a forest drawn with many tree icons, if multiple icons have the same text description we only produce that description once and add a number prefix to show the count. We use this simplified encoding scheme because preliminary experiments found that encoding positional information more precisely, or encoding earlier drawings if they exist, did not improve performance when using T5.
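Below is a minimal sketch of this drawing-to-text step, assuming the drawing is given as a list of icon dictionaries with name, position, scale, rotation, and flip attributes; the size thresholds, the orientation-to-direction mapping for arrows, and the separator characters are illustrative assumptions rather than our exact implementation.

```python
from collections import OrderedDict

def describe_icon(icon, median_scale):
    """Text description of one placed icon (thresholds here are illustrative)."""
    parts = []
    ratio = icon["scale"] / median_scale
    if ratio > 3.0:
        parts.append("huge")
    elif ratio > 1.5:
        parts.append("large")
    elif ratio < 0.33:
        parts.append("tiny")
    elif ratio < 0.67:
        parts.append("small")
    if icon.get("flipped"):
        parts.append("flipped")
    if icon["name"] == "arrow":
        # Straight arrows are special-cased by orientation (assumes 0 degrees = right).
        directions = ["right", "down", "left", "up"]
        parts.append(f"{directions[int(round(icon['rotation'] / 90)) % 4]} arrow")
    else:
        if icon["rotation"] % 360 != 0:
            parts.append("rotated")
        parts.append(icon["name"])
    return " ".join(parts)

def drawing_to_text(icons):
    """List the icons from left to right, merging identical descriptions with a count."""
    scales = sorted(icon["scale"] for icon in icons)
    median_scale = scales[len(scales) // 2]
    descriptions = [describe_icon(icon, median_scale)
                    for icon in sorted(icons, key=lambda icon: icon["x"])]
    counts = OrderedDict()
    for d in descriptions:
        counts[d] = counts.get(d, 0) + 1
    return ", ".join(d if c == 1 else f"{c} {d}" for d, c in counts.items())
```

The resulting string (e.g., ‘2 tree, large person, right arrow’) is what the phrase encoding described next is appended to.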

Next, we append the text ‘phrase:’ and, for each word in the target phrase, either an underscore or the correct word if it is known (see Figure 3, top). We experimented with encoding previous incorrect guesses but found it unnecessary as long as models are prevented from repeating those guesses during generation.

The target output is the game phrase. During generation, we constrain models to ensure the output contains the right number of words, includes words that are known to be correct from previous guesses, and excludes words that are known to be incorrect. This is non-trivial for wordpiece models; we leave the details to the appendix.

3.2 Handling OOV Words

We observe that naively trained models often generate words seen in the training data even when they do not match the drawing. To combat this, we propose several extensions to TGuesser:

Rare Word Boosting: Based on a method from controlled language generation [ma-etal-2020-powertransformer; ghosh-etal-2017-affect], we boost the logit scores of wordpieces not seen during training. In particular, we add a fixed value (chosen as a hyperparameter) to the log-probabilities of those wordpieces and then re-apply the softmax operator to get updated wordpiece probabilities during generation.
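A minimal sketch of this boosting step; the boost value shown is only a placeholder for the tuned hyperparameter.

```python
import torch

def boost_unseen_wordpieces(logits, unseen_ids, boost=2.0):
    """Shift probability mass toward wordpieces that never appear in the training games.

    logits: [vocab_size] decoder scores for a single generation step.
    unseen_ids: tensor of wordpiece ids absent from the training phrases.
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    log_probs[unseen_ids] += boost               # fixed additive boost in log space
    return torch.log_softmax(log_probs, dim=-1)  # re-normalize into a distribution
```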

Fill-in-the-Blank Encoding: Following the T5 pre-training format [t5], we encode the phrase using ‘extra_id’ tokens for sequences of unknown words instead of underscores and train the model to predict only the text that ought to replace those tokens. Figure 3 contains an example. We expect this to better enable the model to leverage pre-trained knowledge of unseen words, and it does provide improvements (see Table 6).
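As a rough illustration of the two phrase encodings (the drawing text, phrase, and spacing here are invented for this example; the exact serialization is shown in Figure 3):

```python
# Underscore encoding: the target output is the full game phrase.
underscore_input = "person, large dog, right arrow phrase: _ _ a dog"
underscore_target = "person walking a dog"

# Fill-in-the-blank encoding: each span of unknown words becomes a T5 sentinel
# token, and only the missing text is predicted, matching T5's pre-training format.
fitb_input = "person, large dog, right arrow phrase: <extra_id_0> a dog"
fitb_target = "<extra_id_0> person walking"
```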

Early Stopping: We find training for only one epoch is beneficial on the Ood sets, possibly because further training causes the model to forget words that were learned during pre-training but are still needed in the Ood test sets, i.e., catastrophic forgetting [french1999catastrophic].

Embed Freezing: The word-piece embeddings are frozen to help ensure the model can effectively use wordpieces that were not in the training data.

3.3 Drawer

The Drawer’s input is the game phrase, marked with asterisks to show which words have already been guessed. The output encodes each icon with six special tokens drawn from a set of new tokens added to T5’s vocabulary and initialized with random embeddings: one indicating the icon name, and five indicating the quantized x coordinate, y coordinate, scale, rotation, and reflection (quantized into 32, 16, 11, 8, and 2 buckets respectively); a sketch of this serialization is given at the end of this subsection. The full output is a sequence of such icons (see Figure 3). Icons are generated in the order used by the human player (we experimented with other orderings and found them to be less or equally effective), and we mask the output logits to ensure a valid drawing is produced during generation. We propose two additions to help models adapt to this output format:

Special Token Initialization: Icon tokens are initialized by averaging the embeddings of the wordpieces of their names, and quantized tokens are initialized with the embedding of numbers (the first x-coordinate special token is initialized with the embedding for ‘1’, the second for ‘2’, etc.). This gives the model some prior knowledge of what the icons are, and a sense of ordering among the quantized tokens wallace-etal-2019-nlp.

Constrained Training: The output masking used during generation is applied during training so the model does not need to learn the output format.
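Below is a minimal sketch covering both the icon serialization described at the start of this subsection and the special-token initialization above. The token naming scheme, the assumption that icon attributes are normalized to [0, 1), and the Hugging Face calls are illustrative choices rather than our exact implementation.

```python
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

N_BUCKETS = {"x": 32, "y": 16, "scale": 11, "rot": 8, "flip": 2}

def quantize(value, n):
    """Map a value normalized to [0, 1) onto one of n buckets."""
    return min(int(value * n), n - 1)

def icon_to_tokens(icon):
    """Serialize one icon as six special tokens: name plus five quantized attributes."""
    return [
        f"<icon_{icon['name'].replace(' ', '_')}>",
        f"<x_{quantize(icon['x'], N_BUCKETS['x'])}>",
        f"<y_{quantize(icon['y'], N_BUCKETS['y'])}>",
        f"<scale_{quantize(icon['scale'], N_BUCKETS['scale'])}>",
        f"<rot_{quantize(icon['rotation'], N_BUCKETS['rot'])}>",
        f"<flip_{int(icon['flipped'])}>",
    ]

def add_and_initialize_tokens(model, tokenizer, icon_names):
    """Add the special tokens to T5's vocabulary and initialize their embeddings."""
    icon_tokens = [f"<icon_{n.replace(' ', '_')}>" for n in icon_names]
    quant_tokens = [f"<{attr}_{i}>" for attr, n in N_BUCKETS.items() for i in range(n)]
    tokenizer.add_tokens(icon_tokens + quant_tokens)
    model.resize_token_embeddings(len(tokenizer))
    emb = model.get_input_embeddings().weight
    with torch.no_grad():
        # Icon tokens: average of the wordpiece embeddings of the icon's name.
        for name, token in zip(icon_names, icon_tokens):
            piece_ids = tokenizer(name, add_special_tokens=False).input_ids
            emb[tokenizer.convert_tokens_to_ids(token)] = emb[piece_ids].mean(dim=0)
        # Quantized tokens: embedding of the corresponding number ('1', '2', ...),
        # giving the model a rough sense of ordering among the buckets.
        for attr, n in N_BUCKETS.items():
            for i in range(n):
                num_ids = tokenizer(str(i + 1), add_special_tokens=False).input_ids
                emb[tokenizer.convert_tokens_to_ids(f"<{attr}_{i}>")] = emb[num_ids].mean(dim=0)

tokenizer = T5TokenizerFast.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")
add_and_initialize_tokens(model, tokenizer, icon_names=["sushi", "paper", "light bulb"])
```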

4 Experimental Setup

In this section, we specify our metrics and baselines. We use T5-3B for TGuesser, but T5-Large for TDrawer since it generates longer sequences and therefore uses more memory. Other hyperparameters and training details are in the appendix.

4.1 Human/AI Metrics

The best test of Iconary models is playing with human players. When playing with human players, AI Guessers make up to 5 guesses per drawing since that is typical for human Guessers. To ensure diverse drawings from AI Drawers, we sample a drawing from the model’s conditional distribution instead of using beam search if beam search yields a drawing with the same icons as a previous drawing (if the sample is still similar to a previous drawing, we use it anyway). Human players use the same UI and are not told whether they are playing a human or an AI.

Evaluation is complicated by the fact that AIs can make more guesses/drawings than human players since they play faster. To control for this, we measure performance after a fixed number of guesses (for Guessers) and a fixed number of drawings (for Drawers). We measure the Win Rate, meaning whether the Guesser correctly guesses the game phrase. We also measure the Soft Win Rate: the Guesser must guess the exact phrase for phrases of length 2 or less, miss at most one word for phrases of length 3-5, and miss at most two words for phrases with 6 or more words. For Ood games, the game is only considered a soft win if at least one of the unseen words is guessed, since that is the focus of our evaluation (denoted as Soft Win in tables).
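A minimal sketch of the win and soft win computations, assuming the set of correctly guessed words is already known; how words are tokenized and compared is an assumption of this sketch.

```python
def win(phrase_words, guessed_words):
    """Strict win: every word in the game phrase has been guessed."""
    return all(word in guessed_words for word in phrase_words)

def soft_win(phrase_words, guessed_words, oov_words=()):
    """Soft win with a length-dependent allowance for missed words.

    For Ood games, at least one of the unseen (OoV) words must also be guessed.
    """
    n = len(phrase_words)
    allowed_misses = 0 if n <= 2 else 1 if n <= 5 else 2
    misses = sum(word not in guessed_words for word in phrase_words)
    ok = misses <= allowed_misses
    if oov_words:
        ok = ok and any(word in guessed_words for word in oov_words)
    return ok
```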

We do not do AI/AI evaluations since we find AI players can often win with drawings that would not be understandable to human players.

4.2 Automatic Evaluation Metrics

Gathering human/AI games is challenging since it requires human players with experience playing Iconary. To facilitate automatic evaluation, we propose two metrics for both the Guesser and Drawer that can be computed using human/human games.

Win: Whether the Guesser can win from game states in human/human games. The Guesser generates five guesses for each drawing in a game where it is allowed to see the previous drawings, previous guesses made for those drawings by the human player, and its own previous guesses. Any word the model generates that does not appear in guesses for previous drawings is considered guessed. The game is won if all words are guessed. Note this is a pessimistic metric because models do not get second chances to guess words after they are identified by the human Guesser, but we expect it to be a reasonable proxy for success in human/AI games.

Soft Win: As above, except we evaluate the Guesser’s guessed words on the same soft win metric we use for human/AI games.

Icon F1: Treating drawings as bags of icons, we measure the F1 overlap score between human and computer drawings. We only use the initial drawings for each phrase, and we take the maximum F1 over all human drawings if there are multiple human games for a phrase.

Drawing Perplexity: For models that use the same method of encoding the drawing, we compare the perplexity of each human drawing, averaged over all drawings per game, then averaged over all games in the corpus.
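A minimal sketch of the Icon F1 metric above; treating drawings as multisets of icon names (so duplicates count with multiplicity) is an assumption of this sketch.

```python
from collections import Counter

def icon_f1(model_icons, human_icons):
    """F1 overlap between the bags of icon names in a model and a human drawing."""
    pred, gold = Counter(model_icons), Counter(human_icons)
    overlap = sum((pred & gold).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(gold.values())
    return 2 * precision * recall / (precision + recall)

def phrase_icon_f1(model_icons, human_drawings):
    """Score a model's initial drawing against the best-matching human drawing."""
    return max(icon_f1(model_icons, icons) for icons in human_drawings)
```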

4.3 Baselines

We use the following baselines:

TGuesser-Large/TDrawer-Base: Identical models but with smaller versions of T5.

BART Guesser/BART Drawer: Identical models built on the BART language model [bart]. For BART Guesser, we adapt the fill-in-the-blank encoding scheme to generate a copy of the input with the mask tokens replaced, instead of only generating the masked-out tokens, to match BART’s pre-training format.

Transformer Guesser/Transformer Drawer: We train a transformer-based model vaswani2017attention on this task that does not use a pre-trained language model. This model also encodes the drawings as a sequence of special tokens during both decoding and encoding, in which case we find it important to apply a data-augmentation strategy to help the model learn mappings between icons and words they might be used for. See the appendix for details.

TGuesser-IND: TGuesser without the Ood adaptations specified in Section 3.2.

5 Results

5.1 Human/AI Results

Figure 4: Win rates of our models (TGuesser on the left and TDrawer on the right) when playing Iconary with human players on phrases from the Ood test set, as more guesses or drawings are used. Graphs with dashed lines show the soft win rate.
Model   Ind Win   Ind Soft   Ood Win   Ood Soft
TGuesser 84.25 97.62 37.39 44.06
TGuesser-IND 85.91 98.55 22.67 27.24
TGuesser-Large 79.34 97.09 33.30 40.61
BART Guesser 78.84 96.69 27.07 34.48
Transformer 79.89 93.64 0.00 0.00
Table 4: Automatic evaluation metrics on the test sets for TGuesser and our baselines.
Model   Ind Icon F1   Ind Per.   Ood Icon F1   Ood Per.
TDrawer 58.04 3.84 40.34 4.89
TDrawer-Base 58.06 3.95 39.18 5.05
BART Drawer 55.07 3.67 36.64 4.67
Transformer 58.19 - 35.78 -
Table 5: Automatic evaluation metrics on the test sets for TDrawer and our baselines.

Our models and two baselines played 300 games of Iconary with the same crowdworkers used to build our dataset. We evaluate performance on win rate and soft win rate (see Section 4.1). We compare against human/human games, and against games with elite human players where either the Guesser (if comparing against an AI Guesser) or Drawer (if comparing against an AI Drawer) is a human player in the top quartile of win rates in human/human games. We ran experiments on all four models simultaneously, assigning workers to models randomly, and using the same set of 300 phrases randomly selected from the Ood test set for each model.

Results are shown in Figure 4 (see the appendix for tables). We cut off games at 20 guesses for Guessers and 4 drawings for Drawers, since that is the most human players can typically accomplish in a game (<1% of human/human games are longer). At 20 guesses TGuesser has a win rate of 62.9%, which impressively outperforms the average human player by 9 points, but is still 5 points behind elite human players. The gap is larger when using the soft win metric, primarily because that metric requires guessing the OoV word, which is unsurprisingly more challenging. There is a large gap between TGuesser and TGuesser-IND, showing our OoV improvements were critical for success.

Drawing is more challenging than guessing. At 4 drawings TDrawer wins 41.7% of games, which is significant given the need to draw OoV words. It also outperforms the Transformer baseline suggesting that using T5 did help for OoV words. Human players, particularly elite players, perform much better, indicating a sizeable opportunity for future research.

We run the same experiment on 300 Ind test phrases using the same pool of annotators, details are in the appendix. We find our models do much better, TGuesser has a win rate of 96.0% and TDrawer has a win rate of 68.3% at 20 guesses and 4 drawings. Human teams on our Ind test and dev sets get 75.9% for both drawing and guessing. These numbers are not directly comparable since our human/human games used different annotators, but they still make it clear TGuesser is better than human players, and TDrawer is more comparable to human players, on the Ind phrases.

5.2 Error Analysis

We manually annotate 100 unsuccessful games for both TDrawer and TGuesser (qualitative examples are in the appendix). For TGuesser, we find 35% of errors were on relatively simple scenes where the model guessed related words but missed the key association. Other errors occur with scenes that used visual similarity (15%), relied on fine-grained positional information (13%), had compound words drawn one part at a time (8%), and other complex scenes (17%). Only 3% of cases did not involve the OoV words, and 8% were clearly deficient drawings.

We find TDrawer fails to draw anything for OoV words in 32% of cases, particularly for verbs, possibly because it has learned some verbs do not need cues beyond the related nouns (e.g., ‘driving’ in ‘person driving a car’). Half the time it draws something related to the OoV words, but that is not sufficient for it to be identified (e.g., ‘money’ for hiring, but without anything to distinguish it from ‘buy’ or ‘sell’). Only 12% of unsuccessful games had non-OoV word drawing errors, and 6% were reasonable drawings.

5.3 Automatic Evaluation Metrics Results

We also evaluate our models with automatic metrics on the test sets. Table 4 shows the Guesser results. We find that using T5-3B (compared to T5-Large) is quite important. Also, consistent with our human/AI results, the Ood optimizations result in a full 15 point gain in performance. The Transformer baseline falls behind the Ind-optimized model, and behind both models on the soft win metric. Its performance is still reasonable, likely because the large training set provides enough examples of humans drawing for it to memorize common drawing strategies for the Ind words. However, the model is unable to learn to predict Ood words (applying OoV boosting to this model only resulted in incoherent output).

Table 5 shows the Drawer results. We find TDrawer benefits somewhat from using a larger language model, and that the Transformer baseline is again effective on Ind data but poor on Ood data. BART Drawer shows better perplexity but significantly worse icon overlap.

5.4 Ablations

Model   Ind Win   Ind Soft   Ood Win   Ood Soft
TGuesser-Large 78.96 95.92 32.00 39.28
TGuesser-Base 70.72 93.03 26.36 34.05
3 Epochs 82.05 96.61 29.85 34.97
No Boost 83.14 96.67 26.05 29.64
No Fill-in-the-Blank 82.17 96.89 29.64 34.46
No Modifiers 76.77 95.06 31.49 38.67
Names Only 74.45 94.48 29.74 35.90
Table 6: Guesser ablations on the dev sets. Ablations use T5-Base instead of T5-Large, train for 3 epochs instead of 1, remove OoV boosting, remove fill-in-the-blank encoding, remove modifiers like large/small/rotated from icon names, or use icon names alone in a randomized order to encode the drawing.

We ablate our design choices in more detail using automatic metrics on the dev sets. Table 6 shows the Guesser ablations; we use TGuesser-Large to reduce computational expense. Our improvements are impactful, with up to 10 points gained through OoV boosting. Icon modifiers help Ind but not Ood, which suggests the model struggles to make use of modifiers for unseen words; however, just treating the drawing as a set of icon names clearly harms performance. Fill-in-the-blank encoding is also impactful, suggesting an encoding scheme similar to the pre-training one is effective for Ood generalization. Unsurprisingly, many of these optimizations reduce Ind performance because they increase the usage of OoV words, which never appear in the Ind dev sets. Table 7 shows the Drawer ablations. Our initialization strategy proves to be critical, which suggests it is what allows TDrawer to leverage the T5 parameter initialization even though it does not output natural language. We also get a modest boost from training with the formatting constraints.

Model   Ind Icon F1   Ind Per.   Ood Icon F1   Ood Per.
TDrawer 57.37 3.89 39.96 4.83
TDrawer-Base 57.46 4.01 39.01 4.98
No Icon Init 47.33 4.77 31.81 5.96
No Num. Init 57.04 4.09 38.58 5.05
No Icon/Num. Init 44.85 4.84 28.49 5.99
No Train Const. 56.06 4.12 39.17 5.20
Table 7: Drawer ablations on the dev sets. Ablations use T5-Base instead of T5-Large, remove icon, quantized token, or both initializations, or remove training-time formatting constraints (see Section 3.3).

6 Related Work

There is a long history of using games as a testbed for AI. Traditionally these have been adversarial strategy games like Chess [Silver2018AGR], Go [Silver2016MasteringTG], and many others [Moravck2017DeepStackEA; Vinyals2017StarCraftIA; mnih2013playing]. A few cooperative games have been studied, like Codenames [kim2019cooperation] or Hanabi [walton2019], which are similar to Iconary in that they require players to communicate in order to achieve a shared goal. However, those games severely limit the means of communication, whereas Iconary allows a rich variety of communication strategies through the use of drawings, and contains language beyond single words. Pictionary-style guessing with freehand drawings has been explored in [sarvadevabhatla2018pictionary; sarvadevabhatla2018game], although they only consider a single-word, single-round setting.

Relating text to visual imagery has also been studied in many forms [vqa; nlvr]. Generating text that describes visual input, as done in Iconary, has been studied in visual dialog [visual_dailog], image captioning [chen2015microsoft; flickr], and video description [aafaq2019video]. Training models to produce images from text has been studied for captions [Cho2020XLXMERTPC], image specifications [reed2016learning], and dialogue [sharma2018chatpainter]. Unlike in these works, the drawings in Iconary are not photographic and are constructed to communicate a phrase. As a result, they can be non-literal and deictic, which makes understanding them a significantly different challenge.

Using a pre-trained language model to understand mixed language and visual input has been considered by marasovic-etal-2020-natural, who use features produced by object detectors or other visual understanding systems as input to GPT-2 radford2019language to generate natural language rationales. scialom-etal-2020-bert also show BERT devlin-etal-2019-bert can be trained for Visual Question Generation vqg. Similar strategies can be found in many V+L pre-trained models lxmert; Lu2019ViLBERTPT; Li2020OscarOA. We also find combining high-level visual features with a pre-trained language model is an effective way to generate visually relevant text, although again our focus is on drawings rather than photographs.

Figurative text is well studied leong-etal-2018-report; veale2016metaphor; shutova-etal-2016-black, but non-literal imagery has mostly only been explored in the context of parsing charts or diagrams. This includes food webs mitra2018knowledge, science diagrams kembhavi2016diagram, charts kafle2018dvqa or for geometry problems seo2014diagram. While this can involve related skills like understanding arrows or using icons to represent concepts, diagrams are usually used to convey technical information and therefore are unlikely to use things like visual metaphor, scenes, or icon compositions to signal words.

The back-and-forth of Iconary follows a dialogue structure where the Guesser is seeking information from the Drawer. A similar format can be found in dialogue QA datasets coqa; quac; aliannejadi2019asking, and task-oriented dialogue in general similarly requires understanding the intent of a human communicator young2013pomdp; chen2017survey. Iconary, however, makes this a multimodal process.

7 Conclusion

We have presented the game Iconary, a large dataset of human/human games, and our proposed Iconary models. This represents the first test for complex multimodal communication between humans and AIs, and is left as an open challenge to the community.

References

Appendix -
Iconary: A Pictionary-based Game for Testing Multimodal Communication with Drawings and Text

The appendix includes the following sections:

  • Sec A - Qualitative Results

  • Sec B - Training Data Characteristics

  • Sec C - Out of Vocabulary Words

  • Sec D - Iconary UI

  • Sec E - Constructing Iconary Phrases

  • Sec F - Constraining the Guesser Output

  • Sec G - Training Details

  • Sec H - Table of Human/AI Results

  • Sec I - Baseline Transformer Models

Appendix A Qualitative Results

Here we present more qualitative results for human/AI games. Figure 1 shows games where the human player guessed the phrase that was drawn by TDrawer. Figure 2 shows games where the human player drew the icon compositions which were then sent to TGuesser to guess.

Figure 1: TDrawer qualitative results. Examples of gameplay between human guessers and TDrawer. Snapshots show the progression (left to right) of three games. Guesses in each round are shown beneath the drawing for that round and are color-coded (cyan=correctly, magenta=incorrectly guessed word). The first game shows TDrawer focused on conveying the word ‘fainting’, a concept not encountered during training. Its first attempt is a literal representation of the phrase, but a subsequent drawing uses a frightened face to convey a possible cause of fainting. The second game shows TDrawer attempting to draw the unseen word ‘astronaut’ by using a space shuttle and a ringed planet, which the guesser immediately recognizes. In the final game TDrawer must communicate ‘reading a diploma in an office’ without having seen the difficult concept of ‘diploma’ during training. The words ‘fainting’, ‘astronaut’, and ‘diploma’ do not appear in the training data for TDrawer.
Figure 2: TGuesser qualitative results. Examples of gameplay between TGuesser and human drawers. Snapshots show the progression (left to right) of three games. Guesses in each round are shown beneath the drawing for that round and are color-coded (cyan=correctly, magenta=incorrectly guessed word). In the first game TGuesser quickly gets the action of ’shouting’ and the setting of a ’debate’, but struggles with the unseen concept of ’moderator’ until the human drawer adds a television to their scene. In the second game, the initial drawing is able to convey everything except the unseen verb ’crumbling’. The human drawer is able to use clouds of smoke and a trash can, symbols commonly used for demolition, to get it across. In the last game, the system is unable to guess the unseen verb ’circling’ until the human drawer emphasizes the circle icon with an arrow. The words ‘moderator’, ’crumbling’ and ‘circling’ do not appear in the training data for TGuesser.

Appendix B Training Data Characteristics

Figure 3 shows visualizations and statistics for the training dataset used to train TDrawer and TGuesser. This includes the training word cloud, icon set visualization and activity statistics.

Figure 3: Iconary Training Dataset characteristics. a, Word cloud showing the 500 most frequent words appearing in Iconary phrases, sized by the square root of their relative frequency. b, Cloud of icons available to Iconary players, sized by the square root of their relative frequency. The distributions of words and icons have long tails that contain a rich diversity of concepts. This sparsity forces models to learn concepts and icon usage from a small number of examples. c, Player activity within training set games quantified by the number of guesses and icon placements made by players. A nontrivial number of actions on the part of both players are required for a successful game. d, Breakdown of games by the number of complete rounds of drawing and guessing completed. Nearly half of all games require at least one round of feedback from the drawer, and a significant fraction require multiple rounds.

Appendix C Games with Out of Vocabulary Words

Figure 4 shows the first drawings from games between human players for phrases in the Ood set that contain an OoV word from Table 1. As can be seen, the drawings for these phrases are rich and often require creative usage of icons to refer to the OoV words.

Figure 4: The first drawing for some human-human Iconary games. These phrases belong to the Ood dev set. The word in red represents the OoV word, not observed in the training set.

Appendix D Iconary UI

Figure 5 shows the UI for playing Iconary.

Figure 5: Our UI for playing Iconary. Top shows the Guesser on their first turn of guessing, where they see previous guesses made in the left chatbox, color-coded by whether those guesses were incorrect, correct, or close (judged by word vector similarity). Above that, they see the game time and, to the left, the drawing created by the Drawer. At the bottom, the Guesser can enter new guesses by filling in blanks for each word in the phrase. Bottom shows the Drawer on the second turn of drawing. The left panel shows the guesses made by the Guesser and the middle shows the drawing as before. When it is their turn, the Drawer can click on icons to move, resize, rotate, duplicate, delete, or reflect them. The Drawer can search for icons using text search in the right panel.

Appendix E Constructing Iconary Phrases

In this section, we describe how we build Iconary game phrases in more detail.

Figure 6: Our UI for authoring Iconary phrases based on the imSitu corpus.

E.1 In-Domain Phrases

Our primary source of game phrases is the image summaries from the imSitu dataset [imsitu]. For each summary, we present crowdworkers with the verb and one or more of the associated entities, and ask them to produce a short phrase using those elements. The UI for this task is shown in Figure 6. We use this process to construct about 41k phrases from 23k frames (a frame can produce multiple phrases depending on the subset of entities used). Phrases are on average 5.4 words in length and contain 250 unique verbs and 2,000 other non-stop words.

We hold out 3.5k of these phrases for the Ind test and validation sets, ensuring phrases derived from the same imSitu frame are always in the same set. An author of this paper did an additional round of filtering on the test and validation phrases to remove any that contained potentially ambiguous words, described unusual scenes, or contained grammatical errors, leaving 3k phrases for both sets. The remaining 33k phrases were used for the train set.

E.2 Collecting Out-of-Domain Phrases

We also construct a set of out-of-domain (Ood) test phrases that challenge models to play Iconary with out-of-vocabulary (OoV) words. The imSitu data has a limited vocabulary, and building this set by holding out phrases with particular words from the imSitu phrases would further restrict that vocabulary. Instead, we build phrases by having in-house annotators modify phrases in the Ind test set. We consider two kinds of modifications: verb substitutions and noun substitutions.

Verb Substitution: We collect a list of verbs from a variety of sources, including the list of visual verbs from zellers-choi-2017-zero, any verbs in imSitu not already used in the training phrases, and the 1000 most frequent verbs that occur in the Google Books corpus [googlebooks]. This list was manually filtered to a list of 660 verbs that could plausibly be drawn and do not occur in the original phrase set. Annotators were then given a test phrase and asked to write a new phrase that used one of the new verbs and at least one of the nouns from the original phrase, and otherwise preserved as much of the original phrase as possible.

Noun Substitution: We collect a list of nouns by gathering nouns used in the imSitu corpus that had not yet been used in the training data, and a small number of additional nouns from WordNet wordnet that were not already present, and again manually filter them to ensure they are visually representable. In total, we get 4.6k new nouns. Annotators were asked to modify a test phrase by re-using the original verb, substituting in one of the new nouns, and otherwise preserving as much of the original phrases as possible.

In both cases, we make this task easier by building a recommender system that uses fastText word vectors [fasttext] to suggest new nouns/verbs that are related to the given phrase. Altogether, we gather 1.5k new noun phrases and 1.5k new verb phrases that use 1.3k new OoV words. We reserve a portion of these (0.4k noun and 0.4k verb phrases) for the Ood dev set.

Appendix F Constraining the Guesser Output

In this section we explain in more detail how we constrain our Guesser wordpiece models to (1) generate the right number of words, (2) always generate known words, and (3) never generate words that are known to be incorrect. The challenge in doing this stems from the fact that these word-level constraints can apply across multiple wordpieces. We implement (1) and (2) by masking tokens during each generation step, specifically:

  • If the model is generating a known word, we mask out wordpieces that do not exist in that word and do not start a new word.

  • If the next word is a known word, we mask out any wordpieces that start new words other than that next known word.

  • If the word is the last word, we mask out tokens that start a new word, but allow EOS. In other cases, we mask out EOS.

This is sufficient to enforce (1) and mostly enforce (2). It is technically possible for the model to only partly generate a known word, or generate some of its wordpieces out of order, but models rarely do so in practice because the output would usually be nonsense.

For (3), we mask out tokens that would start a new word if the word that has just been generated is known to be incorrect. This ensures the model can still generate the wordpieces ‘run’, ‘er’ even if it has already generated ‘run’ as an incorrect guess. This will sometimes mask out all high-probability continuations (e.g., it is unlikely there will be high-probability wordpieces that do not start a new word after generating the wordpieces for ‘runners’ if ‘runners’ was an incorrect guess), which can force the model into very low-probability generations. To handle this we use a reasonably large number of beams (20), so other beams can be used when this occurs.
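A minimal sketch of the masking used for constraint (3); it assumes T5's SentencePiece convention that word-initial pieces begin with '▁', and the tokenizer calls are illustrative of the general approach rather than our exact implementation.

```python
import torch

WORD_START = "\u2581"  # '▁' marks a word-initial piece in T5's SentencePiece vocabulary

def last_complete_word(tokenizer, generated_ids):
    """Reconstruct the most recently generated word from the trailing wordpieces."""
    pieces = tokenizer.convert_ids_to_tokens(generated_ids)
    word = ""
    for piece in reversed(pieces):
        word = piece.lstrip(WORD_START) + word
        if piece.startswith(WORD_START):
            break
    return word

def mask_word_starts_after_incorrect(tokenizer, generated_ids, logits, incorrect_words):
    """If the word just generated is a known-incorrect guess, forbid starting a new word,
    forcing the beam to extend it into a longer word (e.g. 'run' -> 'runner') or die off.
    Assumes the tokenizer and the model's output vocabulary are aligned."""
    if last_complete_word(tokenizer, generated_ids).lower() in incorrect_words:
        vocab_pieces = tokenizer.convert_ids_to_tokens(list(range(logits.shape[-1])))
        starts = torch.tensor([bool(p) and p.startswith(WORD_START) for p in vocab_pieces],
                              device=logits.device)
        logits = logits.masked_fill(starts, float("-inf"))
    return logits
```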

Empirically, we find >99.7% of guess generations from game states in the Ood dev set for TGuesser follow these three constraints.

Appendix G Training Details

We train our models with Adafactor adafactor with fixed learning rates of 5e-5 for TGuesser and 3e-4 for TDrawer. TGuesser is trained for one epoch as specified in Section 3.2 and TDrawer is trained for two epochs.

BART Guesser and Drawer are trained with Adam [adam] with linearly decreasing learning rates. We train the Guesser for 2 epochs with a learning rate of 1e-4, and the Drawer for 3 epochs with a learning rate of 3e-5. Both models linearly warm up the learning rate from zero for 10% of the training steps.

In all cases, we use a batch size of 32. The scale of the OoV boosting was chosen between 0 and 4.0 in increments of 0.5 on the Ood dev set; we use 0.0 for TGuesser-IND, 3.5 for BART Guesser, and 2.0 in all other cases. For generation, we use beam search of size 20 with the AllenNLP [allennlp] implementation.

Appendix H Table of Human/AI Results

In this section, we show Human/AI results in tabular form, as well as the performance of these models when the number of guesses or drawings is unlimited, and our results from the Ind human/AI experiment.

Table 1 shows results for the Guessers, and Table 2 shows results for the Drawers from Figure 4. The AI players show more improvement than human players when allowed to make more than 20 guesses or 4 drawings, but as stated, that is primarily because human players almost always time out before reaching that point.

Table 3 shows results for the Guessers, and Table 4 shows results for the Drawers on our Ind phrases. Note that human performance for these tables is derived from data in the Ind test and dev sets, which used different annotators than the Ood games and our other human/AI experiments, and is therefore not directly comparable. Nevertheless, it is clear TGuesser outperforms humans on these phrases with a win rate close to 100%, showing that the primary challenge for the Guesser is handling unseen words. TGuesser-IND does slightly better, which is not surprising since it was optimized for Ind performance.

TDrawer is only slightly behind humans on the Ind phrases, and the Transformer drawer is comparable to humans. The performance improvement is most likely due to the fact that models can memorize drawing strategies for different words in the training data and recompose them for new phrases that reuse those words. It is likely the Transformer Drawer is better able to do this because it was trained on the training data for longer, and the data augmentation strategy in appendix I.3 further guided it towards this approach.

Guesser   n   Win@5   Soft@5   Win@10   Soft@10   Win@15   Soft@15   Win@20   Soft@20   Win@∞   Soft@∞
Elite Human Players 888 39.19 48.99 60.92 67.79 66.22 71.40 67.45 71.85 67.57 71.85
Human Players 3930 28.80 39.95 47.84 56.51 52.72 60.03 53.84 60.64 54.15 60.71
TGuesser 299 38.13 43.14 53.18 57.53 61.87 64.88 62.88 65.55 66.56 68.90
TGuesser IND 300 23.33 29.00 42.00 46.00 50.67 54.33 54.00 57.33 60.67 63.33
Table 1: Guesser performance from Figure 4, left, in tabular form. Win@k and Soft@k show performance after k guesses, the final pair of columns shows performance with an unlimited number of guesses, and n shows the number of games in each category. One game from TGuesser was removed because a Drawer timed out without creating a drawing.
Drawer   n   Win@1   Soft@1   Win@2   Soft@2   Win@3   Soft@3   Win@4   Soft@4   Win@∞   Soft@∞
Elite Human Players 939 30.99 39.94 55.91 63.58 61.66 67.63 62.73 68.16 62.73 68.16
Human Players 3930 28.65 38.70 48.58 56.49 53.18 60.15 54.05 60.64 54.05 60.74
TDrawer 300 19.67 24.67 32.33 36.67 37.67 41.67 41.67 45.67 45.00 48.33
Transformer 300 12.33 15.33 21.33 25.33 28.00 31.33 31.00 34.33 35.00 38.33
Table 2: Drawer performance from Figure 4, right, in tabular form.
Guesser   n   Win@5   Soft@5   Win@10   Soft@10   Win@15   Soft@15   Win@20   Soft@20   Win@∞   Soft@∞
Human Players 9825 51.60 81.41 72.10 88.71 75.58 89.37 75.94 89.38 75.94 89.38
TGuesser 298 83.22 98.32 93.62 98.99 95.64 98.99 95.97 98.99 95.97 98.99
TGuesser-IND 300 88.00 98.67 95.33 99.33 97.67 99.67 97.67 99.67 97.67 99.67
Table 3: Guesser performance when playing with humans on Ind test phrases.
Drawer   n   Win@1   Soft@1   Win@2   Soft@2   Win@3   Soft@3   Win@4   Soft@4   Win@∞   Soft@∞
Human Players 9825 51.58 80.64 71.62 88.41 75.48 89.33 75.90 89.33 75.90 89.33
TDrawer 300 39.67 75.33 59.33 88.67 66.00 88.67 68.33 88.67 69.00 88.67
Transformer 299 45.15 74.92 62.88 87.63 68.56 89.97 71.91 91.97 73.24 91.97
Table 4: Drawer performance when playing with humans on Ind test phrases.

Appendix I Transformer Models

In this section, we describe our Transformer baselines, which use GloVe word embeddings [glove] but are otherwise trained from scratch on our training data. Both models use a data-augmentation strategy that leverages an icon-to-word mapping derived from the training data. Both models use 300-dimensional embeddings and 128-dimensional hidden layers, and all hyperparameters were tuned on the Ind dev set.

I.1 Drawer

The Transformer Drawer works by encoding the game state and then decoding a drawing in a similar format to TDrawer. For this model, the last two drawings are converted into the same special tokens used as the output for TDrawer, which are then embedded with learned embeddings. The game phrase, and the previous guess made by the Guesser if there is one, are also embedded with GloVe word vectors [glove]. These elements are concatenated as a sequence and encoded using learned positional embeddings and a 3-layer transformer [vaswani2017attention]. The decoder is another transformer that cross-attends to the encoded input while generating the output drawing. The network is optimized with Adam for 30 epochs.

Unlike TDrawer, the icon ordering for the input and target output is determined by the word-to-icon mapping described in Section I.3; in particular, icons are ordered according to the words they correspond to, and then in the order in which they were drawn. As a result, we are not able to show a perplexity number comparable to TDrawer in Table 5.

I.2 Guesser

The Transformer Guesser is also a conditional generation model. The current drawing, and previous drawing if it exists, are embedded as a sequence using the same quantized format as before. A single transformer then encodes these drawings.

The decoder is a transformer that cross attends to the encoded drawings. We also allow the self-attention layer to attend to future slots in the game phrase, which are filled with the embeddings of the previous guess (or underscores and stopwords if no such guess exists) if those slots occur after the token currently being generated. We use a two-layer multi-layer perceptron with 256 hidden states and ReLU activations to predict the output word.

We again constrain the model during beam search to make sure it generates the right number of words and any known words, and select as output the highest-probability beam that did not produce a word known to be incorrect from previous guesses. This model was trained using Adam [adam] for ten epochs, and then for an additional five epochs with a different learning rate.

I.3 Data Augmentation

We use data augmentation to boost the performance of both of these models (this method did not benefit TGuesser or TDrawer). First, we derive an icon-to-word mapping from the training data using icon/word co-occurrences, by learning icon and word embeddings that are similar for drawings and game phrases paired in our data but dissimilar for drawings paired with random game phrases. Then, for each game, we match icons in the drawings for that game to the words in the game phrase that best align with those icons. Finally, we build a pseudo-example by removing some words or constituents from the game phrase and removing the corresponding icons from the drawings. These examples are used as additional training data and are intended to help the models internalize the icon-to-word co-occurrences that occur in the training data.
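A minimal sketch of the pseudo-example construction, assuming the icon-to-word alignment has already been computed from the learned co-occurrence embeddings; which words are eligible for removal is an assumption of this sketch.

```python
import random

def make_pseudo_example(phrase_words, drawing, icon_to_word, rng=random):
    """Drop one aligned word from the phrase and the icons mapped to it.

    phrase_words: list of words in the game phrase.
    drawing: list of icon names in the drawing.
    icon_to_word: dict mapping each icon in the drawing to its best-aligned
                  phrase word (derived from the learned icon/word embeddings).
    """
    aligned_words = {w for w in icon_to_word.values() if w in phrase_words}
    if not aligned_words:
        return None  # nothing to remove for this game
    dropped = rng.choice(sorted(aligned_words))
    new_phrase = [w for w in phrase_words if w != dropped]
    new_drawing = [icon for icon in drawing if icon_to_word.get(icon) != dropped]
    return new_phrase, new_drawing
```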