Game of Sketches: Deep Recurrent Models of Pictionary-style Word Guessing

01/29/2018 ∙ by Ravi Kiran Sarvadevabhatla, et al.

The ability of intelligent agents to play games in human-like fashion is popularly considered a benchmark of progress in Artificial Intelligence. Similarly, performance on multi-disciplinary tasks such as Visual Question Answering (VQA) is considered a marker for gauging progress in Computer Vision. In our work, we bring games and VQA together. Specifically, we introduce the first computational model aimed at Pictionary, the popular word-guessing social game. We first introduce Sketch-QA, an elementary version of Visual Question Answering task. Styled after Pictionary, Sketch-QA uses incrementally accumulated sketch stroke sequences as visual data. Notably, Sketch-QA involves asking a fixed question ("What object is being drawn?") and gathering open-ended guess-words from human guessers. We analyze the resulting dataset and present many interesting findings therein. To mimic Pictionary-style guessing, we subsequently propose a deep neural model which generates guess-words in response to temporally evolving human-drawn sketches. Our model even makes human-like mistakes while guessing, thus amplifying the human mimicry factor. We evaluate our model on the large-scale guess-word dataset generated via Sketch-QA task and compare with various baselines. We also conduct a Visual Turing Test to obtain human impressions of the guess-words generated by humans and our model. Experimental results demonstrate the promise of our approach for Pictionary and similarly themed games.


1 Introduction

In the history of AI, computer-based modelling of human player games such as Backgammon, Chess and Go has been an important research area. The accomplishments of well-known game engines (e.g. TD-Gammon [1], DeepBlue [2], AlphaGo [3]) and their ability to mimic human-like game moves have been a well-accepted proxy for gauging progress in AI. Meanwhile, progress on visuo-lingual problems such as visual captioning [4, 5, 6] and visual question answering [7, 8, 9] increasingly serves a similar purpose for the computer vision community. With these developments as a backdrop, we explore the popular social game Pictionary™.

The game of Pictionary brings together predominantly the visual and linguistic modalities. The game uses a shuffled deck of cards with guess-words printed on them. The participants first group themselves into teams and each team takes turns. For a given turn, a team’s member selects a card. He/she then attempts to draw a sketch corresponding to the word printed on the card in such a way that the team-mates can guess the word correctly. The rules of the game forbid any verbal communication between the drawer and team-mates. Thus, the drawer conveys the intended guess-word primarily via the sketching process.

Consider the scenario depicted in Figure 1. A group of people are playing Pictionary. New to the game, a ‘social’ robot is watching people play. Passively, its sensors record the strokes being drawn on the sketching board, guess-words uttered by the drawer’s team members and finally, whether the last guess is correct. Having observed multiple such game rounds, the robot learns computational models which mimic human guesses and enable it to participate in the game.

Fig. 1: We propose a deep recurrent model of Pictionary-style word guessing. Such models can enable social robots to participate in real-life game scenarios as shown above. Picture credit: Trisha Mittal.
Fig. 2: The time-line of a typical Sketch-QA guessing session: every time a stroke is added, the subject either inputs a best-guess word for the object being drawn or, if the existing strokes do not offer enough clues, requests that the next stroke be drawn. After the final stroke, the subject is informed of the object's ground-truth category.

As a step towards building such computational models, we first collect guess-word data via Sketch Question Answering (Sketch-QA), a novel, Pictionary-style guessing task. We employ a large-scale crowdsourced dataset of hand-drawn object sketches whose temporal stroke information is available [10]. Starting with a blank canvas, we successively add strokes of an object sketch and display this process to human subjects (see Figure 2). Every time a stroke is added, the subject provides a best-guess of the object being drawn. In case existing strokes do not offer enough clues for a confident guess, the subject requests the next stroke be drawn. After the final stroke, the subject is informed the object category.

Sketch-QA can be viewed as a rudimentary yet novel form of Visual Question Answering (VQA) [7, 9, 8, 5]. Our approach differs from existing VQA work in that [a] the visual content consists of sparsely detailed hand-drawn depictions, [b] the visual content necessarily accumulates over time, [c] at all times, we have the same question – "What is the object being drawn?", [d] the answers (guess-words) are open-ended (i.e. not 1-of-K choices), and [e] until sufficient sketch strokes accumulate, there may not be ‘an answer’ at all. Asking the same question might seem an oversimplification of VQA. However, other factors — extremely sparse visual detail, inaccuracies in object depiction arising from the varying drawing skills of humans and the open-ended nature of answers — pose unique challenges that need to be addressed in order to build viable computational models.

Concretely, we make the following contributions:

  • We introduce a novel task called Sketch-QA to serve as a proxy for Pictionary (Section 2.2).

  • Via Sketch-QA, we create a new crowdsourced dataset of paired guess-word and sketch-strokes, dubbed WordGuess-160, collected from guess sequences of subjects across sketch object categories.

  • We perform comparative analysis of human guessers and a machine-based sketch classifier via the task of sketch recognition (Section 4).

  • We introduce a novel computational model for word guessing (Section 6). Using WordGuess-160 data, we analyze the performance of the model for Pictionary-style on-line guessing and conduct a Visual Turing Test to gather human assessments of generated guess-words (Section 7).

Please visit github.com/val-iisc/sketchguess for code and dataset related to this work. To begin with, we shall look at the procedural details involved in the creation of the WordGuess-160 dataset.

2 Creating the WordGuess-160 dataset

2.1 Sketch object dataset

As a starting point, we use hand-sketched line drawings of single objects from the large-scale TU-Berlin sketch dataset [10]. This dataset contains 20,000 sketches uniformly spread across 250 object categories (i.e. 80 sketches per category). The sketches were obtained in a crowd-sourced manner by providing only the category name (e.g. “sheep”) to the sketchers. In this aspect, the collection procedure used for the TU-Berlin dataset aligns with the draw-using-guess-word-only paradigm of Pictionary. For each sketch, the temporal order in which the strokes were drawn is also available. A subsequent analysis of the TU-Berlin dataset by Schneider and Tuytelaars [11] led to the creation of a curated subset of sketches which were deemed visually less ambiguous by human subjects. For our experiments, we use this curated subset, which spans 160 object categories.

2.2 Data collection methodology

To collect guess-word data for Sketch-QA, we used a web-accessible crowdsourcing portal. Registered participants were initially shown a screen displaying the first stroke of a randomly selected sketch object from a randomly chosen category (see Figure 2). A GUI menu with options ‘Yes’ and ‘No’ was provided. If participants felt more strokes were needed for guessing, they clicked the ‘No’ button, causing the next stroke to be added. On the other hand, clicking ‘Yes’ would allow them to type their current best guess of the object category. If they wished to retain their current guess thereafter, they would click ‘No’, causing the next stroke to be added; this act (clicking ‘No’) also propagates the most recently typed guess-word and associates it with the strokes accumulated so far. Participants were instructed to provide guesses as early as possible and as frequently as required. After the last stroke was added, the ground-truth category was revealed to the participant. Each participant was encouraged to guess a minimum number of object sketches. Overall, we obtained guess data from participants.

Given the relatively unconstrained nature of guessing, we pre-process the guess-words to eliminate artifacts as described below.

2.3 Pre-processing

Incomplete Guesses: In some instances, subjects provided guess attempts for initial strokes but entered blank guesses subsequently. For these instances, we propagated the last non-blank guess until the end of stroke sequence.

Multi-word Guesses: In some cases, subjects provided multi-word phrase-like guesses (e.g. “pot of gold at the end of the rainbow” for a sketch depicting the object category rainbow). Such guesses seem to be triggered by extraneous elements depicted in addition to the target object. For these instances, we used the HunPos tagger [12] to retain only the noun word(s) in the phrase.

Misspelt Guesswords: To address incorrect spellings, we used the Enchant spellcheck library [13] with its default Words set augmented with the object category names from our base dataset [10] as the spellcheck dictionary.

Uppercase Guesses: In some cases, the guess-words exhibit non-uniform case formatting (e.g. all uppercase or a mix of both uppercase and lowercase letters). For uniformity, we formatted all words to be in lowercase.

In addition, we manually checked all of the guess-word data to remove unintelligible and inappropriate words. We also removed sequences that did not contain any guesses. Thus, we finally obtain the WordGuess-160 dataset comprising guess-words distributed across guess sequences and categories. It is important to note that the final or the intermediate guesses could be ‘wrong’, either due to the quality of drawing or due to human error. We deliberately do not filter out such guesses. This design choice keeps our data realistic and ensures that our computational model has the opportunity to characterize both the ‘success’ and ‘failure’ scenarios of Pictionary.
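The pre-processing steps above can be sketched as a small pipeline. The snippet below is a simplified illustration (the function names are ours, and the HunPos noun filtering and Enchant spellcheck stages are omitted since they rely on external tools):

```python
def preprocess_guesses(guesses):
    """Simplified sketch of the guess-word pre-processing pipeline.

    `guesses` is one per-stroke guess sequence; '' denotes a blank entry.
    Blank guesses after the first non-blank guess are filled by
    propagating the last non-blank guess; all words are lowercased.
    """
    cleaned = []
    last = ""
    for g in guesses:
        g = g.strip().lower()      # uniform lowercase formatting
        if g:                      # non-blank entry becomes the current guess
            last = g
        cleaned.append(last)       # blank entry: propagate last non-blank guess
    return cleaned

def has_guesses(cleaned):
    """Sequences without any guesses at all are removed from the dataset."""
    return any(cleaned)
```
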

A video of a typical Sketch-QA session can be viewed at https://www.youtube.com/watch?v=YU3entFwhV4.

In the next section, we shall present various interesting facets of our WordGuess-160 dataset.

3 Guess Sequence Analysis

Fig. 3: In the above plot, x-axis denotes the number of unique guesses. y-axis denotes the number of subjects who made corresponding number of unique guesses.

Given a sketch, how many guesses are typically provided by subjects? To answer this, we examine the distribution of unique guesses per sequence. As Figure 3 shows, the number of guesses has a large range. This is to be expected given the large number of object categories we consider and the associated diversity in depictions. A large number of subjects provide a single guess. This arises both from the inherent ambiguity of partially rendered sketches and from the confidence subjects place in their guess. This observation is also borne out by Table I, which shows the number of sequences eliciting each number of guesses.

TABLE I: The distribution of the possible number of guesses (‘Guesses’) and the count of sequences which elicited them (‘# Sequences’).
Fig. 4: Here, x-axis denotes the categories. y-axis denotes the number of sketches within the category with multiple guesses. The categories are shown sorted by the number of sketches which elicited multiple guesses.

We also examined the sequences which elicited multiple guesses in terms of the object categories they belong to. The categories were sorted by the number of multi-guess sequences their sketches elicited. The top and bottom categories according to this criterion can be viewed in Figure 4. This perspective helps us understand which categories are inherently ambiguous in terms of their stroke-level evolution as usually drawn by humans.

Fig. 5: The distribution of first guess locations normalized over sequence lengths (y-axis) across categories (x-axis).

Another interesting statistic is the distribution of the first-guess location relative to the length of the sequence. Figure 5 shows the distribution of first-guess index locations as a function of sequence length (normalized to [0, 1]). Thus, a value closer to 1 implies that the first guess was made late in the sketch sequence. Clearly, the guess location has a large range across the object categories. The requirement to accurately capture this range poses a considerable challenge for computational models of human guessing.
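Per sequence, the normalized first-guess location used in Figure 5 can be computed as follows (a minimal sketch; the encoding of a guess sequence as a list with '' for no-guess strokes is our assumption):

```python
def first_guess_location(guesses):
    """Return the first-guess index normalized by sequence length
    (a value in (0, 1]), or None if the sequence has no guesses."""
    for i, g in enumerate(guesses):
        if g:
            return (i + 1) / len(guesses)  # 1-based index over sequence length
    return None
```
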

Fig. 6: Categories sorted by the median location of first guess.

To obtain a category-level perspective, we computed the median first-guess location and the corresponding deviation on a per-category basis and sorted the categories by the median values. The resulting plot for the top and bottom categories can be viewed in Figure 6. This perspective helps us understand the level at which categories evolve to a recognizable iconic stroke composition relative to the original, full-stroke reference sketch. Thus, categories such as axe, envelope and ladder, although seemingly simple, are depicted in a manner which induces doubt in the guesser, consequently delaying the first guess. On the other hand, categories such as cactus, strawberry and telephone tend to be drawn such that the early, initial strokes capture the iconic nature of either the underlying ground-truth category or an easily recognizable object form different from the ground-truth.

The above analysis focused mostly on the overall sequence-level trends in the dataset. In the next section, we focus on the last guess for each sketch stroke sequence. Since the final guess is associated with the full sketch, it can be considered the guesser’s prediction of the object underlying the sketch. Such predictions can then be compared with ground-truth labels originally provided with the sketch dataset to determine ‘human guesser’ accuracy (Section 4.2). Subsequently, we compare ‘human guesser’ accuracy with that of a machine-based sketch object recognition classifier and discuss trends therein (Section 5).

4 Final guess-word analysis

Criteria combination: EM | EM ∨ SUB | EM ∨ SUB ∨ SYN | EM ∨ SUB ∨ SYN ∨ HY | EM ∨ SUB ∨ SYN ∨ HY ∨ HY-PC | EM ∨ SUB ∨ SYN ∨ HY ∨ HY-PC ∨ WUP
TABLE II: Accuracy of human guesses for various matching criteria (Section 4.1). The ‘∨’ indicates that the matching criteria are combined in a logical-OR fashion to determine whether the predicted guess-word matches the ground-truth or not.

With WordGuess-160 data at hand, the first question that naturally arises is: what is the “accuracy” of humans on the final, full sketches (i.e. when all the original strokes have been included)? For a machine-based classifier, this question has a straightforward answer: compute the fraction of sketches whose predicted category label is exactly the same as the ground-truth. However, given the open-ended nature of guess-words, an ‘exact matching’ approach is not feasible. Even assuming the presence of a universal dictionary, such an approach is too brittle and restrictive. Therefore, we first define a series of semantic similarity criteria which progressively relax the correct-classification criterion for the final sketches.

4.1 Matching criteria for correct classification

Exact Match (EM): The predicted guess-word is a literal match (letter-for-letter) with the ground-truth category.

Subset (SUB): The predicted guess-word is a subset of the ground-truth or vice-versa. This criterion lets us characterize certain multi-word guesses as correct (e.g. guess: pot of gold at the end of the rainbow, ground-truth: rainbow).

Synonyms (SYN): The predicted guess-word is a synonym of ground-truth. For synonym determination, we use the WordNet [14] synsets of prediction and ground-truth.

Hypernyms (HY): The one-level up parents (hypernyms) of ground-truth and predicted guess-word are the same in the hierarchy induced by WordNet graph.

Hypernyms-Parent and Child (HY-PC): The ground-truth and prediction have a parent-child (hypernym) relationship in the WordNet graph.

Wu-Palmer Similarity (WUP) [15]: This calculates the relatedness of two words using a graph-distance-based method applied to the corresponding WordNet synsets. If the WUP similarity between prediction and ground-truth is at least a fixed threshold, we deem it a correct classification.
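As an illustration, the first two criteria and their logical-OR combination can be sketched as below. Only the string-level criteria are shown; the WordNet-based criteria (SYN, HY, HY-PC, WUP) would plug in as further predicates, e.g. via NLTK's WordNet interface, and our word-set reading of SUB is an assumption:

```python
def exact_match(pred, gt):
    """EM: letter-for-letter match."""
    return pred == gt

def subset_match(pred, gt):
    """SUB: the guess's words are a subset of the ground-truth's, or vice-versa
    (an illustrative word-set interpretation of the SUB criterion)."""
    p, g = set(pred.split()), set(gt.split())
    return p <= g or g <= p

def is_correct(pred, gt, criteria):
    """Combine matching criteria in a logical-OR fashion."""
    return any(c(pred, gt) for c in criteria)
```
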

4.2 Classification Performance

To compute the average accuracy of human guesses, we progressively relax the ‘correct classification’ rule by combining the matching criteria (Section 4.1) in a logical-OR fashion. The average accuracy of human guesses can be viewed in Table II. The accuracy increases according to the extent to which each successive criterion relaxes the base ‘exact match’ rule. The large increase in accuracy for ‘EM ∨ SUB’ (2nd row of the table) shows the pitfall of naively using the exact-matching (1-hot label, fixed dictionary) rule.

Criteria combination: EM ∨ SUB | EM ∨ SUB ∨ SYN | EM ∨ SUB ∨ SYN ∨ HY | EM ∨ SUB ∨ SYN ∨ HY ∨ HY-PC | EM ∨ SUB ∨ SYN ∨ HY ∨ HY-PC ∨ WUP
TABLE III: Quantifying the suitability of each matching-criteria combination for characterizing human-level sketch object recognition accuracy (‘Avg. rating’). The larger the human rating score, the more suitable the criteria. See Section 4.2 for details.

At this stage, a new question arises: which of these criteria best characterizes human-level accuracy? Ultimately, a ground-truth label is a consensus agreement among humans. To obtain such consensus-driven ground-truth, we performed a human agreement study. We displayed “correctly classified” sketches (w.r.t. a fixed criteria combination from Table II), along with their labels, to human subjects. Note that the labelling changes according to the criteria combination (e.g. a sketch with ground-truth revolver could be shown with the label firearm, since such a prediction would be considered correct under the ‘EM ∨ SUB ∨ SYN ∨ HY’ combination). The human subjects were not informed about the usage of criteria combinations for labelling; instead, they were told that the labellings were provided by other humans. Each subject was asked to provide their assessment of the labelling on a scale ranging from ‘Strongly Disagree with labelling’ to ‘Strongly Agree with labelling’. We randomly chose sketches correctly classified under each criteria combination. For each sketch, we collected agreement ratings and computed the weighted average of the agreement scores. Finally, we computed the average of these weighted scores. The ratings (Table III) indicate that ‘EM ∨ SUB ∨ SYN’ is the criteria combination most agreed upon by human subjects for characterizing human-level accuracy. Having determined the criteria for a correct match, we can also contrast human classification performance with a machine-based state-of-the-art sketch classifier.

5 Comparing human classification performance with a machine-based classifier

We contrast the human-level performance (‘EM ∨ SUB ∨ SYN’ criteria) with a state-of-the-art sketch classifier [16]. To ensure fair comparison, we consider only the sketches which overlap with the test set used to evaluate the machine classifier. Table V summarizes the prediction combinations (e.g. human classification correct, machine classification incorrect) between the classifiers. While the results seem to suggest that the machine classifier ‘wins’ over the human classifier, the underlying reason is the open-ended nature of human guesses versus the closed-world setting in which the machine classifier has been trained.

To determine whether the difference between the human and machine classifiers is statistically significant, we use Cohen’s d test. Essentially, Cohen’s d is an effect size used to indicate the standardised difference between two means. Suppose, for a given category c, the mean accuracy w.r.t. the human classification criteria is μ_h and the corresponding variance is σ_h². Similarly, let the corresponding quantities for the machine classifier be μ_m and σ_m². Cohen’s d for category c is calculated as:

d_c = (μ_h − μ_m) / s        (1)

where s is the pooled standard deviation, defined as:

s = √((σ_h² + σ_m²) / 2)        (2)

We calculated Cohen’s d for all categories as indicated above and computed the average of the resulting scores. The average value indicates significant differences between the classifiers according to the reference tables commonly used to interpret Cohen’s d. In general, though, there are categories where one classifier outperforms the other. The list of the top-10 categories where one classifier outperforms the other (in terms of Cohen’s d) is given in Table IV.
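The per-category effect size of Equations 1 and 2 is a direct computation; a minimal transcription:

```python
import math

def cohens_d(mean_h, var_h, mean_m, var_m):
    """Cohen's d between human (h) and machine (m) per-category mean
    accuracies, using the pooled standard deviation of Eq. 2."""
    s = math.sqrt((var_h + var_m) / 2.0)  # pooled standard deviation (Eq. 2)
    return (mean_h - mean_m) / s          # standardized mean difference (Eq. 1)
```
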

Machines outperform humans Humans outperform machines
scorpion () dragon ()
rollerblades () owl ()
person walking () mouse ()
revolver () horse ()
sponge bob () flower with stem ()
rainbow () wine-bottle ()
person sitting () lightbulb ()
sailboat () snake ()
suitcase () leaf ()
TABLE IV: Category-level performance of human and machine classifiers. The numbers alongside category names correspond to Cohen’s d scores.
Prediction Relative % of test data
Human Machine
TABLE V: Comparing human and machine classifiers for the possible prediction combinations – ✔  indicates correct and ✕  indicates incorrect prediction.
Fig. 7: Distribution of correct predictions across categories, sorted by median category-level score. x-axis shows categories and y-axis stands for classification rate.

The distribution of correct human-guess statistics on a per-category basis can be viewed in Figure 7. For each category, we calculate confidence intervals. These intervals tell us, at a given level of certainty, the range in which the true accuracy is likely to fall. In particular, we employ the Wilson score method of calculating confidence intervals, which assumes that the variable of interest (the number of successes) can be modeled as a binomial random variable. Given that the binomial distribution can be considered the sum of n Bernoulli trials, it is appropriate for our task, as a sketch is either classified correctly (success) or misclassified (failure).
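The Wilson score interval for a category with k correctly classified sketches out of n can be computed as follows (a sketch; z = 1.96 corresponds to a 95% interval):

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half
```

Unlike the simpler normal-approximation interval, the Wilson interval stays within [0, 1] and behaves well for small n or extreme proportions.
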

Some examples of misclassifications (and the ground-truth category labels) can be seen in Figure 8. Although the guesses and ground-truth categories are lexically distant, the guesses are sensible when conditioned on visual stroke data.

Fig. 8: Some examples of misclassifications: Human guesses are shown in blue. Ground-truth category labels are in pink.

6 Computational Models

We now describe our computational model designed to produce human-like guess-word sequences in an on-line manner. For model evaluation, we randomly split the sequences in WordGuess-160 into disjoint training, validation and test sets.

Data preparation: Suppose a sketch is composed of N strokes. Let the cumulative stroke sequence be S = {s_1, s_2, …, s_N}, where s_i denotes the image formed by accumulating the first i strokes (see Figure 2). Let the sequence of corresponding guess-words be G = {g_1, g_2, …, g_N}. The sketch images are first resized to a fixed resolution and zero-centered. To ensure sufficient training data, we augment the sketch data and associated guess-words. For sketches, each accumulated stroke image is first morphologically dilated (‘thickened’). Subsequent augmentations are obtained by applying vertical flips and scalings (paired combinations of scalings of the image sides). We also augment guess-words by replacing each guess-word in G with its plural form (e.g. pant is replaced by pants) and synonyms wherever appropriate.

Data representation: The penultimate fully-connected layer’s outputs of CNNs fine-tuned on sketches are used to represent the sketch stroke-sequence images. The guess-words are represented using pre-trained word-embeddings. Typically, a human-generated guess sequence contains two distinct phases. In the first phase, no guesses are provided by the subject since the accumulated strokes provide insufficient evidence. Therefore, many of the initial guesses (g_1, g_2, etc.) are empty and hence no corresponding embeddings exist. To tackle this, we map ‘no guess’ to a pre-defined non-word embedding (symbol “#”).
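The ‘no guess’ handling amounts to mapping the sentinel “#” to a fixed vector outside the word-embedding vocabulary. A minimal sketch (the zero-vector choice here is illustrative, not necessarily the non-word embedding used in our experiments):

```python
NO_GUESS = "#"
EMB_DIM = 300  # dimensionality of the word2vec embeddings

def embed(word, word2vec):
    """Look up a guess-word embedding; map the 'no guess' sentinel (and
    blank entries) to a pre-defined non-word embedding (zeros here)."""
    if word == NO_GUESS or not word:
        return [0.0] * EMB_DIM
    return word2vec[word]
```
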

Fig. 9: The architecture of our deep neural model of word guessing. The rectangular bars correspond to guess-word embeddings. The CNN regressor’s penultimate-layer outputs are used as input features to the LSTM model. “#” reflects our choice of modelling ‘no guess’ as a pre-defined non-word embedding. See Section 6 for details.

Model design strategy: Our model’s objective is to map the cumulative stroke sequence to a target guess-word sequence. Given our choice of data representation above, the model effectively needs to map the sequence of sketch features to a sequence of word-embeddings. To achieve this sequence-to-sequence mapping, we use a deep recurrent neural network (RNN) as the architectural template of choice (see Figure 9).

For the sequential mapping process to be effective, we need discriminative sketch representations. This ensures that the RNN can focus on modelling crucial sequential aspects such as when to initiate the word-guessing process and when to transition to a new guess-word once the guessing has begun (Section 6.2). To obtain discriminative sketch representations, we first train a CNN regressor to predict a guess-word embedding when an accumulated stroke image is presented (Section 6.1). It is important to note that we ignore the sequential nature of training data in the process. Additionally, we omit the sequence elements corresponding to ‘no-guess’ during regressor training and evaluation. This frees the regressor from having to additionally model the complex many-to-one mapping between strokes accumulated before the first guess and a ‘no-guess’.

To arrive at the final CNN regressor, we begin by fine-tuning a pre-trained photo object CNN. To minimize the impact of the drastic change in domain (photos to sketches) and task (classification to word-embedding regression), we undertake a series of successive fine-tuning steps which we describe next.

6.1 Learning the CNN word-embedding regressor

Step-1: We fine-tune the VGG-16 object classification net [17] using Sketchy [18], a large-scale sketch object dataset, for 125-way classification corresponding to the categories present in that dataset.

Step-2: The weights of this fine-tuned net are used to initialize a VGG-16 net which is then fine-tuned for regressing word-embeddings corresponding to the category names of the Sketchy dataset. Specifically, we use the 300-dimensional word-embeddings provided by the word2vec model trained on 1-billion Google News words [19]. Our choice is motivated by the open-ended nature of guess-words in Sketch-QA and the consequent need to capture semantic similarity between ground-truth and guess-words rather than perform exact matching. For the loss function w.r.t. predicted word embedding p and ground-truth embedding g, we consider [a] Mean Squared Loss: ‖p − g‖², [b] Cosine Loss [20]: 1 − cos(p, g), [c] Hinge-rank Loss [21]: max(0, margin − p̂ᵀĝ + p̂ᵀĝ_r), where p̂, ĝ are length-normalized versions of p, g respectively, ĝ_r corresponds to the normalized version of a randomly chosen category’s word-embedding and the margin is fixed, and [d] a convex combination of Cosine Loss (CLoss) and Hinge-rank Loss (HLoss): λ·CLoss + (1 − λ)·HLoss. The predicted embedding is deemed a ‘correct’ match if the set of its k-nearest word-embedding neighbors contains g. Overall, we found the convex combination loss, with λ determined via grid search, to provide the best performance.
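In pure-Python form, the candidate losses can be sketched as follows (p and g are the predicted and ground-truth embeddings, g_rand plays the role of the randomly chosen category embedding, and the margin and mixing weight lam are illustrative placeholders, not the grid-searched values):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def mse_loss(p, g):
    """[a] Mean Squared Loss."""
    return sum((x - y) ** 2 for x, y in zip(p, g)) / len(p)

def cosine_loss(p, g):
    """[b] Cosine Loss: 1 - cos(p, g)."""
    return 1.0 - dot(normalize(p), normalize(g))

def hinge_rank_loss(p, g, g_rand, margin=0.1):
    """[c] Hinge-rank Loss with length-normalized vectors."""
    ph, gh, rh = normalize(p), normalize(g), normalize(g_rand)
    return max(0.0, margin - dot(ph, gh) + dot(ph, rh))

def combined_loss(p, g, g_rand, lam=0.5):
    """[d] Convex combination of Cosine Loss and Hinge-rank Loss."""
    return lam * cosine_loss(p, g) + (1 - lam) * hinge_rank_loss(p, g, g_rand)
```
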

Step-3: The regressor is now fine-tuned with randomly ordered sketches from the training-data sequences and corresponding word-embeddings. By repeating the grid search for the convex combination loss, we once again determined the best-performing mixing weight on the validation set. Note that in this case, the contrastive embedding for the Hinge-rank Loss corresponds to a word-embedding randomly selected from the entire word-embedding dictionary.

As mentioned earlier, we use the 4096-dimensional output from the fc7 layer of the final fine-tuned regressor as the representation for each accumulated stroke image of the sketch sequences.

6.2 RNN training and evaluation

Fig. 10: Examples of guesses generated by our model on test set sequences.

RNN Training: As with the CNN regressor, we configure the RNN to predict word-embeddings. For preliminary evaluation, we use only the portion of training sequences corresponding to guess-words. For each time-step, we use the same loss (convex combination of Cosine Loss and Hinge-rank Loss) determined to be best for the CNN regressor. We use the LSTM [22] as the specific RNN variant. For all the experiments, we use the Adagrad optimizer [23] and early-stopping as the criterion for terminating optimization.

Evaluation: We use the k-nearest-neighbor criterion mentioned above and examine performance for various values of k. To determine the best configuration, we compute the proportion of ‘correct matches’ on the subsequence of validation sequences containing guess-words. As a baseline, we also compute the sequence-level scores for the CNN regressor alone. We average these per-sequence scores across the validation sequences. The results show that the CNN regressor performs reasonably well in spite of the overall complexity involved in regressing guess-word embeddings (see first row of Table VI). However, this performance is noticeably surpassed by the LSTM net, demonstrating the need to capture temporal context when modelling guess-word transitions.
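The k-nearest-neighbour correctness criterion can be sketched as below (cosine similarity over a toy embedding dictionary; the actual vocabulary is the word2vec dictionary described earlier):

```python
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return num / den

def is_correct_match(pred_emb, gt_word, vocab, k):
    """Deem the predicted embedding a 'correct' match if the ground-truth
    word is among its k nearest word-embedding neighbours."""
    ranked = sorted(vocab, key=lambda w: cosine(pred_emb, vocab[w]), reverse=True)
    return gt_word in ranked[:k]
```
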

TABLE VI: Sequence-level accuracies over the validation set. In each sequence, only the portion with guess-words is considered for evaluation. The first row corresponds to the CNN regressor; the first column shows the number of hidden units in the LSTM. Sequence-level accuracies with the k-nearest criterion applied to per-timestep guess predictions are shown for each k.

7 Overall Results

For the final model, we merge the validation and training sets and re-train with the best architectural settings as determined by validation-set performance (i.e. the fine-tuned regressor as the feature-extraction CNN, the best-performing LSTM as the RNN component and the convex combination of Cosine Loss and Hinge-rank Loss as the optimization objective). We report performance on the test sequences.

The full-sequence scenario is considerably challenging since our model must additionally determine, with accuracy, when the word-guessing phase should begin. For this reason, we also design a two-phase architecture as an alternate baseline. In this baseline, the first phase predicts the most likely sequential location of the ‘no guess’-to-first-guess transition. Conditioned on this location, the second phase predicts guess-word representations for the rest of the sequence (see Figure 11). To retain focus, we only report performance numbers for the two-phase baseline here. For a complete description of the baseline architecture and related ablative experiments, please refer to Appendix A.

As can be observed in Table VII, our proposed word-guessing model outperforms the other baselines, including the two-phase baseline, by a significant margin. The reduction in long-range temporal contextual information, caused by splitting the original sequence into two disjoint sub-sequences, is possibly a reason for the lower performance of the two-phase baseline. Additionally, the need to integrate sequential information is once again highlighted by the inferior performance of the CNN-only baseline. We also wish to point out that a portion of the guesses in the test set are out-of-vocabulary words, i.e. guesses not present in the train or validation sets. In spite of this, our model achieves high sequence-level accuracy, making the case for open-ended word-guessing models.

Examples of guesses generated by our model on test set sketch sequences can be viewed in Figure 10.

Visual Turing Test: As a subjective assessment of our model, we also conduct a Visual Turing Test. We randomly sample sequences from our test set. For each of the model's predictions, we use the nearest word-embedding as the corresponding guess. We construct two kinds of paired sequences, $(S_i, G^h_i)$ and $(S_i, G^m_i)$, where $S_i$ corresponds to the $i$-th sketch stroke sequence and $G^h_i$, $G^m_i$ correspond to the human- and model-generated guess sequences respectively. We randomly display the stroke-and-guess-word paired sequences to human judges, with multiple judges for each of the two sequence types. Without revealing the origin of the guesses (human or machine), each judge is prompted: “Who produced these guesses?”.

Architecture Avg. sequence-level accuracy
CNN
Two-phase
Proposed
TABLE VII: Overall average sequence-level accuracies on the test set for the guessing models (CNN-only baseline [first row], two-phase baseline [second] and our proposed model [third]).

The judges entered their ratings on a Likert scale (‘Very likely a machine’, ‘Either is equally likely’, ‘Very likely a human’). To minimize selection bias, the scale ordering is reversed for half the subjects [24]. For each sequence $i$, we first compute the modes $M^h_i$ (human guesses) and $M^m_i$ (model guesses) of the ratings by guesser type. To determine the statistical significance of the ratings, we additionally analyze the rating pairs $(M^h_i, M^m_i)$ using the non-parametric Wilcoxon Signed-Rank test [25].
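The paired-rating analysis above can be sketched in plain Python. The ratings below are invented values for illustration, and `wilcoxon_T` computes only the standard $T = \min(W^+, W^-)$ statistic of the Wilcoxon signed-rank test (the significance lookup is omitted):

```python
from collections import Counter

def rating_mode(ratings):
    """Most frequent Likert rating among one sequence's judges."""
    return Counter(ratings).most_common(1)[0][0]

def wilcoxon_T(x, y):
    """Wilcoxon signed-rank statistic T = min(W+, W-) for paired samples.
    Zero differences are discarded; tied |d| receive average ranks."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        # extend the tie group: equal |d| values share an average rank
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

In practice one would compare the per-sequence modes of human-guess ratings against model-guess ratings, exactly as the paired test in the text prescribes.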

When we study the distribution of ratings (Figure 12), the human-subject guesses from WordGuess-160 are clearly identified as such – the two most frequent rating levels correspond to ‘human’. The non-trivial frequency of ‘machine’ ratings reflects the ambiguity induced not only by the sketches and associated guesses, but also by the possibility of a machine being an equally viable generator. Many of the model-generated guesses could be identified as such, indicating the need for more sophisticated guessing models. This is also evident from the Wilcoxon Signed-Rank test, which indicates a significant effect due to guesser type. Interestingly, the second-most preferred rating for model guesses is ‘human’, indicating a degree of success for the proposed model.

Fig. 11: Architecture for the two-phase baseline. The first phase (blue dotted line) is used to predict the location of the transition to the word-guessing phase. Starting from the transition location, the second phase (red dotted line) sequentially outputs word-embedding predictions until the end of the stroke sequence.

8 Related Work

Beyond its obvious entertainment value, Pictionary involves a number of social [26, 27], collaborative [28, 29] and cognitive [30, 31] aspects which have been studied by researchers. In an attempt to find neural correlates of creativity, Saggar et al. [32] analyze fMRI data of participants instructed to draw sketches of Pictionary ‘action’ words (e.g. “Salute”, “Snore”). In our approach, we ask subjects to guess the word instead of drawing the sketch for a given word. Also, our sketches correspond to nouns (objects).

Human-elicited text-based responses to visual content, particularly in game-like settings, have been explored for object categorization [33, 34]. However, the visual content is static and does not accumulate sequentially, unlike our case. The work of Ullman et al. [35] on determining minimally recognizable image configurations also bears mention. Our approach is complementary to theirs in the sense that we incrementally add stroke content (bottom-up) while they incrementally reduce image content (top-down).

In recent times, deep architectures for sketch recognition [36, 37, 16] have found great success. However, these models are trained to output a single, fixed label regardless of the intra-category variation. In contrast, our model, trained on actual human guesses, naturally exhibits human-like variety in its responses (e.g. a sketch can be guessed as ‘aeroplane’ or ‘warplane’ based on evolution of stroke-based appearance). Also, our model solves a much more complex temporally-conditioned, multiple word-embedding regression problem. Another important distinction is that our dataset (WordGuess-160) contains incorrect guesses which usually arise due to ambiguity in sketched depictions. Such ‘errors’ are normally considered undesirable, but we deliberately include them in the training phase to enable realistic mimicking. This in turn requires our model to implicitly capture the subtle, fine-grained variations in sketch quality – a situation not faced by existing approaches which simply optimize for classification accuracy.

Our dataset collection procedure is similar to the one employed by Johnson et al. [38] as part of their Pictionary-style game Stellasketch. However, we do not let the subject choose the object category. Also, our subjects only provide guesses for stroke sequences of existing sketches and not for sketches being created in real-time. Unfortunately, the Stellasketch dataset is not available publicly for further study.

It is also pertinent to compare our task and dataset with QuickDraw, a large-scale sketch collection initiative by Google (https://github.com/googlecreativelab/quickdraw-dataset). The QuickDraw task generates a dataset of object sketches. In contrast, our task Sketch-QA results in a dataset of human-generated guess-words. In QuickDraw, a sketch is associated with a single, fixed category. In Sketch-QA, a sketch from an existing dataset is explicitly associated with a list of multiple guess-words. In Sketch-QA, the freedom provided to human guessers enables sketches to have arbitrarily fine-grained labels (e.g. ‘airplane’, ‘warplane’, ‘biplane’), whereas QuickDraw’s label set is fixed. Finally, our dataset (WordGuess-160) captures a rich sequence of guesses in response to the accumulation of sketch strokes. Therefore, it can be used to train human-like guessing models; QuickDraw’s dataset, lacking human guesses, is not suited for this purpose.

Our computational model employs the Long Short-Term Memory (LSTM) [22] variant of Recurrent Neural Networks (RNNs). LSTM-based frameworks have been utilized for tasks involving temporally evolving content such as video captioning [39, 5] and action recognition [40, 41, 42]. Our model not only needs to produce human-like guesses in response to temporally accumulated content, but also has the additional challenge of determining how long to ‘wait’ before initiating the guessing process. Once the guessing phase begins, our model typically outputs multiple answers. These per-time-step answers may even be unrelated to each other. This paradigm is different from a setup wherein a single answer constitutes the output. Also, the output of the RNN in the aforementioned approaches is a softmax distribution over all the words of a fixed dictionary. In contrast, we use a regression formulation wherein the RNN outputs a word-embedding prediction at each time-step. This ensures scalability with increase in vocabulary size and better generalization, since our model outputs predictions in a constant-dimension vector space. Lev et al. [43] adopt a similar regression formulation to obtain improved performance for image annotation and action recognition.
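As an illustration of the regress-then-decode idea, the toy sketch below uses hypothetical 3-dimensional embeddings and invented vocabulary entries; a real system would use learned word vectors (e.g. word2vec) of much higher dimension:

```python
import math

# Hypothetical 3-d word embeddings, purely for illustration.
VOCAB = {
    "airplane": [0.9, 0.1, 0.0],
    "warplane": [0.8, 0.3, 0.1],
    "banana":   [0.0, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def decode(predicted_embedding):
    """Map a regressed embedding to its nearest vocabulary word.
    Because decoding is nearest-neighbour in embedding space, the
    vocabulary can grow without retraining the regressor -- unlike a
    fixed softmax output layer."""
    return max(VOCAB, key=lambda w: cosine(VOCAB[w], predicted_embedding))
```

The RNN regresses one embedding per time-step; each regressed vector is decoded independently, which is how per-time-step guesses can be unrelated to one another.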

Since our model aims to mimic human-like guessing behavior, a subjective evaluation of generated guesses falls within the ambit of a Visual Turing Test [44, 45, 46]. However, the free-form nature of guess-words and the ambiguity arising from partial stroke information make our task uniquely more challenging.

9 Discussion and Conclusion

We have introduced a novel guessing task called Sketch-QA to crowd-source Pictionary-style open-ended guesses for object line sketches as they are drawn. The resulting dataset, dubbed WordGuess-160, contains guess sequences from human subjects across a large number of object categories. We have also introduced a novel computational model which produces open-ended guesses, and analyzed its performance on the WordGuess-160 dataset for challenging on-line Pictionary-style guessing tasks.

In addition to the computational model, our dataset WordGuess-160 can serve researchers studying human perception of iconic object depictions. Since the guess-words are paired with object depictions, our data can also aid graphic designers and civic planners in the creation of meaningful logos and public signage. This is especially important since incorrectly perceived depictions often result in inconvenience, mild amusement, or, in extreme cases, end up deemed offensive. Yet another potential application domain is clinical healthcare. WordGuess-160 contains partially drawn objects and corresponding guesses across a large number of categories. Such data could be useful for neuropsychiatrists in characterizing conditions such as visual agnosia, a disorder in which subjects exhibit impaired object-recognition capabilities [47].

Fig. 12: Distribution of ratings for human and machine-generated guesses.

In the future, we also wish to explore computational models for optimal guessing, i.e. models which aim to guess the sketch category as early and as correctly as possible. In the futuristic context mentioned at the beginning (Figure 1), such models would help the robot contribute as a productive team-player by correctly guessing its team-member’s sketch as early as possible. In our dataset, each stroke sequence was shown to only a single subject and is therefore associated with a single corresponding sequence of guesses. This shortcoming is to be mitigated in future editions of Sketch-QA. A promising approach for data collection would be to use digital whiteboards, high-quality microphones and state-of-the-art speech recognition software to collect realistic paired stroke-and-guess data from Pictionary games in home-like settings [48]. It would also be worthwhile to extend Sketch-QA beyond object names (‘nouns’) to additional lexical types (e.g. action-words and abstract phrases). We believe the resulting data, coupled with improved versions of our computational models, could make the scenario from Figure 1 a reality one day.

Appendix A Two-Phase Baseline Model

In this section, we present the architectural design and related evaluation experiments of the two-phase baseline originally mentioned in Section 7.

Typically, a guess sequence contains two distinct phases. In the first phase, no guesses are provided by the subject since the accumulated strokes provide insufficient evidence. At a later stage, the subject feels confident enough to provide the first guess. The location of this first guess (within the overall sequence) is thus the starting point of the second phase. The first phase (i.e. no guesses) offers no usable guess-words. Therefore, rather than tackling both phases within a single model, we adopt a divide-and-conquer approach. We design this baseline to first predict the phase-transition location (i.e. where the first guess occurs). Conditioned on this location, the model predicts guess-word representations for the rest of the sequence (see Figure 11).

The guess-word generator is a component common to the two-phase model and the model described in the main paper, and is already described there (Section 6). For the remainder of this section, we focus on the first phase of the two-phase baseline.

Consider a typical guess sequence of length $T$. Suppose the first phase (‘no guesses’) corresponds to an initial sub-sequence of length $k$. The second phase then corresponds to the remainder sub-sequence of length $T - k$. Denoting ‘no guess’ as $0$ and a guess-word as $1$, the guess sequence is transformed into a binary sequence consisting of $k$ $0$s followed by $T - k$ $1$s. Therefore, the objective of the Phase-I model is to correctly predict the transition index, i.e. $k + 1$.
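The binary-sequence view of a guess sequence can be illustrated with a small helper; the `None`-for-‘no guess’ encoding here is an assumption made for the sketch:

```python
def transition_index(guesses):
    """Return the 1-based index of the first actual guess in a per-timestep
    guess sequence, encoding 'no guess' (None) as 0 and a guess-word as 1.
    Returns None if the sequence never enters the guessing phase."""
    binary = [0 if g is None else 1 for g in guesses]
    return binary.index(1) + 1 if 1 in binary else None
```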

a.1 Phase I model (Transition prediction)

CNN model LSTM Loss Window width
01 CCE
01-a CCE
01-a 64 Seq
01-a 128 Seq
01-a 256 Seq
01-a 512 Seq
01-a 128 wSeq
01-a 128 mRnk
TABLE VIII: The transition location prediction accuracies for various Phase I architectures are shown. 01 refers to the binary output CNN model pre-trained for feature extraction. 01-a refers to the 01 CNN model with -way auxiliary classification. The last two rows correspond to test set accuracies of the best CNN and LSTM configurations. For the ‘Loss’ column, CCE = Categorical-cross entropy, Seq = Average sequence loss, wSeq = Weighted sequence loss, mRnk = modified Ranking Loss. The results are shown for ‘Window width’ sized windows centered on ground-truth transition location. The rows below dotted line show performance of best CNN and LSTM models on test sequences.

Two possibilities exist for the Phase-I model. The first is to train a CNN on (sketch, binary label) pairs for binary (Guess/No Guess) classification and, during inference, repeatedly apply the CNN on successive time-steps, stopping when it outputs a $1$ (indicating the beginning of the guessing phase). The second is to train an RNN and, during inference, stop unrolling when a $1$ is encountered. We describe the setup for the CNN model first.
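The first (CNN-based) possibility reduces to a simple stopping rule at inference time. In the sketch below, `classify` is a stand-in for the fine-tuned binary CNN:

```python
def predict_transition(classify, stroke_states):
    """Run a per-timestep Guess/No-Guess classifier over accumulated stroke
    renderings and stop at the first predicted 1 (guessing begins).
    `classify` stands in for the fine-tuned binary CNN; `stroke_states`
    holds the accumulated sketch at each time-step."""
    for t, state in enumerate(stroke_states, start=1):
        if classify(state) == 1:
            return t  # 1-based predicted transition index
    return None  # the model never entered the guessing phase
```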

a.1.1 CNN model

For the CNN model, we fine-tune the VGG-16 object classification model [17] using Sketchy [18], as in the proposed model. The fine-tuned model is used to initialize another VGG-16 model, but with a low-dimensional bottleneck layer introduced after the fc7 layer. Let us denote this model as 01.

a.1.2 Sketch representation

As feature representations, we consider two possibilities: [a] 01 is fine-tuned for 2-way classification (Guess/No Guess), and the output of the final fully-connected layer forms the feature representation. [b] The architecture in option [a] is modified by adding object-category prediction as an additional, auxiliary task. This choice is motivated by the possibility of encoding category-specific transition-location statistics within the feature representation (see Figure 5). The two losses corresponding to the two outputs (binary and category classification) of the modified architecture are weighted equally during training.

Loss weighting for imbalanced label distributions: When training the feature-extraction CNN (01) in Phase I, we encounter imbalance in the distribution of no-guesses ($0$s) and guesses ($1$s). To mitigate this, we employ class-based loss weighting [49] for the binary classification task. Suppose the number of no-guess samples is $N_0$, the number of guess samples is $N_1$ and $N = N_0 + N_1$. The classes are weighted inversely to their frequency, i.e. $w_0 = N / (2 N_0)$ and $w_1 = N / (2 N_1)$. The weighted binary cross-entropy loss is then computed as:

$\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} w_{c_i} \left[ y_i \log \hat{y}_i + (1 - y_i) \log (1 - \hat{y}_i) \right]$    (3)

where $y_i$, $\hat{y}_i$ stand for ground-truth and prediction respectively, and $c_i = 0$ when sample $i$ is a no-guess sample and $c_i = 1$ otherwise. For our data, $w_0 > w_1$, thus appropriately accounting for the relatively smaller number of no-guess samples in our training data.
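A minimal sketch of class-weighted binary cross-entropy; the inverse-frequency weights used here are one standard choice and stand in for the paper's exact weight values:

```python
import math

def class_weights(n0, n1):
    """Inverse-frequency class weights: the rarer class gets the larger
    weight (one common scheme; the paper's exact values are not known)."""
    n = n0 + n1
    return n / (2.0 * n0), n / (2.0 * n1)

def weighted_bce(y_true, y_pred, w0, w1):
    """Class-weighted binary cross-entropy, averaged over samples."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        w = w1 if y == 1 else w0
        total += -w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)
```

With, say, 25 no-guess samples and 75 guess samples, the no-guess class receives the larger weight, matching the intuition in the text.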

A similar procedure is used to weight the losses when the auxiliary-classifier variant of 01 is trained. In this case, the weights are determined by the per-category distribution of the training sequences. Experimentally, the variant with the auxiliary task (01-a) shows better performance – see the first two rows of Table VIII.

a.1.3 LSTM setup

We use the bottleneck output of the 01-a CNN as the per-timestep sketch representation fed to the LSTM model. To capture the temporal evolution of the binary sequences, we configure the LSTM to output a binary label at each timestep. For the LSTM, we explored variations in the number of hidden units. The weight matrices are initialized as orthogonal matrices [50] and the forget-gate bias is set to a positive value. For training the LSTMs, we use the average sequence loss, computed as the average of the per-time-step binary cross-entropy losses. The loss is regularized by a standard L2 weight-decay term. For optimization, we use Adagrad with momentum, and gradients are clipped during training. All LSTM experiments use a fixed mini-batch size.

a.1.4 LSTM Loss function variants

The default sequence-loss formulation treats all time-steps of the sequence equally. Since we are interested in accurate localization of the transition point, we explored the following modifications of the default LSTM loss:

Transition-weighted loss: To encourage correct prediction at the transition location, we explored a weighted version of the default sequence-level loss. The per-timestep losses on either side of the transition are weighted by an exponentially decaying factor $w_t = e^{-\lambda |t - t_g|}$, where $t_g$ denotes the transition location. Essentially, the loss at the transition location is weighted the most, while the losses at other locations are downscaled by weights less than $1$ – the larger the distance from the transition location, the smaller the weight. We tried various values of the decay parameter $\lambda$; the localization accuracies can be viewed in Table IX. Note that the weighted loss is added to the original sequence loss during training.
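The decaying weight profile can be sketched as follows, assuming an exponential factor $e^{-\lambda |t - t_g|}$ centred on the transition location $t_g$ (the exact decay form used in the paper is not reproduced here):

```python
import math

def transition_weights(seq_len, t_g, lam):
    """Per-timestep loss weights that peak (at 1.0) on the transition
    location t_g and decay exponentially on either side of it."""
    return [math.exp(-lam * abs(t - t_g)) for t in range(1, seq_len + 1)]
```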

CNN model LSTM Window width
01-a 128 5
01-a 128 7
01-a 128 10
TABLE IX: Weighted-loss performance for various values of the decay parameter.

Modified ranking loss: We want the model to prevent premature or multiple transitions. To incorporate this notion, we use the ranking-loss formulation proposed by Ma et al. [42]. Let us denote the classification loss at time step $t$ as $\mathcal{L}^c_t$ and the softmax score of the ground-truth label as $s_t$; we refer to $s_t$ as the detection score. In our case, for the Phase-I model, $\mathcal{L}^c_t$ corresponds to the binary cross-entropy loss. The overall loss at time step $t$ is modified as:

$\mathcal{L}_t = \mathcal{L}^c_t + \lambda_r \, \mathcal{L}^r_t$    (4)

where $\mathcal{L}^r_t$ is a ranking loss defined below.

We want the Phase-I model to produce monotonically non-decreasing softmax scores for no-guesses and for guesses as it progresses through a sub-sequence. In other words, if there is no transition at time $t$, we want the current detection score to be no less than any previous detection score within the same phase. Therefore, for this situation, the ranking loss is computed as:

$\mathcal{L}^r_t = \max(0,\; s^*_t - s_t)$    (5)

where

$s^*_t = \max_{t' \in [t_s,\, t]} s_{t'}$    (6)

and $t_s$ corresponds to the time step at which the current phase began, i.e. the start of the sequence (No Guesses) or the starting location of Guessing.

If time-step $t$ corresponds to a transition, we want the detection score of the previous phase (‘No Guess’) to be as small as possible (ideally $0$). Therefore, we compute the ranking loss as:

$\mathcal{L}^r_t = s^{ng}_{t-1}$    (7)

where $s^{ng}_{t-1}$ denotes the softmax score of the ‘No Guess’ label at the previous time step.

During training, we use a convex combination of the sequence loss and the ranking loss, with the combination weight determined by grid search (see Table X). From our experiments, we found the transition-weighted loss to provide the best performance (Table VIII).
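A toy rendering of the ranking-loss logic, under the assumption (following Ma et al.'s formulation) that within a phase the loss penalises any drop of the detection score below its running maximum, and at a transition it penalises the previous phase's ‘No Guess’ score directly:

```python
def ranking_loss(scores, t, is_transition, prev_no_guess_score=None):
    """Sketch of the per-timestep ranking loss.
    `scores` holds detection scores from the start of the current phase up
    to and including 1-based time step t. At a transition, the loss is the
    previous time-step's 'No Guess' softmax score (ideally 0); otherwise it
    penalises any drop below the running maximum score."""
    if is_transition:
        return prev_no_guess_score
    running_max = max(scores[:t])
    return max(0.0, running_max - scores[t - 1])
```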

CNN model LSTM Window width
01-a 128
01-a 128
01-a 128
TABLE X: Ranking-loss performance for various weightings of the sequence loss and the rank loss.
P-I P-II Average sequence-level accuracy (P-II only / Full)
01-a
Unified Unified
01-a R25
TABLE XI: Overall average sequence-level accuracies on the test set for the guessing models (CNN-only baseline [first row], Unified [second], Two-Phase [third]). R25 corresponds to the best Phase-II LSTM model.

a.1.5 Evaluation

At inference time, the accumulated stroke sequence is processed sequentially by the Phase-I model until it outputs a $1$, which marks the beginning of Phase II. Suppose the predicted transition index is $\hat{t}$ and the ground-truth index is $t^*$. The prediction is deemed correct if $|\hat{t} - t^*| \leqslant w$, where $w$ denotes the half-width of a window centered on $t^*$. The results (Table VIII) indicate that the auxiliary-task CNN model (01-a) outperforms the best LSTM model by a very small margin. The addition of the weighted sequence loss to the default version plays a crucial role for the LSTM model, since the default version does not explicitly optimize for the transition location. Overall, the large variation in sequence lengths and transition locations explains the low performance for exact ($w = 0$) localization. Note, however, that performance improves considerably when one or two nearby locations are also accepted.
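The window-based correctness criterion is straightforward to state in code; `pred_idx` and `gt_idx` stand for the predicted and ground-truth transition indices:

```python
def localization_correct(pred_idx, gt_idx, half_width):
    """A predicted transition index counts as correct if it falls within a
    window of the given half-width centred on the ground-truth index.
    half_width = 0 demands exact localization."""
    return abs(pred_idx - gt_idx) <= half_width
```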

During inference, the location predicted by Phase-I model is used as the starting point for Phase-II (word guessing). We do not describe Phase-II model since it is virtually identical in design as the model described in the main paper (Section 6).

a.2 Overall Results

To determine overall performance, we utilize the best architectural settings as determined by validation-set performance. We then merge the validation and training sets, re-train the best models, and report their performance on the test set. As the overall performance measure, we report two quantities – [a] P-II: the fraction of correct matches with respect to the sub-sequence corresponding to ground-truth word guesses; in other words, we assume accurate localization during Phase I and perform Phase-II inference beginning from the ground-truth location of the first guess. [b] Full: we use the Phase-I model to determine the transition location. Depending on the predicted location, we may obtain word-embedding predictions at time-steps whose ground-truth is ‘no guess’. Regarding such predictions as mismatches, we compute the fraction of correct matches for the full sequence. As a baseline model (first row of Table XI), we use the outputs of the best-performing per-frame CNNs from Phase I and Phase II.

The results (Table XI) show that the Unified model outperforms the Two-Phase model by a significant margin. For the Phase-II model, the objectives of the CNN (whose features are used as the sketch representation) and the LSTM are the same; this is not the case for the Phase-I model. The reduction in long-range temporal contextual information, caused by splitting the original sequence into two disjoint sub-sequences, is possibly another reason for the lower performance of the Two-Phase model.

References

  • [1] G. Tesauro, “TD-gammon, a self-teaching backgammon program, achieves master-level play,” Neural Computation, vol. 6, no. 2, pp. 215–219, 1994.
  • [2] Deep Blue Versus Kasparov: The Significance for Artificial Intelligence.   AAAI Press, 1997.
  • [3] D. Silver et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, 2016.
  • [4] X. Chen and C. Lawrence Zitnick, “Mind’s eye: A recurrent visual representation for image caption generation,” in CVPR, 2015, pp. 2422–2431.
  • [5] S. Venugopalan, M. Rohrbach, J. Donahue, R. Mooney, T. Darrell, and K. Saenko, “Sequence to sequence-video to text,” in CVPR, 2015, pp. 4534–4542.
  • [6] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio, “Show, attend and tell: Neural image caption generation with visual attention.” in ICML, vol. 14, 2015, pp. 77–81.
  • [7] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh, “VQA: Visual question answering,” in ICCV, 2015, pp. 2425–2433.
  • [8] H. Xu and K. Saenko, “Ask, attend and answer: Exploring question-guided spatial attention for visual question answering,” in ECCV.   Springer, 2016, pp. 451–466.
  • [9] M. Ren, R. Kiros, and R. Zemel, “Exploring models and data for image question answering,” in NIPS, 2015, pp. 2953–2961.
  • [10] M. Eitz, J. Hays, and M. Alexa, “How do humans sketch objects?” ACM Trans. on Graphics, vol. 31, no. 4, p. 44, 2012.
  • [11] R. G. Schneider and T. Tuytelaars, “Sketch classification and classification-driven analysis using fisher vectors,” ACM Trans. Graph., vol. 33, no. 6, pp. 174:1–174:9, Nov. 2014.
  • [12] P. Halácsy, A. Kornai, and C. Oravecz, “HunPos: an open source trigram tagger,” in Proc. ACL on interactive poster and demonstration sessions, 2007, pp. 209–212.
  • [13] D. Lachowicz, “Enchant spellchecker library,” 2010.
  • [14] G. A. Miller, “Wordnet: a lexical database for english,” Communications of the ACM, vol. 38, no. 11, pp. 39–41, 1995.
  • [15] Z. Wu and M. Palmer, “Verbs semantics and lexical selection,” in ACL.   Association for Computational Linguistics, 1994, pp. 133–138.
  • [16] R. K. Sarvadevabhatla, J. Kundu, and V. B. Radhakrishnan, “Enabling my robot to play pictionary: Recurrent neural networks for sketch recognition,” in ACMMM, 2016, pp. 247–251.
  • [17] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [18] P. Sangkloy, N. Burnell, C. Ham, and J. Hays, “The sketchy database: learning to retrieve badly drawn bunnies,” ACM Transactions on Graphics (TOG), vol. 35, no. 4, p. 119, 2016.
  • [19] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” arXiv preprint arXiv:1301.3781, 2013.
  • [20] T. Qin, X.-D. Zhang, M.-F. Tsai, D.-S. Wang, T.-Y. Liu, and H. Li, “Query-level loss functions for information retrieval,” Information Processing & Management, vol. 44, no. 2, pp. 838–855, 2008.
  • [21] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov et al., “Devise: A deep visual-semantic embedding model,” in NIPS, 2013, pp. 2121–2129.
  • [22] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  • [23] J. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” JMLR, vol. 12, no. Jul, pp. 2121–2159, 2011.
  • [24] J. C. Chan, “Response-order effects in likert-type scales,” Educational and Psychological Measurement, vol. 51, no. 3, pp. 531–540, 1991.
  • [25] F. Wilcoxon, “Individual comparisons by ranking methods,” Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945. [Online]. Available: http://www.jstor.org/stable/3001968
  • [26] T. B. Wortham, “Adapting common popular games to a human factors/ergonomics course,” in Proc. Human Factors and Ergonomics Soc. Annual Meeting, vol. 50.   SAGE, 2006, pp. 2259–2263.
  • [27] F. Mäyrä, “The contextual game experience: On the socio-cultural contexts for meaning in digital play,” in Proc. DIGRA, 2007, pp. 810–814.
  • [28] N. Fay, M. Arbib, and S. Garrod, “How to bootstrap a human communication system,” Cognitive science, vol. 37, no. 7, pp. 1356–1367, 2013.
  • [29] M. Groen, M. Ursu, S. Michalakopoulos, M. Falelakis, and E. Gasparis, “Improving video-mediated communication with orchestration,” Computers in Human Behavior, vol. 28, no. 5, pp. 1575 – 1579, 2012.
  • [30] D. M. Dake and B. Roberts, “The visual analysis of visual metaphor,” 1995.
  • [31] B. Kievit-Kylar and M. N. Jones, “The semantic pictionary project,” in Proc. Annual Conf. Cog. Sci. Soc., 2011, pp. 2229–2234.
  • [32] M. Saggar et al., “Pictionary-based fMRI paradigm to study the neural correlates of spontaneous improvisation and figural creativity,” Nature (2005), 2015.
  • [33] L. Von Ahn and L. Dabbish, “Labeling images with a computer game,” in SIGCHI.   ACM, 2004, pp. 319–326.
  • [34] S. Branson, C. Wah, F. Schroff, B. Babenko, P. Welinder, P. Perona, and S. Belongie, “Visual recognition with humans in the loop,” in European Conference on Computer Vision.   Springer, 2010, pp. 438–451.
  • [35] S. Ullman, L. Assif, E. Fetaya, and D. Harari, “Atoms of recognition in human and computer vision,” PNAS, vol. 113, no. 10, pp. 2744–2749, 2016.
  • [36] Q. Yu, Y. Yang, Y.-Z. Song, T. Xiang, and T. M. Hospedales, “Sketch-a-net that beats humans,” arXiv preprint arXiv:1501.07873, 2015.
  • [37] O. Seddati, S. Dupont, and S. Mahmoudi, “Deepsketch: deep convolutional neural networks for sketch recognition and similarity search,” in CBMI.   IEEE, 2015, pp. 1–6.
  • [38] G. Johnson and E. Y.-L. Do, “Games for sketch data collection,” in Proceedings of the 6th eurographics symposium on sketch-based interfaces and modeling.   ACM, 2009, pp. 117–123.
  • [39] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent convolutional networks for visual recognition and description,” in CVPR, 2015, pp. 2625–2634.
  • [40] S. Yeung, O. Russakovsky, N. Jin, M. Andriluka, G. Mori, and L. Fei-Fei, “Every moment counts: Dense detailed labeling of actions in complex videos,” arXiv preprint arXiv:1507.05738, 2015.
  • [41] J. Y.-H. Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici, “Beyond short snippets: Deep networks for video classification,” in CVPR, 2015, pp. 4694–4702.
  • [42] S. Ma, L. Sigal, and S. Sclaroff, “Learning activity progression in lstms for activity detection and early detection,” in CVPR, 2016, pp. 1942–1950.
  • [43] G. Lev, G. Sadeh, B. Klein, and L. Wolf, “Rnn fisher vectors for action recognition and image annotation,” in ECCV.   Springer, 2016, pp. 833–850.
  • [44] D. Geman, S. Geman, N. Hallonquist, and L. Younes, “Visual turing test for computer vision systems,” PNAS, vol. 112, no. 12, pp. 3618–3623, 2015.
  • [45] M. Malinowski and M. Fritz, “Towards a visual turing challenge,” arXiv preprint arXiv:1410.8027, 2014.
  • [46] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu, “Are you talking to a machine? dataset and methods for multilingual image question,” in NIPS, 2015, pp. 2296–2304.
  • [47] L. Baugh, L. Desanghere, and J. Marotta, “Agnosia,” in Encyclopedia of Behavioral Neuroscience.   Academic Press, Elsevier Science, 2010, vol. 1, pp. 27–33.
  • [48] G. A. Sigurdsson, G. Varol, X. Wang, A. Farhadi, I. Laptev, and A. Gupta, “Hollywood in homes: Crowdsourcing data collection for activity understanding,” in ECCV, 2016.
  • [49] D. Eigen and R. Fergus, “Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2650–2658.
  • [50] A. M. Saxe, J. L. McClelland, and S. Ganguli, “Exact solutions to the nonlinear dynamics of learning in deep linear neural networks,” CoRR, vol. abs/1312.6120, 2013. [Online]. Available: http://arxiv.org/abs/1312.6120