Exploring Crowd Co-creation Scenarios for Sketches

As a first step towards studying the ability of human crowds and machines to effectively co-create, we explore several human-only collaborative co-creation scenarios. The goal in each scenario is to create a digital sketch using a simple web interface. We find that settings in which multiple humans iteratively add strokes and vote on the best additions result in sketches with the highest perceived creativity (value + novelty). Lack of collaboration leads to higher variance in quality and lower novelty or surprise. Collaboration without voting leads to high novelty but low quality.






How can one best collaborate with humans in a creative process? Insights into this question can inform what roles machines can (or should not) play when co-creating with humans.

Specifically, we consider a scenario where agents take turns collaboratively drawing a sketch on a simple web interface (Figure 1). During each iteration, multiple agents propose strokes to add to the sketch. Agents then vote on the proposals, and the preferred set of strokes is added to the sketch. This process is repeated for a fixed number of iterations to create a final sketch.

The roles of creating stroke proposals and voting could each be fulfilled by either humans (H) or machines (M). Borrowing terminology from Generative Adversarial Networks [6], we call the former role a generator (G) and the latter a discriminator (D). This allows for four {H,M} × {G,D} co-creation scenarios. Further, different individuals could play the role of generators/discriminators across iterations, leading to crowd co-creation.
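The 2×2 design space can be enumerated directly; a minimal sketch (the role shorthand is ours, not code from the paper's system):

```python
from itertools import product

agents = ["H", "M"]  # human or machine
# Each scenario assigns an agent type to the generator (G, proposes
# strokes) and to the discriminator (D, votes on proposals).
scenarios = [{"G": g, "D": d} for g, d in product(agents, agents)]
for s in scenarios:
    print(s)  # four scenarios: HH, HM, MH, MM
```

This work studies the G=H, D=H corner of the space, with and without the discriminator role filled.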

Figure 1: As a first step towards human-machine co-creation, we explore human-human collaboration for creating digital sketches on a simple web interface shown above. Video: https://youtu.be/9fikuKPYPd0
Figure 2: Few iterations of a sketch being created in the Collaborative + voting scenario. Once a parent sketch gets five children, it gets selected as the next iteration of the sketch (black outline), and the five children become the parents for the next iteration. Temporal visualization: https://youtu.be/JQmGALAhhMU. Examples with all iterations: Figures 7 and 8.

In this work, as a step towards human-machine co-creation, we study various human-human crowd co-creation scenarios. In the first, Individual, a single human creates the entire sketch (no discriminator D, and no crowd). Second, in Collaborative the sketch is generated by multiple human agents (crowd) iteratively taking turns adding strokes. That is, all the agents act as generators G and there is no voting or discriminator D. The third, Collaborative + voting, is where multiple human agents (generators) propose new strokes at each iteration. Another set of human agents (discriminators) vote on which set of strokes to add to the sketch. Finally, we explore Individual with collaborative prompts, for which the crowd is involved indirectly. A single human creates the entire sketch, but by following text prompts that describe the evolution of a sketch that was created in the Collaborative scenario.

We evaluate the qualitative difference between the sketches produced via these four scenarios. We find that the collaborative setting with a voting mechanism (Collaborative + voting) leads to sketches that are rated by human subjects as most creative (and are preferred along a variety of other dimensions). The lack of either one of these components results in less creative sketches: Individual sketches have decent quality (value) but low novelty, while Collaborative sketches have high novelty but low value. Individual with collaborative prompts results in high novelty but even worse quality. Overall, among these four scenarios, Collaborative + voting best hits the sweet spot for creativity: value + novelty [1].

Related Work

Related research spans numerous areas, including crowdsourced art, machine interpretation of sketches, and human-machine co-creation of sketches.

Crowdsourced art projects (e.g., www.thejohnnycashproject.com, www.bicyclebuiltfortwothousand.com, www.swarmsketch.com), while often powerful, are typically one-off projects as opposed to systematic studies of collaboration strategies. Closest in philosophy to our work is perhaps Picbreeder [10], which engages the crowd in a genetic algorithm to evolve images by allowing users to pick the ‘parents’ to be bred, or create branches of existing images to evolve them further. Our work explores crowd collaboration in the context of sketches and uses direct interaction (the user draws on the canvas) to affect the artifact, as opposed to acting via a black-box algorithm.

Several AI systems have been trained to recognize sketches (e.g., models trained on the Quick, Draw! dataset: https://github.com/googlecreativelab/quickdraw-dataset). These may form useful building blocks for the next stages of our work. However, as seen in Figure 3, our sketches tend to be complex scenes and often abstract as opposed to concrete individual objects, which have been the focus of most existing work in automatic sketch recognition. There is also work on generating images based on sketches [3].

[5] employ a cognitive science framework called participatory sense-making to study co-creation in sketches. Central to their study is the back-and-forth interaction (dialog) between the human and machine as they take turns. Our work is focused on a crowd setting where no two agents interact again in the future. [7] study human-AI co-creativity in the context of humans sketching for a particular design goal. Our work falls in the category of “casual creators” [4] – systems that support exploratory as opposed to goal-driven creativity.

Sketching Interface

Human agents create sketches using the JavaScript-based interface shown in Figure 1. Strokes can be varied across four thicknesses and ten colors, and have a paint-like texture. The number and length of the strokes an agent may draw are limited during each iteration. Real-time feedback on how close they are to the cutoff is provided by a stroke-limit bar; thicker strokes count more towards the limit. Strokes drawn by the agent during the current iteration may be undone. See https://youtu.be/9fikuKPYPd0 for a video of the interface. Our interface is publicly available.
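The paper does not give the exact budget formula, but a thickness-weighted stroke budget of the kind described could be sketched as follows (the weights, pixel units, and budget are hypothetical, chosen only to illustrate the mechanism):

```python
# Hypothetical stroke-budget check: thicker strokes count more toward
# the per-iteration limit, mirroring the interface's stroke-limit bar.
THICKNESS_WEIGHT = {1: 1.0, 2: 1.5, 3: 2.0, 4: 3.0}  # assumed weights

def stroke_cost(length_px: float, thickness: int) -> float:
    """Cost of one stroke toward the budget."""
    return length_px * THICKNESS_WEIGHT[thickness]

def budget_used(strokes, budget: float) -> float:
    """Fraction of the stroke budget consumed (what the limit bar shows)."""
    return sum(stroke_cost(length, t) for length, t in strokes) / budget

# e.g., a budget of five medium-thickness strokes spanning a 600 px canvas
budget = 5 * stroke_cost(600, 2)
print(budget_used([(600, 2), (300, 4)], budget))  # → 0.4
```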

Figure 3: Example sketches from four co-creation scenarios along with differences identified by human subjects between sketches from pairs of scenarios. Collaborative + voting involves 12.5 times the individuals, and so was run for 20 instead of 30 iterations. For comparison, Collaborative sketches are also shown at 20 iterations. More sketches from the four scenarios can be seen in Figures 9, 10, 11, and 12 respectively.
Figure 4: Example prompts used in the Individual with collaborative prompts scenario.
Figure 5: Evolution of example sketches in the Collaborative scenario. Left: Focus of the sketch shifts from the house to the cat in the rain outside the house. Right: Faced with seemingly incoherent strokes, subjects emphasize structure they see in it so subsequent subjects can add to it. More examples of sketches evolving are in Figures 13 and 14.

Co-creation Scenarios

We explore four scenarios for collaborative human-human sketch co-creation. In every scenario, the sketch starts with a blank canvas. During each iteration, a limited number of strokes may be added; the limit roughly corresponds to five medium-thickness strokes spanning the width of the canvas. 30 iterations are used to create each sketch. Unless stated otherwise, we collected 20 sketches for each scenario. All our studies were conducted on Amazon Mechanical Turk. Subjects cannot submit their work until they have contributed the required number of strokes to the canvas.

Individual. The entire sketch is created by a single individual. That is, a single human agent adds all 30 iterations of strokes to “Create a beautiful, detailed, coherent painting!”.

Collaborative. A different human agent contributes strokes for each iteration of the sketch. That is, 30 unique individuals contribute to a sketch. The first subject sees a blank canvas and adds strokes. Every subsequent subject is shown the partial sketch and asked to add to it. They cannot undo strokes from earlier contributors. The prompt is “Let’s collectively create a beautiful, detailed, coherent painting!”. Subjects are given the additional instruction to consider the kind of painting being created and the stage of the painting when deciding upon which strokes to draw.

Collaborative + voting. Each subject contributes strokes to a sketch of their choosing from a set of five starting sketch variations. We refer to the chosen starting sketch as a parent, and the sketch created by a subject as the chosen sketch’s child. During each iteration, sketches are gathered until a parent is selected five times. Its children then replace the current five parents and the process is repeated. Children of parents selected less than five times are discarded. See Figure 2, Figures 7 and 8 and https://youtu.be/JQmGALAhhMU for more examples.

This voting strategy allows the most promising versions of a sketch to go forward and makes the scenario robust to the strokes added by any one individual. Of course, it is also significantly more “expensive”. In the best case, where a single parent gets all 5 children and no other parent gets a child, it takes 5 times the number of strokes to create a sketch compared to Collaborative. In the worst case, all 5 parents get 4 children each before any parent gets a fifth child, resulting in 21 times the number of strokes. In practice we found this factor to be about 12.5. Given the increased cost, we reduced the number of iterations in this scenario from 30 to 20. On average, 250 unique individuals contribute to a single sketch.
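To see how the cost factor falls between the 5× and 21× bounds, the gathering process can be simulated; here workers are assumed to pick parents uniformly at random, which is only a crude stand-in for real preference-driven choices:

```python
import random

def run_iteration(num_parents=5, children_needed=5, rng=random):
    """Gather child sketches until some parent has been chosen
    children_needed times; return (winning parent index, sketches drawn)."""
    counts = [0] * num_parents
    drawn = 0
    while max(counts) < children_needed:
        counts[rng.randrange(num_parents)] += 1  # a worker picks a parent
        drawn += 1
    return counts.index(children_needed), drawn

rng = random.Random(0)
totals = [run_iteration(rng=rng)[1] for _ in range(10_000)]
print(f"avg sketches per iteration: {sum(totals) / len(totals):.1f}")
```

Every run costs between 5 and 21 sketches per iteration, matching the bounds above; the 12.5× observed in practice reflects workers' actual, non-uniform parent choices.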

Individual with collaborative prompts. A single individual creates an entire sketch using instructive text prompts provided at each iteration. The individual is instructed to follow the prompts when drawing. The text prompts are generated by asking another individual to describe what changed in a sketch from one iteration to the next in the Collaborative scenario. All text prompts for a sketch are written by a single individual. This is an interesting hybrid of having a single creator, but being guided through prompts that describe the evolution of a sketch as created by 30 unique individuals. We collected three sets of text descriptions for each of the 20 Collaborative sketches. This resulted in a total of 60 Individual with collaborative prompts sketches. In our evaluation, we consider 20 sketches (randomly picking 1 out of the set of 3). See Figure 4 for example prompts.

Figure 6: Collaborative + voting sketches are consistently preferred by human subjects over sketches from other scenarios across a variety of dimensions, and notably are rated as most creative. Notice the high variance in Individual sketches.


Evaluation

Example sketches from these scenarios are shown in Figure 3 as well as in Figures 9, 10, 11, and 12. Before we discuss properties of the final sketch, it is worth considering the evolution of a sketch as it is being created. Collaborative sketches evolve in several interesting ways: what seems like the main subject of a sketch can change within a few iterations (Figure 5, left); given seemingly incoherent strokes, subsequent subjects try to emphasize regions that could lead to meaningful structures for future subjects to build on (Figure 5, right); and subjects use the color white or other strategies to cover parts of the sketch they think are contributing negatively to it. More examples of sketches evolving across iterations can be found in Figures 13 and 14.

To assess the qualitative differences between sketches produced from the 4 scenarios, we created a collage of 20 sketches from each scenario (at 20 iterations for Collaborative and Collaborative + voting, 30 for the rest). We showed pairs of collages to subjects on Amazon Mechanical Turk and asked them to describe differences that stood out. Snippets from subjects’ responses are shown in Figure 3.

For a quantitative evaluation, we showed subjects pairs of sketches from two different scenarios. Each subject picked which sketch they prefer along 12 axes. Every pair was evaluated by 5 subjects, resulting in 144,000 assessments: 20 (sketches from the first scenario) × 20 (sketches from the second scenario) × 6 (pairs of scenarios) × 12 (axes) × 5 (subjects per sketch-pair). The 12 axes were: which painting (1) seems more strange / unusual / different than typical paintings? (2) is a better painting? (3) do you like looking at more? (4) is more creative? (5) is more interesting? (6) is more original? (7) took more skill? (8) is made by an artist more likely to be an adult? (9) would you pay more for? (10) are you more likely to put up in your home? (11) is more likely to be in an art museum? (12) would you be more proud to have made yourself? Some of these axes (e.g., originality, novelty, skill) are from [11].
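As a sanity check on the study size, the assessment count multiplies out as follows:

```python
from math import comb

sketches_per_scenario = 20
scenario_pairs = comb(4, 2)   # 6 unordered pairs of the 4 scenarios
axes = 12
subjects_per_pair = 5

total = (sketches_per_scenario * sketches_per_scenario
         * scenario_pairs * axes * subjects_per_pair)
print(total)  # → 144000
```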

The % of times each scenario was picked over a competing scenario is shown in Figure 6. For 11 of 12 axes, including creativity, Collaborative + voting is preferred. Collaborative + voting scores well for both novelty (unusual) and quality (better, look), which we hypothesize increases its perceived creativity. Individual is rated well for quality but scores poorly on novelty. Across 11 axes, Individual has high variance due to differences in skill + motivation of individuals creating the sketches. Collaborative scores well on novelty, but worse on quality. Individual with collaborative prompts does poorly across all axes except for unusual, which is visually apparent in Figure 12. Of all scenarios, Collaborative + voting falls in the sweet spot for maximizing creativity (value + novelty).


Discussion

In what ways can a machine best contribute to the collaborative creation of a sketch? Humans are often not good at generating strokes but can tell whether a sketch looks good. This suggests using machines to generate candidate strokes and having humans vote on which versions should proceed. The machine could also contribute in a manner similar to the humans in our fourth scenario, i.e., generate textual prompts as a human draws a sketch. The prompter could have different “personalities” depending on whether it is trained on sketches from Individual (coherent), Collaborative (rich but chaotic), or Collaborative + voting (rich with subtle details and coherent). Humans and machines can also generate strokes as a team, either in co-painting scenarios as in [2], with the machine providing visual guidance as in [8], or via suggestions for where to draw, what colors to use, etc., as explored in [9]. We could also train a machine to be a discriminator: given a few different strokes from a human, select which stroke should be added to the sketch next.

All our sketches started with a blank canvas. We could instead start sketches with a prompt (subject of the sketch, adjective describing a desired property of the sketch, a picture to be used as inspiration for the sketch, etc.), and have this prompt persist across iterations (or not).

It is interesting to consider ideas of ownership in the context of crowd co-creation. While no one individual may feel a complete sense of ownership of the final piece, crowd collaboration may lead to a sense of community and the satisfaction of contributing to a common cause. Finally, while our motivation was human-machine co-creation, studying human-human collaboration in general is, obviously, important and interesting in and of itself. Collaborative creative endeavors may be a fertile ground for such explorations.


  • [1] M. Boden (1992) The Creative Mind. Abacus, London. Cited by: Introduction.
  • [2] V. Cabannes, T. Kerdreux, L. Thiry, T. Campana, and C. Ferrandes (2019) Dialog on a Canvas with a Machine. In Creativity Workshop at NeurIPS, Cited by: Discussion.
  • [3] W. Chen and J. Hays (2018) SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis. arXiv:1801.02753. Cited by: Related Work.
  • [4] K. Compton and M. Mateas (2015) Casual Creators. In ICCC, Cited by: Related Work.
  • [5] N. Davis, C. Hsiao, K. Y. Singh, L. Li, and B. Magerko (2016) Empirically Studying Participatory Sense-Making in Abstract Drawing with a Co-Creative Cognitive Agent. In IUI, Cited by: Related Work.
  • [6] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio (2014) Generative Adversarial Networks. NIPS. Cited by: Introduction.
  • [7] P. Karimi, J. Rezwana, S. Siddiqui, M. L. Maher, and N. Dehbozorgi (2020) Creative Sketching Partner: An Analysis of Human-AI Co-creativity. In IUI, Cited by: Related Work.
  • [8] Y. J. Lee, C. L. Zitnick, and M. F. Cohen (2011) ShadowDraw: Real-time User Guidance for Freehand Drawing. SIGGRAPH. Cited by: Discussion.
  • [9] C. Oh, J. Song, J. Choi, S. Kim, S. Lee, and B. Suh (2018) I Lead, You Help But Only with Enough Details: Understanding the User Experience of Co-Creation with Artificial Intelligence. In CHI, Cited by: Discussion.
  • [10] J. Secretan, N. Beato, D. B. D’Ambrosio, A. R. Campbell, and K. O. Stanley (2008) Picbreeder: Evolving Pictures Collaboratively Online. In CHI, Cited by: Related Work.
  • [11] F. van der Velde, R. A. Wolf, M. Schmettow, and D. S. Nazareth (2015) A Semantic Map for Evaluating Creativity. ICCC. Cited by: Evaluation.