Designing for Democratization: Introducing Novices to Artificial Intelligence Via Maker Kits

by Victor Dibia, et al.

Existing research highlights the myriad benefits realized when technology is sufficiently democratized and made accessible to non-technical or novice users. However, democratizing complex technologies such as artificial intelligence (AI) remains hard. In this work, we draw on theoretical underpinnings from the democratization of innovation to explore the design of maker kits that help introduce novice users to complex technologies. We report on our work designing TJBot: an open-source cardboard robot that can be programmed using pre-built AI services. We highlight the principles we adopted in this process (approachable design, simplicity, extensibility, and accessibility), insights we learned from showing the kit at workshops (66 participants), and how users interacted with the project on GitHub over a 12-month period (Nov 2016 - Nov 2017). We find that the project succeeds in attracting novice users (40% of users who forked the project were new to GitHub) and that a variety of demographics are interested in prototyping use cases such as home automation, task delegation, and teaching and learning.




1. Introduction

Figure 1. (a) Kit cardboard (chipboard) and components, pre-assembly. (b) Fully assembled kit. (c) Examples of how kit components are combined with AI services to create advanced behaviors.
Figure 2. (a) Exploded view of TJBot’s components: (1) Left foot, (2) Right foot, (3) Camera, (4) Bottom retainer, (5) Top retainer, (6) LED retainer, (7) Right leg, (8) Left leg, (9) Raspberry Pi, (10) Microphone, (11) LED, (12) Speaker, (13) Camera braces, (14) Leg brace, (15) Jaw, (16) Servo motor, (17) Arm, (18) Head. (b,c) Assembled TJBot with head removed, front and rear view.

Buoyed by recent advances in machine learning, the general field of AI is well-positioned to solve problems across diverse domains, and it has been referred to as the most important general-purpose technology of our era (Brynjolfsson and Mcafee, 2017). Rapid declines in error rates for perception tasks (e.g. speech recognition, image recognition) and cognition tasks (e.g. learning to play complex games such as Go (Silver et al., 2017)) performed by AI systems herald an era where machines match and outperform their human counterparts (He et al., 2016; Lake et al., 2015; Mnih et al., 2015). In its most successful applications, domain experts (e.g. in medicine, chemistry, art, etc.) collaborate with AI experts or leverage AI tools in crafting AI-powered solutions. Examples of such collaborations include AI applied to medical imaging and diagnosis (Beck et al., 2011; Rajkomar et al., 2018; Ribli et al., 2018; Geras et al., 2017), to chemical search problems (Gómez-Bombarelli et al., 2016; Ma et al., 2015; Natarajan and Behler, 2016; Sendek et al., 2017), and to autonomous vehicle design (Bertozzi et al., 2002).

While this collaborative model holds promise in further expanding the impact of AI, it is limited by several challenges. The first challenge is one of scale: as a growing discipline, there are not enough AI experts available to collaborate with experts across all domains. More importantly, there is a need to make AI accessible to diverse user groups who can begin to identify problems, assemble data, and create solutions that exist solely within their cultural or social contexts. Second, users without AI expertise may perceive AI, a STEM field, as complex and unapproachable (Lyons and Beilock, 2012; Chilana et al., 2015; Heilbronner, 2011; Nix et al., 2015), further deterring them from applying AI to solve their domain problems. These users may be students interested in learning, professionals outside the computer science domain (e.g. sales, marketing, medicine, chemistry, etc.), or individuals familiar with computer science but with no experience in AI (e.g. web and mobile software engineers). Interestingly, while many software developers rank AI as an area of interest, only a few already have the required skills. A recent large-scale survey of 101,592 software developers from 183 countries showed that only 7.7% of developers identified as having skills relevant to AI (data science and machine learning), while ranking AI tools as the third most desired (Stackoverflow, 2018). Taken together, these challenges necessitate approaches that help on-board and introduce more user groups to AI.

While existing HCI studies have examined the general problem of understanding and supporting a spectrum of novice programmers (Du Boulay et al., 1992; Chilana et al., 2015; Chilana et al., 2016; Kelleher and Pausch, 2005; Myers and Ko, 2009), this line of work has largely explored simpler programming language specifications (Kelleher and Pausch, 2005) and visual programming paradigms (Resnick et al., 2009; Conway et al., 2000) to support users' learning goals. There is an opportunity to systematically enrich the body of HCI theory and design practice through a focus on designing toolkits that make complex technologies like AI more accessible, as well as studying their impact and limitations in the wild.

To address this gap, Maryam Ashoori of IBM Research created the TJBot project (Ashoori, 2016): an open-source maker kit designed to make AI approachable and to enable users to easily prototype applications with an embodied agent using pre-built AI services (e.g. speech to text, text to speech, conversation, natural language processing, tone analysis, language translation, etc.). The physical embodiment of TJBot (Figure 1) can be built from a piece of laser-cut cardboard (Figure 1a) or 3D printed (see Figure 3c), and contains off-the-shelf electronic components such as a Raspberry Pi, a speaker, a servo, a microphone, a camera, and an LED. As part of the kit, the team released sample code that demonstrated how to easily combine AI services with the kit's hardware to enable the bot to speak, listen, hold multi-turn conversations, see, translate text, and respond to emotion in spoken words (Figure 1). These capabilities can then be integrated to build higher-level use cases, such as a storytelling robot for kids, an emotional companion, or a sign language translator using computer vision. Our hypothesis is that by exploiting the learning and engagement benefits of maker kits (Kuznetsov et al., 2011; Rode et al., 2015; Peppler and Glosson, 2013; Somanath et al., 2017), as well as design principles that make technology approachable, we can create tools that make AI approachable to novice users and support its application in prototyping solutions.

We acknowledge that democratization is a multifaceted concept with differing implications for different user groups. In this work, our scope of democratization refers to efforts that help make AI more accessible to novices and enable its creative application in problem solving. As opposed to enabling the creation of complex AI models (e.g. the design of novel neural network architectures), the goal is to familiarize users with AI and support its application in a set of problem domains. The target audience for the maker kit was individuals with some basic knowledge of computing concepts but no prior experience with AI. This group included makers (individuals familiar with hardware prototyping and basic programming, for whom AI can enable natural interactions and automation in their projects), non-AI software developers, and students.

To understand the impact of the TJBot kit in the wild, we adopted a multi-method strategy and report on our findings over a 12-month period (Nov 2016 - Nov 2017). We conducted several workshops (N=66 participants total) and demonstrated the kit at exhibition booths at two large conferences, conducting informal interviews with attendees. Our work contributes to the area of maker kits and their value in democratizing emerging technology in the following ways:

  1. we provide one of the first detailed accounts of a design attempt to democratize AI using maker kits, and the strategies pursued in doing so. Our design guidelines show how to design maker kits that are inexpensive and easy to use, yet highly functional;

  2. we identify a set of use cases of interest for maker kits (home automation, task delegation, teaching and learning), and provide insights on specific use behavior (remixing, individual vs. group use) and on the value of a visualization (interpretability) interface for user interaction;

  3. we provide an analysis of user interaction with the sample code for programming the kit (its success in attracting novice users), the challenges faced, and limitations to collaboration.

2. Background

AI Task     | Description                                       | Example Tools                                   | Skill
Creating AI | Low-level optimization, numeric computation       | Python, NumPy                                   | High
Creating AI | AI model creation, training, and debugging        | TensorFlow, Theano, PyTorch, MXNet, Caffe, CNTK | High
Creating AI | High-level AI model design                        | Keras, Gluon, Chainer, Watson Studio, AutoML    | Medium
Applying AI | Applying prebuilt models (vision, language, etc.) | Google, IBM, Microsoft, Amazon, Clarifai        | Medium
Applying AI | High-level exploration, embodied prototyping      | IBM TJBot, Google Vision Kit, Google Voice Kit  | Low

Table 1. Tools that support the democratization of various AI tasks, with estimates of required AI skill.

We discuss related work that underpins our research: democratizing innovation via toolkits, maker cultures and maker kits, and AI.

2.1. Democratizing Innovation via Toolkits

As new technologies emerge, an important aspect of their long-term success is the degree to which they can be applied to the specific needs of diverse user groups. This vital but challenging component of the innovation process has been described as need-related innovation (von Hippel, 2006, p.147). To address this, research studies have emphasized the user-innovation approach (Franke et al., 2006; von Hippel, 2005; Morrison et al., 2000), in which companies partner and co-create with users. This approach yields several benefits. First, it enlists a diverse array of external partners, each of whom has a deep understanding of a given usage context, enabling the generation of a diverse set of ideas. Next, it enables firms to focus their efforts on identifying highly engaged users (Hippel, 1986) and working with them to generate and test concepts (Urban and von Hippel, 1988). User innovation is attractive as it has been shown to enable faster production and reduced costs relative to sole reliance on internal R&D efforts (von Hippel, 2006, p.148; von Hippel and Katz, 2002). Given the value of user innovation, efforts have been made to understand approaches to enabling user co-creation using toolkits. von Hippel (von Hippel and Katz, 2002) has discussed the emergence of such innovation toolkits for product design, prototyping, and design testing. These toolkits are intended to enable non-specialist users to create high-quality solutions that meet their specific needs (von Hippel, 2006), thus democratizing the innovation process. To support innovation, von Hippel (von Hippel and Katz, 2002) argues that these toolkits must possess several attributes: support complete cycles of trial-and-error learning; offer a broad solution space for creativity; offer a friendly user interface; contain reusable modules that can be integrated into designs; and support the creation of designs that can be reproduced at scale.
Many aspects of the TJBot project followed these design guidelines in order to create an artifact that enables creativity with a complex technology (AI).

2.2. Maker Cultures and Maker Kits

2.2.1. Making and Engagement

In recent years, the culture of making – also referred to as the maker movement or Do-It-Yourself (DIY) culture – has moved from being a niche or hobbyist practice to a professional field and emerging industry (Ames et al., 2014; Lindtner et al., 2014). It has been defined broadly as “the growing number of people who are engaged in the creative production of artifacts in their daily lives and who find physical and digital forums to share their processes and products with others” (Halverson and Sheridan, 2014). Proponents of maker culture such as Chris Anderson (Anderson, 2012) distinguish the maker movement from the work of inventors, tinkerers, and entrepreneurs of past eras by highlighting three characteristics: a cultural norm of sharing designs and collaborating online, the use of digital design tools, and the use of common design standards that facilitate sharing and fast iteration (Anderson, 2012). Research in this area has sought to understand the formation of online maker communities (Buechley and Eisenberg, 2008; Buechley and Hill, 2010; Kuznetsov and Paulos, 2010) as well as to characterize the dominant activities, values, and motivations of participants. Perhaps the aspect of maker cultures and communities most relevant to our study concerns the motivations and ethos observed within them. Participants have been described as endorsing values such as open sharing, learning, and creativity over profit, and engendering social capital (Kuznetsov and Paulos, 2010). Makers have also been described as participating to receive feedback on their projects, obtain inspiration for future projects, and form social connections with other community members (Kuznetsov and Paulos, 2010).
In general, maker culture promotes a certain ethos and cultural tropes such as “making is better than buying” (Tanenbaum et al., 2013, p.2604), and it has been known to build and reinforce a collective identity that motivates participants to make for personal fulfillment and self-actualization (Somanath et al., 2017). Taken together, these motivations and behaviors suggest maker culture encourages an intrinsic-motivation approach valuable in addressing known engagement problems (Heilbronner, 2011) associated with effortful learning tasks.

2.2.2. Maker Kits and Learning

Maker culture, much like user innovation, has adopted the use of toolkits that support problem solving and fabrication, but with a focus on their learning benefits. Early work by Harel and Papert (Harel and Papert, 1991) introduced the theory of constructionism, which emphasizes embodied, production-based experiences as the core of how people learn. Building on this, learning support tools have been designed that allow for digital construction, such as the Logo programming language (Papert, 1980) and the Scratch programming language (Resnick and Silverman, 2005), as well as physical hands-on construction, such as the LEGO Mindstorms kits (Resnick et al., 1988), the LilyPad (Buechley and Eisenberg, 2008; Buechley and Hill, 2010), EduWear (Katterfeldt et al., 2009), the Finch robot kit (Lauwers and Nourbakhsh, 2010), and MakerWear (Kazemitabaar et al., 2017). In addition, Google released two maker kits after the release of TJBot in order to provide makers with hands-on AI experiences: the Vision Kit (Google, 2017a) and the Voice Kit (Google, 2017b). All of these tools, which we collectively refer to as maker kits, emphasize learning through making and have shown promise in empowering users to create self-expressive and personally meaningful designs (Kafai et al., 2014b), improving the perception of computing (Kafai et al., 2014a), and introducing new user groups to an otherwise inaccessible technology or learning experience (Buechley and Hill, 2010; Mellis et al., 2016). The making approach has been cited for its potential to democratize technology, improve workforces, improve technology fluency (Cross et al., 2015), improve engagement and participation in education (Cross et al., 2015; Rode et al., 2015; Peppler and Glosson, 2013; Somanath et al., 2017; Lauwers and Nourbakhsh, 2010), empower consumers, and contribute to the economy (Ames et al., 2014; Sivek, 2011).

2.3. Democratizing AI

The research field of AI originated from early efforts to simulate aspects of human intelligence using machines (McCarthy et al., 1955). The domain draws on advances in machine learning algorithms that allow machines to reason, learn, recognize patterns, and understand natural language in a manner similar to the human brain.

To scale the impact of AI, research and industry stakeholders have begun to explore approaches that help democratize AI by supporting two types of tasks: creating AI and applying AI. Table 1 provides a summary of tools that make these tasks more accessible to users, as well as the skill requirements for each task.

Creating AI entails the use of low-level numeric computation functions that are then used to create, train, and debug AI models. These tasks are supported by programming frameworks such as TensorFlow (Abadi et al., 2016), Caffe (Jia et al., 2014), Theano (Bergstra et al., 2010), and PyTorch, and by high-level model design frameworks such as Keras (Chollet, 2015), IBM Watson Studio, and Google AutoML. Typically, the task of creating AI is more complex than applying AI, and it draws on skills spanning programming, mathematics, statistics, and optimization. To reduce the complexity associated with applying AI to solve problems, AI models are now increasingly offered as black-box, cloud-hosted services (Spohrer and Banavar, 2015). These services remove the complexity of designing, training, and testing AI models by providing ready-to-use models that can be accessed over API endpoints. Examples include AI services offered by companies such as IBM, Google, Amazon, and Microsoft.
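As a concrete illustration of why such services lower the skill bar, applying a prebuilt model typically reduces to a single authenticated HTTP call. The sketch below (in JavaScript, the language TJBot itself is programmed in) builds such a request for a hypothetical image-classification endpoint; the URL, header names, and payload fields are illustrative placeholders, not any specific vendor's API.

```javascript
// Build (but do not send) a request to a hypothetical cloud vision service.
// The endpoint URL, auth scheme, and payload shape are placeholders.
function buildClassifyRequest(apiKey, imageBase64) {
  return {
    url: "https://api.example.com/v1/vision/classify", // placeholder endpoint
    method: "POST",
    headers: {
      // prebuilt services typically handle authentication via keys or tokens
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json"
    },
    // the trained model stays server-side; the caller only supplies input data
    body: JSON.stringify({ image: imageBase64 })
  };
}

// Sending the request is then one line with fetch():
//   const res = await fetch(req.url, req);
//   const labels = await res.json();
```

The point of the sketch is the division of labor: all model design, training, and hosting complexity sits behind the endpoint, leaving the user with data preparation and a network call.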

While of immense value, these frameworks and services still require considerable skill to use and may be unapproachable for non-technical users. In this work, we examine an effort to democratize AI that focuses on supporting individuals interested in applying AI (high-level exploration and prototyping).

3. The TJBot Maker Kit

Figure 3. Visualization interface showing: (a) audio transcript, intent confidence values and responses for two conversational interactions, (b) input and response from the computer vision service, and (c) a 3D-printed bot.

We briefly introduce the TJBot maker kit and describe its hardware and software capabilities, early feedback after its release, and emergent design principles identified through discussions with the kit’s development team.

3.1. Design Probe with Developers

Following the completion of our first prototype (Dibia et al., 2017), we created sample code showing how to program the kit, which we shared on GitHub, an open-source code distribution platform, along with documentation on how to obtain and assemble the kit hardware. We then showed the kit at a booth at a developer conference, where we recruited participants (n=30) and conducted informal semi-structured interviews about the appearance of the kit, its functionality, and their overall reaction to demonstrations of the kit. Each of these developers was later sent an early version of the kit, and we monitored GitHub for any issues or feedback they had. While we obtained overall positive feedback during our informal interviews (they felt the kit had an appropriate level of hardware complexity, appeared easy to assemble, and had sample code that was easy to navigate), participants had challenges adapting our sample code to new use cases. We found that users posted issues related to errors when connecting to the prebuilt AI service endpoints, challenges with managing voice interaction context (e.g. deciding when to listen or speak during an interaction), and similar problems. While solutions to these challenges are obvious to experienced software engineers, many of our users were either novices or were uninterested in solving technical problems unrelated to their primary use cases. To address these challenges, we created a software library (TJBotlib) which encapsulates capabilities that were complex (usually requiring a connection to one or more AI services), frequently used, and prone to error. This way, users could focus on creativity and write significantly less code to realize their use case ideas (see Figure LABEL:fig:sampletjbotcode). We also observed that it was challenging for users being shown a demonstration of the kit to respond to changes in the state of the bot and to understand how different cognitive services enabled its capabilities.
For example, users would frequently ask “what did it hear?”, “which of my comments is it responding to?”, or “why does it give this response? It's not correct”. To address this, we began designing a user-friendly dashboard interface (see Figure 3) that visualizes interactions with the bot and makes the bot's activities more transparent to the user. Reactions to this interface are reported in the findings section of the paper.

3.2. Hardware

The external body of TJBot may be built either from laser-cut chipboard or 3D-printed parts (Figure 2). The primary electronic component – TJBot’s “brain” – is a Raspberry Pi hardware board, an affordable-yet-functional credit-card sized computer that has become popular within the maker community for prototyping applications. The Pi connects with a set of external sensors and actuators such as an LED, a microphone, a speaker, a servo, and a camera. This hardware enables TJBot to receive visual and auditory input, and generate visual, auditory, and mechanical output.

3.3. Software

TJBot is programmed in JavaScript, selected for its ease of use and wide adoption (Meyerovich and Rabkin, 2013; Stackoverflow, 2018). It specifically uses Node.js, an open-source, cross-platform run-time environment that supports server-side JavaScript development. The Node.js architecture is expressive and functional without sacrificing performance (Lei et al., 2014; Tilkov and Vinoski, 2010), and it has a vibrant, community-maintained repository of third-party libraries.

TJBot comes with a set of three stock “recipes,” which are pre-written scripts that showcase AI capabilities: changing the color of the LED by voice, performing real-time sentiment analysis on tweets, and conducting a back-and-forth conversation. These recipes enable beginners to interact and experiment with AI services in a no-code/low-code manner; for those with some coding knowledge, the recipes can be tweaked to add new functionality (e.g. making TJBot shine a sequence of colors instead of just one color).
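To make the structure of a recipe concrete, the core logic of the “change the LED color by voice” recipe can be sketched as a transcript-parsing step wired to the hardware. In the actual recipe the transcript arrives from a speech-to-text service via the microphone; the sketch below shows only the parsing step, and the color vocabulary and function name are our own illustrative choices, not the recipe's exact code.

```javascript
// Simplified core of a voice-to-LED recipe: map a spoken-command
// transcript to a color the LED can display.
const KNOWN_COLORS = ["red", "green", "blue", "yellow", "white", "off"];

// Return the first recognized color word in the transcript, or null.
function parseColorCommand(transcript) {
  const words = transcript.toLowerCase().split(/\s+/);
  return words.find((w) => KNOWN_COLORS.includes(w)) || null;
}

// A full recipe would wire this into the bot's listen/shine capabilities,
// along the lines of (hardware and services required):
//   tj.listen((msg) => {
//     const color = parseColorCommand(msg);
//     if (color) tj.shine(color);
//   });
```

Tweaks of the kind mentioned above (e.g. shining a sequence of colors) then amount to small edits to this loop rather than changes to any AI plumbing.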

Early feedback on these recipes indicated that they showcased interesting behaviors, but it was difficult for those with some amount of programming knowledge to extend them to perform radically different behaviors. To address this feedback, we developed a software library (tjb, 2017c) to hide some of the more difficult aspects of programming with TJBot: authenticating to IBM's Watson APIs, interfacing with the hardware elements, and providing easier asynchronous semantics for features such as listening or speaking. (Other developers have also created their own approaches to simplifying TJBot programming, such as Node-RED modules (tjb, 2017b, a) that enable visual programming.)
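The design idea behind the library is a facade: one object exposes capability-level methods and keeps service authentication and hardware wiring behind them. The toy class below illustrates that pattern only; it is not the actual tjbot API, and its logging stands in for real actuator and cloud-service calls, which in the real library run asynchronously.

```javascript
// Toy facade in the spirit of TJBotlib: capability-level methods
// (speak, shine) hide device and service details from recipe code.
class ToyBot {
  constructor(hardware) {
    this.hardware = new Set(hardware); // e.g. ["led", "speaker"]
    this.log = [];                     // stands in for real actuator output
  }

  // Fail early, with a clear message, if a capability's device is missing —
  // the kind of error-prone check the library centralizes for users.
  _require(device) {
    if (!this.hardware.has(device)) {
      throw new Error(`ToyBot was not configured with a ${device}`);
    }
  }

  speak(text) {
    this._require("speaker");
    this.log.push(`speak:${text}`); // real library: text-to-speech + audio out
  }

  shine(color) {
    this._require("led");
    this.log.push(`shine:${color}`); // real library: drives the LED hardware
  }
}

// Usage: recipe code stays at the level of what the bot does.
const tj = new ToyBot(["led", "speaker"]);
tj.speak("hello");
tj.shine("red");
```

With this shape, a recipe author never touches API credentials or GPIO pins directly, which is precisely what made the stock recipes easier to extend.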

Another point of feedback centered on debugging TJBot's behaviors. Even though the bot has a variety of hardware actuators that can provide feedback, when one's code wasn't working, these actuators did not suffice for debugging. To address this, we developed a visual dashboard interface for observing TJBot's internal state (Dibia, 2016) (Figure 3). This interface also enabled us to more easily describe the inner workings of TJBot, and its constituent AI services, to attendees at our workshops and conferences.

3.4. Emergent Design Principles

While the design process of TJBot did not begin with explicitly defined design principles, post-design reflection reveals that the following principles were embodied in the final artifact.

3.4.1. Approachable Design

Designs that are approachable inspire engagement. TJBot inspires people to engage with it via its familiar materials (paper, plastic), its “tinkerability,” and its friendly form factor. The square-shaped robot appearance has defined, recognizable features such as two eyes, a mouth, and an arm, conveying its ability to see, speak, and wave.

3.4.2. Simplicity (Low Floors, High Ceiling, Wide Walls)

This design goal necessitates a kit that is easy to both physically assemble and program (low floors), yet functional enough to support the creation of complex and meaningful use cases (high ceilings). In this sense, the kit should also support a wide range of user skill levels, from novice users (e.g. school students) to experienced developers. This goal is also related to the “wide walls” approach described by Resnick et al. (Resnick and Silverman, 2005), which emphasizes that construction kits (for kids, in their case) should support a wide range of different kinds of exploration. The expectation is that this diversity of possibilities will allow for the construction of unique creations that surprise both the user and the creators of the kit, and inspire the sense of infinite possibilities necessary for sustained engagement.

We believe TJBot achieves this goal in several ways. First, its assembly approach allows users to build the kit by simply folding chipboard or snapping plastic parts together, without the need for complicated gluing. In addition, assembly can be performed by very young children under supervision, resulting in a fun artifact that they can make their own via decoration; no code or AI required. For those with more advanced skills, even the wiring of the electronics can be performed simply, without soldering. Finally, the software developed for TJBot hides much of the complexity of interfacing with hardware and cloud APIs, and enables one to program at the level of the bot’s capabilities: speak, see, listen, shine, wave, etc.

3.4.3. Extensibility

Extensibility allows users to easily add new capabilities to an existing artifact. As TJBot is an open source project, numerous projects have been developed to give TJBot new abilities, such as walking, driving, speaking different languages, and even communicating with a companion robotic dog.

3.4.4. Accessibility

TJBot was designed to be relatively cheap, easily disseminated, and widely available. The use of cardboard as a design material, together with off-the-shelf electronic components, makes it easy for anyone to assemble their own kit. Accessibility is increasingly important for DIY kits: research has shown that, while maker culture strives to promote the democratization of technology, resource constraints (custom components, expensive parts) can keep it an activity of privilege, accessible to only a select few (Tanenbaum et al., 2013, p.2605).

4. Study

We report on two types of analysis we conducted: informal feedback received from attendees of TJBot workshops and demonstrations, and findings from an analysis of data on the project's GitHub repository.

4.1. Workshops and Demonstrations

We conducted five workshop sessions (N=66 participants total) at two large technology-oriented conferences. We invited participants of all skill levels interested in learning about and building AI-enabled prototypes. Participants were recruited on a first-come, first-served basis; emails were sent out to conference attendees inviting them to sign up until all spaces had been filled. Each session lasted an hour, and participants worked in groups of four. Participants filled out a pre-workshop survey that asked about their background, level of technical expertise, their interest in working with the kit’s hardware (sensing) capabilities and the software (AI) capabilities, and the use cases they envisioned. To reduce setup time and ensure each group could write code and interact with the bot, they were provided with a fully-assembled bot preloaded with starter sample code.

Each session began with a 15-minute introduction to the kit describing its components, the AI services available for use, and the online location of all materials (e.g. design files, sample code, instructions). This introduction also included a demonstration of the kit, showing off AI services such as speech to text, text to speech, computer vision, and sentiment analysis. During the next 20 minutes, each group of participants was guided through a hands-on assembly of the laser-cut chipboard. Finally, participants were instructed in running the sample code on the fully-assembled TJBots, and given an opportunity to examine and experiment with the source code. At the end, participants filled out a post-workshop survey in which they were asked to evaluate the kit’s visual appeal, ease of use (programming and assembly), and describe potential projects they might like to create using it. All participants were given a TJBot kit to take home.

Theme             | Before | After | Overall
Home automation   | 9      | 13    | 19
Task delegation   | 28     | 18    | 40
Teaching/Learning | 19     | 10    | 25
General functions | 6      | 37    | 42
None              | 30     | 18    | 41

Table 2. Themes from reported use cases before and after the workshop (% of use case comments).

4.1.1. Survey Findings

The pre-workshop survey revealed some diversity among the participants: 51% identified themselves as developers, 22% as makers, and 13% as designers. While most (70%) had over three years of programming experience, only 26% had a year or more of experience working with embodied maker kits, and most (74%) had no experience with AI services. This population reflects trends found in a large-scale study of developers, in which only 7.7% of over 100,000 developers had machine learning or AI skills (Stackoverflow, 2018). None of the participants had any experience with TJBot, although about a third of them had previously heard about it online. At the end of the workshop, 90% of participants indicated they were interested in working with AI services going forward.

How do users envision their use of the kit? Participants reported that they intended to use the TJBot kit in a number of ways. Over half of them (62%) indicated that they intended to modify the kit's hardware (e.g. adding new types of sensors and actuators), 77% intended to create additional software components, and about half intended to pursue both kinds of expansions. It is interesting to note that most participants said they would use the kit with other people: 65% with friends or colleagues, 48% with children, and 18% with students (teaching). A minority (27%) indicated that they would only work alone. In the post-workshop survey, the large majority of participants provided positive feedback about the kit: 96% rated it as visually appealing, 96% felt it had a good repertoire of functions, 86% felt it was easy to program, and 78% felt it was easy to build.

What are the use cases of interest? Participants were asked to describe the use cases they envisioned for TJBot at the start and at the end of the workshop. Using an iterative coding approach, the use cases provided by participants were coded into three main themes and one "General Functions" theme, as shown in Table 2. Some participants provided several ideas that fit within multiple themes; these ideas were coded separately.
Home Automation: One of the most common use cases mentioned was related to building a robot that would monitor objects/activities or manage smart devices already present within a home. These use cases focused on using the bot as a voice-based interface that can be used to instantiate monitoring activities as well as report on the status of managed activities.
twitter alert each time a deer walks through my yard - P32;
home automation assistant (lights, nest), face recognition/welcome sensor - P44;
control home automation sensors to help handicap or elderly people - P52;
Interface to my home automation system, and my personal weather system - P37.
Task Delegation (Anthropomorphizing): This theme captured use cases in which participants referred to TJBot as an individual entity, using words like "a friend", "assistant", or "helper". These participants often described their use cases in terms that would typically describe interactions with a trusted acquaintance.
My new best friend - P4;
I can picture an Alexa-esque friend who can answer questions and do fun things - P18.
Some participants described TJBot as a "personal assistant." While this suggests less intimacy than the more personal framings above, it nevertheless indicates a willingness to delegate actions or responsibilities to the bot.
a friend that will greet guests at my door - P13;
house manager - P64;
intelligent house robot - P54.
Teaching and Learning: Participants indicated various ways in which they would use the kit for teaching and learning. Several users mentioned they would use the kit either for self-learning, or as a tool to teach their kids (and other young individuals) about artificial intelligence and computer science in general.
just learning machine programming, learn about Watson services- P26;
using it as a way to teach my kids (10 and 12) about coding with Watson - P5;
projects with my daughter - P30;
getting kids interested in technology - P51;
would like to use this as a tool to work with young girls learning about technology - P60;
challenging my kids with development - P67.
Finally, others saw the kit as an opportunity for professional learning, a tool for communicating technical ideas to non-tech-savvy audiences (clients in some cases), and a tool for developing engaging proofs of concept (POCs).
I am in the automotive industry. I can definitely use this for proof of concepts - P57.
General Functions: Participants also came up with many other use cases and ideas that were more related to specific capabilities of the kit, e.g. ideas around the camera, audio, and general input/output capabilities. Examples included using the bot as an announcer (announce tennis scores, weather changes, software project build status, security incidents, baby monitor, family greeter), a vision recognition tool (recognize foreign currencies, analyze images to infer human activity or identity), a digital assistant for language learning, a tool for games and entertainment, virtual pets, and a tool for prototyping interactive stories.

4.2. TJBot Demonstrations

At each conference where we ran the workshops, we also presented TJBot at an interactive booth. Visitors to the booth could interact with an assembled version of the kit, ask questions, and provide feedback. Each demonstration lasted about 10-20 minutes and allowed us to observe reactions to the kit and assess interest in specific use cases.

4.2.1. Reactions to the Kit

We observed three distinct reactions common to most users: an affective reaction in which users responded to the visual appearance of the bot, a functional reaction in which users inquired about the capabilities of the bot or tried to infer them on their own, and a customization reaction in which they sought to learn how they could customize the bot to use cases of interest to them.
(i) Affective: People tended to anthropomorphize the bot and perceive it as friendly, using words like “cute” and “cute little guy” to describe it.
(ii) Functional: People asked about the bot’s hardware and general capabilities and inquired about concrete use cases it could serve. We also observed interesting assumptions made by users as they tried to interact with the bot. For example, several users would immediately wave at the bot (assuming it could see) or issue voice commands (assuming it could hear) even before confirming the bot actually had these capabilities. These assumptions were likely inspired by the humanoid appearance of the bot.
(iii) Customization: People sought to understand the ways that they could extend the capabilities of the bot and understand the complexity of integrating 3rd party hardware and software components.

4.3. GitHub Activity

Several collaborative features of GitHub enabled us to collect data on usage patterns: issues, stars, and forks. GitHub issues allow people to create bug reports, feature requests, and enhancements, or to submit general feedback. Project maintainers may also use issues to keep track of open tasks. GitHub stars allow people to bookmark projects of interest and show appreciation to their maintainers. Forks allow people to create a copy of a project that they can modify without affecting the original, and enable them to contribute their modifications back if they choose to. During the 12-month period considered in this analysis, the project repository was forked 173 times and starred 314 times. Of those who forked the repository, 40% had owned a GitHub account for a year or less and had only a single code repository listed on their account. Users also opened a total of 47 issues, mainly requesting support for hardware- or software-related errors they encountered (68%), suggesting contributions to the project documentation, including identifying broken links or missing content (11%), and providing general feedback on issues they encountered or fixes they had implemented (21%). A total of 6 pull requests (GitHub’s mechanism for allowing external users to contribute to a code repository) were created, of which 4 were approved and merged into the main code repository.
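The fork analysis described above can be approximated using GitHub's public REST API (GET /repos/{owner}/{repo}/forks for the fork list, then GET /users/{login} for each fork owner's account age and repository count). A minimal sketch of the classification step, assuming the owner records have already been fetched into the user-object shape the API returns:

```python
from datetime import datetime, timedelta

def novice_fork_share(fork_owners, as_of, max_age_days=365):
    """Fraction of fork owners who look like GitHub newcomers: an account
    at most `max_age_days` old that lists at most one public repository
    (i.e. the fork itself).

    `fork_owners` is a list of dicts shaped like the GitHub REST API's
    user objects (GET /users/{login}), e.g.
    {"created_at": "2017-03-01T00:00:00Z", "public_repos": 1}.
    """
    def is_novice(owner):
        created = datetime.strptime(owner["created_at"], "%Y-%m-%dT%H:%M:%SZ")
        recent = (as_of - created) <= timedelta(days=max_age_days)
        return recent and owner["public_repos"] <= 1

    if not fork_owners:
        return 0.0
    return sum(is_novice(o) for o in fork_owners) / len(fork_owners)
```

This is only a sketch of the criterion ("account for a year or less, single repository"); the paper does not specify the exact query or field names used in its analysis.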

5. Discussion

5.1. Maker Kits as Tools for Democratizing Technology

TJBot addresses its goal of democratizing AI in several ways: by eliciting interest from users, attracting novice users, and engaging broad audiences.

5.1.1. Eliciting Interest

Results from our survey and demonstrations suggest that TJBot is successful in eliciting interest in AI. While 74% of workshop participants reported no previous experience with AI services, 90% mentioned they were likely to create AI prototypes after the workshop. We also observed a three-step reaction when we demonstrated the kit to users: a positive affective reaction to the visual appearance of the kit, exploration of its capabilities, and finally an expression of interest in extending these capabilities to meet personal use cases. In our workshop survey, participants also expressed interest in extending the hardware and software capabilities of the kit.

5.1.2. Attracting Novice Users

Attracting individuals to experiment with complex technology like AI can be challenging, especially for first-time users with limited technical skill. Interestingly, interaction pattern data from GitHub suggests that a significant amount of activity comes from relatively new users. We find that 40% of users who made public forks of the code had owned their GitHub accounts for less than a year and had not authored any other publicly shared projects.

5.1.3. Impact

Although it is difficult to estimate the exact number of users and programs designed around TJBot (the project does not include any explicit tracking), initial data from social media and other sources suggests the project has been widely used across a range of demographics and locations. Over the 12-month period, instructions for the project on Instructables were viewed over 91,000 times, the TJBot library used to program the kit was downloaded over 5,300 times, and the project repository on GitHub received over 350 stars. Users also posted images and videos of their prototypes on Twitter, spanning use cases such as connecting the bot to their IoT home devices, interactive storytelling, and a range of voice-based conversation examples, many in agreement with the use cases elicited during our workshop survey. We also saw groups use the kit during tech meetups, corporate and informal training, hackathons, and numerous STEM education events. Interactions on Twitter spanned users from over 10 countries.

5.1.4. Exploring the Limits of AI

Recent studies indicate that the average user incorrectly estimates the limits of AI (Chandrasekaran et al., 2017), either by overestimating its capabilities or by artificially discounting its value. These estimations can foster polarizing (utopian or dystopian) views of AI. A lack of experience with AI has been shown to be partly responsible for this failure to correctly predict what AI can or cannot accomplish (Chandrasekaran et al., 2017). Maker kits like TJBot can help address this challenge by providing a medium that enables users to experientially probe the limits of AI. The maker kit enables users to iteratively formulate and test hypotheses about AI capabilities, helping them build an objective assessment of the performance of AI. During these explorations, users can also address limitations they observe. For example, a user may assemble a new image dataset of faces and train a custom AI model that improves image recognition performance for their skin tone. These activities can both aid learning and serve to increase the diversity of training data and problems that can be addressed using AI.

5.1.5. Building Trust Through Transparency

We found that explicitly visualizing the input/output of AI services used to prototype an interaction helped users to make sense of the decisions made by TJBot. The feedback provided by the dashboard visualizations helped users understand when the system was failing (e.g. incorrect transcripts of a command or keyword, network delays) and helped inform steps to recovery (e.g. retry the interaction or pause). This observation is similar to results found by Kulesza et al. (Kulesza et al., 2015) where they find that an implementation of an explanatory debugging interface increased user understanding of a machine learning system and allowed a more efficient correction of mistakes. Existing research also suggests that being able to understand how AI systems work is critical for building trust (Lipton, 2016). Trust is critical as it has been shown to drive system usage. Ribeiro et al. (Ribeiro et al., 2016) note that users are unlikely to use a system which they do not trust. With the proliferation of integrated smart appliances infused with AI (e.g. connected homes with smart TVs, refrigerators, lighting systems, etc.), our findings suggest that the user experience can be improved by creating visualizations which offer information on the dynamics (input and output state change) of integrated AI services.
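The paper does not show the dashboard's implementation, but the core idea, recording the input and output of every AI service call so an interface can surface system state and failures, can be sketched as follows (a hypothetical wrapper; names like `ServiceTracer` are ours, not TJBot's):

```python
import time

class ServiceTracer:
    """Record the input/output of each AI service call so a dashboard can
    display what the system saw, what it decided, and how long it took.
    This is an illustrative sketch, not the TJBot dashboard's actual code."""

    def __init__(self):
        self.events = []  # one record per service invocation

    def trace(self, service, payload, call):
        """Invoke `call(payload)` for the named service, logging the
        input, output, status, and latency for later visualization."""
        start = time.monotonic()
        try:
            result = call(payload)
            status = "ok"
        except Exception as exc:
            # Surface failures (e.g. network errors) instead of hiding them,
            # so users can see when and why the system broke down.
            result, status = str(exc), "error"
        self.events.append({
            "service": service,
            "input": payload,
            "output": result,
            "status": status,
            "latency_s": round(time.monotonic() - start, 3),
        })
        return result
```

A dashboard would then render `tracer.events` as a live feed, e.g. showing the transcript a speech-to-text service produced next to the command the bot acted on, which is the kind of input/output visibility described above.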

5.2. AI Maker Kit Use Cases and Behaviors

Results from our survey provide insight on use cases of interest to users and the value of embodied maker kits.

5.2.1. Use Cases

The most common use cases mentioned by participants were task delegation (40%), teaching and learning (25%), and home automation (19%). Based on the study results, we find that participants, perhaps in an indirect reference to the AI capabilities within the kit, envision use cases where the bot takes on the role of a “manager” that both performs its own sensing functions and manages other devices to orchestrate higher-level actions within a home environment. This result agrees with extant studies of DIY communities in which 35% of survey respondents indicated they worked on DIY projects for home improvement (Kuznetsov and Paulos, 2010). It also suggests that users perceive value in AI agents capable of managing tasks on their behalf, with important implications for the design of such agents. These include robust natural language interaction capabilities, integration capabilities that allow them to interface with other agents or systems, and reasoning capabilities to intelligently orchestrate high-level decisions.

5.2.2. Social Making

Respondents indicated that they planned to use the maker kit with others: with friends or colleagues (65%), and with children in an educational or family setting (48%). While further research is required to understand the drivers of this social/group use behavior, the current findings suggest that such maker kits are suitable for team training, education, and parent-child teaching use cases. They further position maker kits as tools to introduce students to AI, with potential to address engagement challenges (Heilbronner, 2011; Watkins and Mazur, 2013) known to deter students from STEM education. A design implication of this finding is the need to explore programming interfaces that support multiple concurrent users and user pairs (e.g. student-student, teacher-student, parent-child). As advances in AI algorithms continue to drive the proliferation of AI, it is expected that more firms will provide black-box AI services as well as professional and consumer development kits to support creativity with AI (e.g. (AIY, 2018)). The insights from our experience demonstrating TJBot and surveying users (use cases, social making) can help inform the design of such kits.

5.3. Challenges with Maker Kits

Issues reported on GitHub suggest that while most users are able to assemble the kit and download the sample code we provide, they face the most difficulty troubleshooting hardware and software problems during installation and prototyping. These included difficulties correctly issuing command line instructions and navigating project directories, modifying software configuration to match changes in their hardware components, and connectivity issues with cloud-hosted AI services. We find that 68% of all recorded issues on the project’s GitHub repository focused on requesting technical support, while only 21% focused on providing feedback or solutions. Given that most users were not technical experts, there were only 6 external code contributions to the project on GitHub. This observation highlights the importance of allocating resources to resolving technical issues as an important aspect of efforts to democratize technology.

To address these issues, a dedicated team member should monitor issues posted on GitHub and continuously provide support, in addition to developing instructional materials such as videos and guides. Automated tests and troubleshooting scripts can help support self-troubleshooting. Troubleshooting hardware issues, such as incorrect wiring or broken components, was much more difficult to do online via GitHub, and further work is needed to find good solutions to help people accomplish this.

5.4. Limitations and Future Work

We evaluated our kit using surveys and observations within a workshop setting. While this approach is common in maker kit research (Jacobs et al., 2014; Katterfeldt et al., 2009; Kazemitabaar et al., 2017; Lau et al., 2009; Meissner et al., 2017), it limits our ability to assess the effect of the workshop facilitator and workshop content on interaction behaviors. Further experimental research is needed to independently compare the maker kit approach to other approaches (for example, personal fabrication (Mellis et al., 2016)) for introducing novice users to new technology. Another limitation of this work relates to the sample of interaction data analyzed from GitHub. It is possible that not all new users who interacted with our project (built the kit and downloaded sample code) took the additional effort to create GitHub accounts or public forks, limiting our ability to assess this population.

From our workshops, we have received early feedback from educators eager to integrate the kit into their curriculum. However, they have noted that further simplification is required to make the project accessible to younger users (e.g. ages 6+). This is similar to findings from Mellis et al. (2016), who call for tools that simplify the programming components associated with personal fabrication. Thus, future work is needed to further automate aspects of the kit’s software and hardware setup, and to create visual programming interfaces (e.g. (Resnick et al., 2009; Cross et al., 2013)) that allow younger or lower-skilled users to prototype solutions.

6. Conclusion

We described how the TJBot maker kit was designed to democratize the emergent area of AI. We report on principles demonstrated through the kit’s design, feedback received from workshops and demonstrations, and insights from analyzing data on how users interacted with its open-source code over a 12-month period. We find that users are interested in exploring a variety of AI use cases including home automation, task delegation, teaching, and learning, that they view working with the kit as a social endeavor, and that the kit has a wide appeal to novices. This work contributes to the area of designing for democratization and proposes maker kits as a viable approach. As more designers and researchers begin to explore the use of cardboard-based construction kits (e.g. Google AIY (Google, 2017a, b), Nintendo Labo (AIY, 2018; Nintendo, 2018)), the design principles we present can provide directions on how to design such kits.

7. Acknowledgement

We thank Maryam Ashoori, whose efforts as the team lead of the TJBot project were critical to making the project a reality. We thank Rachel Bellamy for her enthusiastic support of this project, and are grateful to Thomas Erickson and Shari Trewin for valuable feedback on this manuscript.


  • tjb (2017a) 2017a. node-red-contrib-tjbot. Retrieved September 21, 2018 from
  • tjb (2017b) 2017b. TJBot-Node-RED. Retrieved September 21, 2018 from
  • tjb (2017c) 2017c. tjbotlib. Retrieved September 21, 2018 from
  • Abadi et al. (2016) Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. In OSDI’16 Proceedings of the 12th USENIX conference on Operating Systems Design and Implementation. 265–283.
  • AIY (2018) Google AIY. 2018. Do-it-yourself artificial intelligence.
  • Ames et al. (2014) MG Ames, J Bardzell, S Bardzell, and S Lindtner. 2014. Making cultures: empowerment, participation, and democracy-or not? CHI’14 Extended (2014).
  • Anderson (2012) Chris Anderson. 2012. Makers: The new industrial revolution. Vol. 1. 559–576 pages.
  • Ashoori (2016) Maryam Ashoori. 2016. Calling all Makers: Meet TJBot! Retrieved September 21, 2018 from
  • Beck et al. (2011) Andrew H. Beck, Ankur R. Sangoi, Samuel Leung, Robert J. Marinelli, Torsten O. Nielsen, Marc J. van de Vijver, Robert B. West, Matt van de Rijn, and Daphne Koller. 2011. Systematic Analysis of Breast Cancer Morphology Uncovers Stromal Features Associated with Survival. Science Translational Medicine 3, 108 (2011), 113–108.
  • Bergstra et al. (2010) James Bergstra, Olivier Breuleux, Frederic Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: a CPU and GPU math compiler in Python. Proceedings of the Python for Scientific Computing Conference (SciPy) Scipy (2010), 1–7.
  • Bertozzi et al. (2002) M. Bertozzi, A. Broggi, M. Cellario, A. Fascioli, P. Lombardi, and M. Porta. 2002. Artificial vision in road vehicles. Proc. IEEE 90, 7 (7 2002), 1258–1271.
  • Brynjolfsson and Mcafee (2017) Erik Brynjolfsson and Andrew Mcafee. 2017. The business of artificial intelligence. Harvard Business Review (2017).
  • Buechley and Eisenberg (2008) Leah Buechley and Michael Eisenberg. 2008. The LilyPad Arduino: Toward Wearable Engineering for Everyone. IEEE Pervasive Computing 7, 2 (4 2008), 12–15.
  • Buechley and Hill (2010) Leah Buechley and Benjamin Mako Hill. 2010. LilyPad in the wild: how hardware’s long tail is supporting new engineering and design communities. In Proceedings of the 8th ACM Conference on Designing Interactive Systems - DIS ’10. ACM Press, New York, New York, USA, 199.
  • Chandrasekaran et al. (2017) Arjun Chandrasekaran, Deshraj Yadav, Prithvijit Chattopadhyay, Viraj Prabhu, and Devi Parikh. 2017. It Takes Two to Tango: Towards Theory of AI’s Mind. (4 2017).
  • Chilana et al. (2015) Parmit K. Chilana, Celena Alcock, Shruti Dembla, Anson Ho, Ada Hurst, Brett Armstrong, and Philip J. Guo. 2015. Perceptions of non-CS majors in intro programming: The rise of the conversational programmer. In 2015 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE, 251–259.
  • Chilana et al. (2016) Parmit K. Chilana, Rishabh Singh, and Philip J. Guo. 2016. Understanding Conversational Programmers: A Perspective from the Software Industry. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI ’16. ACM Press, New York, New York, USA, 1462–1472.
  • Chollet (2015) François Chollet. 2015. Keras: Deep Learning library for Theano and TensorFlow. GitHub Repository (2015), 1–21.
  • Conway et al. (2000) Matthew Conway, Steve Audia, Tommy Burnette, Dennis Cosgrove, and Kevin Christiansen. 2000. Alice: lessons learned from building a 3D system for novices. In Proceedings of the SIGCHI conference on Human factors in computing systems - CHI ’00. ACM Press, New York, New York, USA, 486–493.
  • Cross et al. (2013) Jennifer Cross, Christopher Bartley, Emily Hamner, and Illah Nourbakhsh. 2013. A visual robot-programming environment for multidisciplinary education. In Proceedings - IEEE International Conference on Robotics and Automation. 445–452.
  • Cross et al. (2015) Jennifer L. Cross, Emily Hamner, Chris Bartley, and Illah Nourbakhsh. 2015. Arts & Bots: Application and outcomes of a secondary school robotics program. In 2015 IEEE Frontiers in Education Conference (FIE). IEEE, 1–9.
  • Dibia (2016) Victor Dibia. 2016. TJBot Dashboard. Retrieved September 21, 2018 from
  • Dibia et al. (2017) Victor C. Dibia, Maryam Ashoori, Aaron Cox, and Justin D. Weisz. 2017. TJBot: An open source DIY cardboard robot for programming cognitive systems. In Conference on Human Factors in Computing Systems - Proceedings.
  • Du Boulay et al. (1992) J B H Du Boulay, M J Patel, and C Taylor. 1992. Programming Environments for Novices. Computer Science Education Research (1992), 127–154.
  • Franke et al. (2006) Nikolaus Franke, Eric Von Hippel, and Martin Schreier. 2006. Finding commercially attractive user innovations: A test of lead-user theory. In Journal of Product Innovation Management, Vol. 23. 301–315.
  • Geras et al. (2017) Krzysztof J. Geras, Stacey Wolfson, Yiqiu Shen, Nan Wu, S. Gene Kim, Eric Kim, Laura Heacock, Ujas Parikh, Linda Moy, and Kyunghyun Cho. 2017. High-Resolution Breast Cancer Screening with Multi-View Deep Convolutional Neural Networks. In Computer Vision and Pattern Recognition.
  • Gómez-Bombarelli et al. (2016) Rafael Gómez-Bombarelli, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, David Duvenaud, Dougal Maclaurin, Martin A. Blood-Forsythe, Hyun Sik Chae, Markus Einzinger, Dong Gwang Ha, Tony Wu, Georgios Markopoulos, Soonok Jeon, Hosuk Kang, Hiroshi Miyazaki, Masaki Numata, Sunghan Kim, Wenliang Huang, Seong Ik Hong, Marc Baldo, Ryan P. Adams, and Alán Aspuru-Guzik. 2016. Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach. Nature Materials 15, 10 (2016), 1120–1127.
  • Google (2017a) Google. 2017a. Vision Kit. Retrieved September 21, 2018 from
  • Google (2017b) Google. 2017b. Voice Kit. Retrieved September 21, 2018 from
  • Halverson and Sheridan (2014) Erica Rosenfeld Halverson and Kimberly Sheridan. 2014. The Maker Movement in Education. Harvard Educational Review 84, 4 (2014), 495–504.
  • Harel and Papert (1991) Idit Harel and Seymour Papert. 1991. Constructionism. In Constructionism. xi, 518.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, Vol. 11-18-Dec. 1026–1034.
  • Heilbronner (2011) Nancy N. Heilbronner. 2011. Stepping onto the STEM pathway: Factors affecting talented students’ declaration of STEM majors in college. Journal for the Education of the Gifted 34, 6 (2011), 876–899.
  • Hippel (1986) Eric Von Hippel. 1986. Lead users: a source of novel product concepts. Management Science 32, 7 (1986), 791–805.
  • Jacobs et al. (2014) Jennifer Jacobs, Mitchel Resnick, and Leah Buechley. 2014. Dresscode: Supporting Youth in Computational Design and Making. Constructionism (2014), 1–10.
  • Jia et al. (2014) Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. Caffe: Convolutional Architecture for Fast Feature Embedding. In Proceedings of the ACM International Conference on Multimedia - MM ’14. ACM Press, New York, New York, USA, 675–678.
  • Kafai et al. (2014a) Yasmin Kafai, Deborah Fields, and Kristin Searle. 2014a. Electronic Textiles as Disruptive Designs: Supporting and Challenging Maker Activities in Schools. Harvard Educational Review 84, 4 (12 2014), 532–556.
  • Kafai et al. (2014b) Yasmin B. Kafai, Eunkyoung Lee, Kristin Searle, Deborah Fields, Eliot Kaplan, and Debora Lui. 2014b. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles. ACM Transactions on Computing Education 14, 1 (3 2014), 1–20.
  • Katterfeldt et al. (2009) Eva-sophie Katterfeldt, Nadine Dittert, and Heidi Schelhowe. 2009. EduWear : Smart Textiles as Ways of Relating Computing Technology to Everyday Life. In Proceedings of the 8th International Conference on Interaction Design and Children. 9–17.
  • Kazemitabaar et al. (2017) Majeed Kazemitabaar, Jason McPeak, Alexander Jiao, Liang He, Thomas Outing, and Jon E. Froehlich. 2017. MakerWear: A Tangible Approach to Interactive Wearable Creation for Children. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI ’17. ACM Press, New York, New York, USA, 133–145.
  • Kelleher and Pausch (2005) Caitlin Kelleher and Randy Pausch. 2005. Lowering the Barriers to Programming : a survey of programming environments and languages for novice programmers. Comput. Surveys 37, 2 (2005), 83–137.
  • Kulesza et al. (2015) Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015. Principles of Explanatory Debugging to Personalize Interactive Machine Learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces - IUI ’15. 126–137. arXiv:cond-mat/0402594v3
  • Kuznetsov and Paulos (2010) Stacey Kuznetsov and Eric Paulos. 2010. Rise of the expert amateur: DIY projects, communities, and cultures. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries. ACM, 295–304.
  • Kuznetsov et al. (2011) Stacey Kuznetsov, Laura C Trutoiu, Casey Kute, Iris Howley, Eric Paulos, and Dan Siewiorek. 2011. Breaking Boundaries: Strategies for Mentoring Through Textile Computing Workshops. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2957–2966.
  • Lake et al. (2015) B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. 2015. Human-level concept learning through probabilistic program induction. Science 350, 6266 (2015), 1332–1338.
  • Lau et al. (2009) Winnie W.Y. Lau, Grace Ngai, Stephen C.F. Chan, and Joey C.Y. Cheung. 2009. Learning Programming Through Fashion and Design: A Pilot Summer Course in Wearable Computing for Middle School Students. Proceedings of the 40th ACM Technical Symposium on Computer Science Education 41, 1 (2009), 504–508.
  • Lauwers and Nourbakhsh (2010) Tom Lauwers and Illah Nourbakhsh. 2010. Designing the finch: Creating a robot aligned to computer science concepts. AAAI Symposium on Educational Advances in Artificial Intelligence 88 (2010).
  • Lei et al. (2014) K Lei, Y Ma, and Z Tan. 2014. Performance comparison and evaluation of web development technologies in php, python, and node. js. and Engineering (CSE), 2014 IEEE 17th … (2014).
  • Lindtner et al. (2014) S Lindtner, GD Hertz, and P Dourish. 2014. Emerging sites of HCI innovation: hackerspaces, hardware startups & incubators. of the SIGCHI Conference on Human … (2014).
  • Lipton (2016) Zachary C. Lipton. 2016. The Mythos of Model Interpretability. (6 2016).
  • Lyons and Beilock (2012) Ian M. Lyons and Sian L. Beilock. 2012. When Math Hurts: Math Anxiety Predicts Pain Network Activation in Anticipation of Doing Math. PLoS ONE 7, 10 (2012).
  • Ma et al. (2015) Junshui Ma, Robert P. Sheridan, Andy Liaw, George E. Dahl, and Vladimir Svetnik. 2015. Deep neural nets as a method for quantitative structure-activity relationships. Journal of Chemical Information and Modeling 55, 2 (2015), 263–274.
  • McCarthy et al. (1955) J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon. 1955. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. , 11 pages.
  • Meissner et al. (2017) Janis Lena Meissner, John Vines, Janice McLaughlin, Thomas Nappey, Jekaterina Maksimova, and Peter Wright. 2017. Do-It-Yourself Empowerment as Experienced by Novice Makers with Disabilities. In Proceedings of the 2017 Conference on Designing Interactive Systems - DIS ’17. 1053–1065.
  • Mellis et al. (2016) David A. Mellis, Leah Buechley, Mitchel Resnick, and Björn Hartmann. 2016. Engaging Amateurs in the Design, Fabrication, and Assembly of Electronic Devices. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems - DIS ’16. 1270–1281.
  • Meyerovich and Rabkin (2013) Leo A. Meyerovich and Ariel S. Rabkin. 2013. Empirical analysis of programming language adoption. In Proceedings of the 2013 ACM SIGPLAN international conference on Object oriented programming systems languages & applications - OOPSLA ’13. ACM Press, New York, New York, USA, 1–18.
  • Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529–533.
  • Morrison et al. (2000) Pamela D. Morrison, John H. Roberts, and Eric von Hippel. 2000. Determinants of User Innovation and Innovation Sharing in a Local Market. Management Science 46, 12 (12 2000), 1513–1527.
  • Myers and Ko (2009) Brad Myers and Andrew Ko. 2009. The Past , Present and Future of Programming in HCI. Human Factors (2009), 1–3.
  • Natarajan and Behler (2016) Suresh Kondati Natarajan and Jörg Behler. 2016. Neural network molecular dynamics simulations of solid–liquid interfaces: water at low-index copper surfaces. Physical Chemistry Chemical Physics 18, 41 (2016), 28704–28725.
  • Nintendo (2018) Nintendo. 2018. Nintendo Labo™ for the Nintendo Switch™ home gaming system - Official Site.
  • Nix et al. (2015) Samantha Nix, Lara Perez-Felkner, and Kirby Thomas. 2015. Perceived mathematical ability under challenge: a longitudinal perspective on sex segregation among STEM degree fields. Frontiers in psychology 6 (2015), 530.
  • Papert (1980) Seymour Papert. 1980. Mindstorms : children, computers, and powerful ideas. Basic Books.
  • Peppler and Glosson (2013) Kylie Peppler and Diane Glosson. 2013. Stitching Circuits: Learning About Circuitry Through E-textile Materials. Journal of Science Education and Technology 22, 5 (2013), 751–763.
  • Rajkomar et al. (2018) Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M. Dai, Nissan Hajaj, Michaela Hardt, Peter J. Liu, Xiaobing Liu, Jake Marcus, Mimi Sun, Patrik Sundberg, Hector Yee, Kun Zhang, Yi Zhang, Gerardo Flores, Gavin E. Duggan, Jamie Irvine, Quoc Le, Kurt Litsch, Alexander Mossin, Justin Tansuwan, De Wang, James Wexler, Jimbo Wilson, Dana Ludwig, Samuel L. Volchenboum, Katherine Chou, Michael Pearson, Srinivasan Madabushi, Nigam H. Shah, Atul J. Butte, Michael D. Howell, Claire Cui, Greg S. Corrado, and Jeffrey Dean. 2018. Scalable and accurate deep learning with electronic health records. npj Digital Medicine 1, 1 (12 2018), 18.
  • Resnick et al. (2009) Mitchel Resnick, John Maloney, Andrés Monroy-Hernández, Natalie Rusk, Evelyn Eastmond, Karen Brennan, Amon Millner, Eric Rosenbaum, J a Y Silver, Brian Silverman, and Yasmin Kafai. 2009. Scratch: Programming for All. Commun. ACM 52 (2009), 60–67.
  • Resnick et al. (1988) M Resnick, Stephen Ocko, and S Papert. 1988. LEGO, Logo, and design. Children’s Environments Quarterly 5 (1988), 14–18.
  • Resnick and Silverman (2005) Mitchel Resnick and Brian Silverman. 2005. Some Reflections on Designing Construction Kits for Kids. In Proceeding of the 2005 conference on Interaction design and children (IDC ’05). 117–122.
  • Ribeiro et al. (2016) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’16. 1135–1144.
  • Ribli et al. (2018) Dezső Ribli, Anna Horváth, Zsuzsa Unger, Péter Pollner, and István Csabai. 2018. Detecting and classifying lesions in mammograms with Deep Learning. Scientific Reports 8, 1 (12 2018), 4165.
  • Rode et al. (2015) Jennifer A. Rode, Anne Weibert, Andrea Marshall, Konstantin Aal, Thomas von Rekowski, Houda El Mimouni, and Jennifer Booker. 2015. From computational thinking to computational making. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp ’15. ACM Press, New York, New York, USA, 239–250.
  • Sendek et al. (2017) Austin D. Sendek, Qian Yang, Ekin D. Cubuk, Karel-Alexander N. Duerloo, Yi Cui, and Evan J. Reed. 2017. Holistic computational structure screening of more than 12 000 candidates for solid lithium-ion conductor materials. Energy & Environmental Science 10, 1 (2017), 306–320.
  • Silver et al. (2017) David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. 2017. Mastering the game of Go without human knowledge. Nature 550, 7676 (10 2017), 354–359.
  • Sivek (2011) S. C. Sivek. 2011. “We Need a Showing of All Hands” Technological Utopianism in MAKE Magazine. Journal of Communication Inquiry 35, 3 (7 2011), 187–209.
  • Somanath et al. (2017) Sowmya Somanath, Lora Oehlberg, Janette Hughes, Ehud Sharlin, and Mario Costa Sousa. 2017. ’Maker’ within Constraints: Exploratory Study of Young Learners using Arduino at a High School in India. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI ’17. ACM Press, New York, New York, USA, 96–108.
  • Spohrer and Banavar (2015) Jim Spohrer and Guruduth Banavar. 2015. Cognition as a Service: An Industry Perspective. AI Magazine 36, 4 (2015), 71–86.
  • Stackoverflow (2018) Stackoverflow. 2018. Stack Overflow Developer Survey 2018. Technical Report.
  • Tanenbaum et al. (2013) Joshua G. Tanenbaum, Amanda M. Williams, Audrey Desjardins, and Karen Tanenbaum. 2013. Democratizing technology: pleasure, utility and expressiveness in DIY and maker practice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’13. ACM Press, New York, New York, USA, 2603.
  • Tilkov and Vinoski (2010) S Tilkov and S Vinoski. 2010. Node.js: Using JavaScript to build high-performance network programs. IEEE Internet Computing (2010).
  • Urban and von Hippel (1988) Glen L. Urban and Eric von Hippel. 1988. Lead User Analyses for the Development of New Industrial Products. Management Science 34, 5 (1988), 569–582.
  • von Hippel (2005) Eric von Hippel. 2005. Democratizing innovation: The evolving phenomenon of user innovation. Journal für Betriebswirtschaft 55, 1 (3 2005).
  • von Hippel (2006) Eric von Hippel. 2006. Application: Toolkits for User Innovation and Custom Design. In Democratizing Innovation. 147–164.
  • von Hippel and Katz (2002) Eric von Hippel and Ralph Katz. 2002. Shifting Innovation to Users via Toolkits. Management Science 48, 7 (2002), 821–833.
  • Watkins and Mazur (2013) Jessica Watkins and Eric Mazur. 2013. Retaining Students in Science, Technology, Engineering, and Mathematics (STEM) Majors. Journal of College Science Teaching 42, 5 (2013), 36–41.