Curiosity Notebook: The Design of a Research Platform for Learning by Teaching

08/22/2021
by Ken Jen Lee, et al.
University of Waterloo

While learning by teaching is a popular pedagogical technique, it is a learning phenomenon that is difficult to study due to variability in tutor-tutee pairings and learning environments. In this paper, we introduce the Curiosity Notebook, a web-based research infrastructure for studying learning by teaching via a teachable agent. We describe and provide rationale for the set of features essential to such a research infrastructure, outline how these features evolved over two design iterations of the Curiosity Notebook, and, through two studies – a 4-week field study with 12 elementary school students interacting with a NAO robot, and an hour-long online observational study with 41 university students interacting with an agent – demonstrate the utility of our platform for observing learning-by-teaching phenomena in diverse learning environments. Based on these findings, we conclude the paper by reflecting on our design evolution and envisioning future iterations of the Curiosity Notebook.


1. Introduction

Learning by teaching is a popular and well-studied pedagogical technique, shown to produce the protégé effect (Chase et al., 2009)—students learn more effectively by teaching others. When teaching, students synthesize and structure materials, become aware of their own learning process, and expend more effort to learn. Learning by teaching has been shown to benefit the tutor academically and psychologically (Robinson et al., 2005). These benefits include improving inquiry skills (Aslan, 2017), transferring skills towards related domains (Robinson et al., 2005), improving attitudes towards the material (Cohen et al., 1982) and school (Robinson et al., 2005), and bolstering self-confidence (Srivastava and Rashid, 2018). Further, certain activities during learning by teaching have been shown to enhance such benefits, including providing explanations, collaboratively building knowledge with the tutee, and receiving feedback about the tutee’s learning (Duran, 2016; Okita and Schwartz, 2013; Okita et al., 2013). Learning by teaching has found use in a wide variety of learning settings, at different education levels (e.g., university (Grzega and Schöner, 2008), grade school (Leelawong and Biswas, 2008; Chin et al., 2013), and middle and high school (Roscoe and Chi, 2007)) and involving different content areas (e.g., linguistics, applied sciences (Grzega and Schöner, 2008), mathematics (Matsuda et al., 2013) and sciences (Leelawong and Biswas, 2008; Chin et al., 2010)). In education research, learning by teaching is closely related or synonymous with other terms, such as peer tutoring (Robinson et al., 2005; Duran, 2016; Matsuda et al., 2020), cooperative learning (Duran, 2016), and peer-assisted learning (Garkal et al., 2018).

Despite extensive research, our understanding of the specific conditions that make learning by teaching effective is limited because there are often substantial confounds introduced by the variability in tutor-tutee pairings. One recommendation, put forth by Roscoe and Chi in a large survey on tutor learning (Roscoe and Chi, 2007), is to “develop teachable agents to test hypotheses about specific tutor behaviours” by systematically manipulating the teachable agent’s characteristics and question-asking behaviour. The use of teachable agents as tutees allows researchers to overcome some of the challenges of conducting studies with human tutors and tutees, including the ability to precisely control the characteristics of the tutee so that participants are exposed to the same experimental stimuli, and the removal of the risk that human tutees are disadvantaged in the process (Matsuda et al., 2013, 2020; Roscoe and Chi, 2007).

While many agent-mediated learning-by-teaching environments have been developed by researchers, most are not designed with the degree of flexibility required for reuse across a wide range of learning contexts (Matsuda et al., 2013; Ogan et al., 2012; Okita and Schwartz, 2013; Chin et al., 2010). Our work seeks to fill this gap by introducing a learning-by-teaching platform called the Curiosity Notebook. Our premise is that such a research platform must be highly configurable to enable the investigation of learning by teaching in a large number of settings. This is important because, as Roscoe and Chi pointed out, tutor learning is not uniform across various settings and the reasons behind the variability are not well understood (Roscoe and Chi, 2007). They also encouraged future researchers to analyze and compare tutor learning processes across settings (Roscoe and Chi, 2007). Having a configurable research platform that supports a variety of settings and agent configurations is an important step towards facilitating comparisons across studies, by minimizing confounding factors and interactions (Roscoe and Chi, 2007). However, such configurability is often not found because most platforms are built to investigate learning-by-teaching phenomena in very specific learning contexts. Moreover, designing a highly configurable platform requires a significant amount of resources and is often deemed unnecessary for these narrowly focused studies. Closest to our work is Betty’s Brain, where students teach a virtual agent (i.e., an avatar called Betty) about causal relationships through concept maps. Betty’s Brain has been used to study hierarchical reasoning (Chin et al., 2013), self-regulated learning (Biswas et al., 2005; Munshi et al., 2018), collaborative teaching (Emara et al., 2018), the role of feedback (Leelawong et al., 2002; Tan et al., 2006; Segedy et al., 2012), and scaffolded learning of multiple representations (Basu et al., 2016), to name a few; this variety suggests a certain level of configurability allowed by the platform for researchers to adjust according to each study’s specific needs. Our work goes beyond prior work by providing configurability for the agent’s embodiment (e.g., text-based chatbots, voice agents or physical robots), coordinated group-based teaching (e.g., explicitly implemented turn-taking mechanisms that enable students to teach in groups of varying sizes), and learning tasks and materials (e.g., for grade school vs. university students), all of which are discussed in this paper.

In this paper, we outline the set of configurable features in Section 3, informed by prior literature, that are useful for studying learning by teaching and essential for allowing better cross-study comparisons to be made. We provide a detailed description of the platform’s design evolution in Section 4, as informed by observations of two deployments—elementary school children participating in groups to teach physical robots in person, and university students working individually to teach a chatbot online. These deployments serve two purposes: first, they inform potential design changes that could be made for subsequent iterations of the platform; second, they serve to demonstrate the platform’s versatility as a research infrastructure in collecting useful observations for understanding learning-by-teaching phenomena in a wide range of learning contexts. In Section 5, we describe a number of observations collected using Curiosity Notebook in these deployments, as evidence for the utility of the platform and the importance of the configurability allowed by it, and conclude in Section 6 with a discussion of the envisioned design improvements for future iterations of the platform.

2. Background

2.1. Learning by teaching

As noted above, learning by teaching produces the protégé effect (Chase et al., 2009)—students who are asked to teach others demonstrate more proficiency in synthesizing and structuring materials, become aware of their own learning process, and expend additional effort within the learning process. The benefit of learning by teaching comes from the additional knowledge-building activities that are inherent within the process of providing explanations for others, which in turn promotes long-term understanding and consolidation of information (Fiorella and Mayer, 2015; Hoogerheide et al., 2014; Fiorella and Mayer, 2013, 2014). Learning by teaching also allows students to better assess their own understanding of concepts and identify the gaps in their knowledge, thereby enhancing self-learning by repairing these gaps (Lachner et al., 2020). For instance, prior work has shown that students exhibit greater gains in knowledge when actively teaching, compared to just preparing to teach (Hoogerheide et al., 2014).

The benefits from learning by teaching also come from learners’ motivations. Benware and Deci (Benware and Deci, 1984) hypothesized that individuals who learn material with the purpose of teaching take a more active learning role and have higher levels of intrinsic motivation. An intrinsically motivated learner engages with learning out of personal enjoyment and interest, rather than external rewards and incentives (e.g., higher test scores). According to cognitive evaluation theory, intrinsic motivation is a culmination of self-perceptions of one’s own autonomy (the feeling that actions can affect the surroundings) and competence (the feeling of having mastered a topic) (Legault, 2017). Benware and Deci explained that learning for the purpose of teaching makes the learning process purposeful, as it increases autonomy, which in turn enhances intrinsic motivation. Their study showed that, compared to participants who learned new material to take a test, those who learned in order to teach were more interested in the material and enjoyed the study more, supporting the hypothesis that learning to teach increases intrinsic motivation.

As mentioned in the introduction, learning by teaching also has various other benefits, including psychological benefits. A meta-analysis by Robinson and Schofield (Robinson et al., 2005) found that tutoring other students increases the tutor’s sense of belonging to their school and with other students. Moreover, tutors who taught materials from one domain of mathematics performed better in other domains of mathematics as well, showing improvements in tutors’ transferring of skills. Other studies found improvements in the tutors’ self-concept (defined as the “organization of qualities that the individual attributes to himself” (Kinch, 1963)) (Flores and Duran, 2013), self-awareness and self-direction (Sprinthall and Scott, 1989).

The degree to which programs involving learning by teaching succeed is influenced by numerous program characteristics, e.g., how much instruction the tutors received, the age difference between tutor and tutee, and how the tutors and tutees are chosen for the study. Moreover, many studies fail to tease apart the effects of teaching from other effects like collaborative learning (defined as when “two or more people learn or attempt to learn something together” (Dillenbourg, 1999)) (Robinson et al., 2005). Developing teachable agents as a method of systematically studying learning by teaching, as suggested by Roscoe and Chi (Roscoe and Chi, 2007), can help overcome some of these limitations. In particular, with teachable agents as tutees, researchers have full control over the tutees’ characteristics and how the tutor and tutee interact. This means that confounding factors and other closely related effects (e.g., collaborative learning) can be controlled or accounted for. More specifically, teachable agents can allow researchers to study students’ responses to learning by teaching, including differential responses as the characteristics and behaviour of the agent are manipulated.

2.2. Teachable Conversational Agents

Within educational settings, conversational agents have adopted numerous roles. As experts, agents improve information acquisition; as motivators, agents increase self-efficacy; and as mentors, agents improve both learning and motivation (Baylor and Kim, 2005). Tutor agents have been successful in tasks including writing algebraic expressions (Heffernan and Croteau, 2004), learning about urban ecosystems (Griol and Callejas, 2013), and foreign language learning (Fryer and Carpenter, 2006).

Only a handful of systems position the agent as a less intelligent or knowledgeable “peer” that students teach. These agents are referred to as teachable agents. Using teachable agents to research learning by teaching has various benefits, including 1) enabling precisely determined control conditions not possible with human tutees, 2) reducing the risk of human tutees being harmed in the process, and, importantly, 3) facilitating the collection of detailed interaction data between the teacher and the agent (Matsuda et al., 2013). SimStudent, for example, is a simulated learner used to study student-tutor learning in mathematics problem solving (Matsuda et al., 2013). It was built with the primary purpose of collecting detailed interaction data when students are teaching the teachable agent and investigating the cognitive and social theoretic underpinnings of when, how and why learning by teaching is effective. Students teach SimStudent by posing algebraic problems to it, providing feedback and help while SimStudent attempts to solve the problems, and quizzing SimStudent to gauge learning. SimStudent was found to be effective for learning procedural, but not conceptual, skills. Further, it was more effective when students taught it correctly and made good use of the feedback and quizzing features. Another example is the Betty’s Brain system (Biswas et al., 2005; Leelawong et al., 2002), a teachable agent learning environment in which students read articles, then teach, query and quiz a virtual agent (“Betty”) about causal relationships (e.g., burning fossil fuels increases CO2) in science by manipulating concept maps. It is designed to help students develop structured networks of knowledge, take responsibility and make decisions about learning, and develop reflection or meta-cognitive skills (Biswas et al., 2005), thereby preparing them for future learning.

Other researchers have explored the use of teachable agents that involve physical robots (Tanaka and Matsuzoe, 2012; Hood et al., 2015; Tanaka et al., 2015; Yadollahi et al., 2018; Chandra et al., 2017) and showed the importance of incorporating recursive feedback within the design of effective agents (embodied or otherwise) for learning by teaching. Recursive feedback occurs when teachers observe their tutees use the knowledge they were taught. Okita et al. (Okita and Schwartz, 2013; Okita et al., 2013) found that participants who observed their tutees interacting with examiners learned better than participants who only performed learning by teaching. In another study, high-school students who observed the agents they taught compete with other agents learned more than high-school students who taught agents and then proceeded to compete against other agents themselves (Okita and Schwartz, 2013). This effect is also apparent when the participants themselves play the role of the examiner (instead of having a separate entity play the examiner) (Leelawong et al., 2002).

While teachable agents provide many benefits over human tutees, developing them is labour-intensive, requiring meticulous design and implementation (i.e., coding). As such, a research platform reusable across many learning-by-teaching studies would be significantly beneficial.

3. Essential Features of a Research Platform for Learning by Teaching

Our goal is to create a research platform for investigating learning-by-teaching phenomena; though, certainly, there would be educational, non-research applications as well. As discussed before, a platform needs to be highly configurable to allow the investigation of learning by teaching across diverse settings. Having a single platform support multiple settings is crucial for enhancing understanding of the complex interactions in learning by teaching and for allowing cross-study comparisons (as encouraged by (Roscoe and Chi, 2007)). Below, we discuss the configurable features (CF), their significance, and the type of research questions they support.

CF#1 Agent Characteristics. Our platform enables systematic modulation of agent characteristics hypothesized to be relevant to learning (e.g., types of questions asked, amount and accuracy of content demonstrated when tested). Tutee characteristics are a commonly studied and manipulated dimension within the relevant literature, and are an integral part of learning-by-teaching experiences (Nichols, 1994; Park and Kim, 2015; Biswas et al., 2005). Using an agent allows for precise control over tutee characteristics and behaviours not possible with human tutees (Matsuda et al., 2013; Roscoe and Chi, 2007). For instance, modifying teachable agents to have them provide feedback to their teacher and ask authentic questions improves the student tutor’s engagement with the material (Park and Kim, 2015). Other studied or discussed characteristics include the degree to which agents should take control of the conversation (Nichols, 1994) and the agents’ self-regulation behaviour (Biswas et al., 2005). Just as the variability in human tutees’ behaviours (Roscoe and Chi, 2007) limits the generalizability of findings, the variability in agent characteristics across studies makes it hard for meaningful comparisons to be made. Thus, there is an important need for a research platform that allows for careful and granular modifications of tutee characteristics. Having configurable agent characteristics would also allow systematic analyses and verification of findings from human-to-human learning-by-teaching studies. An example is Roscoe’s study, which found that human tutees’ questions were significant predictors of human tutors’ knowledge building and deep understanding (Roscoe, 2014). Roscoe suggested that future studies investigate how the absolute and relative levels of tutor and tutee expertise affect the amount of knowledge building; this is much easier to investigate using agents, since it is possible to fix the level of expertise in an agent tutee, making this variable consistent across tutors (Roscoe, 2014). By providing functionalities to configure agent characteristics, our platform can support new experiments that investigate the effects of tutor-tutee pairings on learning by teaching.

CF#2 Quantification of Teaching Strategies. Our platform provides students with choices of teaching activities (e.g., teaching vs. testing), in order to allow for a quantitative characterization of their teaching strategies. This is essential as teaching strategies could be analyzed to better understand what and why certain teaching activities diminish or enhance benefits related to learning by teaching. For instance, Munshi et al. (Munshi et al., 2018) found that students who were high performers employed different patterns of teaching activities than low performers within the Betty’s Brain platform. High performing students read relevant articles after quizzing Betty more frequently than low performing students. Using the same platform, Wagster et al.  (Wagster et al., 2007) analyzed how high- and low-performing students employ the six available teaching activities while teaching the agent. They found that high-performing students incorporated more activities related to information seeking and self-monitoring than low-performing students. Beyond Betty’s Brain, however, there is a lack of research on the effects of teaching activities and strategies on the learning of the tutor teaching an agent tutee; as such, our understanding of these effects is limited to tasks for learning causal relationships between concepts, which Betty’s Brain supports. An important configurable feature of our platform, therefore, is a set of user interface controls (e.g., buttons) that student tutors can invoke to initiate a range of different teaching conversations with the agent. These user interface controls enable researchers to observe what teaching activities the student tutors would choose to engage in and in what order, thus providing rich data for capturing learning-by-teaching behaviour. This can help towards understanding not only what, but how and why tutors learn from teaching, which have been identified as notable gaps within the extant research studies (Roscoe and Chi, 2007).

CF#3 Scalable Learning Task and Material. Our platform supports learning tasks that can be scaled in complexity to different age groups (e.g., usable by elementary school as well as university students). Existing learning-by-teaching research involving teachable agents (i.e., agent tutees) has mostly targeted younger children (usually in elementary school) (Leelawong and Biswas, 2008; Emara et al., 2018; Wagster et al., 2007; Biswas et al., 2005; Munshi et al., 2018; Leelawong et al., 2002). Research on learning by teaching for older students (e.g., university students), on the other hand, has mostly investigated human tutor/tutee contexts without the use of teachable agents (Annis, 1983; Coleman et al., 1997; Roscoe and Chi, 2004; MacDonald, 1991; Roscoe, 2014). Research has also shown that the learning process itself evolves with age. For example, although the associative learning process might remain constant in children and adults, executively controlled learning (i.e., learning that requires engagement on the learner’s part) increases in frequency with age (Kuhn and Pease, 2006). Metacognitive skills also develop with age; adults are more capable of such skills than children, including taking more initiative to verbally rehearse during memory tasks (Keeney et al., 1967), improving their retrieval skills after previously failed attempts (Flavell and Wellman, 1975), and employing knowledge acquisition strategies (Kuhn, 2000). As such, while learning by teaching has been found to benefit both children and adults, some important questions remain unanswered: 1) are teachable agents as effective as human-to-human learning by teaching for older students? and 2) how is the effectiveness of teachable agents for young children similar to or different from that for older students? Having a single platform that supports multiple age groups and learning task complexities allows for direct comparisons between these age groups, eventually allowing learning by teaching to benefit students of different ages through informed and targeted configurations.

CF#4 Coordinated Group-Based Teaching. Our platform supports students teaching individually, in pairs or in larger groups, explicitly controlling the turn-taking process in order to provide equal access to teaching opportunities. Collaborative learning, defined as “the grouping and pairing of learners for the purpose of achieving a learning goal” (Laal and Ghodsi, 2012), has many benefits, including encouraging critical thinking, increasing students’ motivation, and promoting active student participation and personalized learning (Laal and Ghodsi, 2012). In teachable agent contexts, however, the only studies comparing individual versus collaborative learning by teaching are those using Betty’s Brain. Namely, Emara et al. (Emara et al., 2018) found that when paired to teach Betty together, students learned more and created more accurate concept maps than when working individually. This effect emerged because working in pairs allowed students to discuss errors in quizzes and their concept maps, which in turn encouraged more systematic approaches to learning and teaching the agent (Emara et al., 2018). While the findings are significant, many questions remain about the inner workings of learning by collaboratively teaching teachable agents. In particular, it is not well understood how a group’s dynamics affect each tutor’s learning. Within the collaborative learning literature, researchers have found that its effectiveness is affected by students’ personality (e.g., extroverted vs. introverted), cognitive style (Miller and others, 1994), motivation levels (Rienties et al., 2009), and the group’s contribution equity (Shah et al., 2014). For instance, Shah et al. (Shah et al., 2014) found that students who perceived themselves as less competent in the subject were dominated in discussions within dyads and, as a result, were negatively affected in their learning process. These results raise questions of how teachable agent systems should be designed to manage group dynamics, e.g., to encourage or enforce equal engagement and contribution among tutors. By providing functionalities that coordinate group-based teaching, our platform enables new questions about collaborative learning by teaching to be answered.

CF#5 Flexible Agent Embodiments. Our platform supports teaching conversations with text-based, voice-based or physical agents. Agent embodiments, beyond simple text interfaces (e.g., auditory speech, digital or physical animated forms), have been found to improve performance in tasks such as tutoring, where continuous interactions with agents are held (Shamekhi et al., 2018; Atkinson, 2002; Moreno et al., 2001). However, agent embodiment is nuanced and rarely investigated in a thorough manner; these nuances are often listed as limitations in studies (Hone et al., 2003; Shamekhi et al., 2018; Vossen et al., 2009; Thellman et al., 2016). Although out of this paper’s scope, supporting flexible agent embodiment would allow for fair and detailed comparisons of the effects of embodiment on teaching and learning performance. For instance, one can investigate how agent embodiment affects rapport and how increased/decreased rapport would impact learning-by-teaching behaviours and outcomes.

To summarize, the five configurable features above are integral for researching learning by teaching with teachable agents. However, as discussed above, many questions remain unexplored for each configurable feature, and most studies use different learning-by-teaching platforms, making it hard to compare and synthesize generalizable findings that can be easily applied to the education sector. Although prior work has explored a few of the configurable features in depth, the effects of CF#3 and CF#5 remain unexplored. As such, we set out to build Curiosity Notebook to support all five configurable features, and introduce its design in the next section.

4. Curiosity Notebook: a Research Platform for Learning by Teaching

In this paper, we introduce the Curiosity Notebook—a new research platform for learning by teaching—and describe how its design evolved over two years of development. We also demonstrate how the platform supports each configurable feature (e.g., CF#3) discussed in Section 3.


Figure 1. Curiosity Notebook (second version) with reading panel (left) and teaching panel (right).

At a high level, the Curiosity Notebook provides a web interface that students use to read articles and teach an agent to perform a task based on the reading. In our design thus far, we have focused exclusively on how the Curiosity Notebook enables the teaching of classification tasks—how to classify objects, e.g., animals as mammals, insects, and reptiles; rocks as sedimentary, igneous and metamorphic (rocks have also been used as a topic in previous studies on agents for education (Ceha et al., 2019)); or paintings as pointillism, realism and impressionism. However, the task could take many forms; for example, students could read an article about programming and teach the agent how to correct mistakes in a piece of code, or students could read an article about grammar rules and teach the agent how to identify parts of speech in a sentence. The main idea is that the platform supports the process of students teaching factual knowledge (obtained from reading) in a way that builds up and transforms into a particular skill (e.g., the ability to apply rules to classify objects) in the agent. For classification, the agent needs to be able to identify features of an object, identify the categories that the object can belong to, and map features to categories. The platform, as described in detail below, is designed to facilitate this text-to-rule translation. As shown in Figure 1, the main interface consists of a reading panel (left) and a teaching panel (right), which provides functionalities that enable students to communicate with the agent and assess the progress of the agent’s learning.

4.1. Deployments

During our iterative design process, we deployed two versions of the Curiosity Notebook, each to a vastly different user population and learning setting. Before describing Curiosity Notebook’s design evolution, we first describe the details about the two deployments.

The deployments served two purposes—first, to inform the iterative design changes in the platform, and second, to demonstrate the platform’s versatility as a research infrastructure. For instance, the two deployments described below demonstrate how the platform can support users from CF#3 different age groups (elementary school children vs. university students), learning contexts (in-person vs. online), CF#4 group configurations (group-based vs individual teaching) and with CF#5 different agent embodiments (e.g., robot vs. chatbot). The differences between the deployments are summarized in Table 1.

                                      Deployment 1               Deployment 2
Setting                               In-person within school    Online
Length                                4 weeks, 1.5 hours/week    Single session, 1 hour
Education level (CF#3)                Elementary school          University
                                      (Grades 4 & 5)
Number of students per agent (CF#4)   3                          1
Agent embodiment (CF#5)               Physical Robot (NAO)       Chatbot

Table 1. Comparison of deployment characteristics.

Deployment 1: Elementary School Students, In-Person. For the Curiosity Notebook’s first version, we conducted a 4-week exploratory study with 12 fourth- and fifth-grade students (7M/5F) at a local school (Law et al., 2020). Enrollment was on a first-come, first-served basis. No monetary compensation was provided; instead, students were given a “Certified Robot Teacher” certificate as a token of appreciation. The study was conducted in an after-school club, which ran once a week for 1.5 hours per session. Four NAO robots were used in each session; to personalize the experience, each robot had a name tag hung around its neck with a gender-neutral name (i.e., Alpha, Beta, Gamma and Delta). Students formed groups of three, and taught the robot about a different topic (i.e., animals, rocks, paintings) each week, then all topics during the last week. Each student was given a Chromebook, and sat together with their group members facing the robot, which was positioned in a sitting posture in front of the students on the table (as shown in Figure 2). Each group of students was joined by a student researcher, who observed the group and resolved issues (if any) with the platform. We provided physical artifacts (as shown in Figure 2) for each classification task, namely animal figurines, rocks and minerals, and postcards of different styles of paintings from the NYC Metropolitan Museum. During the sessions, we piloted a variety of surveys, iteratively re-designed the platform, made detailed observations, and interviewed students about their learning-by-teaching experience.

Figure 2. Students teaching a NAO robot (left) using the Curiosity Notebook and physical artifacts (right).

Deployment 2: University Students, Online. After the first deployment with elementary students, we revised the design of the Curiosity Notebook, and deployed it in a second observational study with university students. After a small pilot with 11 participants, we recruited 41 participants (22M, 19F; ages 18 to 29). All studies were conducted over Zoom in the presence of a researcher and were about 90 minutes long. Participants were given a $20 CAD Amazon gift card as remuneration. Participants were asked to read articles about rocks and teach the agent (named Gamma) how to classify rocks as metamorphic, sedimentary, or igneous. We recorded all interactions (such as article clicks, button clicks, notebook checks, text logs of chat data between participant and the agent, and the amount of time a participant spent teaching Gamma) on the Curiosity Notebook as well as participants’ responses to the administered pre-study and post-study questionnaires (submitted via Google Forms). The pre-study and post-study questionnaires measured participants’ attitudes towards the agent (via the likeability and perceived intelligence subscales of the Godspeed questionnaire (Bartneck et al., 2008)), and towards the teaching task (via the Pick-a-Mood pictorial self-report scale (Desmet et al., 2016) for self and agent). The Interest/Enjoyment, Pressure/Tension and Effort/Importance subscales from the Intrinsic Motivation Inventory (IMI) (Ryan, 1982) were used for a deeper understanding of participants’ subjective experiences with the teaching task. The Academic Motivation Scale (AMS) was used to measure participants’ amount and type of motivation towards the experiment (Vallerand et al., 1992). Lastly, open-ended questions about participants’ perceptions of the agent and the Curiosity Notebook were added for additional qualitative insights.

Beyond this paper, our platform has been used in a number of studies investigating the effects of agent characteristics (CF#1) on learning by teaching—to investigate agents with different humour styles in relation to student effort in teaching (Ceha et al., 2021), and to study physically embodied agents that are capable of sensing group dynamics and verbally encouraging equal contributions among student tutors (which is also relevant to CF#4 and CF#5) (Ravari et al., 2021). The flexibility of the Curiosity Notebook platform (in the form of the five CFs) not only allows for its deployment in an expanded number of settings with varying characteristics, but also allows researchers to expeditiously switch between these settings.

4.2. Design Evolution

We deployed the first version of the Curiosity Notebook in the exploratory study with elementary school students, revised the design based on our observations and findings, then deployed the second version in the observational study with university students. A number of features evolved substantially between the two versions; we describe these features and their design evolution below.

4.2.1. Teaching Conversations

In the first version of the Curiosity Notebook, the agent begins each teaching conversation by asking the students to show it a physical artifact (e.g., “Can you pick an animal and tell me its name? I can’t wait to see it!”). After selecting an object, the teaching conversation proceeds with the agent highlighting one of the “knowledge bubbles” and asking a series of 4 or 5 questions about the corresponding feature, as shown in Figure 3(a).

(a) First Version: knowledge bubbles indicating which features have been taught.
(b) Second Version: buttons for initiating teaching conversations and robot’s notebook for tracking learned knowledge
Figure 3. Design Iterations of the Teaching Panel

The agent is designed to ask a mix of low- and high-level questions. Low-level thinking questions include questions about features (e.g., “What does the skin of mammals look like?”), examples (e.g., “Can you give me an example of a cubism painting?”), and facts (e.g., “Select a sentence to tell me about frogs and how they lay eggs.”). High-level thinking questions include why questions (e.g., “Why is a snake a reptile?”), synthesis questions (e.g., “Do all reptiles look the same?”), and questions that prompt students to repeat/rephrase or explain the meaning of a word, e.g., “Can you help me understand what you just said better?” These questions reflect different categories of cognitive operations posited by Gallagher and Aschner (Gallagher and Aschner, 1963). While low-level questions involve cognitive memory operations via the reproduction of factual knowledge directly from the articles, high-level questions involve convergent thinking via the discovery and learning of concepts (which take the form of classification rules in the context of classification tasks) from analyzing and integrating knowledge derived from the articles (Gallagher and Aschner, 1963); convergent thinking questions have been shown to be beneficial to students’ question-asking skills (Alaimi et al., 2020). The system automatically generates a variety of questions by filling in predefined sentence templates with names of objects, features, and categories that the students are currently teaching. The questions are sequenced such that lower-level thinking questions always precede higher-level thinking questions, allowing students to first learn (through teaching) factual knowledge before concepts relating to classification rules. Some randomness was introduced into the question ordering to prevent the conversation from being too mechanical. The teachable agent occasionally seeks feedback from students about its learning, by asking questions about its general intelligence (e.g., “Am I smart?”), its learning progress (e.g., “Am I learning?”, “Do you think I know more now than before?”), or how well it might perform if tested (“Will I do well in a test?”).
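
To make this template-filling process concrete, the following is a minimal sketch of how such question generation might work. The topic data, template strings and function names are illustrative assumptions, not the platform's actual implementation; they only demonstrate the low-level-before-high-level ordering with some within-level randomness.

import random

# Illustrative topic data: objects, their categories, and one feature each (assumed, not from the platform).
TOPIC = {
    "snake": {"category": "reptile", "feature": "dry, scaly skin"},
    "frog":  {"category": "amphibian", "feature": "moist skin"},
    "whale": {"category": "mammal", "feature": "gives birth to live young"},
}

# Low-level (cognitive memory) templates always precede high-level (convergent thinking) ones.
LOW_LEVEL_TEMPLATES = [
    "What does the skin of {category}s look like?",
    "Can you give me an example of a {category}?",
    "Select a sentence to tell me about {object}.",
]
HIGH_LEVEL_TEMPLATES = [
    "Why is a {object} a {category}?",
    "Do all {category}s look the same?",
    "Can you help me understand what you just said better?",
]
FEEDBACK_TEMPLATES = ["Am I smart?", "Am I learning?", "Will I do well in a test?"]

def generate_conversation(obj, n_low=3, n_high=2, feedback_prob=0.3):
    """Fill templates for one object, keeping low-level questions before high-level ones."""
    info = TOPIC[obj]
    low = [t.format(object=obj, **info) for t in random.sample(LOW_LEVEL_TEMPLATES, n_low)]
    high = [t.format(object=obj, **info) for t in random.sample(HIGH_LEVEL_TEMPLATES, n_high)]
    questions = low + high  # low-level always precede high-level
    # Occasionally append a question seeking feedback about the agent's own learning.
    if random.random() < feedback_prob:
        questions.append(random.choice(FEEDBACK_TEMPLATES))
    return questions

if __name__ == "__main__":
    for q in generate_conversation("snake"):
        print(q)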

After 4-5 rounds of questioning and answering, a “knowledge bubble” is filled and students are rewarded with confetti on the screen letting them know that the agent has “learned” that feature, and the next feature is randomly chosen for students to teach. Between teaching conversations, students can also choose to test the agent’s knowledge. The testing interface shows a set of images representing objects (e.g., images of paintings) to be classified. Students can click on an image, and the agent will attempt to classify it (e.g., saying “I think it is an impressionist painting” or “I don’t know”). The system provides corrective feedback—an overlay over each image will show a green checkmark if the agent is correct, and a red “x” otherwise. This testing feature builds on prior research, which shows that student tutors learned more when observing their tutees use the knowledge they were taught either through some form of quiz or test (Okita and Schwartz, 2013; Okita et al., 2013; Leelawong et al., 2002), or through competition with other tutees (Okita and Schwartz, 2013).

A key change in the second version’s design is a set of controls that allows participants to choose specific actions to take next; this, in turn, allows us to more quantitatively capture the variety of teaching strategies employed (CF#2). In lieu of the knowledge bubbles, the teaching panel (as shown in Figure 3(b)) now contains, at the top, seven “conversation-type” buttons grouped into three categories, supporting Teach, Check and Entertain actions. When a button is clicked, the system locks the agent (named Gamma) and the participant into a conversation that involves 4-6 rounds of questions and answers. The student tutor can engage in the conversation through free-form text, or by highlighting specific sentences in the text to teach the agent.

Gamma: It’s good to understand better why rocks look the way they do.
Gamma: Can you pick a new rock and tell me what it’s called please?
User: Shale
Gamma: I don’t know about this rock.
Gamma: But now it’s in my notebook so that I don’t forget it.
Gamma: What category does shale belong to?
User: Sedimentary
Gamma: Can you tell me why shale looks the way it does?
Gamma: Can you pick a sentence in the articles for me please?
User: As more sediments get deposited, the particles underneath become tightly packed; eventually, they become a dense, solid rock.
Gamma: Mmm.. okay I see.
Gamma: Could you explain that more clearly?
User: With time, sediments get deposited over each other, forming a dense solid rock.
Gamma: Alright.
Gamma: I’m really interested in rocks.
Gamma: You can now select a new button to keep teaching me.
Table 2. Explain conversation in the second version of the Curiosity Notebook

Each button initiates a teaching conversation with the agent. Teach actions include Describe, Explain and Compare. Upon clicking the Describe button, the agent will ask participants to identify an object’s category (e.g., “Pumice is an igneous rock.”) and feature (e.g., “Pumice is often white.”). This conversation allows students to better grasp factual knowledge, which is an essential step in developing competence in the topic taught (Donovan and Bransford, 2005). The Explain conversation prompts for an explanation for why an object has a particular feature (e.g., “Pumice is often white because of the silica in the lava; without the silica, Pumice could be black.”). A sample Explain teaching conversation is shown in Table 2. Providing explanations is an important teaching activity that has proven to improve learning significantly, especially while learning the material (Lachner et al., 2020). The Compare conversation allows the participant to discuss similarities or differences between two objects. Ziegler and Stern found that when encouraged to make comparisons between conceptually different material, students benefited from larger long-term learning gains (Ziegler and Stern, 2014). In terms of cognitive operations, Describe supports cognitive memory while Explain and Compare support convergent thinking (Gallagher and Aschner, 1963).

Check actions include Correct and Quiz. The Correct button allows the student to correct facts that they deemed the agent to have learned incorrectly (due to them teaching incorrectly; the agent in this version is not configured to inject errors automatically). This feature is designed to encourage metacognitive skills among student tutors. Specifically, by monitoring and identifying mistakes in the agent’s understanding, student tutors may also identify similar flaws in their own understanding, since the agent learns directly from them. This is referred to as comprehension-monitoring, and has been found to be an essential part of knowledge building in learning by teaching (Roscoe, 2014). To probe the current performance of the agent, students can click on the Quiz button, and select an object to test the agent on (Figure 4). The agent will classify the object correctly or incorrectly, based on its current knowledge model. As with the first version, the quiz feature encourages learning by providing feedback to the student tutors about the agent’s learning progress.


Figure 4. Quiz conversation in the second version of the Curiosity Notebook

Finally, participants can choose to entertain the agent through the Entertain actions. The Fun Fact button allows the participant to tell a fun fact about an object (e.g., “Obsidian is used in heart surgery”) and provide a reason for why the fact is interesting. The Tell Joke button allows the participant to tell the agent a joke. Powell and Andresen found that the use of humour in learning settings, if not excessive, can benefit learning by increasing students’ attention and motivation (Powell and Andresen, 1985).

Together, these seven conversations are designed to support flexible teaching strategies on the side of the student tutors, and quantifiable teaching strategies that can be analyzed on the side of the researchers (CF#2). A final key difference between the first and second version of the Curiosity Notebook is the dialog system that drives the conversation. In the first version, the question templates were randomly drawn; in the second version, each conversation is controlled by a state machine whose configuration is stored in a JSON file that specifies the conversation’s flow. This setup enables the conversations to be easily modified without having to change any part of the code (CF#1).
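
As an illustration of this JSON-driven state machine, the sketch below shows what a conversation flow and a minimal driver might look like. The state names, prompt texts and field names are hypothetical assumptions for illustration only, and are not taken from the Curiosity Notebook's actual configuration files.

import json

# Hypothetical conversation flow for a Describe-style button, as it might be stored in a JSON file.
DESCRIBE_FLOW = json.loads("""
{
  "start": "ask_object",
  "states": {
    "ask_object":   {"prompt": "Can you pick a new rock and tell me what it's called please?",
                     "next": "ask_category"},
    "ask_category": {"prompt": "What category does {object} belong to?",
                     "next": "ask_feature"},
    "ask_feature":  {"prompt": "Can you tell me one thing about how {object} looks?",
                     "next": "wrap_up"},
    "wrap_up":      {"prompt": "You can now select a new button to keep teaching me.",
                     "next": null}
  }
}
""")

def run_conversation(flow, get_reply, obj="shale"):
    """Walk the state machine, emitting prompts and collecting the tutor's replies."""
    state = flow["start"]
    answers = {}
    while state is not None:
        node = flow["states"][state]
        prompt = node["prompt"].format(object=obj)
        print("Gamma:", prompt)
        if node["next"] is not None:  # terminal states expect no reply
            answers[state] = get_reply(prompt)
        state = node["next"]
    return answers

if __name__ == "__main__":
    # Simulated tutor that always replies with canned text.
    run_conversation(DESCRIBE_FLOW, get_reply=lambda _: "sedimentary")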

4.2.2. Coordinated Turn-Taking

Our platform is highly configurable and supports a wide variety of learning-by-teaching scenarios. Students can teach the agent individually or in groups of arbitrary size, and their group placement can be configured by teachers or researchers through a command line interface in the first version, and through an admin interface in the second version (CF#4). If a student is placed in a group and their group members are present, their view of the system is synchronized—that is, if one student navigates to another interface (e.g., teaching vs. testing), all students are automatically brought to the same screen. During the first deployment, we made design modifications to the turn-taking mechanisms between after-school sessions. Initially, our platform gave students complete freedom to choose what and when to teach the robot. This setup was too open-ended, and students had great difficulty narrowing down what content mattered and dividing the teaching task. Subsequently, the agent was redesigned to control turn taking—namely, it determines which group members are online and active, and asks the student who has taken the fewest turns to teach next. When a student is stuck (e.g., has picked a sentence unrelated to what the agent is asking about), the agent also delegates the task to the next student, asking them to help. Overall, the turn-taking mechanism enabled children, placed in different-sized groups within the same classroom, to work together on simultaneously teaching different robots.
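
A minimal sketch of this turn-selection rule is shown below; the data structures and function names are illustrative assumptions rather than the platform's actual code.

# Illustrative group state: per-student turn counts and online/active flags (assumed representation).
group = {
    "amir":  {"turns": 3, "active": True},
    "bella": {"turns": 1, "active": True},
    "chen":  {"turns": 2, "active": False},  # e.g., disconnected
}

def next_teacher(group, stuck_student=None):
    """Pick the active student with the fewest turns; skip a student who is currently stuck."""
    candidates = [name for name, s in group.items()
                  if s["active"] and name != stuck_student]
    if not candidates:
        return None
    return min(candidates, key=lambda name: group[name]["turns"])

def record_turn(group, name):
    group[name]["turns"] += 1

if __name__ == "__main__":
    student = next_teacher(group)                       # "bella": active, fewest turns
    record_turn(group, student)
    # If that student gets stuck mid-conversation, delegate to the next active student.
    helper = next_teacher(group, stuck_student=student)  # "amir"
    print(student, "then", helper)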

4.2.3. Agent Embodiment Logic

The Curiosity Notebook supports a clean separation between agent logic and embodiment, thereby allowing the teachable agent to take on different types of embodiment (CF#5). This is accomplished by keeping the teachable agent’s logic—e.g., how it learns, how it exhibits emotions, and what it says—inside the Curiosity Notebook web application, and having an external program (e.g., a python script) ping the database for chat messages that the physical robot should say out loud. Each chat message is associated with an emotion tag (e.g., curious), which can be used to control the movements/gestures of the robot (e.g., rubbing its head or chin) to convey that emotion. Similarly, the external program can push sensing events to the Curiosity Notebook. The NAO robot, for example, has tactile sensors on its head, hands and feet, which can serve as alternative ways for students to provide feedback to the robot (e.g., patting its head when it answers a quiz question correctly). If the Curiosity Notebook is used without any physical embodiment for the agent, the agent takes the form of a text-based conversational agent. This functionality enabled us to set up the agent to be a physical robot in the first deployment, and a chatbot (i.e., text-based conversational agent) in the second deployment. In other words, supporting flexible agent embodiments allows for studies that use different embodiments to be deployed either simultaneously or in quick succession without having to change the platform’s core logic.
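
The sketch below illustrates what such an external bridging script might look like: it polls the platform's database for chat messages the robot has not yet spoken and maps each message's emotion tag to a gesture. The table name, column names and the robot interface are illustrative assumptions; the actual script and the NAO API calls it would use may differ.

import sqlite3, time

# Illustrative mapping from a message's emotion tag to a robot gesture (assumed values).
GESTURES = {"curious": "rub_chin", "happy": "nod", "confused": "scratch_head"}

def speak_and_gesture(text, emotion):
    """Stand-in for the robot interface; in practice this would call into the NAO SDK."""
    print(f"[robot does {GESTURES.get(emotion, 'idle')}] {text}")

def poll_chat_messages(db_path="curiosity_notebook.db", interval=1.0):
    """Ping the database for chat messages the robot should say out loud, marking them as spoken."""
    conn = sqlite3.connect(db_path)
    while True:
        rows = conn.execute(
            "SELECT id, text, emotion FROM agent_messages WHERE spoken = 0 ORDER BY id"
        ).fetchall()
        for msg_id, text, emotion in rows:
            speak_and_gesture(text, emotion)
            conn.execute("UPDATE agent_messages SET spoken = 1 WHERE id = ?", (msg_id,))
            conn.commit()
        time.sleep(interval)

if __name__ == "__main__":
    poll_chat_messages()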

4.2.4. Interfaces for Monitoring the Agent’s Knowledge

As shown earlier in Figure 3, in the first version of the Curiosity Notebook, the state of the agent’s learning is represented by knowledge bubbles, each representing a feature that is relevant for the classification task at hand. For example, a feature relevant for distinguishing mammals and reptiles would be whether the animal lays eggs. A knowledge bubble becomes filled when the agent has mastered/learned the associated feature; otherwise it remains empty. In our first deployment, we found that although filled knowledge bubbles were reflective of the agent’s knowledge, students could not gauge from the bubbles what content the agent had learned and how different actions affected the agent’s knowledge. The knowledge bubbles also had the unintentional effect of encouraging students to rush through teaching in order to fill up all the knowledge bubbles as quickly as possible, reducing their focus on learning the material.

To address these issues, in the second version, the knowledge bubbles were replaced with the agent’s notebook (Figure 5), which records everything that the agent has been taught. The agent’s notebook is informative in two ways: it tells students what the agent knows, and it gives students clues as to what each button does. For example, sharing a joke with the agent does not affect its knowledge, but telling a fun fact does. Throughout teaching, students can access the agent’s notebook at any time using the Notebook button (next to the agent’s avatar in Figure 3(b)). In initial testing, we found that users who were not familiar with the interface often forgot that the agent’s notebook existed. Thus, extra logic was built to make the Notebook button pulsate for approximately two seconds every time a new note is taken by the agent. Additionally, the agent is programmed to tell the user through chat that it has made a new entry in its notebook.


Figure 5. Table of content page of the agent’s notebook (left) and the page containing notes for Slate (right), in the second version of the Curiosity Notebook

The notebook starts out empty, with the first page serving as a table of contents. Upon learning a fact about a rock, a new page is created for that rock with the added note, and an index to the page is added to the table of contents. The last page of the notebook is reserved for showing all the fun facts. The cursive Indie Flower font (https://fonts.google.com/specimen/Indie+Flower) was chosen for the notebook text to represent the agent’s handwriting. The information that the agent learns is stored in a relational database. For instance, for rock classification, the rock-to-category table stores what the agent has learned about rocks and their categories. The rock-to-feature table stores what the agent has learned about a given rock’s features. When a fact is taught, a note is generated (e.g., “Schist is a Metamorphic rock”, “Conglomerate has sand or pebbles”) and automatically linked to a rock-to-category mapping, or one or more rock-to-feature mappings. Additionally, if users provide an explanation in Explain, the explanation is added to the note as well (e.g., “Granite has large crystals because the cooling process is slow”). When two rocks from the same category are compared in Compare, the note associates these rocks with the same feature (e.g., “Schist has layers and Gneiss has layers”). If the rocks are from different categories, then the note associates each rock with a different feature (e.g., “Quartzite could be white because … while Sandstone has layers”). Finally, fun facts (from Fun Fact) are stored along with the reasons the users provided as to why they found them interesting. An example is “There is Gneiss in Canada that dates back 4 billion years! (Reason: It is fascinating to know that rocks more than 4 billion years old can be found in this country)”.
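
To make the storage scheme concrete, the sketch below sets up illustrative rock-to-category and rock-to-feature tables and generates a note from a newly taught fact. The table and column names are assumptions for illustration only, not the platform's actual schema.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rock_to_category (rock TEXT, category TEXT);
CREATE TABLE rock_to_feature  (rock TEXT, feature TEXT, explanation TEXT);
""")

def teach_category(rock, category):
    """Store a rock-to-category mapping and return the note shown in the agent's notebook."""
    conn.execute("INSERT INTO rock_to_category VALUES (?, ?)", (rock, category))
    return f"{rock} is a {category} rock"

def teach_feature(rock, feature, explanation=None):
    """Store a rock-to-feature mapping; explanations (from Explain) are appended to the note."""
    conn.execute("INSERT INTO rock_to_feature VALUES (?, ?, ?)", (rock, feature, explanation))
    note = f"{rock} {feature}"
    if explanation:
        note += f" because {explanation}"
    return note

if __name__ == "__main__":
    print(teach_category("Schist", "Metamorphic"))
    print(teach_feature("Conglomerate", "has sand or pebbles"))
    print(teach_feature("Granite", "has large crystals", "the cooling process is slow"))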

Gamma: I got something wrong?
Gamma: I know ’gneiss’, ’shale’, ’slate’
Gamma: Ok, tell me which rock this is about?
User: Gneiss
Gamma: Oh, gneiss
Gamma: Select which notebook entry you want to correct!
User: I think that ’Gneiss is an igneous rock’ is wrong.
Gamma: So, what kind of a rock do you think gneiss is then?
User: Metamorphic
Gamma: Thanks for clearing this up!
Gamma: Thanks for that information. Now you can select another button to keep teaching me!
Table 3. Correct conversation flow in the second version of the Curiosity Notebook

Students can click on the Correct button to update a fact that was previously taught to the agent. A sample conversation is shown in Table 3—the agent will first ask what object the student wants to focus on, then present the student with a list of learned facts about that object (entries in the agent’s notebook) to choose from, and finally, use questions to elicit a specific kind of correction.

In the first version’s design, teaching was considered done when all the knowledge bubbles were filled. With the notebook, on the other hand, there is no simple notion of knowledge mastery. The users are told to teach the agent for 40 minutes in whatever way they like, and are free to interpret if and how much the notebook is indicative of their competence as a teacher. Having the notebook also gives users more transparency into how the agent learns, what it has been taught, and what it has yet to learn. As a more explicit representation of the agent’s knowledge (compared to the knowledge bubbles in the first version), the platform better supports metacognitive skills such as identifying gaps and mistakes in the agent’s (and the student tutor’s own) understanding. Finally, the notebook feature presents a future opportunity to simulate the behaviour of different types of agent learners, e.g., an attentive agent learner might take notes of everything, whereas an inattentive learner might only occasionally take notes.

4.2.5. Agent’s Learning Mechanism

In the first version, the agent is simplistic—it does not understand students’ responses to questions, and always pretends to learn what students have taught. Initially, the agent is unable to answer any test questions correctly; its ability to answer test questions increases with the number of features it has learned (i.e., number of completed teaching conversations). The second version improves upon this: the agent answers test questions according to the set of features it has been taught about the tested entity. As such, the agent is not able to categorize entities that it has not been taught before (and will inform the student tutor as such). It will also categorize an entity incorrectly if it has been taught the wrong information. As discussed previously, the second version’s agent knowledge is represented using the agent’s notebook in the frontend, and using the entity-to-category and entity-to-feature mappings in the backend. This allows the agent to answer test questions according to its knowledge, and allows the student tutors to accurately gauge the agent’s learning.
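
A minimal sketch of this knowledge-driven quiz behaviour, under the assumption that the agent simply looks up the category it was taught for each entity, is shown below; the function and variable names are illustrative.

# What the agent has been taught so far (possibly containing the tutor's mistakes).
taught_categories = {"shale": "sedimentary", "gneiss": "igneous"}  # gneiss was taught incorrectly

def answer_quiz(entity):
    """Answer a quiz question strictly from what the agent has been taught."""
    category = taught_categories.get(entity.lower())
    if category is None:
        return f"I don't know what kind of rock {entity} is."   # never taught this entity
    return f"I think {entity} is a {category} rock."             # right or wrong, exactly as taught

if __name__ == "__main__":
    print(answer_quiz("Shale"))    # correct, because it was taught correctly
    print(answer_quiz("Gneiss"))   # incorrect, mirroring the tutor's mistake
    print(answer_quiz("Basalt"))   # unknown entity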

4.2.6. Admin Interfaces

Figure 6. Admin interface for mapping sentences to relational facts, in the second version of the Curiosity Notebook. Right image courtesy NASA/JPL-Caltech/MSSS and PSI (NASA, 2012)


As we envision that the Curiosity Notebook will be used by researchers to conduct studies and, eventually, by teachers to organize learning-by-teaching activities for their classes, the platform provides a set of web-based administrative tools for adding/removing users, updating user information, assigning users to groups, and configuring classification tasks and materials (e.g., articles, images). There is, for example, a semi-automatic way of mapping sentences to relational facts (e.g., Gabbro is an igneous rock, Gabbro has holes), as shown in Figure 6. For features, the system automatically maps sentences to features by scanning for a set of keywords or phrases (e.g., holes, bubble, bubbles, porous, cavities) that are synonymous with a feature (e.g., “has holes”); these mappings are then manually verified, and corrected if needed, by the person organizing the learning-by-teaching session. These admin features allow researchers and teachers to easily configure the learning material (CF#3) and tutor groups (CF#4), thus allowing teachers and experimenters to quickly adapt to unexpected changes during learning-by-teaching sessions, e.g., having to modify group assignments due to team dynamics problems or absent group members, or having to adapt materials on the fly.
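The keyword-scanning step can be approximated as follows; the keyword lists and the function are illustrative assumptions rather than the platform’s actual matching rules, and the output is meant to be reviewed by a human before use.

```python
import re

# Illustrative keyword lists: each feature is associated with synonymous
# keywords or phrases that may appear in article sentences.
FEATURE_KEYWORDS = {
    "has holes": ["holes", "bubble", "bubbles", "porous", "cavities"],
    "has layers": ["layers", "layered", "banding", "bands"],
}

def map_sentence_to_features(sentence: str) -> list:
    """Return features whose keywords appear in the sentence (to be verified by a human)."""
    matched = []
    for feature, keywords in FEATURE_KEYWORDS.items():
        if any(re.search(rf"\b{re.escape(kw)}\b", sentence, re.IGNORECASE)
               for kw in keywords):
            matched.append(feature)
    return matched

print(map_sentence_to_features(
    "Gabbro is a coarse rock that sometimes contains small cavities."))
# -> ['has holes']
```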

As a research platform, the Curiosity Notebook provides researchers with the ability to configure experiments. Functionalities include adding/removing experiments, adding/removing conditions to/from experiments, and assigning users to a specific experiment and condition. The platform also provides functionality for researchers to configure the verbal behaviour of the agent, associate different verbal behaviours with different experimental conditions (CF#1), and deploy these agents, each with distinctly different verbal behaviour, simultaneously to different participants. In the following section, we describe findings from two deployments in different settings that were made possible by the Curiosity Notebook’s support of the five CFs.

5. Utility of Curiosity Notebook as a Research Platform for Learning by Teaching

In this section, we describe findings from the two deployments (detailed in Section 4.1) to demonstrate how the configurability of the Curiosity Notebook makes it a useful research platform for learning by teaching. Our two deployments took place in contrasting learning environments—in one deployment, elementary school students taught a physical NAO robot in groups; in the second, university students taught a text-based conversational agent individually—providing evidence for the versatility and utility of our platform. The ability of our platform to adapt learning materials (CF#3), group sizes (CF#4) and agent embodiment (CF#5) enabled us to quickly configure the system for vastly different learning settings. Below, we describe, in some detail, the utility of certain configurable features and what they enabled us to observe during the two deployments. For the first deployment, we report results in aggregate, as most of the observations were gathered informally from children. Due to the informal nature of the after-school club during which the sessions were run, children worked mostly with the same group, but were sometimes reassigned to a different group due to absences or personal preferences. For the second deployment, which involved individual sessions, participants are denoted here as p1,…,pn.

The Impact of Agent Characteristics on Perception and Learning-by-Teaching Behaviour. Our findings suggest that different students perceived the same agent characteristics (e.g., programmed personality) in both positive and negative ways. In the first deployment, some elementary school students interpreted the behaviour of the agent as indicative of it being a good learner; students said that the agent was a good learner because of its attentiveness (e.g., “because it pays very close attention”, “because he/she …sits in one spot and doesn’t get distracted”), curiosity (e.g., “because it’s curious”, “because Delta asks questions, just like a human student”), and its ability and eagerness to learn (e.g., “because he got everything right”, “because he’s always ready to learn”). However, other students had more negative perceptions, citing “talking too much” as the reason the agent was not a good student. Likewise, in the second exploratory study, most participants perceived Gamma as “eager” (p9, p11, p13, p15, p20, p22, p23, p31), “enthusiastic” (p13), “positive” (p15, p26, p40), “pleasant” (p16, p18), “cheerful” (p17, p24, p25), and “excited” (p16, p17, p36), and a good student because it takes notes. On the other hand, some participants found Gamma to be disingenuous, e.g., “over enthusiastic” (p33), “a bit over the top” (p35), “fake” (p3, p5), “artificial” (p6) and “repetitive” (p3). One participant (p21) suggested that a more natural version of Gamma would be one where “she would stop paying attention if you don’t use an entertainment button, and lose focus and maybe jots down the wrong answer.”

How students perceived the agent also critically affected how they perceived themselves as teachers. One interesting observation from the first deployment was that what the agent said, as well as the degree to which the agent was reported to be a “good” student, seemed to be associated with students’ perceptions of their own competence as teachers. That is, the majority of the students in the study saw themselves as good teachers and attributed their success at teaching not only to the learning progress of the robot (e.g., “because my robot has learned a lot”, “because we got all the bubbles for animals”), but also to the positive feedback they received from the robot (e.g., “because the robot told me so”, “because Delta always says good choice”). In contrast, university students in the second deployment, who had more access to information about what the agent had actually learned, seemed to rely more on accuracy information to judge their own competence as teachers. Participants who thought they taught well gave reasons including: 1) the notes written in the agent’s notebook were all correct (p0, p6), 2) the agent answered all quiz questions correctly (p1, p7, p19, p20, p22, p24, p38, p40), and 3) they taught the agent all 12 rocks (p11, p13-16). None of the participants, except for one (p20), mentioned that the agent’s conversational feedback helped them see themselves as good teachers, even though the agents in both deployments were designed to have the same level of enthusiasm. Given this observation, it may be beneficial for future versions to modify the verbal repertoire of the teachable agent to include specific feedback to students about their teaching.

Configurability of agent characteristics (CF#1) therefore allows us to observe how the agent, as a tutee, can affect students’ self-perception and, subsequently, their learning and teaching behaviour.

Coordination of Group-Based Teaching. The configurable feature CF#4 allows us to create student tutor groups of varied sizes. During the first deployment, students taught the agent in groups of 3. Having many students tutor the same agent tutee is a somewhat atypical scenario in human-to-human peer teaching, but it has been studied in human-to-machine teaching (Breazeal et al., 2013; Hood et al., 2015). We observed that some students took the initiative to offer help to their teammates when it was not their turn to teach, while others were impatient at having to wait. Interestingly, the amount of attention that the robot gives to each student tutor also seems to affect students’ perceptions of their own teaching ability; one student said “Student X teaches way better because the robot chooses X more.” Together, these observations suggest the benefit of a more individualized approach to managing group-based teaching, one that takes into account each student’s ability to work in a team and their unique need for attention from the agent.

Quantification of Teaching Strategies. Most importantly, our platform enabled us to observe, both qualitatively and quantitatively, how students go about teaching the agent (CF#2).

In the first deployment, we observed different groups of students demonstrating different teaching strategies; for example, some groups filled as many knowledge bubbles as possible before they tested the robot’s knowledge, whereas other groups tested the robot often, e.g., after each filled bubble. Since the agent completely controlled the conversation through a predefined sequence of questions, several students found this restriction limiting and wanted to control how they teach more proactively. With the introduction of teaching buttons, the second deployment provides us with a much greater opportunity to understand, quantitatively, the teaching behaviour of the students.

For the second deployment, we performed a cluster analysis on the data from 40 participants (p10 was removed due to completing too few teaching conversations) to investigate whether participants’ behaviour, based on the amount of attention they devoted to different teaching activities, reflected different teaching styles. There are 8 features in total—one for each of the 7 buttons, plus the button used to view the agent’s notebook. From these 8 features, 3 were removed upon further inspection; specifically, three buttons were used by participants less than three times on average throughout the entire teaching session: Compare (, ), Correct (, ) and Tell Joke (, ). In contrast, the remaining five features—the Describe (, ), Explain (, ), Quiz (, ) and Fun Fact (, ) buttons, as well as the button for viewing the agent’s notebook—were used more frequently. For each of these 5 activities, we created a feature representing the relative attention that a participant paid to that activity, i.e., the number of times the participant clicked the corresponding button divided by the total number of buttons clicked over the entire session. Having 5 features is an appropriate amount according to Formann, who suggested using no more than k features for a sample size of at least 2^k when performing cluster analysis (i.e., 2^5 = 32 ≤ 40) (Formann, 1984). The correlations between these 5 features were checked to ensure that all features were distinct for clustering (Hair et al., 2009). Hierarchical clustering was used as the clustering method; it was chosen given its use in past studies of tutors’ behaviours on learning-by-teaching platforms (Kinnebrew et al., 2014). The average silhouette width (Rousseeuw, 1987; Hung et al., 2019) was used to assess the quality of the clustering; its value ranges from -1 to 1 and measures how close each point is to its own cluster relative to other clusters, with higher values indicating better-quality clusters. Linear models were used to investigate the effects of other factors (demographics, perception of the agent, etc.) on button click rates. Stepwise selection was performed to select the variables that best explain the variability of each rate, and the fit of the models was verified through visual inspection of QQ-plots (Wilk and Gnanadesikan, 1968).
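A sketch of this analysis pipeline is shown below, assuming a participants-by-features matrix of relative click rates; it uses scikit-learn’s agglomerative clustering and silhouette scores as stand-ins for the authors’ exact tooling, which is not specified, and the data are random placeholders.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score, silhouette_samples

# rates: one row per participant, one column per retained feature
# (Describe, Explain, Quiz, Fun Fact, notebook views), each value being
# that activity's share of the participant's total button clicks.
rng = np.random.default_rng(0)
rates = rng.dirichlet(np.ones(5), size=40)   # placeholder data for illustration

best = None
for k in range(2, 6):
    labels = AgglomerativeClustering(n_clusters=k, linkage="ward").fit_predict(rates)
    width = silhouette_score(rates, labels)   # average silhouette width for this k
    if best is None or width > best[1]:
        best = (k, width, labels)

k, width, labels = best
per_point = silhouette_samples(rates, labels)  # check that each participant fits its cluster
print(k, round(width, 3), bool((per_point > 0).all()))
```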

Figure 7. Comparison of clusters on features used for clustering (deployment 2).

Two clusters were found (), referred to as C1 and C2. These clusters achieve an average silhouette index of , with all participants having positive index values, indicating that all participants fit well within their assigned cluster. As shown in Figure 7, participants in C1 have higher percentages of using Explain () and Quiz (). They also check the agent’s notebook more frequently (). Participants in C2, on the other hand, have a much higher percentage use of Describe (), and a slightly higher percentage use of Fun Fact ().

Taking these results together, C1 participants can be interpreted as teachers who are more engaged than C2 participants. On average, 87% of button clicks by C2 are Describe and Fun Fact (which require regurgitation, rather than synthesis, of facts), compared to only 42% by C1. C2’s teaching strategy is best described by p31: “I mainly described things (rocks) to Gamma rather than explain”. On the flip side, C1 participants focus more on other aspects of teaching, such as providing explanations via Explain, probing agent performance and gaining crucial feedback via Quiz, and tracking the agent’s learning by checking the notebook. Fitting linear models on features beyond the final set of clustering features also revealed other significant differences. For instance, C2 participants () have, on average, significantly more notes than C1 participants (), with a large effect size (), as the Describe conversation is less time-consuming. This clustering is also in line with Felder and Silverman (Felder et al., 1988), who discussed two different teaching styles—Concrete, which involves teaching in a repetitive manner, as exemplified by C2 participants; and Global, which is associated with a more diverse and creative teaching repertoire, as exemplified by C1 participants.

We were also interested in the characteristics of the two clusters, and thus examined participants’ individual responses to the questionnaires. The AMS IM-to know scores show that C1 participants are more intrinsically motivated to know () than C2 participants (). A Mann-Whitney test shows a significant difference, , and the r-value indicates an effect size of , which is large as suggested in (Fritz et al., 2012). Also, the AMS EM-introjected regulation scores show that C1 participants are more likely to internalize their teaching behaviour () than C2 participants (); significance was found with a Mann-Whitney test, , alongside a large effect (). Cohen’s d also showed large effect sizes for both the IM-to know and EM-introjected regulation score differences ( and , respectively) (Cohen, 1992).
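For readers wishing to reproduce this kind of comparison, the snippet below sketches a Mann-Whitney test with an r effect size computed from the normal approximation (r = |Z| / sqrt(N), following Fritz et al. (2012)); the score arrays are placeholders, not study data, and the tie correction is omitted for simplicity.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mann_whitney_with_r(x, y):
    """Mann-Whitney U test plus an r effect size via the normal approximation."""
    u, p = mannwhitneyu(x, y, alternative="two-sided")
    n1, n2 = len(x), len(y)
    mu_u = n1 * n2 / 2
    sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)   # no tie correction, for simplicity
    z = (u - mu_u) / sigma_u
    r = abs(z) / np.sqrt(n1 + n2)
    return u, p, r

# Placeholder AMS "IM-to know" scores for the two clusters.
c1_scores = np.array([6.0, 5.5, 6.5, 5.0, 6.0])
c2_scores = np.array([4.0, 4.5, 3.5, 5.0, 4.0])
print(mann_whitney_with_r(c1_scores, c2_scores))
```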

These aspects have all been found to be beneficial not only to teaching, but also to learning. C1 scores higher on the IM-to know scale, which relates to the satisfaction of learning, exploring and understanding something new. By fitting Gaussian linear models, we also found that participants with higher IM-to know scores scored better in the post-study questionnaire on questions about rocks that do not have articles in the Curiosity Notebook. In other words, these participants scored significantly better on rocks that they had not previously seen when teaching the agent; they were better at transferring classification rules learned during the teaching session to new contexts (i.e., new rocks). This suggests that participants with higher IM-to know could end up developing better transfer skills than others due to their tendency to engage in teaching activities involving the integration of facts (as observed in C1).

In short, the two deployments demonstrate the Curiosity Notebook’s configurable features (detailed in Section 3) and, crucially, their ability to generate important insights. For instance, supporting configurable agent characteristics (CF#1) provided information about the effects of the tutee agent’s characteristics on students’ behaviour and on their perceptions of themselves and the agent; supporting group-based teaching (CF#4) confirmed the need for more studies (which can be carried out using the Curiosity Notebook) into group dynamics during learning by teaching; and quantifying teaching behaviours (CF#2) provided insights into less vs. more engaged participants’ teaching behaviour and learning outcomes. Moreover, supporting configurable learning material (CF#3) allowed each deployment to be carried out for a different age group. Lastly, supporting flexible agent embodiment (CF#5) allowed rapid pivoting from in-person studies (e.g., NAO robots in Deployment 1) to online studies (e.g., the text-only agent in Deployment 2), and vice versa.

6. Envisioning Version 3: A Discussion

In this section, we summarize Curiosity Notebook’s design and changes from the first to second version in terms of the configurable features (CF). We also discuss potential improvements for the next version.

The configurability of the platform improved substantially from the first to the second version. In the first version, the agent was programmed to be an enthusiastic and quick learner, and its implementation did not allow researchers to easily configure agent characteristics (CF#1): characteristics could not be swapped on the fly, nor could agents with different characteristics be deployed simultaneously (e.g., for between-subjects studies). Second, students were not given any autonomy over their teaching behaviour, such as which article to teach from and how to teach it (CF#2). Third, there was no easy way for researchers to modify articles or verify/correct the mapping between content (e.g., sentences) and concepts (e.g., categories, features) (CF#3). Fourth, although the platform allowed for easy and quick allocation of students into groups of any size, more personalized methods for encouraging effective collaboration within groups should be considered (CF#4). Lastly, following CF#5, the platform enabled flexible agent embodiments. Compared to version 1, version 2 allowed agent characteristics to be configured easily through JSON configuration files (CF#1). Second, it provided students with 7 buttons that initiated 7 distinct conversations (CF#2), and the second deployment demonstrated the flexibility this affords in teaching behaviour through the discovery of two distinct behaviours that affected learning. Third, new administrative interfaces allowed researchers to easily associate different agent characteristics with different experimental conditions (CF#1), configure the material (CF#3), and configure tutor grouping (CF#4).
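As an illustration of what such a configuration might contain, the snippet below loads a hypothetical agent-characteristics file; the keys and values are assumptions for exposition, not the platform’s actual JSON schema.

```python
import json

# Hypothetical agent-characteristics configuration, mirroring the kind of
# JSON file described above; every key and value is illustrative only.
config_text = """
{
  "agent_name": "Gamma",
  "condition": "enthusiastic",
  "utterances": {
    "greeting": ["Hi! I'm so excited to learn about rocks today!"],
    "acknowledgement": ["Thanks for that information!", "Good choice!"],
    "quiz_unknown": ["I haven't been taught about that rock yet."]
  },
  "note_taking": "always"
}
"""

config = json.loads(config_text)
print(config["condition"], len(config["utterances"]["acknowledgement"]))
```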

These improvements allowed us to conduct a number of studies (described elsewhere (Ceha et al., 2021; Ravari et al., 2021)) in which the agent’s characteristics were experimentally manipulated through quick configuration changes. They also allowed us to launch the second deployment easily, even though a complete overhaul of the material was required, and to collect detailed quantitative data on how students go about teaching. For CF#2, a potential improvement is to design the platform to collect more detailed data on tutor-agent interaction, so that complex interactions, such as those mentioned by Roscoe and Chi, can be appropriately captured and analyzed (Roscoe and Chi, 2007). In terms of CF#4, we envision the next version of the Curiosity Notebook including configurability of how teams of students communicate with each other and with the agent. For instance, a separate JSON file could list preferences for various aspects of group communication, such as how frequently the agent should explicitly encourage discussion between group members, how frequently it should refer to material taught by one group member while being taught by another, and in what ways (if any) group members can intervene while another member is teaching; a sketch of such a file appears below. Further improvements could also allow for the integration of machine learning models that influence either agent characteristics (CF#1) or group communication (CF#4). In terms of CF#5, the next version could provide more standardized application programming interfaces (APIs) to enable any output device to act as an agent, further reducing the effort of connecting various software (e.g., text-to-speech software) or hardware (e.g., robots) to the platform.
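A minimal sketch of what that group-communication configuration could look like is given below; every field is a hypothetical assumption, since this feature is only envisioned for a future version.

```python
import json

# Hypothetical group-communication settings for a future version (CF#4);
# all keys and values here are illustrative assumptions.
group_config = json.loads("""
{
  "group_size": 3,
  "encourage_discussion_every_n_turns": 4,
  "cross_reference_teammate_material": true,
  "interruption_policy": "raise_hand",
  "attention_balancing": "round_robin"
}
""")
print(group_config["interruption_policy"])
```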

One of the less satisfying aspects of the learning-by-teaching platform so far has been our limited ability to demonstrate learning gains in the student tutors themselves. A potential reason is that the learning objectives of the agent are too inconspicuous—the Quiz action, which demonstrates how well the agent can classify objects, needs to be invoked explicitly by the student tutor. Prior research has shown that participants learn better when observing their students use the knowledge they were taught (Okita and Schwartz, 2013; Okita et al., 2013). In our studies, we asked participants to teach the agent, but did not tell them why the agent needed to acquire that knowledge. A sense that the knowledge taught to the agent matters could become a source of extrinsic motivation for the student, further enhancing their learning. Thus, having the teachable agent openly demonstrate its skills to students while it is being taught can be both informational (i.e., showing where misunderstandings lie) and motivational. In version 3 of the Curiosity Notebook, we envision adding a “demonstration panel”, where the agent performs a task live (e.g., sorting objects into different categories, writing code, describing the steps to put together a recipe) while soliciting feedback and help from the students. Having the agent show its mistakes in an obvious fashion can also prompt specific forms of teaching without needing explicit teaching buttons; for example, students may be inclined to teach certain rules to the agent in order to fix its mistakes or to help it reach its goal.

There are a few limitations to our design approach. The first deployment spanned 4 weeks, allowing us to observe learning-by-teaching behaviours over time; however, the sample size was small, which limited the potential for more complex analysis of participants’ teaching behaviours. Additionally, since the designs of the two deployments were not consistent, no direct comparison could be made between the studies. Nonetheless, the two deployments and multiple design iterations allowed us to understand the intricacies of the learning-by-teaching process and what is required for a learning-by-teaching research platform to be maximally useful.

7. Conclusions

In this paper, we introduce the Curiosity Notebook, an interface that provides various teaching tools for participants to interact with a teachable agent. We iteratively designed the platform based on observations from two deployments—a field study with 12 elementary school children and an online study with 41 university students. We showed that by providing ways to configure features that were identified by prior research to be important, two learning-by-teaching studies deployed under very different contexts could be conducted on the same platform without much overhead. The Curiosity Notebook’s value as a research utility was demonstrated by the important insights gained from the studies.

Moving forward, there are plans to allow the research community to fully exploit the Curiosity Notebook’s utility as a research platform through open sourcing. This greatly reduces the overhead of conducting such research by removing the need to build platforms from the ground up. Moreover, having a platform that many can use for their research will allow for more direct comparisons of findings across learning-by-teaching studies, making it much easier for findings to be generalized or aggregated to a level suitable for direct application by those in the education sector.

8. Acknowledgments

We thank all participants for their contributions, and acknowledge the funding from the NSERC Discovery Grant RGPIN-2015-0454 and the University of Waterloo Interdisciplinary Trailblazer Fund for making this work possible.

References

  • M. Alaimi, E. Law, K. D. Pantasdo, P. Oudeyer, and H. Sauzeon (2020) Pedagogical agents for fostering question-asking skills in children. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’20, New York, NY, USA, pp. 1–10. External Links: Document Cited by: §4.2.1.
  • L. F. Annis (1983) The processes and effects of peer tutoring.. Human Learning: Journal of Practical Research & Applications 2 (1), pp. 39–47. Cited by: §3.
  • S. Aslan (2017) Learning by teaching: can it be utilized to develop inquiry skills?.. Journal of Education and Training Studies 5 (12), pp. 190–198. Cited by: §1.
  • R. K. Atkinson (2002) Optimizing learning from examples using animated pedagogical agents.. Journal of Educational Psychology 94 (2), pp. 416. Cited by: §3.
  • C. Bartneck, D. Kulić, E. Croft, and S. Zoghbi (2008) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics 1 (1), pp. 71–81. External Links: Document, Link Cited by: §4.1.
  • S. Basu, G. Biswas, and J. S. Kinnebrew (2016) Using multiple representations to simultaneously learn computational thinking and middle school science. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, California, US, pp. 3705–3711. Cited by: §1.
  • A. L. Baylor and Y. Kim (2005) Simulating instructional roles through pedagogical agents. International Journal of Artificial Intelligence in Education 15 (2), pp. 95–115. Cited by: §2.2.
  • C. A. Benware and E. L. Deci (1984) Quality of learning with an active versus passive motivational set. American Educational Research Journal 21 (4), pp. 755–765. External Links: Document, Link Cited by: §2.1.
  • G. Biswas, K. Leelawong, D. Schwartz, N. Vye, and T. T. A. G. at Vanderbilt (2005) Learning by Teaching: A New Agent Paradigm for Educational Software. Applied Artificial Intelligence 19 (3-4), pp. 363–392. External Links: Document Cited by: §1, §2.2, §3, §3.
  • C. Breazeal, N. Depalma, J. Orkin, S. Chernova, and M. Jung (2013) Crowdsourcing human-robot interaction: new methods and system evaluation in a public environment. Journal of Human-Robot Interaction 2, pp. 82–111. External Links: Document Cited by: §5.
  • J. Ceha, N. Chhibber, J. Goh, C. McDonald, P. Oudeyer, D. Kulić, and E. Law (2019) Expression of curiosity in social robots: design, perception, and effects on behaviour. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, New York, NY, USA, pp. 406:1–406:12. External Links: ISBN 978-1-4503-5970-2, Link, Document Cited by: §4.
  • J. Ceha, K. J. Lee, E. Nilsen, J. Goh, and E. Law (2021) Can a humorous conversational agent enhance learning experience and outcomes?. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–14. External Links: ISBN 9781450380966, Link Cited by: §4.1, §6.
  • S. Chandra, R. Paradeda, H. Yin, P. Dillenbourg, R. Prada, and A. Paiva (2017) Affect of robot’s competencies on children’s perception. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’17, Richland, SC, pp. 1490–1492. Cited by: §2.2.
  • C. C. Chase, D. B. Chin, M. A. Oppezzo, and D. L. Schwartz (2009) Teachable Agents and the Protégé Effect: Increasing the Effort Towards Learning. Journal of Science Education and Technology 18 (4), pp. 334–352. External Links: Document Cited by: §1, §2.1.
  • D. B. Chin, I. M. Dohmen, B. H. Cheng, M. A. Oppezzo, C. C. Chase, and D. L. Schwartz (2010) Preparing students for future learning with teachable agents. Educational Technology Research and Development 58 (6), pp. 649–669. Cited by: §1, §1.
  • D. B. Chin, I. M. Dohmen, and D. L. Schwartz (2013) Young children can learn scientific reasoning with teachable agents. IEEE Transactions on Learning Technologies 6 (3), pp. 248–257. Cited by: §1, §1.
  • J. Cohen (1992) A power primer.. Psychological bulletin 112 (1), pp. 155. Cited by: §5.
  • P. A. Cohen, J. A. Kulik, and C. C. Kulik (1982) Educational outcomes of tutoring: a meta-analysis of findings. American educational research journal 19 (2), pp. 237–248. Cited by: §1.
  • E. B. Coleman, A. L. Brown, and I. D. Rivkin (1997) The effect of instructional explanations on learning from scientific texts. Journal of the Learning Sciences 6 (4), pp. 347–365. External Links: Document, Link Cited by: §3.
  • P. M.A. Desmet, M. H. Vastenburg, and N. Romero (2016) Mood measurement with pick-a-mood: review of current methods and design of a pictorial self-report scale. J. of Design Research 14 (3), pp. 241. External Links: Document, Link Cited by: §4.1.
  • P. Dillenbourg (1999) What do you mean by collaborative learning?. Oxford: Elsevier. Cited by: §2.1.
  • M. S. Donovan and J. D. Bransford (2005) How students learn: history, mathematics, and science in the classroom. The National Academies Press, Washington, DC. External Links: ISBN 978-0-309-07433-9, Document Cited by: §4.2.1.
  • D. Duran (2016) Learning-by-teaching. evidence and implications as a pedagogical mechanism. Innovations in Education and Teaching International 54 (5), pp. 476–484. External Links: Document, Link Cited by: §1.
  • M. Emara, R. Rajendran, G. Biswas, M. Okasha, and A. A. Elbanna (2018) Do students’ learning behaviors differ when they collaborate in open-ended learning environments?. Proc. ACM Hum.-Comput. Interact. 2 (CSCW), pp. 49:1–49:19. External Links: ISSN 2573-0142 Cited by: §1, §3, §3.
  • R. M. Felder, L. K. Silverman, et al. (1988) Learning and teaching styles in engineering education. Engineering education 78 (7), pp. 674–681. Cited by: §5.
  • L. Fiorella and R. E. Mayer (2013) The relative benefits of learning by teaching and teaching expectancy. Contemporary Educational Psychology 38 (4), pp. 281–288. External Links: Document, Link Cited by: §2.1.
  • L. Fiorella and R. E. Mayer (2014) Role of expectations and explanations in learning by teaching. Contemporary Educational Psychology 39 (2), pp. 75–85. External Links: Document, Link Cited by: §2.1.
  • L. Fiorella and R. E. Mayer (2015) Eight ways to promote generative learning. Educational Psychology Review 28 (4), pp. 717–741. External Links: Document, Link Cited by: §2.1.
  • J. H. Flavell and H. M. Wellman (1975) Metamemory.. ERIC. Cited by: §3.
  • M. Flores and D. Duran (2013) Effects of peer tutoring on reading self-concept. IJEP – International Journal of Educational Psychology 2 (3), pp. 297–324. External Links: ISSN 2014-3591, Link, Document Cited by: §2.1.
  • A. K. Formann (1984) Die latent-class-analyse: einführung in theorie und anwendung. Beltz, Weinheim, Germany. Cited by: §5.
  • C. O. Fritz, P. E. Morris, and J. J. Richler (2012) Effect size estimates: current use, calculations, and interpretation. Journal of Experimental Psychology: General 141 (1), pp. 2. Cited by: §5.
  • L. Fryer and R. Carpenter (2006) Bots as language learning tools. Language Learning & Technology 10 (3), pp. 8–14. Cited by: §2.2.
  • J. J. Gallagher and M. J. Aschner (1963) A preliminary report on analyses of classroom interaction. Merrill-Palmer Quarterly of Behavior and Development 9 (3), pp. 183–194. Cited by: §4.2.1, §4.2.1.
  • K. Garkal, A. Shete, et al. (2018) Learning by teaching: role of ‘peer-assisted learning’ in medical education. Journal of Contemporary Medical Education 9 (1), pp. 17–20. Cited by: §1.
  • D. Griol and Z. Callejas (2013) An architecture to develop multimodal educative applications with chatbots. International Journal of Advanced Robotic Systems 10 (3), pp. 175. External Links: Document, Link Cited by: §2.2.
  • J. Grzega and M. Schöner (2008) The didactic model ldl (lernen durch lehren) as a way of preparing students for communication in a knowledge society. Journal of Education for Teaching 34 (3), pp. 167–175. Cited by: §1.
  • J. F. Hair, W. C. Black, B. J. Babin, and R. E. Anderson (2009) Multivariate data analysis. Pearson Education Limited, Harlow, Essex. External Links: ISBN 978-1-292-02190-4 Cited by: §5.
  • N. T. Heffernan and E. A. Croteau (2004) Web-based evaluations showing differential learning for tutorial strategies employed by the ms. lindquist tutor. In International Conference on Intelligent Tutoring Systems, Berlin, Heidelberg, pp. 491–500. Cited by: §2.2.
  • K. Hone, F. Akhtar, and M. Saffu (2003) Affective agents to reduce user frustration: the role of agent embodiment. In Proceedings of Human-Computer Interaction (HCI2003), New York, NY, USA. Cited by: §3.
  • D. Hood, S. Lemaignan, and P. Dillenbourg (2015) When children teach a robot to write: an autonomous teachable humanoid which uses simulated handwriting. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI ’15, New York, NY, USA, pp. 83–90. External Links: ISBN 9781450328838, Link, Document Cited by: §2.2, §5.
  • V. Hoogerheide, S. M.M. Loyens, and T. van Gog (2014) Effects of creating video-based modeling examples on learning and transfer. Learning and Instruction 33, pp. 108–119. External Links: Document, Link Cited by: §2.1.
  • P. D. Hung, N. T. T. Lien, and N. D. Ngoc (2019) Customer segmentation using hierarchical agglomerative clustering. In Proceedings of the 2019 2nd International Conference on Information Science and Systems, ICISS 2019, New York, NY, USA, pp. 33–37. External Links: ISBN 9781450361033, Link, Document Cited by: §5.
  • T. J. Keeney, S. R. Cannizzo, and J. H. Flavell (1967) Spontaneous and induced verbal rehearsal in a recall task. Child Development 38, pp. 953–966. Cited by: §3.
  • J. W. Kinch (1963) A formalized theory of the self-concept. American Journal of Sociology 68 (4), pp. 481–486. Cited by: §2.1.
  • J. S. Kinnebrew, J. R. Segedy, and G. Biswas (2014) Analyzing the temporal evolution of students’ behaviors in open-ended learning environments. Metacognition and learning 9 (2), pp. 187–215. Cited by: §5.
  • D. Kuhn and M. Pease (2006) Do children and adults learn differently?. Journal of cognition and development 7 (3), pp. 279–293. Cited by: §3.
  • D. Kuhn (2000) Metacognitive development. Current directions in psychological science 9 (5), pp. 178–181. Cited by: §3.
  • M. Laal and S. M. Ghodsi (2012) Benefits of collaborative learning. Procedia-social and behavioral sciences 31, pp. 486–490. Cited by: §3.
  • A. Lachner, I. Backfisch, V. Hoogerheide, T. van Gog, and A. Renkl (2020) Timing matters! explaining between study phases enhances students’ learning.. Journal of Educational Psychology 112 (4), pp. 841–853. External Links: Document, Link Cited by: §2.1, §4.2.1.
  • E. Law, P. B. Ravari, N. Chhibber, D. Kulic, S. Lin, K. D. Pantasdo, J. Ceha, S. Suh, and N. Dillen (2020) Curiosity notebook: a platform for learning by teaching conversational agents. In Extended Abstracts of the SIGCHI Conference on Human Factors in Computing Systems (Late Breaking Work), CHI ’20, New York, NY, USA, pp. 1–8. Cited by: §4.1.
  • K. Leelawong and G. Biswas (2008) Designing learning by teaching agents: the betty’s brain system. International Journal of Artificial Intelligence in Education 18 (3), pp. 181–208. Cited by: §1, §3.
  • K. Leelawong, J. Davis, N. Vye, G. Biswas, D. Schwartz, K. Belynne, T. Katzlberger, and J. Bransford (2002) The effects of feedback in supporting learning by teaching in a teachable agent environment. In The Fifth International Conference of the Learning Sciences, Boulder, CO, USA, pp. 245–252. Cited by: §1, §2.2, §2.2, §3, §4.2.1.
  • L. Legault (2017) Self-determination theory. In Encyclopedia of Personality and Individual Differences, pp. 1–9. External Links: Document, Link Cited by: §2.1.
  • R. B. MacDonald (1991) An analysis of verbal interaction in college tutorials. Journal of Developmental Education 15 (1), pp. 2. Cited by: §3.
  • N. Matsuda, W. Weng, and N. Wall (2020) The effect of metacognitive scaffolding for learning by teaching a teachable agent. International Journal of Artificial Intelligence in Education 30, pp. 1–37. Cited by: §1, §1.
  • N. Matsuda, E. Yarzebinski, V. Keiser, R. Raizada, W. W. Cohen, G. J. Stylianides, and K. R. Koedinger (2013) Cognitive Anatomy of Tutor Learning: Lessons Learned with SimStudent. Journal of Educational Psychology 105 (4), pp. 1152–1163. External Links: Document Cited by: §1, §1, §1, §2.2, §3.
  • J. E. Miller et al. (1994) Group dynamics: understanding group success and failure in collaborative learning.. New directions for teaching and learning 59, pp. 33–44. Cited by: §3.
  • R. Moreno, R. E. Mayer, H. A. Spires, and J. C. Lester (2001) The case for social agency in computer-based teaching: do students learn more deeply when they interact with animated pedagogical agents?. Cognition and instruction 19 (2), pp. 177–213. Cited by: §3.
  • A. Munshi, R. Rajendran, A. Moore, G. Biswas, and J. Ocumpaugh (2018) Studying the interactions between components of self regulated learning in open ended learning environments. In Proceedings of International Conference of the Learning Sciences, ICLS, IN, USA, pp. 1691–1692. Cited by: §1, §3, §3.
  • NASA (2012) Rock outcrops on mars and earth. External Links: Link Cited by: Figure 6.
  • D. Nichols (1994) Issues in designing learning by teaching systems. In Proceedings of the East-West International Conference on Computer Technologies in Education, Vol. 1, Moscow, Russia. Cited by: §3.
  • A. Ogan, S. Finkelstein, E. Mayfield, C. D’Adamo, N. Matsuda, and J. Cassell (2012) ”Oh dear stacy!” social interaction, elaboration, and learning with teachable agents. In Proceedings of the SIGCHI conference on human factors in computing systems, New York, NY, USA, pp. 39–48. Cited by: §1.
  • S. Y. Okita and D. L. Schwartz (2013) Learning by teaching human pupils and teachable agents: the importance of recursive feedback. Journal of the Learning Sciences 22 (3), pp. 375–412. External Links: Document, Link Cited by: §1, §1, §2.2, §4.2.1, §6.
  • S. Y. Okita, S. Turkay, M. Kim, and Y. Murai (2013) Learning by teaching with virtual peers and the effects of technological design choices on learning. Computers & Education 63, pp. 176–196. External Links: Document, Link Cited by: §1, §2.2, §4.2.1, §6.
  • S. Park and C. Kim (2015) Boosting learning-by-teaching in virtual tutoring. Computers & Education 82, pp. 129–140. Cited by: §3.
  • J. P. Powell and L. W. Andresen (1985) Humour and teaching in higher education. Studies in Higher Education 10 (1), pp. 79–90. Cited by: §4.2.1.
  • P. B. Ravari, K. J. Lee, E. Law, and D. Kulic (2021) Effects of an adaptive robot encouraging teamwork on students’ learning. In 2021 30th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA. Cited by: §4.1, §6.
  • B. Rienties, D. Tempelaar, P. Van den Bossche, W. Gijselaers, and M. Segers (2009) The role of academic motivation in computer-supported collaborative learning. Computers in Human Behavior 25 (6), pp. 1195–1206. Cited by: §3.
  • D. R. Robinson, J. W. Schofield, and K. L. Steers-Wentzell (2005) Peer and cross-age tutoring in math: outcomes and their design implications. Educational Psychology Review 17 (4), pp. 327–362. Cited by: §1, §2.1, §2.1.
  • R. D. Roscoe and M. T. Chi (2004) The influence of the tutee in learning by peer tutoring. In Proceedings of the Annual Meeting of the Cognitive Science Society, Vol. 26, Austin, TX, pp. 1179–1184. Cited by: §3.
  • R. D. Roscoe and M. T.H. Chi (2007) Understanding Tutor Learning: Knowledge-building and Knowledge-telling in Peer Tutors’ Explanations and Questions. Review of Educational Research 77 (4), pp. 534–574. External Links: ISSN 00346543, Document Cited by: §1, §1, §1, §2.1, §3, §3, §3, §6.
  • R. D. Roscoe (2014) Self-monitoring and knowledge-building in learning by teaching. Instructional Science 42 (3), pp. 327–351. Cited by: §3, §3, §4.2.1.
  • P. J. Rousseeuw (1987) Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of computational and applied mathematics 20, pp. 53–65. Cited by: §5.
  • R. M. Ryan (1982) Control and information in the intrapersonal sphere: an extension of cognitive evaluation theory.. Journal of personality and social psychology 43 (3), pp. 450. Cited by: §4.1.
  • J. R. Segedy, J. S. Kinnebrew, and G. Biswas (2012) Supporting student learning using conversational agents in a teachable agent environment. In 10th International Conference of the Learning Sciences: The Future of Learning, ICLS 2012 - Proceedings, IN, USA. Cited by: §1.
  • N. Shah, C. Lewis, and R. Caires (2014) Analyzing equity in collaborative learning situations: a comparative case study in elementary computer science. In Learning and Becoming in Practice: The International Conference of the Learning Sciences, Cited by: §3.
  • A. Shamekhi, Q. V. Liao, D. Wang, R. K. E. Bellamy, and T. Erickson (2018) Face value? exploring the effects of embodiment for a group facilitation agent. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1–13. External Links: ISBN 9781450356206, Link Cited by: §3.
  • N. A. Sprinthall and J. R. Scott (1989) Promoting psychological development, math achievement, and success attribution of female students through deliberate psychological education.. Journal of counseling psychology 36 (4), pp. 440. Cited by: §2.1.
  • R. Srivastava and M. Rashid (2018) Who is at edge–tutors or tutees? academic, social and emotional elevation through peer tutoring. In Proceedings of 1st MEC TESOL Conference, Texas, USA, pp. 64–77. Cited by: §1.
  • J. Tan, G. Biswas, and D. L. Schwartz (2006) Feedback for Metacognitive Support in Learning by Teaching Environments. Proceedings of the Annual Meeting of the Cognitive Science Society 28, pp. 828–833. Cited by: §1.
  • F. Tanaka, K. Isshiki, F. Takahashi, M. Uekusa, R. Sei, and K. Hayashi (2015) Pepper Learns Together with Children: Development of an Educational Application. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Humanoids ’15, Whitestone, NY, USA, pp. 270–275. External Links: Document Cited by: §2.2.
  • F. Tanaka and S. Matsuzoe (2012) Children Teach a Care-Receiving Robot to Promote Their Learning: Field Experiments in a Classroom for Vocabulary Learning. Journal of Human-Robot Interaction 1 (1), pp. 78–95. External Links: Document Cited by: §2.2.
  • S. Thellman, A. Silvervarg, A. Gulz, and T. Ziemke (2016) Physical vs. virtual agent embodiment and effects on social interaction. In Intelligent Virtual Agents, pp. 412–415. External Links: Document, Link Cited by: §3.
  • R. J. Vallerand, L. G. Pelletier, M. R. Blais, N. M. Briere, C. Senecal, and E. F. Vallieres (1992) The academic motivation scale: a measure of intrinsic, extrinsic, and amotivation in education. Educational and psychological measurement 52 (4), pp. 1003–1017. Cited by: §4.1.
  • S. Vossen, J. Ham, and C. Midden (2009) Social influence of a persuasive agent: the role of agent embodiment and evaluative feedback. In Proceedings of the 4th International Conference on Persuasive Technology, Persuasive ’09, New York, NY, USA. External Links: ISBN 9781605583761, Link, Document Cited by: §3.
  • J. Wagster, J. Tan, Y. Wu, G. Biwas, and D. Schwartz (2007) Do learning by teaching environments with metacognitive support help students develop better learning behaviors?. In Proceedings of the 29th Annual Meeting of the Cognitive Science Society, Vol. 29, Austin, TX, pp. 695–700. Cited by: §3, §3.
  • M. B. Wilk and R. Gnanadesikan (1968) Probability plotting methods for the analysis of data. Biometrika 55 (1), pp. 1–17. External Links: Document, Link Cited by: §5.
  • E. Yadollahi, W. Johal, A. Paiva, and P. Dillenbourg (2018) When Deictic Gestures in a Robot Can Harm Child-robot Collaboration. In Proceedings of the ACM Conference on Interaction Design and Children, IDC ’18, New York, NY, USA, pp. 195–206. External Links: Document, ISBN 9781450351522 Cited by: §2.2.
  • E. Ziegler and E. Stern (2014) Delayed benefits of learning elementary algebraic transformations through contrasted comparisons. Learning and Instruction 33, pp. 131–146. Cited by: §4.2.1.