Subgoals, Problem Solving Phases, and Sources of Knowledge: A Complex Mangle

01/05/2019
by Kevin Lin, et al., UC Berkeley

Educational researchers have increasingly drawn attention to how students develop computational thinking (CT) skills, including in science, math, and literacy contexts. A key component of CT is the process of abstraction, a particularly challenging concept for novice programmers, but one vital to problem solving. We propose a framework based on situated cognition that can be used to document how instructors and students communicate about abstractions during the problem solving process. We develop this framework in a multimodal interaction analysis of a 32-minute-long excerpt of a middle school student working in the PixelBots JavaScript programming environment at a two-week summer programming workshop taught by undergraduate CS majors. Through a microgenetic analysis of the process of teaching and learning about abstraction in this excerpt, we document the extemporaneous prioritization of subgoals and the back-and-forth coordination of problem solving phases. In our case study, we identify that (a) problem solving phases are nested, with several instances of context-switching within a single phase; (b) the introduction of new ideas and information creates bridges, or opportunities to move between different problem solving phases; (c) planning to solve a problem is a non-linear process; and (d) pedagogical moves such as modeling and prompting highlight situated resources and advance problem solving. Future research should address how to help students structure subgoals and reflect on connections between problem solving phases, and how to help instructors reflect on their routes to supporting students in the problem solving process.


1. Problem and Motivation

Educational researchers have increasingly drawn attention to how students develop computational thinking (CT) skills (Lye and Koh, 2014; Weintrop et al., 2015), including in science, math, and literacy contexts (Pérez, 2018; Basu et al., 2017; Jacob and Warschauer, 2018). A key component of CT is the process of abstraction. Indeed, abstraction is recognized as a threshold concept (Rountree et al., 2013): a generative idea that once learned provides “a qualitatively different view of subject matter within a discipline.” The process of creating abstractions interleaves multiple problem solving phases: planning, building, and monitoring. In addition, creating abstractions requires attending to subgoals. Finally, pathways to learning about abstractions are structured by complex tools and driven by a variety of sources of knowledge, including perception, testimony, reasoning, and memory (Chinn et al., 2011). Our purpose in this paper is to stitch these elements into a framework that can be used to document how instructors and students communicate about computational thinking.

2. Background and Related Work

Our framework integrates constructs from the learning sciences, computer science education research, and human-computer interaction. Our point of departure is situated cognition, a theoretical framework that inextricably connects learning to interactions between tools, cognition, bodies, and communities of practice (Jordan and Henderson, 1995). Increasingly, other researchers have studied CT teaching and learning from a situated perspective (Lewis, 2012; Lewis and Shah, 2015; Flood et al., 2018a, b). Specifically, our framework connects with prior work on how problem solving involves a balancing act between exploration, building, and monitoring (Basu et al., 2017; Rountree et al., 2013). In addition, we recognize that developing abstractions requires a gradual process of specifying and prioritizing goals and subgoals, including refactoring a previous route by developing new subgoals (Basu et al., 2017). The epistemic actions that advance problem solving are stretched across tools in the environment (e.g., editor, syntax checker, stepper tool, code reference sheet) and multiple sources of knowledge: perception, memory, reasoning, and testimony. Each interaction between a source of knowledge and a tool requires the learner to cross a gulf of execution, where they try to figure out how to express their idea to a tool in the environment, and to attend to the outcome by crossing a gulf of evaluation, where they try to figure out the tool’s response (Norman, 2002). Recognizing that causes of failure in programming emerge from complex connections between proximal and distal events (Ko and Myers, 2005), we assembled the framework above to comprehensively integrate multiple constructs and offer a new lens on the pathways students take to learn foundational computer programming concepts.

3. Approach and Uniqueness

There is surprisingly little microgenetic, multimodal qualitative research on how young students program in naturalistic learning environments. For this paper, we selected a 32-minute-long sample of a middle school student working in the PixelBots JavaScript programming environment at a two-week summer programming workshop taught by undergraduate CS majors. Using rich observational data, including several camera angles and screen recordings, we transcribed the student’s activities, repeatedly watched the video, connected our observations to the constructs noted above, and gradually developed our framework. This methodology blends interaction analysis with the constant comparative method (Jordan and Henderson, 1995; Glaser, 1965). In this case study, the student solves the problem of programming the PixelBot to paint three jagged lines, horizontally spaced 3 tiles apart. Our purpose is to document the details of one student’s problem solving process, rather than to make generalized statements about learning processes. We hope that this research will pave the way for more rigorous experimental studies.

4. Results and Contributions

We identify the student’s key challenge of coordinating subgoals and problem solving phases, and then identify how the student navigates this space by coordinating resources in the environment.

4.1. Coordinating Subgoals and Problem Solving Phases

The focal student’s programming process involves completing three subgoals—writing code to paint a jagged line, writing code that moves to the next line, and orchestrating this code in two functions—each of which entails navigating three phases of problem solving: planning, building, and monitoring. The student prioritizes subgoals and problem solving phases in response to syntax and logic bugs, instructor prompting, and focal sources of knowledge.
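
To make this decomposition concrete, the sketch below shows one shape the finished program could take. It is an illustrative assumption, not the student’s actual code: the command names (paint, moveForward, turnLeft, turnRight) and the repeat(n, body) construct stand in for whatever the PixelBots API actually provides.

    // A minimal sketch of the three subgoals; every command name and
    // the repeat(n, body) construct are assumptions, not the student's
    // actual PixelBots code, and the path details are illustrative.

    // Subgoal 1: paint one jagged line.
    function jaggedLine() {
      repeat(4, function() {
        paint('blue');   // paint the current tile
        moveForward();
        turnLeft();
        moveForward();
        paint('blue');
        turnRight();
      });
    }

    // Subgoal 2: move to the start of the next line, 3 tiles over.
    function moveToNextLine() {
      turnRight();
      repeat(3, function() { moveForward(); });
      turnLeft();
    }

    // Subgoal 3: orchestrate the two functions across all three lines.
    repeat(3, function() {
      jaggedLine();
      moveToNextLine();
    });

Each function isolates one subgoal, which is what allows the student to plan, build, and monitor the subgoals somewhat independently.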

The student begins by planning the PixelBot’s trajectory through the jagged line, building it using only API movement instructions. She silently monitors without running the code, and then deletes it, re-building with painting actions and a loop. The student then attempts to monitor the resulting PixelBot action by running the code, but quickly stops the program before it advances enough to show the corresponding PixelBot action. The student continues building, but a drag-and-drop attempt leads to an error message in the text editor. Her subsequent monitoring and re-building process, which generates more syntax bugs, involves trying to interpret error messages and create symmetry between brackets, parentheses, and quotes.
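
The symmetry repairs target bugs of the following kind; this fragment is hypothetical, not the student’s actual code, but it shows how a single stray edit can leave quotes and braces unbalanced.

    // Hypothetical fragment with two of the syntax bugs described
    // above; restoring symmetry means closing the quote after 'blue'
    // and adding the }); that closes the repeat call.
    function jaggedLine() {
      repeat(4, function() {
        paint('blue);    // unclosed quote
        moveForward();
                         // missing });
    }

The error message alone gives little guidance here; checking that every quote, parenthesis, and brace has a partner localizes both bugs.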

An instructor walks over and introduces a different monitoring process: comparing the broken syntax in the editor with correct syntax on a handout. Over six minutes, the student foregrounds the subgoal of writing the jagged line code through three additional subgoals: writing the PixelBot trajectory, adding color, and adding a loop. New information from the environment motivates the selection of subgoals. When errors are identified, the student takes immediate action rather than continuing with her current task. The student transitions rapidly between planning, building, and monitoring. In addition, within a single problem solving phase, we see a range of sources of knowledge deployed. For example, the student’s monitoring approach of perceiving error messages and reasoning about symmetry contrasts with the instructor’s monitoring approach of perceiving and reasoning about connections between the handout and editor. After resolving the syntax bugs, the student identifies the subgoal of developing the jagged line function as complete despite having painted several squares the wrong color. Not until this logic bug is flagged in the correctness check at the end of the session does the student re-foreground the subgoal of monitoring the jagged line function.

Throughout this process, the participants extemporaneously structure how to navigate the problem space, selectively managing subgoals. Their pathway is neither linear nor premeditated: planning arises throughout, contingent on syntax bugs that arise, logic bugs not yet noticed, moments of refactoring, and moments of recalibration after subgoals are judged complete.

4.2. Coordinating Resources

How does the work of pursuing subgoals across problem solving phases unfold? This section describes a lower-level, moment-to-moment coordination of media across people and tasks.

The student utilizes multiple sources of knowledge to coordinate resources in the environment. For example, while monitoring the jagged line code, the instructor foregrounds a process for syntax verification by comparing the code token-by-token with an example from a handout: “Look at this [code reference sheet] and then compare every sort of word like ‘function’, ‘function’, […]” The student looks back-and-forth between the screen and the handout, perceiving and reasoning about similarities between the two. These sources of knowledge thus bridge two resources in the environment: the handout and editor. This process is repeated as the student debugs a missing paint instruction while working on moving to the next line. As the student steps line-by-line through the code, her gaze moves right and left between the code editor and the corresponding PixelBot action. Attention to different parts is variable: she steps quickly through parts she has already thoroughly vetted, and slows down, even stopping, when she arrives at code that corresponds with the dispreferred PixelBot action. The student again perceives and reasons about two resources (editor and PixelBot actions).
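
The token-by-token strategy can be pictured as an alignment of the two resources; the handout line below is an assumed example, not the actual reference sheet.

    // Token-by-token syntax verification (illustrative only):
    //
    //   handout:  function  example     ()  {
    //   editor:   function  jaggedLine  ()       <-- '{' missing
    //
    // The identifier is expected to differ; only the structural
    // tokens ('function', parentheses, braces) must match, so the
    // missing brace surfaces as the lone unpaired token.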

While monitoring the jagged line code using the instructor’s token-by-token syntax verification strategy, the student runs into a repeat loop that is not documented on the handout. The instructor provides expert testimony and suggests clicking the button that inserts the repeat template. The student coordinates the two snippets of repeat code in the editor, one serving as the template and one serving as the target to be fixed, applying the same token-by-token comparison she used earlier with the code reference sheet. Later, while monitoring the code to move to the next line, the student draws on her memory and uses the same strategy of inserting the repeat template to check the syntax of the program. The student calls upon this strategy in two different instances to help cross the gulf of evaluation and propose a fix for each syntax bug.
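
Because the inserted template is known-correct, it plays the same role as the handout. Assuming the button inserts a skeleton like repeat(4, function() { ... }); — the exact template text is our assumption — the student’s comparison looks roughly like this, with a hypothetical buggy target:

    // Known-correct template inserted by the editor button (the
    // exact template text is an assumption):
    repeat(4, function() {
      // ...
    });

    // Hypothetical buggy target: aligning it against the template
    // reveals the missing function() { wrapper and the missing });
    repeat(4,
      paint('blue');
      moveForward();
    );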

The coordination of resources in the learning environment helps the student cross the gulfs of evaluation and execution. Before the instructor foregrounds the syntax verification strategy, the student’s method for resolving each syntax bug had been to read the error message, interpret the problem, and propose a fix. This process presents a wide gulf of evaluation, as the student needs to operationalize the error message by converting a description of the problem into an applicable fix based on prior knowledge of program syntax. The instructor’s method for syntax verification relies on the same sources of knowledge, perception and reasoning, but uses the handout to present a more accessible affordance for crossing the gulf of evaluation.

4.3. Conclusion

Through a microgenetic analysis of the process of teaching and learning about abstraction, we document the extemporaneous prioritization of subgoals and the back-and-forth coordination of problem solving phases. Participants navigate these tasks using multiple resources and sources of knowledge to cross gulfs of execution and evaluation. Future research should address how to help students structure subgoals, reflect on problem solving techniques, and recruit productive sources of knowledge as an ensemble process.

Acknowledgements.
This material is based upon work supported by the National Science Foundation (https://doi.org/10.13039/100000001). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

References

  • Basu et al. (2017) Satabdi Basu, Gautam Biswas, and John S. Kinnebrew. 2017. Learner modeling for adaptive scaffolding in a Computational Thinking-based science learning environment. User Modeling and User-Adapted Interaction 27, 1 (jan 2017), 5–53. https://doi.org/10.1007/s11257-017-9187-0
  • Chinn et al. (2011) Clark A. Chinn, Luke A. Buckland, and Ala Samarapungavan. 2011. Expanding the Dimensions of Epistemic Cognition: Arguments From Philosophy and Psychology. Educational Psychologist 46, 3 (jul 2011), 141–167. https://doi.org/10.1080/00461520.2011.587722
  • Flood et al. (2018a) Virginia J. Flood, David DeLiema, and Dor Abrahamson. 2018a. Bringing static code to life: The instructional work of animating computer programs with the body. In Rethinking learning in the digital age: Making the Learning Sciences count, Proceedings of the 13th International Conference of the Learning Sciences, J. Kay and R. Luckin (Eds.), Vol. 2. London: International Society of the Learning Sciences, 1085–1088.
  • Flood et al. (2018b) Virginia J. Flood, David DeLiema, Benedikt Harrer, and Dor Abrahamson. 2018b. Enskilment in the digital age: The interactional work of learning to debug. In Rethinking learning in the digital age: Making the Learning Sciences count, Proceedings of the 13th International Conference of the Learning Sciences, J. Kay and R. Luckin (Eds.), Vol. 3. London: International Society of the Learning Sciences, 1405–1406.
  • Glaser (1965) Barney G. Glaser. 1965. The Constant Comparative Method of Qualitative Analysis. Social Problems 12, 4 (apr 1965), 436–445. https://doi.org/10.2307/798843
  • Jacob and Warschauer (2018) Sharin Rawhiya Jacob and Mark Warschauer. 2018. Computational Thinking and Literacy. Journal of Computer Science Integration 1, 1 (aug 2018). https://doi.org/10.26716/jcsi.2018.01.1.1
  • Jordan and Henderson (1995) Brigitte Jordan and Austin Henderson. 1995. Interaction Analysis: Foundations and Practice. The Journal of the Learning Sciences 4, 1 (1995), 39–103. https://doi.org/10.2307/1466849
  • Ko and Myers (2005) Andrew J. Ko and Brad A. Myers. 2005. A framework and methodology for studying the causes of software errors in programming systems. Journal of Visual Languages & Computing 16, 1-2 (feb 2005), 41–84. https://doi.org/10.1016/j.jvlc.2004.08.003
  • Lewis (2012) Colleen M. Lewis. 2012. The importance of students' attention to program state. In Proceedings of the ninth annual international conference on International computing education research - ICER '12. ACM Press. https://doi.org/10.1145/2361276.2361301
  • Lewis and Shah (2015) Colleen M. Lewis and Niral Shah. 2015. How Equity and Inequity Can Emerge in Pair Programming. In Proceedings of the eleventh annual International Conference on International Computing Education Research - ICER '15. ACM Press. https://doi.org/10.1145/2787622.2787716
  • Lye and Koh (2014) Sze Yee Lye and Joyce Hwee Ling Koh. 2014. Review on teaching and learning of computational thinking through programming: What is next for K-12? Computers in Human Behavior 41 (dec 2014), 51–61. https://doi.org/10.1016/j.chb.2014.09.012
  • Norman (2002) Donald A. Norman. 2002. The Design of Everyday Things. Basic Books, Inc., New York, NY, USA.
  • Pérez (2018) Arnulfo Pérez. 2018. A Framework for Computational Thinking Dispositions in Mathematics Education. Journal for Research in Mathematics Education 49, 4 (2018), 424. https://doi.org/10.5951/jresematheduc.49.4.0424
  • Rountree et al. (2013) Janet Rountree, Anthony Robins, and Nathan Rountree. 2013. Elaborating on threshold concepts. Computer Science Education 23, 3 (sep 2013), 265–289. https://doi.org/10.1080/08993408.2013.834748
  • Weintrop et al. (2015) David Weintrop, Elham Beheshti, Michael Horn, Kai Orton, Kemi Jona, Laura Trouille, and Uri Wilensky. 2015. Defining Computational Thinking for Mathematics and Science Classrooms. Journal of Science Education and Technology 25, 1 (oct 2015), 127–147. https://doi.org/10.1007/s10956-015-9581-5