"Ah! I see" - Facilitating Process Reflection in Gameplay through a Novel Spatio-Temporal Visualization System

Educational games have emerged as potent tools for helping students understand complex concepts and are now ubiquitous in global classrooms, amassing vast amounts of gameplay data. However, there is a notable gap in research concerning the effective visualization of this data to serve two key functions: (a) guiding students in reflecting upon their game-based learning and (b) aiding them in analyzing peer strategies. In this paper, we engage educators, students, and researchers as essential stakeholders. Taking a Design-Based Research (DBR) approach, we incorporate UX design methods to develop an innovative visualization system that helps players learn through gaining insights from their own and peers' gameplay and strategies.


INTRODUCTION
The phrase "Ah, I see" frequently captures the moment when students grasp a concept being presented to them. However, delving deeper into what facilitates this understanding is crucial, as it holds potential insights to enhance learning for a broader student population. Open Learner Models (OLMs) target this precise query by employing visualizations to reveal a student's learning trajectory and patterns to themselves, thus enabling them to tackle diverse problems [14,39]. In this paper, we aim to leverage the work done within the OLM community to propose a novel visualization system, which we will refer to as OPM (Open Player Modeling), a system that displays open process models, representing strategies of players in their attempts to solve puzzles in an educational serious game. Unlike OLMs, which primarily focus on representing the academic progress and understanding of learners in classrooms, OPMs are designed to convey complex gameplay data on how students interact with content, such as in educational games. A core distinction between OPMs and OLMs lies in the ability of OPMs to not only present a replay of how a player behaved (e.g., how she solved a specific problem), but also to bring out opportunities for reflection and learning embedded within the player's moves and problem-solving attempts. This distinction is crucial as it shifts the focus from traditional learning metrics to a more dynamic, strategy-oriented approach as employed in game-based learning environments.
Beyond the conventional avenues of classroom instruction and textbooks, videogames have emerged as a potent medium for imparting learning across various concepts and subjects. Games today have become instrumental in elucidating and transferring concepts that are often deemed challenging [79]. Notwithstanding the significant role of videogames in education, Zhu and Seif El-Nasr bring forth the marked void in videogame visualizations' capacity to spotlight learning prospects and propose a new approach of Open Player Modeling (OPM) [97]. In their paper, they outlined another difference between OPM and OLM, which is specific to the parameters that are required to build them. Test scores, student submissions, and instructor feedback, which directly encompass educational knowledge, stand out as parameters for designing OLMs. These parameters allow users of the visualization to directly reflect on the content. However, data collected from serious games includes player moves and raw actions taken in the game environment, which are not grouped or labeled for their distinct learning objectives. A new modeling and visualization approach to show learning and problem-solving strategies using such data is urgently needed. This is the subject of this paper.
Videogames employ visualization systems in various ways, from portraying character health to players [8], to displaying heat-maps of kills, hits, and player navigation [32,62,68,81] to analysts. However, a comprehensive literature review on game visualization systems by Wallner et al. [88] highlights that most game visualization systems are analyst- or designer-facing, i.e., they help game design stakeholders understand player data to make informed game design decisions. However, given the competitive nature of videogames and the learning potential embedded in them, there is an active need for developing player-facing visualization systems [8,53]. Wallner et al. term these player-facing visualization systems Training visualization systems [91]. Such systems might encompass techniques that compare player trajectories with optimal routes [26], spatio-temporal methodologies depicting progression [1,36], or even direct players to external websites for an in-depth visualization of their data [45]. Despite the diversity in their execution, these training visualization systems share a unified objective: to facilitate players in comprehending their gameplay and enhancing their performance. Similar to how Hsiao et al.
[40] highlight that OLMs enable students to reflect on learning to foster better academic progress, Zhu and Seif El-Nasr [97] highlight that Open Player Models (OPMs) can potentially allow students to reflect on their play strategies and navigate through various academic concepts embedded in the game. This reflective process helps players understand the game better and reinforces the acquisition of academic concepts integrated into the serious game, enhancing the overall educational experience. Recognizing the significance of serious games in the current academic landscape, this paper initiates an inquiry into the design of an Open Player Model (OPM), the features essential for an effective OPM, and the reactions of student stakeholders when they interact with a fully operational OPM in a serious game context.
A naive approach towards developing OPMs would be to show a replay of the entire play trace of the player and the players in the community for the player to study and reflect upon. However, at the scale of player data, spatio-temporal overlaying visualization systems, e.g., heatmaps, tend to become dense and fall into problems such as the "hairball problem" [48,71], making them hard to comprehend. This prompts three pivotal research questions: (a) How can we selectively filter and present self and peer data to allow players to gain insights? (b) How can we showcase these learning opportunities through an OPM? (c) How do users interact with such an OPM? These questions fall under the category of HCI, necessitating the adoption of a UX approach in the design and development of such an OPM. We hypothesize that a Design-Based Research (DBR) methodology [3], in which we use multiple methods integrating feedback and insights from teachers and students, will allow us to craft a novel modeling approach and visualization system that integrates insights derived from the OLM literature but expands them within the domain of educational games.
Within the context of games, developing visualizations of the problem-solving strategies that players took as they solved the different problems within the game is a complex problem. As discussed above, simply showing all the paths that every player took leads to an incomprehensible graph. Adopting simple visualizations of learning activities will also not work within games. In games, players often have multiple ways to solve a problem and sometimes tinker with the system. Representing this diversity of solutions and approaches is important for our system. Further, most games involve spatio-temporal interaction and thus are not usually logically laid out as in a learning application. This makes it hard to simply adopt the current approaches from OLM or game visualization and necessitates a new approach to modeling and visualization within this space.
We thus make two contributions. Firstly, we developed a modeling technique that allows us to visualize processed data from an educational game, showing students their strategies and problem-solving processes in a comprehensible way. Secondly, we developed a visualization system and interface embedded within an educational game. To develop these contributions, we employed a user experience (UX) design approach, where we identified stakeholders' needs (students, teachers, and researchers) and iteratively developed a novel OPM for an educational game titled "Parallel". Our work is among the first within this area to engage stakeholders to derive their needs prior to system building, thus contributing to improving the UX of AI in games [92]. Our user-centered research method and findings can help future work in this area to benefit students further. Our findings from playtests and interviews highlight how the players comprehended their own strategies and those of their peers through the proposed model and visualization.
It should be noted that this is the first novel system that uses open visualizations within educational games. In this paper, we only discuss the system and validate that users can understand the visualizations and gain insight into their own and their peers' playthroughs. We will not discuss or assess learning using this visualization; this is left as future work, as it would involve a larger study.
The rest of the paper is organized into the following sections. First, we will discuss related work, specifically work on gameplay visualization (Section 2.1) and OLMs (Section 2.2). Then, we will discuss an educational game called Parallel [94], which we will use here as a case study to show the approach (Section 2.3). In discussing Parallel, we will show a level and different ways that players can solve that level, thus showing the complexity of data traces that will result from this game. We will follow that by describing our methodology, which includes three phases: gathering stakeholder requirements (Section 4), designing and developing the system (Section 5 and Section 6), and evaluating the proposed OPM (Section 8). Through the first phase, we will present the various methods we used to understand stakeholders, develop personas, and develop a set of requirements for the proposed OPM. The design and development phase will discuss the various algorithms developed to address the spatio-temporal space of the serious game and the conversion of data into a logical space that can be understood by players. The third phase will involve an evaluation study where we aim to understand if the system is usable and if the logical problem-solving traces visualized were understandable by players and can be used to derive insights.

PREVIOUS WORK

2.1 Visualization Systems in Games
Visualization systems play various roles in the realm of videogames. Stakeholders such as researchers, designers, and players utilize these systems designed for videogames to gather information and gain insights into game design and gameplay. We recognize four primary visualization systems used in videogame research: (a) Simple Bar Charts [49,64,65,76], (b) Spatial movement visualization [22,38,66], (c) Heatmaps [5,30,31], and (d) Node-edge visualization systems [2,58,81,82].
Bar charts commonly display aggregated data. This data, which includes metrics like the number of kills, XP earned, gold, or health, provides valuable insights for players [19,49,64]. Heatmaps and movement visualizations, on the other hand, highlight how players navigate the game's spatial environment. They convey analytics on areas of death and kills [5], and illustrate player strategies in using the map's spatial layout [30]. Node-edge visualizations present multi-dimensional data that doesn't easily fit on 2D maps. For example, the INSPECT system [81] has been instrumental in analyzing skill sequences in MMORPGs like Guild Wars 2 [4] and educational games like Parallel [94]. Similarly, the Glyph system [69] employs the Dynamic Time Warping algorithm to identify clusters of players with comparable gameplay patterns in games such as Wuzzit Trouble [9], Parallel [50], and Dota2 [21].
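For readers unfamiliar with Dynamic Time Warping, its core idea can be sketched as follows. This is a generic, minimal illustration rather than Glyph's actual implementation: it computes a warping-tolerant distance between two numerically encoded action sequences, so that sequences with the same shape but different tempos compare as similar.

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic O(len(a) * len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],      # stretch a
                D[i][j - 1],      # stretch b
                D[i - 1][j - 1],  # advance both
            )
    return D[n][m]

# A repeated step is absorbed by the warping, so the distance stays zero.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

A clustering system would then group players whose pairwise DTW distances fall below some threshold; how Glyph encodes gameplay actions as sequences is described in [69].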
Although many visualization systems have been developed, they primarily cater to designers, researchers, and developers. There is a noticeable gap in the literature addressing players' preferences for how their visualization systems should appear or what they should show [52,89]. Wallner et al. [90] found, after presenting various visualization systems to players, that player-centric systems should prioritize aggregation, aesthetics, accuracy, readability, and interpretability to serve their needs best. Similarly, Kaun et al. [53] undertook a user-requirement analysis to gauge how visualization might help players interpret data from Starcraft 2 [33]. While the requirements highlighted by prior researchers help shape the visual components of these systems, there is a lack of guidance on how these systems can promote reflection or underscore how learning strategies can bolster this reflection [50]. To address this gap, this paper not only uses a UX-first approach to pinpoint user requirements but also incorporates design principles and goals from the Open Learner Model literature.

Open Learner Models
Open Learner Models (OLMs) represent learners' proficiency, skills, or knowledge levels using various visualization methods [14]. Hooshyar et al. [39] mention in their literature review that OLMs help learners become active participants by providing ways to monitor their learning process and performance. In this section, we first highlight the differences between OLMs and other learning tools, such as Learner Models and Learning Dashboards. Next, we share various OLMs developed by previous researchers and discuss the design principles and goals of OLMs.
It is often easy to confuse the terms Learner Models and Open Learner Models. While underscoring the difference between regular learner models and OLMs, Bull et al. [15] share that OLMs, through visualization, provide the learner insight into the learning activity. This is unlike learner models, to which only the system has access. In other words, OLMs externalize the contents of the learner model to promote independent learning. Similarly, Hooshyar [39] highlights the difference between OLMs and Learning Dashboards. Learning dashboards are often seen as analogous to Learning Management Systems (LMS), where they provide instructors an opportunity to monitor and evaluate student progress and provide students an opportunity to see information and resources associated with learning [25]. On the contrary, OLMs aim towards the visualization of the learning process to facilitate reflection and comparison [70]. Further highlighting the difference between Learning Dashboards and OLMs, Kay et al. [47] mention that OLMs place an emphasis on the "learner model" or "student modeling" while Learning Dashboards focus on aiding data-driven decisions, classroom logistics, and management.
2.2.1 Types of OLMs. OLMs are recognized for their capacity to facilitate reflection on the learning process. As a result, numerous researchers have created and applied various OLMs in classrooms spanning a wide array of academic subjects. The design considerations for an OLM vary greatly. They can adopt a straightforward approach like a skill meter that showcases a student's progress in comparison to expert knowledge [13,67,93]. Some use visualizations to depict the likelihood of a learner understanding a concept [20,37]. Others compare an individual's knowledge with the aggregate knowledge of other users in the system [55].
Mabbott et al. [56] emphasize the advantages of OLMs that provide multiple views, rather than a singular perspective. They argue that these multi-faceted views empower students to visualize diverse aspects of their learner model. Dimitrova [27-29] advanced this idea by developing an OLM named STyLE-OLM. Through this model, students can navigate their OLM using concept graphs. These graphs enable them to discern the connections between various classroom-discussed concepts.
In parallel, Hochmeister et al. [37] incorporated self-assessments, where students' responses to these assessments help forecast their expertise. The output materializes in the form of concept trees, guiding students on their recommended learning trajectories. Expanding on self-assessments, Bull et al. [17] introduced an OLM titled OLMlets. This model not only emphasizes student progress but also involves instructors in the modeling process. Beginning with a set of preliminary questions provided by the instructors, students' responses craft a personalized proficiency model related to the course. This model then evolves as the course advances.
The OLMs mentioned primarily concentrate on charting individual progress and offering reflective opportunities based on personal data. Yet, a critical component they often overlook is the classroom environment itself, notably the influence of peer groups. In the subsequent paragraphs, we will explore OLMs that integrate peer performance data into their learner models. Hsiao et al. [41] note that Open Social Student Models (OSSMs) are OLMs that communicate the context, content, and knowledge of the learning model to the individual student, their peers, and other users. When developing OSSMs, researchers strive to blend OLMs with social learning. This blend enables students to reflect on and compare their progress with that of their peers [40]. Given the rapid adoption of games for learning, we choose to build our proposed OPM around utilizing data from student peers. This aligns with the objectives of OSSMs. However, we highlight in the subsequent section how existing OSSMs and OLMs fall short of contextualizing for games and OPMs.

Designing OLMs and OSSMs: Guerra et al. [35] introduced an OSSM called Mastery Grids. As shown in Figure 1, this model employs varying color intensities to illustrate a student's performance in relation to the class. In a similar vein, Hsiao et al. [40,42] created an OSSM named Progressor. This model grants students the ability to compare their scores with those of their classmates. It also advises students on the lectures they should focus on to enhance their scores. Additionally, they use color-coded arcs, as shown in Figure 1, to highlight when a student's score falls below a certain threshold in comparison to their peers. Falakmasir et al. [34] came up with a tool named KnowViz. It employs colored gauges to depict a student's performance relative to the class (refer to Figure 1). They further enhance this visualization with a colored skill tree, highlighting the proficiency of a student in a particular skill or concept compared to the class. While the number of assignments solved, scores, and in-class participation allow faculty to assess a student's relative performance, interpreting meaning from gameplay and establishing relative standings can be challenging for instructors. In this work, we explain how we build a visualization system that uses gameplay data and specially developed knowledge banks to rank players. This ranking is based not only on game scores but also on the students' understanding of the related educational concepts. Several authors have shared their insights and design principles to convey context and content and visualize various aspects of the learning model [6,45,47]. However, no proposed systems or models exist for such a vision, and no OSSM has been developed for learning games.
Bull et al. [12], Shi et al. [77], and Hsiao et al. [40] have identified key features essential for an effective OSSM, which are crucial for developing a visualization system. Understanding the types of errors that can occur, the most common ones, their implications, and whether they indicate systematic misunderstandings is important. These authors emphasize the role of instructors in identifying areas for student improvement and in marking errors and their implications. However, this approach becomes challenging in gaming contexts, as each student's play-trace is unique and extensive, making it time-consuming for teachers to review each one and identify areas for improvement. Diverging from traditional OLMs, we demonstrate how our proposed model processes player data. Through our visualization system and developed knowledge banks, we efficiently highlight errors and recommendations that help fix those errors.
Bull [14,16] and Bodily [7] emphasize the importance of an OSSM efficiently tracking and representing the sequence of knowledge acquisition. In classroom settings, where dedicated teaching faculties and planned assignments are the norm, it is straightforward to determine which part of the class covered a specific academic concept or which assignment facilitated the understanding of a particular concept. However, in video games, the sequence in which students tackle various concepts can vary greatly due to the wide range of approaches available for reaching a solution. In our work, we emphasize that, despite the diversity in how students approach an educational game, we can assist them in comparing and contrasting their play-traces with those of their peers. This is achieved through the use of reflection prompts and linear visualization systems, which help to highlight different learning paths in game-based learning environments.
Finally, Hsiao [41] and Bakalov [6], in their design of OSSMs such as Progressor+ and KnowledgeSea, emphasize that the developed visualization system should encourage reflection on both disagreements and cooperative moments between the student and the learning model. In video games, there is often no single correct way to play, but there are several correct ways to solve the puzzles embedded within them. The challenge lies in identifying areas where students are solving academic problems disguised within game mechanics and pinpointing opportunities for improvement while also reinforcing areas where the student is doing well. In this paper, we demonstrate how we distill academic concepts present in the game by correlating player moves with these concepts, thereby making the learning process more visible and tangible within the gaming environment.
Most OLMs developed and discussed so far have been applied to learning applications rather than educational games. With games, there are many different challenges to applying visualization, as described above. Zhu and Seif El-Nasr [97] highlight these complexities in terms of three main problems: (a) player data are typically more complex, including aspects of play and tinkering with the system, which are not usually present in other contexts; (b) in a game, data is usually multi-dimensional, including spatial and logical components, unlike learning environments where data is only in the logical space; and (c) in a game there are, by design, often multiple ways to solve a problem, to facilitate replay. These aspects make the integration of visualizations using previous work on OLMs not an easy matter, thus motivating this paper. Given the unique nature of video games, and considering how existing Open Learner Models (OLMs) do not contextualize for games, this paper adopts a User Experience (UX) approach. We first gather requirements and expectations of users for an Open Player Model (OPM). Next, we design and develop the OPM and present user perceptions from playtests.

Parallel - Research Platform
Parallel is a 2D, single-player puzzle game crafted to instruct students in the nuances of concurrent and parallel programming concepts [94]. Just as one would tackle a parallel programming challenge, the game incorporates key components such as threads, semaphores, locks, and resources, as depicted in Figure 2. Leveraging the maneuvers outlined in Figure 2, players must adeptly coordinate threads to fulfill their designated tasks, sidestepping race conditions. Details of Parallel's game design and how it connects to parallel programming concepts are available in [94]; for further clarity, we encourage viewing the attached video figure.
Given the public accessibility and ongoing maintenance of the game, Parallel has been used as a research platform to study and support learning. For instance, Teng et al. [81] adopted data from Parallel to craft a system for presenting skill trees. Meanwhile, Kantharaju et al. [44] harnessed data from the same game to formulate machine learning models predicting students' awareness of parallel programming concepts. Valls-Vargas [83] devised a graph-based representation delineating the potential spaces of boards users might generate. Additional investigations using Parallel encompass works like [50,51,59,60,86,96,97].
Yet, despite the plethora of research and applications stemming from Parallel, a significant lacuna persists, as highlighted by Zhu et al. [97]. This gap pertains to harnessing community data, visualizing it, discerning emergent patterns, and aptly presenting this data to users within serious gaming contexts. This paper endeavors to bridge this divide. Employing Parallel as a focal case study, and underpinned by a design-based research methodology combined with the Open Learner Model literature, we illuminate the feasibility of constructing such a visualization system.

METHODOLOGY
We implemented a four-step approach, as illustrated in Figure 3, to develop the novel visualization system and gauge its effectiveness among players.
• Requirement Gathering: Initially, we undertook two steps for gathering requirements.
-We conducted semi-structured interviews with parallel programming instructors to understand how they evaluate and provide feedback on parallel programming problems and assignments. We elaborate on the protocol and outcomes in Section 4.1.
-To comprehend how players navigate the game and identify moments when they require assistance, we organized playtests followed by semi-structured interviews. We elaborate on the outcomes, personas, and identified requirements in Section 4.2.
• Design and Development Phase: In this phase, researchers with expertise in the OLM literature, parallel programming, and design participated in iterative processes to craft the visualization system. The supporting algorithms tailored to user requirements are discussed in Section 5. Section 6 elaborates on our rationale behind various design choices.
• Evaluation: After the system's design, we ran playtests with students. The objective was to ascertain how the visualization system, grounded in the principles of the OLM literature, facilitated their reflection and problem-solving abilities. In Section 7 we share the protocol for the playtest and how the data was analyzed; the results are reported in Section 8.

UNDERSTANDING STAKEHOLDER NEEDS

4.1 Parallel Programming Instructor Interviews
Parallel programming poses inherent challenges due to the intricate coordination and synchronization of multiple components.
Achieving optimal performance and efficiency in parallel computing environments necessitates meticulous orchestration of tasks, data, and communication between parallel threads or processes (Section 2.3). To understand how human instructors facilitate learning in these topics, we conducted workshops with two parallel programming instructors, one from a large public university in the Western United States and one from a private university in the Mid-Atlantic. The workshop was conducted via Zoom, a video conferencing platform. Upon their entry into the session, we secured their consent. Following this, instructors were introduced to the Parallel game (Section 2.3) and then asked to play through the tutorial levels.
Once they had a grasp of the game and its components, we presented them with a recording that showcased four students attempting to solve a level in Parallel. Next, we delved further into the gameplay by displaying the distinct play-traces of these four students. Instructors were then prompted with a series of tasks. Firstly, they were asked to discern the variations observed in the play-traces and the students' respective approaches to the parallel programming challenges. Secondly, they were tasked with ranking these play-traces based on performance, substantiating their rankings with explanations. Lastly, the instructors were asked to pinpoint specific segments in the same play-traces that the student should reflect upon. For these highlighted segments, instructors also provided textual feedback, suggesting ways the student could ameliorate their gameplay in that particular segment. In the end, we opened the floor for them to pose any questions and comments. The insights provided by instructors align closely with the design requirements of an OLM, as discussed by Bull [12]. For example, instructors see value in 'ranking players based on parallel programming metrics'. This ranking method aligns with how previous OLMs such as Progressor [40] and KnowViz [34] show ranks of students based on performance. Also, the methods instructors use to 'compare parts of play-traces' let us identify 'learning strategies' that could benefit our OPM users. When instructors talk about 'comparing parts of a play-trace', they are trying to understand how students tackle problems. This aligns with the need for OLMs to display the 'acquisition order of target knowledge'. Lastly, by dividing the game 'board into spatial zones', we can show players the areas they need to be 'aware of and reflect' upon.

Developing Student Personas
The workshop provided a comprehensive understanding of how instructors analyze a play-trace and identify potential areas for offering feedback to students. However, when designing a visualization system that enables players to enhance, comprehend, and compare their play-traces, it is imperative for designers and researchers to incorporate the perspectives of its users [75].
To understand students' needs for an OPM visualization, we employed the UX method of personas to develop empathy for users and synthesize their needs into a tangible form that can guide the subsequent design and development process [23,57]. We developed our personas by analyzing how users interacted with the game, Parallel. In a previous study [86], we had ten participants play the first four levels of the game without access to any form of visualization system. During the one-hour playtest, participants were asked to verbalize their thought process while playing and were further questioned to highlight their struggles, playing strategies, and the intentions behind their moves. Through thematic analysis of the transcripts, researchers clustered topics together regarding participants' mentioned intentions (e.g., "I wanted to make my solution more efficient") and their gameplay behaviors (e.g., adjusted locks and triggers frequently). As a result, we formulated two distinct personas: Jamie, "the thinker," and Alex, "the visualizer." Specifically, Alex represents a group of players who spent a lot of time incrementally adding locks and triggers and adjusting these components on the track to visualize a solution. Jamie represents players who spent a lot of time idle on an empty or partially built track and reported thinking through solutions in their heads. The personas included typical persona features, such as a fictional name, a description of the user's behavior (i.e., how they played the game), and goals (i.e., their described goals for the game) [43,75].
We found three main user needs. First, players wanted to improve their solutions' efficiency. For example, Jamie wanted to "locate and select more efficient players" to reflect on how she could improve her gameplay. Second, players want to easily compare their gameplay with their peers across multiple viewpoints. For example, Jamie wanted a high-level view of the community to see where she stood in terms of her solution's efficiency and to locate better-performing players. Alternatively, Alex wanted a more detailed view to examine players' gameplay actions to "examine other potential moves that she may not have encountered in her own gameplay." Third, players want suggestions on how to improve their gameplay during and after gameplay. For example, Alex wanted to view other players' solutions when struggling to find a solution, while Jamie does not want help, since she likes figuring the solution out herself, but wants to view other approaches after gameplay. From these user needs, we can infer that our OPM design should (1) provide players with a high-level view of the community to help them understand where they stand, (2) enable easy comparison across multiple views of the data, and (3) provide players with suggestions on how to improve their solutions.

ABSTRACTING RICH GAMEPLAY DATA
Guided by stakeholder needs, we understand that our Open Player Model (OPM) must offer actionable insights for students; for example, Alex seeks examples of peer board states when stuck. As outlined in Section 2.2, developing a Learner Model for an OLM involves capturing students' attempts and strategies, identifying errors, and presenting learning opportunities. Traditional OLMs use various metrics, such as academic scores, self-assessments, and progress evaluations. However, applying these methods directly to videogames is challenging due to the dynamic and complex nature of gaming.
Learning in videogames involves understanding spatial logic alongside traditional logical reasoning. It is essential to develop a system that integrates this spatial logic with parallel programming concepts to suggest relevant peer game states to players. In creating a mechanism for identifying similar board states in the game Parallel, we leverage insights from the instructor workshop (Section 4.1). Instructors highlighted the significance of spatial zones on the board, each representing specific knowledge values. Our strategy for recognizing similar board states, essential for constructing the learner model, is detailed in the next sections and outlined in Figure 4, building on these insights and the spatial analysis of gameplay.

Spatial Abstraction
Based on the instructors' insights, we break the board into Spatial Zones. The board game's spatial arena, represented as A, is initially divided into several smaller, discrete sections or cells, denoted by c_i, where i is the index for each cell; A is the union of all these cells. Next, the cells are grouped into conceptual regions or zones, represented as Z. Each zone, z ∈ Z, is a subset of the spatial arena and comprises one or more cells. As illustrated in Figure 5, we break the board for Level 13 in the game Parallel into 16 zones based on each zone's role in solving the parallel puzzle. For example, Zones A, D, G, H, and K are local to the red thread; Zones I, E, J, F, and L are local to the pink thread; and Zones W, X, Y, Z, B, and C are shared by both threads, or in other words, are critical sections.
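The cell-and-zone abstraction above can be sketched in a few lines of Python. This is a hedged illustration only: the cell coordinates and zone memberships below are invented for the example and do not reproduce the actual Level 13 layout.

```python
# A minimal sketch of the spatial abstraction: the arena A is a set of
# discrete cells, and zones are labeled subsets of those cells.
# Coordinates and zone contents are illustrative, not the real level data.

# The spatial arena A as a set of (row, col) cells.
arena = {(r, c) for r in range(4) for c in range(4)}

# Cells grouped into conceptual zones; each zone is a subset of the arena.
zones = {
    "A": {(0, 0), (0, 1)},          # e.g., local to the red thread
    "I": {(3, 0), (3, 1)},          # e.g., local to the pink thread
    "W": {(1, 1), (1, 2), (2, 1)},  # e.g., shared by both threads
}

def zone_of(cell, zones):
    """Return the zone label containing a cell, or None if unassigned."""
    for label, cells in zones.items():
        if cell in cells:
            return label
    return None
```

Mapping each component placement to a zone in this way lets later steps reason about moves at the level of conceptual regions rather than raw coordinates.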

Understanding Links and Moves
Instructors analyzed players' element placements across different spatial zones, assessing whether links between semaphores and signals were accurate (conceptually correct and contributing positively to the solution) or inaccurate (leading to errors or conceptually erroneous). This analysis enabled developers to pinpoint correct suggestions for inaccurate moves and recommend alternative correct moves for those executed accurately. This approach, termed 'Knowledge States', is detailed in Table 1. For example, a link 'IG' (connecting a semaphore in Zone I to a signal in Zone G) is an incorrect approach to prevent a race condition for the pink thread, whereas 'IK' is the correct solution. The complete set of knowledge states is available for download at https://github.com/siddu1998/CodesForPapers/blob/main/knowledge.py
The Game State Recommendation Algorithm presented in Algorithm 1 is designed to assist players in enhancing their gameplay by suggesting a collection of community game states that can help rectify a player's inaccurate moves (moves leading to errors). The algorithm employs a set of knowledge states, denoted by K, with each state categorized as either an "accurate" or "inaccurate" move. The player's move is represented by m. Additionally, the algorithm has access to a pool of community game states, represented by G. The algorithm uses a comparison function, compare(m, K), to juxtapose the player's move, m, with the knowledge states in K. The function returns the label of the closest matching knowledge state. If the label is "inaccurate", the algorithm searches for a set of relevant good moves from the community game states, denoted by R. This is accomplished through the function relevant(g, m), which assesses whether a community game state, g, is relevant for improving the player's bad move, m. For each community game state, g, in set R, the algorithm computes a similarity score employing a similarity function, sim(g, m), that compares the board states and outputs a score indicating the similarity of the community game state to the player's move. The game states in R are then sorted by their similarity scores in descending order. Finally, the algorithm recommends the ranked list of game states in R to the player, who can utilize these suggestions to analyze and rectify their bad move. This algorithm proves especially beneficial in games where players can leverage community knowledge and insights to refine their strategies and decision-making.
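The recommendation flow described above can be sketched as follows. This is an assumption-laden illustration, not the paper's implementation: the function names (`classify`, `is_relevant`, `similarity`), the dictionary encoding of knowledge states, and the toy similarity scores are all invented for the example; the real knowledge states live in the linked `knowledge.py`.

```python
# Sketch of the Game State Recommendation Algorithm (Algorithm 1).
# The classification, relevance, and similarity functions are passed in,
# mirroring the compare(m, K), relevant(g, m), and sim(g, m) of the text.

def recommend_game_states(player_move, knowledge_states, community_states,
                          classify, is_relevant, similarity):
    """Return community game states ranked by similarity to the player's
    inaccurate move, or an empty list if the move is already accurate."""
    label = classify(player_move, knowledge_states)
    if label != "inaccurate":
        return []
    # Keep only community states relevant to correcting the bad move.
    relevant = [g for g in community_states if is_relevant(g, player_move)]
    # Rank by similarity score, most similar first.
    relevant.sort(key=lambda g: similarity(g, player_move), reverse=True)
    return relevant

# Toy usage, with links named by the zones they connect (as in 'IG'/'IK').
knowledge = {"IG": "inaccurate", "IK": "accurate"}
classify = lambda m, ks: ks.get(m, "inaccurate")
is_relevant = lambda g, m: g[0] == m[0]        # shares the starting zone
toy_scores = {"IK": 0.9, "IL": 0.4}            # stand-in similarity scores
similarity = lambda g, m: toy_scores.get(g, 0.0)

ranked = recommend_game_states("IG", knowledge, ["IL", "IK", "AB"],
                               classify, is_relevant, similarity)
# 'AB' is filtered out as irrelevant; 'IK' ranks first as the closest fix.
```

A real deployment would plug in the knowledge-state matcher and the board-state similarity function of Algorithm 2 in place of these toys.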

Board State Similarity Function In Parallel
In the above section, we mention the similarity function sim(p, g), which takes in a player state and a game state from the community. The visualization system's algorithm for comparing board states is illustrated in Algorithm 2. After the formation of zones (Section 5.1), an adjacency matrix is formed to represent the connectivity between these zones. In the adjacency matrix, cell (i, j) is set to 1 if zones z_i and z_j are connected; otherwise, it is set to 0. The algorithm then iterates over a set of different board states, represented by B, where each state is denoted by b. For each board state, the adjacency matrix is filled out, marking connections made by players.
Then, for each board state, the adjacency matrix is flattened into a single vector v, which serves as a compact representation or embedding of the board state. Next, the algorithm computes the cosine distance between every pair of board states (b_i, b_j) in B. This is done by calculating the cosine of the angle between their corresponding vectors v_i and v_j. Finally, the algorithm returns a ranked list of board states based on the cosine distance between the player's board state and each board state from the community.
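The embed-and-compare procedure can be sketched as below. This is a minimal sketch under stated assumptions: zones are indexed 0..n-1, a board state is a list of zone-pair links, and the example links are invented for illustration; it assumes each board state has at least one link (so vector norms are nonzero).

```python
import math

def embed(links, n_zones):
    """Flatten a board state's zone-adjacency matrix into a single vector."""
    m = [[0] * n_zones for _ in range(n_zones)]
    for i, j in links:
        m[i][j] = m[j][i] = 1   # mark a link between zones i and j
    return [x for row in m for x in row]

def cosine_distance(u, v):
    """1 - cosine similarity between two flattened adjacency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def rank_by_similarity(player_links, community, n_zones):
    """Rank community board states by cosine distance to the player's state,
    closest first; returns (distance, links) pairs."""
    p = embed(player_links, n_zones)
    scored = [(cosine_distance(p, embed(links, n_zones)), links)
              for links in community]
    return sorted(scored, key=lambda t: t[0])
```

An identical board state yields distance 0, while a state sharing no links yields distance 1, so thresholding the returned distances recovers the "similarity below a specified threshold" filter of Algorithm 2.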
Through Algorithm 1, we provide a system to capture user knowledge, identify accurate and inaccurate moves based on parallel programming concepts, and develop suggestions that satisfy the user requirement of getting "suggestions from peers on how to improve their solution before and after gameplay." Similarly, Algorithm 2 allows us to identify board states from peer groups that are comparable with the user's board state, satisfying the user requirement for an OPM to provide "an easy comparison with peers across multiple views of the data".

DESIGNING THE OPM VISUALIZATION
Based on the requirements gathered from personas, we present our design solutions for an OPM visualization both during and after gameplay. Here, we use Shneiderman's mantra: (a) Overview First, (b) Zoom and Filter, and (c) Details-on-Demand [78] as a guide for our design, as it is a standard framework for both information visualization [78] and OLMs [10]. We offer key challenges of designing OPM visualizations within each level of the mantra and briefly describe various iterations that were tested with two playtesters. Figure 6 and Figure 9 cover important aspects of the visualizations. However, we understand it might be difficult to comprehend the interactive parts of the visualization system; hence, we also provide a video figure (attached) that describes the game and the visualization system.

Overview First
Overview First is a simplistic view of the entire dataset without going into too much detail. This step is meant to be a high-level view that allows the user to "get their head around the entire data" to locate points of interest. We used the concept of an "efficiency score" of a submitted solution to convey relative standing. As the number of players increases, the number of data points for a player to get their head around would also increase, which would go against our requirements. We initially tried three iterations, shown in Figure 7: Hanging Rootograms, Non-ribbon Chord Diagrams, and a timeline showing all play-traces at once, with efficiency scores (y-axis) of various tests and submits over time (x-axis); however, playtest feedback indicated that this made it difficult to locate and select a more efficient play trace.
We address the user need "provide a high-level view of the community to help players understand where they stand" by implementing a quadrant chart, labeled 'Overview Panel' in Figure 6. We chose a quadrant chart because it provides a scalable and straightforward way to represent the community data and organize it by efficiency. We also chose quadrant charts because they have two axes to present the data, which corresponds to how we determine efficiency: the x-axis represents the total time a thread is alive, and the y-axis represents the total time a thread is idle. Since overall relative performance is evaluated after finishing the level, the overview panel is absent in the during-gameplay visualization (Figure 9).

Zoom and Filter
Zoom and Filter is a more zoomed-in view of the dataset's particular area of interest. Zooming gives the data of interest more resolution and detail within the rest of the data and filters out unnecessary data. We observed that showing multiple play-traces at once resulted in overlapping (Figure 7). Overlapping play-traces were challenging to decipher cognitively, and readability was a problem when showing users their own and their peers' processes.
We address the user need "provide an easy comparison across multiple views of the data" by implementing node-link diagrams. Instead of showcasing all play-traces from the community, we employed the learner model (Section 5) to identify specific player moves within an individual's play-trace that merit review or present a learning opportunity. We integrated a '+' sign beneath the node to denote the selected moves, as shown in Figure 6. Here, players may add recommended play-traces onto the visualization for comparison without facing an overlap issue and get to control the complexity. During gameplay, since the student does not have a complete play-trace, we automatically detect the problem they are solving and provide the appropriate recommendation.

Details on Demand
Details on Demand means the zoomed-in view of the dataset should be able to provide details that expand on the selection. Offering details on demand gives the user control of the data and the ability to explore further details without cluttering the screen. Our initial two iterations, shown in Figure 8, showed the entire play-trace of the recommended users. Simply showcasing the recommended board state makes it difficult for users to observe differences between their own and their peer's game boards. The design must also allow users to compare and simulate the recommended board traces both during and after gameplay.
We address the user need "provide players with comparative suggestions on how to improve their solution during and after gameplay" as follows. During gameplay, given that the player is actively engaged in the game, any OPM presented should accommodate potential edits to their game strategy. For this, as shown in Figure 9, the learner model highlights accurate and inaccurate links on the board; when the player chooses to use the OPM, they are presented with a recommendation for their selected links along with "Reflection Prompts", present both post-gameplay and during gameplay. This extends the current literature on educational games, the majority of which only support reflection post-gameplay [80,85]. These prompts, grounded in insights from the OPM discussed in Section 5, consist of two textual elements. The first highlights areas in the user's board state that warrant attention, explaining the accuracy or inaccuracy of moves. The second guides users on which sections of the peer's board state to concentrate on for optimal learning. For post-gameplay, we introduced a designated area specifically for snapshots, shown as 'View Ports' in Figure 6. This space comprises two panels: one displays the user's own board state, while the other showcases the selected board state of a peer with the spatial abstraction overlaid, so users can follow the reflection prompts (shown in Figure 6). Users can select nodes from both their own and their peers' play-traces, facilitating a comparative analysis of different board states. Similar to the during-gameplay OPM, the post-gameplay OPM also includes reflection prompts, as shown in Figure 6.

EVALUATION
To evaluate our OPM system, we invited students who had either previously completed or were currently enrolled in an undergraduate-level computer science course to participate in a two-hour playtest, where we asked them to play the game, use the OPM system, and sit for a short interview. The students were not required to be enrolled in a parallel programming class, nor did they need any parallel programming experience (since the tutorial levels provide the required knowledge). Among the ten students (P1-P10) who responded to the invitation, two did not attend. All the participants identified as male, with an average age of 20.4 years. After observing recurring themes deduced from the qualitative coding of the interviews, we determined that eight participants was an adequate number, due to saturation in the dataset. We began the procedure by greeting each participant, introducing the study, and asking them to sign an informed consent form permitting us to record the session, in accordance with our IRB.
To ensure participants were well-acquainted with the game and its various facets, they were encouraged to play the first three tutorial levels, which conveyed all the required concepts and terminology associated with parallel programming and the game. Students were then instructed to load the 13th level of the game. We highlighted the "community" tab button (depicted in Figure 9), which serves as a portal to the OPM, allowing players to gain insights and derive logical assistance.
Upon the conclusion of the gameplay session, we engaged the students in a reflective exercise, posing a series of questions about their experience, the rationale for their decisions to use the OPM feature, their strategies, the lessons and insights they extracted from the OPM, the players they found most instrumental, and their sentiments towards their gameplay data being accessible to others in this manner. We followed a similar process to examine their interaction with the post-gameplay OPM feature.

Qualitative Data Coding
Upon interview completion, we transcribed the interviews and analyzed them using a thematic coding approach [18]. We noticed throughout the transcriptions that users frequently referenced on-screen elements during their explanations. Therefore, it was necessary for researchers to use the video recordings in tandem with the transcripts to generate codes. One researcher initially marked events from the transcripts, and then two researchers independently created open codes for these events. The researchers convened multiple times to discuss the codes, recognize analogous codes, and formulate a codebook, as illustrated in Figure 10. Following its creation, both researchers independently coded the various events using the codebook. Our inter-rater reliability on 30% of the data was 0.79, surpassing the accepted threshold and suggesting strong agreement [54].

RESULTS
In this section, through the identified codes and supporting participant quotes, we illustrate how the design choices and algorithms used to build the OPM help in filtering play-traces and providing learning opportunities, and we identify how users interact with the OPM.

RQ1: How can we selectively filter and present self and peer data to allow players to gain insights?
Our personas, discussed in Section 4.2, convey that users expect an OPM to offer opportunities to explore play-traces from various users. They aim to grasp the concepts and parallel programming knowledge associated with these users' moves. In our analysis, we found that students' motivation to use the OPM led them to filter data from the community via (a) Competition, (b) Using Spatial Abstraction, and (c) Aiming to Optimize Solutions.

Competition.
When presented with the post-gameplay OPM, we observed that students frequently began by examining the overview graph, which illustrates their position in the community. Specifically, P3, P4, P7, and P8 immediately focused on their standings compared to the broader community, paying particular attention to the extremes within that community. For instance, P3 states, "The ones that I'm most interested in are player four and player six because they're at both extremes of the graph," while P7 notes, "It seems like there's almost a trade-off between the total time and the thread wait time, which is interesting to me. So, thinking about that, yeah, player six stands out because they are on the extreme." P4, after understanding their position relative to the community, explores the solutions of players performing better than them: "my solution seems to be different in terms of how the stopper (switch) is connected. I use more locks, but I seem similar to player 2, and that is why we are closer together." On a similar tangent, P8 says, "Let me look at what player 6 did because that might show me some information, something else I could have done. So that is fair, he definitely did better than me, because in places like this, I might be able to learn something." All four participants further simulated the recommendations, compared and contrasted the recommended play-traces, and reflected on learning opportunities.
Uses Spatial Abstraction.
Student requirements in Section 4.2 highlighted that players sought filtered examples to apply their skills and to act upon suggestions from the learner model. To cater to this, we implemented spatial abstraction to direct players to various sections of the board, illustrating how different links were established by other players. Feedback from users indicated that this zoning mechanism was effective in quickly filtering players who had interacted with specific zones and established interesting links.
Once the learner model showcased pertinent examples, we observed how players utilized the presented zones to juxtapose their solutions with the given recommendations. Participant P8 observed, "The most impactful part of the visualization was the explanation of the zones and the links between them. It prompted me to think more deeply about why I placed my states in certain zones and the potential errors that could arise from those choices." In a similar vein, P4 commented, "After I saw how they used the zones in their solution, I understood like what I was supposed to be doing, and I changed my solution. I tried...where they placed each of the items on the map."

Aiming to Optimize Solutions.
When addressing parallel programming problems, players frequently want to filter the most optimized and efficient solutions. P8, for instance, expresses this aspiration by stating, "I wanted to see if there was a more efficient solution or if there were problems with my solution." Similarly, P4 acknowledges the potential for further optimization in their solution: "Okay, so my solution can be further optimized." The goal to develop efficient solutions is found in students' use of both the during-gameplay and post-gameplay OPM. For instance, while using the during-gameplay OPM, P1 remarked, "The system identified two inaccurate moves, yet I still passed. I suppose there might be a more optimal approach to explore the connection between Z and Y. I can see that now." On the other hand, while using the post-gameplay visualization, another participant remarked, "for me it is not just looking at the right solution. But if I'm looking for a right solution, it should be like a reasonably most efficient solution."

RQ2: How can we showcase learning opportunities through an OPM?
While the preceding section emphasizes how and why players filtered and compared play-traces, it is crucial to discern which aspects of learning users engaged with during these interactions. As users aimed to optimize their solutions by comparing with other play-traces, both those that performed better and those that performed worse, we observed responses validating that the OPM accurately captures user processes. Further, during reflections on their own and their peers' processes, users discussed identifying essential learning opportunities to make changes on their game board or to reflect upon their peers' play-traces. In the previous section, we observed a positive reception to the spatial abstraction feature among users. In this section, we delve into examples of how players utilized the spatial abstraction to simulate both peer play-traces and their own play-traces, extracting meaningful insights from the examples presented by the learner models. Apart from using zones to encapsulate concepts and identify similar board states, zones also offer users a means to reference various components, aiding in simulating the play-trace to comprehend differences in solutions and identify learning opportunities. While examining a recommendation, P1 voiced their thought process: "What did they do here? Okay, player six did something different. So, when examining your link between Zone K and Zone J, I was just wondering if this might be a better way to handle the race condition, as an alternative approach to prevent the race condition by placing a link in Zone A to Zone L." Similarly, while referring to their own play-trace, P2 commented, "Moving between Zone E and K, yeah. That worked because when the pink arrow goes around and passes the delivery, it will automatically toggle the right arrow. So, that's why that one worked."

Comparing and Contrasting Recommendations.
Beyond simulation, users also weighed their solutions against the community's recommendations. They further proposed enhancements to these suggested strategies, highlighting areas of potential improvement. P3, for example, noted, "I observed that this person positioned the locks here and here, which differs from my approach. I'm now wondering if they could have placed them over here to minimize the total wait time?" Meanwhile, P10 mentioned, "The mutex recommendation appears appropriate, but there's room to refine the blocker by using just one signal."

Open to Share Data with Community.
A pivotal factor contributing to the success of an OPM is its reliance on data that is willingly shared. In our study, all participants expressed their willingness to share their data with the community, as the OPM presents an opportunity for learning and comparing their performance with others. P1 enthusiastically states, "I'll definitely love having my gameplay data as part of the visualization. It's helping people complete the level." Similarly, P10 shares, "I think I will share my data. Right now, in my opinion, I have a less complex solution, so I believe sharing my data would help others and allow me to see if someone has achieved better results than me, as they would appear on the graph." The success of an OPM, which builds on peer groups, lies in the willingness of players to share their data. None of the users expressed any concerns about sharing their data with the community.

DISCUSSION AND IMPLICATIONS
The overall goal of this paper is (a) to develop a modeling technique to present multiple individuals' process data in a way that reveals their learning processes and strategies and (b) to develop a visualization system and interface embedded within an educational game. After analyzing playtest data, we note, in Figure 10, ten themes highlighting how and why players interact with the visualization system. However, it is important to note that these ten themes are related to each other.

Relationships between Themes
Users shared various 'Reasons for using the OPM', often linked to identified inaccuracies or challenges in solving a level. Upon accessing the OPM, users are presented with color-coded play-traces indicating where they made accurate and inaccurate moves. Alongside this, they receive recommendations from peers and see their relative standing within the peer group. This feature highlights two themes: 'Competition' and 'Compare and Contrast'. In their effort to compete and develop optimal solutions, users compare and rectify their mistakes by examining their peers' solutions and processes. As users mention that the OPM effectively 'Captures Process', they 'Simulate Recommendations' to better understand the process behind the displayed peer play-traces and aim to 'Optimize their Solution'. During the simulation, users often turn to the 'Spatial Abstraction', focusing on specific areas in the game layout that aid in solving the level. This exploration leads to the discovery of 'New Critical Sections' that assist them in developing alternate strategies. In the process of understanding the new critical sections, simulating peer play-traces, and identifying new strategies, players tend to move from 'Sequential thinking to parallel thinking', which helps them grasp parallel programming concepts better and solve the level. As they gain more insights and recognize the benefits of exploring peer solutions, players agree to 'Share their Data with the Community'.
From our identified personas in Section 4.2, we observe Jamie's desire to "locate and select more efficient players." A primary advantage of a learner model is that it presents insights and provides the user with an overview of their performance. As shown in Section 6, our proposed OPM utilizes game metrics such as efficiency scores, accurate moves, and inaccurate moves to reflect players' performance at specific levels. Our visualization system displays this information across play-traces and in a dedicated section within the visualization that shows their relative standing. The OPM's ability to visualize the entire gameplay process not only presents relative standings but also enables users to see how their peers approach a problem. This feature allows students to compare and reflect on the strategies and academic concepts that facilitate more effective learning. Our analysis of the transcripts also uncovers the theme of 'Competition,' which discusses how players use the OPM to understand their performance in the domain and identify players to reflect upon.
Jamie's persona also shows interest in understanding alternative approaches to problems. Kay [46] discusses that a crucial element of OLMs is the capacity of teachers to offer actionable insights and feedback to students. This feedback helps students explore alternative approaches. However, instructors face challenges in evaluating videogames due to the vast sample space and the variety of approaches available for solving a level. In our OPM, we have observed various ways users identify the learning opportunities it provides. For instance, two themes emerged: (a) Identifying New Critical Sections and (b) Aiming to Optimize Solutions. These themes illustrate how the OPM aids in pinpointing new critical sections, a vital strategy in solving parallel programming problems. Additionally, to optimize solutions, players often observe those who perform better. This observation helps them understand the placement of various components and strive to reduce their number, analogous to utilizing fewer resources in a computing task.
Our persona Alex mentions needing a way to understand different players' approaches. The reflection prompts featured in the OPM enable students to compare and contrast various play-traces, offering players opportunities to agree or disagree with the learner model and learn from other players' approaches. These prompts guide users to identify areas for improvement and also highlight where they have excelled. Additionally, a theme emerging from our analysis, 'Simulate Recommendations', underscores how these recommendation prompts facilitate participants' reflection. They encourage players to simulate scenarios based on areas of strength and those needing improvement, enhancing their learning experience. Another identified theme, 'Spatial Abstraction,' discusses how the OPM helps users focus on specific areas of the game layout that offer the greatest learning opportunities.

Implications
Serious games have long been of interest to the CHI community [24,63,73,74,98]. Despite this interest, player-facing visualization systems have focused mainly on e-sports and MMORPGs [69,72,87]. We attribute this deficit to the lack of visualization systems that comprehend learner models, identify learning opportunities, and present the educational content embedded in serious games. Our work has implications for both the serious games literature and the visualization systems literature. Through our discussion of abstracting rich gameplay data in Section 5, we present techniques for how serious game researchers can generate abstract embeddings (Section 5.1) of their board states that best capture educational content alongside player moves. Further, these abstract embeddings help visualization researchers develop metrics that identify similar players and peer groups that best support reflection, as discussed in Section 5.3. We demonstrate through our game how researchers can extract educational value from serious game data and present it in comprehensible ways, which is an important contribution to the serious games and HCI community.
The implications of our work further extend to the OLM and learning science literature. Learning science literature has often argued that reflection on and understanding of a student's own data and peer data help foster better learning [61,84]. OSSMs developed by prior researchers [6,40,41,77] have used aggregated metrics such as academic scores to rank student performance, share feedback, and provide opportunities for reflection. However, our work has two major implications for this literature: (a) unlike previous OSSMs that are built on aggregate metrics, our work focuses on unrolling processes and identifying precise points in a user's attempt that provide the best opportunity for reflection; (b) through a UX approach, we methodologically highlight how OLM literature or future OPM systems can identify requirements and embed them in visualization systems. As for the OLM community, given that our work is the first visualization system designed for process reflection in serious games, its implications lie in opening up discussions and research directions focused on how OLMs can be used in classroom environments that actively use serious games. Finally, our work has implications for the growing interest around the game Parallel. Prior research [44,86,95] focuses on AI approaches to develop hint systems or identify features to best solve levels. This work contributes to the research platform Parallel by (a) introducing a learner model for classrooms and (b) offering a visualization system incorporating this learner model, thereby facilitating discussions and reflection on gameplay.

LIMITATIONS
Given that this paper is among the pioneering discussions of OLMs in the context of serious games, several areas for improvement emerge. Firstly, we recognize that the algorithms we employ may not be suitable for all serious games, especially those that do not revolve around spatial logic. In such situations, we encourage researchers and designers to devise their own recommendation and similarity functions for OPM development.
Secondly, in our OPM design, we chose to depict various game steps linearly, with events occurring sequentially. However, we understand that certain games may feature simultaneous events, making a linear representation not universally appropriate. Thus, we advise researchers to conduct a thorough requirements analysis to determine the most effective visualization for their context.
Despite our open call to recruit participants for playtesting and using the OPM, our participant sample is composed solely of individuals who identify as male. Although this may mirror the classroom demographic, we recognize that it could introduce bias into our results. Consequently, our analysis might not fully capture the range of attitudes and interactions that students of different genders could have while using an OPM.
Lastly, the objectives of this study focused on determining whether our design choices satisfied the requirements for an OPM. We did not extensively investigate the depth of reflection experienced by students. Moving forward, we intend to conduct a more comprehensive study to better understand how students internalize and retain the recommendations provided by the OPM.

CONCLUSION
In this paper, we developed a novel OPM-based visualization system for an educational game in order to facilitate process reflection. We used UX methods to elicit stakeholder needs and incorporated them into our design. Drawing on established OLM literature and design work, we then crafted the underlying algorithms and developed the visualization system. Subsequently, we introduced this visualization system to users and observed their interactions. Through interviews, we gleaned insights into how the system is utilized, confirming that it meets the stated requirements, aids in process capture, and spurs discussions around reflection.

Figure 4: An illustration of the learner model, and the various algorithms used in different steps

Figure 5: A level (level 13) from the game Parallel, marked with various components associated with the game and Parallel Programming.

Figure 6: The post-gameplay OPM used for the serious game Parallel, along with the contributions of the learner model

Figure 7: Design iterations for showing Overview -- From right to left: Linear visualization of Efficiency, Non-ribbon chord visualization of Efficiency, Hanging rootogram visualization of Efficiency

Figure 8: Design iterations for showing Details on Demand -- From top to bottom: Showing only submissions and tests, Showing all the play-traces in the community

Figure 9: During-gameplay OLM showing inaccurate links placed by the user and providing suggestions for the selected inaccurate link with an example.

Figure 10: Code-book used to code transcripts from participant playtests

Figure 11: Participant 2 uses the during-gameplay OLM and post-gameplay OLM to learn about various available Critical Sections

Table 1: A snapshot of some moves from the knowledge map

Algorithm 1: Algorithm used to recommend game states
Input: a set of knowledge states K = {k1, k2, ..., kn}, a set of community game states G = {g1, g2, ..., gm}, and the player's move
Output: a ranked list of recommended game states for the player

Identifies new Critical Sections. Identifying critical sections is crucial in parallel programming, and it is also key to solving puzzles in the game Parallel. In Figure 11, User P2, during their first attempt at the level, fails to pinpoint the critical section. Instead, they mistakenly connect signals with the switch from both Zone J and Zone H. These actions are flagged as errors during the gameplay visualization, which subsequently suggests an alternative move to address the issue. This guidance aids the user in completing the level. Figure 12 illustrates how User P1 utilized the OPM visualization to grasp the concept of mutual exclusion, a fundamental step towards parallel thinking. Initially, the user set up links AK (connecting a signal in Zone A with a semaphore in Zone K) and IJ. These connections were flagged as erroneous by the in-game OLM. Consequently, the learner model suggested the links AJ and KI. Participant P1 remarked, "I tried stopping the pink thread, but the pink didn't move for some reason. I could replicate a scenario where the red doesn't move, but then the red does not move. Something is not right. I'll consult the community for insights." After adjusting the board based on the OPM's recommendations, the player completed the level and then engaged with the post-gameplay visualization. The player delved deeper into the OPM to discover two additional methods to manage mutual exclusion, commenting, "Right, because I did set up a link between Zone I and Zone K. Zone I is down here, and Zone K is over there. It's suggesting I explore an alternate approach between Zone I and Zone H. Essentially, it involves allowing more time for it to approach the switch." In a similar vein, P4 noted, "I understand the recommendation. It suggests blocking the pink thread by linking Zone A and Zone L. It's a variation of my original strategy."
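To make the ranking step of Algorithm 1 concrete, the sketch below ranks community game states by similarity to the player's current board. This is a minimal illustration under our own assumptions, not the paper's implementation: we represent each game state as a set of placed links (e.g., "AJ"), use Jaccard similarity as a stand-in for the game-specific similarity function, and all function and variable names are hypothetical.

```python
# Hypothetical sketch of the ranking step in Algorithm 1.
# A game state is modeled as a set of link identifiers such as "AJ"
# (a signal in Zone A connected to a semaphore in Zone J).

def jaccard_similarity(state_a, state_b):
    """Set-overlap similarity between two game states, in [0, 1].

    A placeholder for the paper's game-specific similarity function.
    """
    if not state_a and not state_b:
        return 1.0  # two empty boards are trivially identical
    return len(state_a & state_b) / len(state_a | state_b)

def recommend_game_states(player_state, community_states, top_k=3):
    """Return the top_k community states most similar to the player's state."""
    ranked = sorted(
        community_states,
        key=lambda state: jaccard_similarity(player_state, state),
        reverse=True,  # most similar first
    )
    return ranked[:top_k]
```

For example, a player who has placed links AJ and KI would be shown community solutions that share those links before ones that diverge entirely, which mirrors how the OPM surfaces nearby alternative strategies.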