Research article · Open access

Factors That Predict K-12 Teachers' Ability to Apply Computational Thinking Skills

Published: 14 January 2024

Abstract

Background and Objective. Teacher assessment research suggests that teachers have good conceptual understanding of CT. However, to model CT-based problem-solving in their classrooms, teachers need to develop the ability to recognize when and how to apply CT skills. Does existing professional development (PD) equip teachers to know when and how to apply CT skills? What factors should PD providers consider while developing trainings for CT application skills?
Method. This retrospective observational study used a binomial regression model to determine what factors predict teachers’ probability of performing well on a CT application skills test.
Participants. Participants of this study were 129 in-service K-12 teachers from a community of practice in India.
Findings. Results show that teachers who have received at least one CT training, who have more teaching experience, and who are currently teaching CT have a higher probability of applying CT skills correctly to problems, irrespective of the subject they teach and their educational backgrounds. However, receiving a higher number of CT PD trainings was a negative predictor of teachers’ performance.
Implications. Implications for school administrators, professional development providers, and researchers are discussed. Teachers need ample opportunity to teach CT within their teaching schedules. Continuous professional development does not necessarily result in improved CT application skills unless careful consideration is given to the pedagogies used and to the resolution of misconceptions that teachers may have developed in prior training. Mixing plugged and unplugged pedagogical approaches may be beneficial to encourage transfer of CT application skills across different types of problems. Finally, there is a need to develop valid and reliable instruments that measure the CT application skills of teachers.

1 Introduction

The term computational thinking (CT) describes a conceptual approach to systematically solving complex problems through efficient processing of information and tasks [48]. In a rapidly changing world where technology has become an integral part of our lives, being able to solve complex real-world problems using technology's computational capabilities is a necessary skill. Wing [72] drew attention to the need to include CT in the list of essential skills like reading, writing, and arithmetic. A new movement arose following Wing's article [72] for the integration of CT in the K-12 curriculum. This movement has taken the perspective of CT as a 21st century skill governed by the constructs of decomposition, pattern recognition, algorithmic thinking, abstraction, data analysis and representation, debugging, and automation [2, 6, 32].
CT skills are commonly associated with their application in the fields of Science, Technology, Engineering, and Mathematics (STEM) [67]. However, Wing [71] asserts that CT is influential in various other fields such as economics and humanities. This resonates with Bundy [16] who states that CT is found to impact a variety of fields from psychology and law to astronomy and geosciences. CT as a fundamental subject-agnostic skill can thus benefit students irrespective of the careers that they choose to pursue.
However, integrating CT across all subjects poses a challenge. Teachers have a significant role in the process of integration of CT in K-12 curriculum [54, 72]. To integrate CT in their subjects, all teachers need to develop pedagogical and content knowledge (PCK) in CT with an ability to present relevant subject-specific examples. This involves not only understanding the concepts in CT but also the ability to transfer the acquired conceptual understanding to identify subject-specific examples [39]. Both conceptual understanding and procedural skills are necessary to successfully transfer to another context [49]. In the context of CT-based problem solving, the author adapts the definition of procedural fluency by the National Council of Teachers of Mathematics [53] to define CT-procedural skills as follows:
The ability to choose from and flexibly employ various strategies to apply the concepts of CT to solve problems. CT-procedural skills are not mere rote application of methods; they are the ability to leverage the conceptual understanding of CT to reason and think through possible strategies of approaching the problem and then successfully apply those CT concepts to solve it.
Teachers should practice applying CT skills to solve problems, effectively gaining CT-procedural skills. They also need to use appropriate teaching methods to teach CT-based problem solving [35]. CT teachers need to adapt CT to the needs of their students and their subject [11, 38] to design CT-integrated questions for students. Pedagogical techniques and CT integration models such as unplugged activities, coding activities, problem solving, maker space activities, and others [8, 13, 17] are introduced to teachers during CT professional development (PD). However, teacher assessment research has focused mainly on conceptual understanding of CT [2, 77]. There is very little research available on teachers’ CT-procedural skills—their ability to use CT skills to solve problems. It is important that teachers’ CT-procedural skills be made a research focus to enable successful transfer to the integration of CT in their classrooms [32, 76].

1.1 Defining CT

Wing's latest work [73] defines CT as “the thought processes involved in formulating a problem and expressing its solution(s) in such a way that a computer/machine or human can effectively carry it out” (p. 1). Aho's [1] definition further clarifies that CT skills are “the thought processes involved in formulating problems so their solutions can be represented as computational steps and algorithms” (p. 832). More recently, based on a synthesis of the literature on computational thinking, Shute et al. [66] defined CT as the conceptual foundation necessary to solve problems with or without computers in such a way that the solution is reusable in different contexts. Across these and various other definitions of CT [36, 44, 63] is the core function of solving complex real-world problems [33, 79]. Wing's definition involves computers yet does not equate CT skills to programming skills. Programming is not an essential skill for teaching computational thinking [19, 48]. The National Research Council (NRC) [54] presented an extensive argument on the meaning of “programming” and states that programming can occur in cases where a computer or a machine is not involved, for example while giving directions to someone. Although programming can become a tool to demonstrate and express CT skills for students in higher grades, it should not get in the way of conceptual understanding of CT [71]. In fact, computational thinking skills have been assumed to have stemmed from unplugged human approaches to problem solving [19]. CT skills, as thought processes involved in problem solving, can be taught in an unplugged (without the use of computers/devices or technology) teaching environment [33]. Unplugged activities have been successfully used for teacher professional development in CT [24] and have been proven to be an effective pedagogical technique to develop CT skills among students [12, 42].
Computer Science Teachers Association (CSTA's) model for systemic change recommends gradual progression from “no-tech” and “low-tech” to “active tech” learning activities [20]. Their K-12 Computer Science Framework advocates the use of both plugged and unplugged activities for CT to improve equity in access, especially in schools that do not have functional computers and internet connection. It also presents research evidence on the benefits of using unplugged activities that include enhanced problem solving, abstraction, and verbal abilities among children [23].
For the integration of unplugged CT activities in primary and secondary school education, grade- and age-appropriate curriculum for CT has been identified as a need [46, 72]. The past decade has seen great progress toward meeting this need. The CSTA and the Association for Computing Machinery, in a joint effort with the International Society for Technology in Education, generated the operational definition for effective integration of CT in the K-12 curriculum [20]. They defined CT as a problem-solving process that involves formulating a problem, analyzing and organizing data logically, using abstractions (models/simulations) to represent data, algorithmic thinking to automate solutions, optimizing to implement the most effective solution, and transferring this problem-solving process to different problems. They also created a CT vocabulary and progression chart that gave examples of how to embed core computational thinking concepts into plugged and unplugged activities across different grades and subjects in K-12. This progress toward successful integration of CT in K-12 classrooms has happened alongside a global policy-level drive to integrate CT in K-12 curricula.

1.2 Integration of CT in K-12 Education

The growing inclusion of CT in school education is evident from the fact that several countries have now included CT in their K-12 curricula—for example, Israel, Russia, U.K., South Africa, New Zealand, Australia, 25 countries from the European Union [9, 32], and, more recently, India. The National Education Policy (NEP) of India outlines curricular integration of “computational and mathematical thinking skills” from the foundational stage (K-2) onwards through innovative and enjoyable activities in K-12 education in India [50]. The role of computational thinking in education has been envisioned not as a way to promote computer science as a career but as an essential 21st century skill [33] that empowers students to be good problem solvers and analytical thinkers across the subjects that they already learn [75]; this vision is echoed by the MHRD [50] in the NEP.

1.2.1 Challenge to CT Integration: Teacher Readiness.

The primary challenge faced in integrating CT in school education is the dearth of CT-trained K-12 teachers [9, 74]. Curriculum priorities in teachers’ busy schedules and a lack of funds limit their opportunities to receive professional development in CT [9, 31]. Existing research in teacher professional development in CT has found that teachers lack confidence in integrating CT into their classroom activities due to low self-efficacy [11]. Teachers from both STEM and non-STEM areas of expertise, as well as primary and secondary school teachers, have little basic understanding of what CT skills are and lack awareness of how to integrate them in the classroom [62], often conflating CT with other scientific data practices [41]. In-service teachers performed better than pre-service teachers when tested for CT conceptual understanding [34], attributed to on-the-job experiences. Studies that found science, math, and computer science teachers have good conceptual understanding of CT also reported that teachers attributed their understanding to regular subject-specific professional development [2, 76]. That PD in science, math, and computer science implicitly results in understanding of computational thinking skills is further evidence of the applicability of CT skills in STEM fields. However, good conceptual understanding needs to be supported by procedural knowledge and skills for successful transfer to a performance context [49].
PD that is focused on the application and transfer of CT skills to their classrooms is lacking [69]. To integrate CT in other subjects, teachers need to be proficient at CT concepts as well as CT processes that can be used for teaching [38, 70]. Integration of CT in various curricular subjects in K-12 education entails applying computational methods and algorithmic problem-solving practices across disciplines [6]. The NRC [54] report recommends that teachers model CT-based application skills in their classrooms and help students understand when and how to apply these skills. To be able to model CT application skills, teachers must first themselves know when and how to apply CT skills. Thus, it is important that focused professional development for CT is offered not only on the conceptual understanding of CT but also for the CT-procedural skills.
K-12 teachers in India face challenges similar to those seen in other parts of the world. Until recently, the subject of computer science was taught in K-12 education in India with its focus mainly on digital literacy [65]. Specific data on PD for in-service K-12 computer science teachers in India is sparse. However, reports on teacher PD for all K-12 teachers indicate that although the recommended guideline is at least 20 hours of training for every teacher per year, only 43% of the country's primary and secondary education teachers receive training [15]. It is quite evident that teacher professional development in the K-12 education system is highly inadequate. A survey conducted by the CSPathshala initiative observed that most teachers from rural India do not have any computing background [65]. Moreover, while 59% of all teachers from urban schools have had some computing exposure in their education, only 10% have a degree in computer science [65]. This reflects a steep shortage of trained CT teachers in K-12 classrooms in India. Raman et al. [56] concur with the general academic belief that the content and pedagogical knowledge of teachers is a crucial factor affecting the quality and learning outcomes of students and assert that this also applies to India. Moreover, according to the Unified District Information System for Education Plus report by the Ministry of Education, India, as of 2022 only 45.8% of schools in India have functional computers and 33.9% of schools have an internet facility [40]. A majority of the schools that have functional computers and internet are private and government-aided schools, which form only 28% of the total number of schools in India [40]. Thus, unplugged CT activities are an apt approach to provide equitable access to CT and CS education to students in India and for ease of facilitation by CT-novice teachers [36].
It is thus evident that good professional development using unplugged CT activities is a need among in-service K-12 teachers in India.
The inclusion of CT in the K-12 curriculum through India's National Education Policy 2020 has catalyzed the need to transform teacher professional development so that it includes CT-procedural skills for teachers. Since existing research in teacher professional development in CT indicates that STEM teachers might have an advantage over non-STEM teachers with respect to their competence in CT, other demographic factors, such as teachers’ educational background, teaching experience, and subject taught, may also affect teachers’ competence in CT-procedural skills. It is important to identify what factors need to be taken into consideration while designing professional development trainings in CT-procedural skills for teachers of different subjects. Do teachers with a computing-related educational background have an advantage? Does teaching a STEM or non-STEM subject predict a teacher's ability to apply CT-procedural skills? Does teaching experience play a role? What factors must professional development providers consider while designing teacher trainings in CT for teachers with various backgrounds and professional experience? This study takes a first step in this direction.

1.3 Purpose and Research Questions

The purpose of this study is to ascertain if existing professional development in CT is a predictor of in-service K-12 teachers’ competence in application of CT-procedural skills. It aims to determine what other factors predict teachers’ performance in CT-procedural skills. The study aims to answer the following research questions:
To what extent do in-service K-12 teachers who have received professional development in CT perform better in CT-procedural skills than those who haven't received any CT professional development?
To what extent are the number of CT professional development trainings taken and the time since the last training predictors of in-service K-12 teachers’ competence in CT-procedural skills?
To what extent are demographic factors such as educational qualification, subject taught, general teaching experience, and CT teaching experience predictors of in-service K-12 teachers’ competence in CT-procedural skills?

2 Methods

2.1 Research Design

In this retrospective observational study [45], quantitative data were collected through a survey instrument. The survey included a CT-procedural skill assessment and demographic information about the teachers. The study uses teachers’ performance on the assessment to determine whether the number of CT professional development trainings received and the time since the last training predict in-service teachers’ CT-procedural skills. The demographic data collected include educational background, subjects taught (STEM or non-STEM), number of years of teaching experience, grade level taught, and others, to determine what other factors may or may not be predictors of their CT-procedural skills. These were treated as covariates in the data analysis.

2.2 Data Source

The research survey comprised two sections. Section 1 was a CT skill transfer test designed and validated by Tang et al. [68] which consisted of six “real-life-like” problems from the Bebras challenge for school students. Each question in this challenge is a multiple-choice question designed to be answered within 3 min.
Section 2 of the research survey consisted of demographic questions including educational background of participants, subjects they teach (STEM or non-STEM), number of years of teaching experience, grade level taught, number of CT trainings taken, and time since the last CT training. These questions were designed to be either multiple choice, Likert scale, or open-ended short answer questions. Section 2 had a total of 23 questions.
Both sections of the survey were designed and delivered using Qualtrics, an online survey platform. A link to the survey was shared with the participants through email.

2.2.1 CT-Procedural Skills Assessment Instrument.

Several tools and instruments have been developed and used over the years to assess CT conceptual knowledge, CT vocabulary, CT attitudes, and programming abilities at various levels such as elementary, intermediate, middle, and high school [44, 59]. For example, the Computational Thinking Scale developed by Korkmaz et al. [44] measures CT perceptions and attitudes, the Computational Thinking Test (CTt) [60] measures conceptual understanding of CT, and environments such as Dr. Scratch allow for automatic formative assessment using instant feedback while engaged in block-based programming. Assessments designed using unplugged CT problems to measure CT procedural skills are few, such as TechCheck [57], and are typically designed for one specific developmental age group (in this case for early childhood education). There is a lack of a commonly accepted standard approach to measure CT-procedural skills, especially designed to assess teachers.
Bebras is an international competition held every year since 2004 with a twofold view to engage learners from elementary, middle, and high school in computational thinking skills and to assess their ability to apply CT to solve problems [25, 26]. The Bebras tasks are annually designed by an international committee of experts from the field of computer science based on meticulously defined criteria [25] and undergo multiple rounds of review before tasks are shortlisted [26, 27]. The tasks consist of unplugged multiple-choice questions designed in five age-appropriate categories—Castors (ages 8–10), Benjamins (ages 10–12), Cadets (ages 12–14), Juniors (ages 14–16), and Seniors (ages 16–18). It is aimed at measuring CT skill application abilities across different types of problems [60] through the cognitive processing of information using concepts from computer science such as data structures, algorithms, and programming constructs in an unplugged environment [25]. Although the Bebras competition is designed for school students, Bebras-based assessments [47] have been successfully tested to assess CT skills among school students, undergraduate students (adults), and pre-service teachers [10, 28, 51]. Bebras tasks were recently found to be one of the most used tools for CT assessment in Europe [5].
Despite the rigorous process of task creation and selection, it has been found that the reliability and validity of the psychometric properties and difficulty level of every question ever asked in Bebras cannot be ascertained [37, 68]. However, the valid and reliable CTt [60], which measures conceptual understanding and recall of CT, correlated positively with some Bebras tasks. Since the CTt is designed to measure summative aptitude and not procedural skills, the correlation was moderate, an expected outcome. Several efforts have been made to validate the Bebras tasks. Lockwood and Mooney [47] created two CT assessment quizzes based on Bebras tasks, ensuring a consistent difficulty level for each test through a threefold process of selection by a panel of teachers, undergraduates, post-graduates, and academics. These tests have been used to measure CT skills in undergraduates in a later study [51]. Tang et al. [68] tested a specific set of six Bebras tasks designed for high school students and found that the tasks had high reliability and high internal consistency, yet inferred that the construct validity of the Bebras tasks needs to be tested with a larger sample size.
The Bebras tasks may not be able to measure the constructs of CT (e.g., decomposition, abstraction, pattern recognition, algorithmic thinking) individually; however, they are able to measure learners’ ability to “holistically” apply a combination of those constructs to complex real-world-like problems, an approach supported by Angeli et al. [3]. Researchers’ consensus on the need for validating the psychometric properties of Bebras tasks notwithstanding, they have been used to measure transfer of CT skills across several studies [5, 10, 28, 47, 51]. It has been agreed that Bebras tasks can be used to measure the ability to “apply” and “analyze” CT skills [60]. Since the six tasks tested by Tang et al. [68] are the only ones known to have been tested for reliability and internal consistency, and they are also suitable for high school–level difficulty, they were believed to be a good choice for the current research. The six tasks from Tang et al. [68] used for this study are shown in Appendix A.

2.3 Participants and Context

Based on the K-12 Computer Science Framework, in 2016, an initiative called CSPathshala was launched to promote CT in K-12 education in India. CSPathshala has created lesson plans and teaching aids for grades 1–8 teachers with unplugged CT activities. These grade- and age-appropriate CT activities are aligned to various subjects that the students already learn at their grade level such as mathematics, science, and languages. They conduct activity-based unplugged training sessions for K-12 teachers in India and provide teaching aids and teaching material that includes lesson plans and presentations as an open source resource [22]. Since CSPathshala conducts regular ongoing teacher training for K-12 teachers, they have created a CT teachers’ community of practice. Some of these K-12 teachers affiliated with CSPathshala have undergone at least one CT training with them while some (typically non-STEM teachers) have not received any training yet due to their school administration's priority and assumption of introducing CT in STEM subjects first. The participants of this study are teachers from this community of practice created by CSPathshala.
CSPathshala is also the official organizer of the Bebras challenge in India with students from more than 900 schools in 20 states of India participating in this challenge in 2022 [7]. These Bebras participants include students from the schools at which several teachers from the community of practice teach. Thus, using the Bebras tasks to assess teachers’ ability to apply CT skills is appropriate for this population as a way to ascertain whether teachers have the skills that they endeavor to prepare their students for.
The survey was sent to all teachers in the community of practice with participation being voluntary. A total of 129 teachers participated in the survey. Of these, 4 teachers declined consent to the use of their survey response for the purpose of research. Of the remaining 125, 7 responses were found to be incomplete. These were excluded from the dataset in the data cleaning process. A total of 118 responses were taken as the cleaned dataset for the purpose of data analysis.
Of these 118 teachers, 109 (92.3%) were female and only 9 (7.6%) were male. A majority of the teachers (97/118) were from private schools, while a few were from government/public schools (15), international schools (2), and government-aided schools (4). Most teachers belonged to semi-urban or urban areas (89), with a few (29) from rural areas of the country. It was found that 97 of the participating teachers taught a STEM subject, and 82 of these also had an educational background in STEM. Table 1 summarizes the participant demographics. All teachers in this community have internet access through a desktop computer, a laptop, or a smartphone.
Table 1: Participant Teachers’ Demographics

Demographic Characteristic    Number (n)    Percentage (%)
Gender
  Female                      109           92.3
  Male                        9             7.6
Type of School
  Private                     97            82.2
  Government / Public         15            12.7
  International               2             1.7
  Government-Aided            4             3.4
Region in India
  Semi-urban / Urban          89            75.4
  Rural                       29            24.6
Subject Taught
  STEM                        97            82.2
  Non-STEM                    21            17.8
Educational Background
  STEM                        82            69.5
  Non-STEM                    36            30.5

2.4 Data Analysis

This section describes how the data were cleaned and coded before the analysis procedures were performed. The open source R statistical software was used to carry out all data analysis for this study. Approximately 40% of the participants had not received any CT training in their teaching career. The statistical analysis was done by splitting the dataset into two: one set of teachers who have had at least one CT training (72 teachers) and one set of teachers who have had no CT training (48 teachers). This allowed for the examination of how various demographic factors affect teachers’ CT application skills with or without training.
The categorical data on the subject taught and on teachers’ educational qualification were each coded into STEM and non-STEM categories to help the researcher study the impact of exposure to a STEM-related subject. Teaching experience had been captured in six categories: less than 1, 1 to 5, 6 to 10, 11 to 15, 16 to 20, and more than 20 years. For ease of predictive regression analysis, the last two categories (16 to 20 and more than 20) were merged, and a feature engineering technique [55] was then used to convert the categorical data to numerical data. Feature engineering involves the transformation of variables from raw data into a format that better represents the variable and reduces complexities or biases within the data [55]. In this study, for each category of the teaching experience variable, a random number generator was used to generate a number lying in the range of that category. For example, for participants whose teaching experience lies in the 1 to 5 years category, the recoded value would be a random number between 1 and 5. This was repeated for all participants in each category. The new coded data (TeachExrecoded) were used for the regression model. For reliability, and to ensure that the fitted model was not an artifact of one particular set of randomly generated values for TeachEx, the recoding and model fitting were repeated 1,000 times to confirm that the model fits and TeachExrecoded is significant each of the 1,000 times.
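The category-to-number recoding described above can be sketched as follows. This is an illustrative Python sketch, not the study's actual R code; the category labels and the upper bound chosen for the merged top category are assumptions.

```python
import random

# Category ranges for teaching experience as described in the text.
# The "16 to 20" and "more than 20" categories were merged by the author;
# an upper bound of 30 years for the merged category is an assumption here.
CATEGORY_RANGES = {
    "less than 1": (0, 1),
    "1 to 5": (1, 5),
    "6 to 10": (6, 10),
    "11 to 15": (11, 15),
    "16 or more": (16, 30),
}

def recode_teaching_experience(category):
    """Draw a random numeric value within the category's range (the feature
    engineering step that produces TeachExrecoded)."""
    low, high = CATEGORY_RANGES[category]
    return random.uniform(low, high)

# The study repeated the recoding (and the model fit) 1,000 times to check
# that results do not hinge on any one random draw.
recoded = [recode_teaching_experience("1 to 5") for _ in range(1000)]
```

Each repetition yields a fresh TeachExrecoded column; refitting the model on every draw guards against a spuriously significant fit from a single lucky sample.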
Teachers were scored +1 for each correct answer on the CT test and 0 for each wrong answer. As there were six questions in total, the dependent variable (the score on the test) is a whole number ranging from 0 to 6. The Wilcoxon rank-sum test, a non-parametric equivalent of the t-test, was performed to check whether teachers who received at least one CT training performed better than those who did not. Since the score is not continuous and is in fact count data, the binomial regression method was used to analyze the two datasets. The analysis initially started with the six predictor variables given in Table 2.
Table 2: Primary Variables for Regression

Variable Name                         Description
Score (dependent variable)            Teachers’ total score on the test
CTTrainings (predictor)               Number of CT trainings taken
Tgap (predictor)                      Time since the last training taken, in years
TeachCT (predictor)                   Do they currently teach CT? [Yes = 1; No = 0]
TeachEx (predictor)                   Years of teaching experience
TeachExrecoded (predictor, recoded)   Years of teaching experience recoded for regression
SubTaught (predictor)                 The subject that they teach [STEM = 1; non-STEM = 0]
EduQual (predictor)                   Educational qualification [STEM = 1; non-STEM = 0]
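As an illustration of how the dependent variable was derived, the scoring scheme described in Section 2.4 can be sketched as follows. This is a Python sketch; the answer key shown is a placeholder, not the actual Bebras key.

```python
# Placeholder answer key for the six multiple-choice Bebras tasks
# (the real key is not published here).
ANSWER_KEY = ["B", "A", "D", "C", "A", "B"]

def score_responses(responses):
    """Score +1 per correct answer, 0 otherwise; returns the 0-6 count
    that serves as the dependent variable."""
    return sum(1 for given, correct in zip(responses, ANSWER_KEY) if given == correct)

def to_binomial_outcome(score, n_items=6):
    """For binomial regression, each score is paired with its complement:
    (successes, failures) out of six trials."""
    return (score, n_items - score)

score = score_responses(["B", "A", "A", "C", "B", "B"])  # 4 of 6 match the key
```

Treating each teacher's score as successes out of six trials is what allows a binomial (logistic-link) regression rather than an ordinary linear model on the raw counts.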

3 Results

The research data were analyzed to understand what factors including professional development in CT predict teachers’ CT-procedural skills. The following sections summarize the findings of the study.

3.1 Existing Professional Development in CT as a Predictor

Since the data were count data, the Wilcoxon rank-sum test, a non-parametric equivalent of the t-test, was conducted on the two datasets: teachers who have had at least one CT training (68 teachers after removal of outliers) and teachers who have had no CT training (48 teachers). This test checked whether teachers who have taken at least one CT training perform better than those who have taken none. At a significance level of 0.05, the location shift between the two groups’ score distributions was found to be greater than 0; that is, teachers who have received at least one CT training perform significantly better than those who have received no CT training. This indicates that existing professional development is a predictor of teachers’ CT-procedural skills. Table 3 summarizes the result.
Table 3: Wilcoxon Rank-Sum Test

Group                     Mean   SD     Median   Test statistic W   p-value
Teachers_CTTrainings      3.89   1.31   4        1868               0.036*
Teachers_no_CTTrainings   3.43   1.25   3
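For readers unfamiliar with the test, the rank-sum statistic behind Table 3 can be computed as in the following illustrative Python sketch (the study presumably used R, whose wilcox.test reports the statistic shifted by n1(n1+1)/2 relative to the raw rank sum computed here).

```python
def rank_sum_W(group_a, group_b):
    """Wilcoxon rank-sum statistic: the sum of the ranks of group_a in the
    pooled sample, with tied values assigned their average rank."""
    pooled = sorted(group_a + group_b)
    ranks = {}
    i = 0
    while i < len(pooled):
        # find the run of tied values starting at position i
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        # average of ranks i+1 .. j for all tied values in this run
        ranks[pooled[i]] = (i + 1 + j) / 2
        i = j
    return sum(ranks[v] for v in group_a)
```

Under the null hypothesis of no location shift, the ranks mix evenly between the groups; a rank sum far above its expected value (as with W = 1868 here) signals that the trained group's scores sit higher in the pooled ordering.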

3.2 Factors That Predict CT Application Skills

As stated earlier, the data were divided into two datasets: teachers who have had at least one CT training (68 teachers after removal of outliers) and teachers who have had no CT training (48 teachers). For the dataset of teachers who have received at least one CT training, at a significance level of p < 0.05, four factors were found to significantly predict teachers’ performance on the CT test: Tgap, TeachCT, TeachExrecoded, and the interaction between Tgap and CTTrainings. Surprisingly, neither SubTaught nor EduQual contributed significantly to the model, and CTTrainings as an individual factor did not contribute significantly either.
For initial analysis, a boxplot of time since the last training taken (Tgap) was plotted. As seen in Figure 1, four outliers were found whose gap since the last training was more than 4 years. Since the CSPathshala initiative to provide CT trainings to K-12 teachers started in 2016, a training gap of more than 4 years at the time of data collection (April–June 2021) could either reflect an error in participants’ data entry through the survey or refer to training not provided by CSPathshala. Thus, the four outliers were removed to maintain consistency in the training attributes of the teachers.
Fig. 1. Boxplot of training gap.
A binomial model was then built (N = 68) with all six predictor variables. SubTaught and EduQual were found to be non-significant at this point. An interaction was observed between CTTrainings and Tgap, so the interaction term was introduced into the model. When teachers’ scores were plotted against Tgap, the scatterplot showed a non-linear relationship between the two. Based on the observed curve of the plot, the square root of Tgap was introduced into the model.
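As a concrete illustration of this model-building step, the sketch below shows how one row of the design matrix might be assembled with the square-root term and the Tgap × CTTrainings interaction. The variable names and the 0/1 coding of TeachCT are assumptions for illustration, not the study's actual code:

```python
import math

def design_row(tgap, teach_ct, teach_ex, ct_trainings):
    """One design-matrix row: sqrt(Tgap), Tgap, TeachCT,
    TeachEx_recoded, and the Tgap x CTTrainings interaction."""
    return [
        math.sqrt(tgap),      # non-linear term suggested by the scatterplot
        tgap,                 # linear Tgap term
        teach_ct,             # 1 if currently teaching CT, else 0 (assumed coding)
        teach_ex,             # recoded teaching experience
        tgap * ct_trainings,  # interaction term
    ]

# A teacher with a 4-year gap, teaching CT, 10 years' experience, 2 trainings.
print(design_row(4.0, 1, 10, 2))  # [2.0, 4.0, 1, 10, 8.0]
```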
On running the binomial regression again, the following model was found as the best fit for this dataset:
\begin{align*}\ln \left( \frac{p_c}{p_i} \right) &= -1.54\sqrt{Tgap} + 0.79\,Tgap + 0.99\,TeachCT + 0.10\,TeachEx_{recoded} \\ &\quad - 0.14\,Tgap \times CTTrainings, \end{align*}
where
\({p}_c = the\ probability\ of\ the\ teacher\ answering\ a\ question\ correctly,\)
\({p}_i = the\ probability\ of\ the\ teacher\ answering\ a\ question\ incorrectly.\)
Table 4 presents the results of the regression with the coefficients and p-values.
Table 4: Binomial Regression for Teachers Who Received CT Training

| Predictor Variable | Coefficient estimate | Standard Error | p-value (α = 0.05) |
| --- | --- | --- | --- |
| √Tgap | –1.54 | 0.631 | 0.0143* |
| Tgap | 0.79 | 0.356 | 0.0249* |
| TeachCT | 0.99 | 0.259 | 0.00012*** |
| TeachExrecoded | 0.10 | 0.020 | 9.698e-07*** |
| Tgap:CTTrainings | –0.14 | 0.055 | 0.0107* |

Note: \(^{*}\)indicates significance at p < 0.05; \(^{***}\)indicates significance at p < 0.001.
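The fitted equation can be turned back into a predicted probability by inverting the log-odds. The sketch below applies the Table 4 coefficients as reported (the printed model has no intercept term); it is an illustration of the link function, not the authors' implementation:

```python
import math

COEF = {"sqrt_tgap": -1.54, "tgap": 0.79, "teach_ct": 0.99,
        "teach_ex": 0.10, "tgap_x_trainings": -0.14}

def p_correct(tgap, teach_ct, teach_ex, ct_trainings):
    """Invert the log-odds of the fitted model: p = 1 / (1 + exp(-eta))."""
    eta = (COEF["sqrt_tgap"] * math.sqrt(tgap)
           + COEF["tgap"] * tgap
           + COEF["teach_ct"] * teach_ct
           + COEF["teach_ex"] * teach_ex
           + COEF["tgap_x_trainings"] * tgap * ct_trainings)
    return 1.0 / (1.0 + math.exp(-eta))

# Holding everything else fixed, teaching CT raises the predicted probability.
print(p_correct(1.0, 1, 10, 1) > p_correct(1.0, 0, 10, 1))  # True
```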
A chi-squared test was conducted to verify that this model is a good fit and is better than the null model, i.e., the model with none of the predictor variables (p = 1.964e-12). Figure 2 visually demonstrates the relationship found between the predictor variables and teacher performance. Multicollinearity was suspected due to the high standard errors in the significant model. However, on plotting a correlation matrix and testing for correlation at the 95% confidence level, none of the variables were found to have high correlation (see Table 5), and all correlations except that between TeachExrecoded and TeachCT tested non-significant. TeachExrecoded and TeachCT were moderately correlated, with a significant correlation test (r = –0.49; p < 0.001). This can be explained by the likelihood that whether teachers currently teach CT is influenced by how long they have been teaching. To check whether this moderate correlation affects the prediction model, the observed probability of a correct answer was plotted against the model’s predicted probability. All predictions were within 1 standard deviation of the observed value, indicating that the model is still a good fit (see Figure 3).
Fig. 2. Fitted model for teachers who have received CT training.
Table 5: Correlation Matrix for Variables in the Regression Model

| | Tgap | TeachCT | TeachExrecoded | CTTrainings |
| --- | --- | --- | --- | --- |
| Tgap | 1.0 | | | |
| TeachCT | –0.0045 | 1.0 | | |
| TeachExrecoded | 0.08 | –0.491* | 1.0 | |
| CTTrainings | –0.189 | 0.183 | 0.011 | 1.0 |

Note: \(^{*}\)indicates significance at p < 0.05.
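The entries of Table 5 are Pearson correlation coefficients. A minimal stdlib sketch of the computation, using hypothetical values chosen to illustrate a negative TeachExrecoded–TeachCT correlation (not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical teaching-experience vs. teaches-CT values (negative association).
teach_ex = [2, 5, 8, 12, 20]
teach_ct = [1, 1, 1, 0, 0]
print(round(pearson_r(teach_ex, teach_ct), 2))  # -0.86
```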
Fig. 3. Observed vs. predicted probability of getting a correct answer.

3.3 Teachers Who Received No CT Training

For the dataset of teachers with no CT training (N = 48), at a significance level of p < 0.05, only one factor was found to significantly explain teachers’ performance on the test: TeachExrecoded. No other predictor variables or interactions of predictor variables were found to have a significant impact on the model. Since these teachers did not receive any CT training, the variables Tgap and CTTrainings were not applicable to this set of participants. SubTaught and EduQual both proved insignificant in their contribution to the model of teachers’ performance on the test.
The following model was found to be the best fit for this dataset. Table 6 summarizes the result.
\begin{equation*}\ln \left( \frac{p_c}{p_i} \right) = 0.028\,TeachEx_{recoded}, \end{equation*}
where
\({p}_c = the\ probability\ of\ the\ teacher\ answering\ a\ question\ correctly,\)
\({p}_i = the\ probability\ of\ the\ teacher\ answering\ a\ question\ incorrectly.\)

Table 6: Binomial Regression for Teachers Who Have No CT Training

| Predictor Variable | Coefficient estimate | Standard Error | p-value (α = 0.05) |
| --- | --- | --- | --- |
| TeachExrecoded | 0.028 | 0.012 | 0.025* |

Note: \(^{*}\)indicates significance at p < 0.05.
The chi-squared test was performed on this model. It was found that this model is a good fit and is better than the null model (p = 0.0243).
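The chi-squared comparison against the null model is a likelihood-ratio test: twice the gain in log-likelihood of the fitted model over the null, referred to a chi-squared distribution. The toy sketch below computes the statistic for a single binomial proportion (hypothetical counts; the p-value lookup, normally handled by a statistics package, is replaced by the df = 1 critical value):

```python
import math

def binom_loglik(p, successes, trials):
    """Bernoulli log-likelihood (the combinatorial constant is omitted
    because it cancels in the likelihood-ratio statistic)."""
    return successes * math.log(p) + (trials - successes) * math.log(1 - p)

# Toy data: 40 correct answers out of 60.
# Null model: fixed p = 0.5. Alternative: p at its MLE, 40/60.
ll_null = binom_loglik(0.5, 40, 60)
ll_alt = binom_loglik(40 / 60, 40, 60)
lr_stat = 2 * (ll_alt - ll_null)

# Chi-squared critical value at alpha = 0.05 with 1 degree of freedom.
print(lr_stat > 3.841)  # True: the richer model fits significantly better
```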
This finding shows that teaching experience was a positive predictor of teachers’ ability to apply CT skills even when they had not received any CT training. However, neither educational qualification nor subject taught, whether STEM or non-STEM, predicts teachers’ CT-procedural skills. Figure 4 illustrates the relationship between the predictor variable and teacher performance.
Fig. 4. Fitted model for teachers who have no CT training.

3.4 Understanding the Results

3.4.1 RQ1: Do In-Service K-12 Teachers Who Have Received Professional Development in CT Perform Better in Application of CT-Procedural Skills Than Those Who Have Not Received Any CT Professional Development?

The results of the Wilcoxon rank-sum test (Table 3) indicate that the teachers who received at least one CT training performed significantly better than those who received no CT training. Hence it can be inferred that having at least one CT training has a positive impact on teachers’ ability to apply CT skills. This finding also suggests that existing professional development plays a role in preparing teachers for CT-procedural skills.

3.4.2 RQ2: Are the Number of CT Professional Developments Taken and the Time since the Last Training Predictors of In-Service K-12 Teachers’ Competence in Applying CT-Procedural Skills?

To understand the fitted model for teachers who have received CT training (Figure 2), consider a case where Teacher A has taken one more CT training than Teacher B, while all other predictor variables in the model remain the same for both teachers. According to the model, the probability of Teacher A answering a question correctly on the test is lower than that of Teacher B. Thus, given all other predictor variables remain the same, as the number of CT trainings taken by a teacher increased, the teacher's probability of answering a question correctly decreased.
Conducting the same analysis for Tgap shows that, with all other variables held constant, as the gap in years since the last training decreased, the teacher's probability of answering the question correctly also decreased. By this result, the longer the gap since the last training, the better the teacher was likely to perform.
Thus, contrary to the researcher's expectation, the number of CT trainings (CTTrainings) negatively predicted teachers’ ability to apply CT skills, while the gap since the last training taken (Tgap) was a positive predictor. This implies that taking a higher number of CT trainings may not necessarily benefit teachers’ performance, while a longer training gap might. Simply offering recurring continuous CT professional development trainings may therefore not be a successful strategy for schools. The implications of this finding are discussed in further detail in the Discussion section.

3.4.3 RQ3: Do Demographic Factors Such as Educational Qualification, Subject Taught, General Teaching Experience, and CT Teaching Experience Affect the Prediction of In-Service K-12 Teachers’ Competence in Applying CT-Procedural Skills?

A similar analysis can be done for other demographic factors. Consider a case where the teaching experience of Teacher A is 1 year more than that of Teacher B, while all other predictor variables remain the same for the two teachers. According to the model, the odds of Teacher A answering a question correctly are about 10% higher than those of Teacher B (e^0.10 ≈ 1.10). Thus, given all other predictor variables in the model remain the same, as teaching experience increases, the probability that the teacher will answer a question correctly also increases. This holds for teachers who have not received any CT training as well. Figure 4 indicates that teaching experience is a positive predictor of teachers’ ability to apply CT skills even when they have not received any CT training.
For the predictor variable TeachCT, consider a case where Teacher A teaches CT and Teacher B does not, while all other predictor variables in the model remain the same for both teachers. The odds of Teacher A answering a question correctly are about 169% higher than those of Teacher B (e^0.99 ≈ 2.69). Thus, teaching CT positively predicts teachers’ performance.
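These percentage figures follow from exponentiating the logistic regression coefficients: exp(β) is the multiplicative change in the odds of a correct answer per one-unit increase in the predictor. A quick check using the Table 4 estimates:

```python
import math

def odds_ratio_pct(beta):
    """Percentage change in the odds per one-unit increase in a predictor."""
    return (math.exp(beta) - 1) * 100

print(round(odds_ratio_pct(0.10), 1))  # TeachExrecoded: ~10.5% higher odds
print(round(odds_ratio_pct(0.99), 1))  # TeachCT: ~169.1% higher odds
```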

4 Discussion

This section synthesizes the findings of this study to understand whether existing professional development is a predictor of teachers’ ability to use CT-procedural skills to solve problems and identifies other factors that might predict these skills. The findings show that teachers who had received at least one CT professional development training performed significantly better on the test than those who had received no CT training. The probability of answering a question correctly for teachers who have received at least one CT training is predicted by the variables Tgap, TeachCT, TeachExrecoded, and the interaction of CTTrainings and Tgap.
Overall teaching experience positively predicts teachers’ procedural skills in CT, and a similar effect holds for teachers who have not received any CT training: the more teaching experience, the higher the probability that the teacher performs well on CT-procedural skills. This finding extends Günbatar's [34] research by showing that teaching experience not only improves conceptual understanding but also plays a positive role in predicting teachers’ ability to apply CT skills. Prior studies with in-service STEM teachers attribute their CT conceptual understanding to subject-specific professional development [2, 76]. However, this study offers new evidence that teaching experience helps not only STEM teachers but also non-STEM teachers in applying CT skills. Moreover, neither the subject taught (STEM or non-STEM) nor the educational qualification (STEM or non-STEM) of the teachers predicted their probability of answering correctly on the test. Therefore, there are perhaps factors associated with teaching experience, other than subject-specific professional development, that enhance teachers’ ability to apply CT skills.
Teaching CT was a positive predictor of teachers’ probability of answering the question correctly. This may be attributed to the fact that when teaching CT, teachers themselves engage in applying CT skills to model problem-solving for the students. Kong et al. [43] have also highlighted that having the opportunity to practice teaching CT in their classroom for a prolonged period improved in-service teachers’ content knowledge and technological pedagogical content knowledge in CT. Rich et al. [58] observed that after a year-long professional development, in-service teachers attributed their increased confidence in teaching CT not only to the PD, but also to their actual experience in teaching CT. Teaching CT and working with students in the classroom has been shown to help pre-service teachers strengthen their understanding of CT concepts [18] and is believed to help teachers see firsthand the value in CT integration [46]. Taken together with the results of this study, it can be concluded that teaching CT helps improve teachers’ conceptual knowledge, their technological and pedagogical knowledge, their ability to apply CT skills, and their confidence to teach CT. It is widely accepted that PCK is essential for integration of CT in classrooms [35, 39, 48]. This finding indicates that PCK might also contribute to teachers’ ability to apply CT skills.
For the sample of this study, a higher number of CT trainings had a negative impact on teachers’ probability of success on the test. This indicates that although taking CT training was significantly better than taking none, a higher number of CT trainings was in fact counter-productive for their CT-skill application abilities. Understanding this phenomenon requires a closer look at the nature of the training given to the teachers and its possible impact on teachers’ CT-procedural skills. Several CT researchers have reported that teachers develop misconceptions about CT during or after professional development and that these are reflected in their teaching [29, 30, 52, 61]. Teachers who have not received any professional development are also known to hold inherent misconceptions about CT from the partial awareness created by news, media, and other awareness initiatives [21]. Common misconceptions and issues include not understanding how to use or apply a concept despite understanding its meaning (e.g., understanding the term “algorithm” but not how to use it in a classroom context) and prior knowledge from a different field overshadowing CT concepts (e.g., the term “decompose” also means “to rot”) [52]. This may result in teachers rote learning the process or rule of an activity to demonstrate it to their students rather than understanding the underlying principle of application (e.g., in a binary numbers activity where black and white cards represent 0 and 1, teachers get fixated on which color is 0 and which is 1) [29]. Such misconceptions, if accumulated over time, may diminish teachers’ ability to apply CT skills in different contexts and to successfully integrate CT in their teaching. We know that the trainings provided to the teachers in this study through CSPathshala involved unplugged activities.
It might be the case that the trainings failed to clarify the misconceptions that teachers developed over time and with every new training, teachers may have accumulated further misconceptions, which may be why a high number of trainings led to a negative impact on the probability of teachers’ success. It is suggested that continuous professional development include periodic evaluation that aims to capture misconceptions and misunderstandings that teachers may have developed and resolution to such misconceptions must be provided in the upcoming trainings. Ongoing training can also include explicit discussions on common misconceptions found among teachers, use examples of concepts being applied to different contexts, and encourage teachers to actively participate in a community of practice for CT to ensure sufficient practical experience in application of CT skills is gained [52].
The nature of the PCK provided in these trainings may also have elicited other tensions around teaching practices and assessments. For example, as Brennan [14] notes, constructionist learning activities used in CT trainings are at odds with the reality of teaching in K-12 classrooms. More recently, Yeni et al. [78] found that pre-service teachers were unable to integrate CT concepts, computing tools, subject content, and pedagogy into a seamless CT experience due to a disconnect between the CT trainings and the teaching theories and methods taught in their other courses. Hence, the unplugged teaching approach used in the trainings may have been at odds with teachers’ regular classroom teaching methods and, in turn, negatively impacted their ability to apply CT skills. Although verification of this hypothesis is beyond the scope of this study, this phenomenon points to the need to prepare teachers to transfer CT skills across various types and forms of problems irrespective of the activities and pedagogies involved. Experts recommend that CT professional development use scaffolding and learner-centered approaches to ease teachers’ transition into the new topic and to encourage deep learning and higher order thinking [46, 64]. Angeli et al. [3] suggest authentic learning approaches using real-world scenarios to enable better contextual understanding of CT application. There is also perhaps a need for a mix of both plugged and unplugged activities in CT professional development training to enhance CT skill transfer across various types and forms of problem solving.
Hickmott and Prieto-Rodriguez [35] state that the measures used to assess teachers may not be appropriate for the nature of the professional development taken. This may cause further tensions around the fit between the assessment instrument and the PCK developed in the professional development. The test used in this study is a validated set of Bebras tasks. Bebras is an annual international challenge aimed at measuring the ability to transfer CT skills to different types of problems [60]. Specifically, this study used a set of six Bebras tasks that have been tested for reliability and internal consistency [68]. However, a more recent study that conducted a confirmatory factor analysis on Bebras tasks could not find evidence that they truly assess the distinct constructs of decomposition, abstraction, algorithmic thinking, and generalization [4]. Thus, the suitability of the assessment may have affected the kind of impact that the number of trainings had on teachers’ probability of answering correctly on this test. This also highlights the need to develop and validate instruments that measure teachers’ CT-procedural skills in an unplugged setting.
In summary, teachers who have received at least one CT training, who have a higher teaching experience, and are currently teaching CT will have a higher probability of applying CT skills correctly to problems irrespective of the subject they teach and their educational backgrounds.

4.1 Implications for Professional Development Providers

This study found that teaching CT increased teachers’ probability of performing well on CT application skills. Thus, along with professional development, it is important to provide teachers with ample opportunities to teach CT in their classrooms by making specific time allocations for CT activities within their teaching schedules.
The findings of this research indicate that recurring trainings in CT, if not carefully designed, do not necessarily lead to improved skill development in CT and may in fact affect teachers’ performance negatively. Thus, schools that engage their teachers in continuous professional development in CT over one or more academic years need to carefully design follow-up trainings, either aligning them with teachers’ existing teaching practices or providing an appropriate transition into a new form of PCK. Careful design of follow-up trainings that include both plugged and unplugged pedagogies might be crucial to prepare teachers for CT integration.
It was also found that the subject taught by the teacher and teachers’ educational qualification (STEM or non-STEM) did not affect their probability of performing well on application of CT skills. Thus, both STEM and non-STEM teachers can perform equally well in CT skills if the appropriate training is provided and if they get ample opportunity to teach CT. As an inference from these findings, instructional designers should take the following factors into account while developing trainings depending on the context of the teachers and their teaching practices:
(1) Have the teachers had any prior trainings?
(2) Are there any misconceptions or misunderstandings about CT based on prior trainings?
(3) Do the teachers currently teach CT? Will they have enough opportunity to implement what they learn in their classrooms?
(4) What teaching styles do teachers use in their classroom teaching?
(5) Does the PCK of the training suit teachers’ teaching style? If not, how can the professional development training be designed to provide an appropriate transition into a new PCK?
(6) How can the PD be designed to provide opportunities for transfer of CT application skills to various contexts? Consider including a mix of plugged and unplugged activities in the professional development programs to promote skill transfer.
(7) What is the teaching experience of the teachers? How will that impact teachers’ self-efficacy?

4.2 Future Research Implications

Teaching experience had a positive impact on the probability of teachers’ performance on CT-procedural skills for all teachers irrespective of their educational background, subject taught, or whether they have received any CT trainings or not. Further research needs to be conducted to ascertain how teaching experience affects teachers’ CT skills.
There is a need to examine whether there are tensions around the nature of activities used in the professional development vis-à-vis their regular teaching practices in K-12 classrooms and how these might affect teachers’ self-efficacy in CT application skills. Studies that consider the nature of the continuous professional development provided and its impact on teachers’ CT-procedural skills need to be conducted.
Last, the lack of appropriate measures/instruments to assess CT skills persists as researchers strive to find valid and reliable methods to assess CT skills. Most CT assessment instruments are either self-report scales or measure conceptual understanding in CT. There are very few that measure CT application skills. There is also a need to develop instruments that truly assess the different constructs of CT for teachers.

4.3 Limitations

The primary limitation of this study is the use of only six questions in the test given to the teachers, limiting the possible score outcomes to count data ranging from 0 to 6. This was done to reduce the time participants spent taking the survey. However, the small number of questions may have reduced the power of the statistical tests used to analyze the results; future studies should perhaps use a longer test so that the scoring range is larger. Although Bebras tasks are widely used by researchers to assess CT-procedural skills, they have their own limitations. The lack of validation of the psychometric properties of the Bebras tasks for measuring individual constructs, such as decomposition, pattern recognition, and algorithmic thinking, raises concerns about how accurately they measure CT skills and how many tasks are needed to measure holistic CT application skills. Additionally, a survey-based approach using Bebras tasks does not allow for recording participants’ process data while they solve the tasks, which is crucial for observing the CT constructs being applied. Future studies that use Bebras tasks for CT skills assessment may therefore record process data using techniques such as think-aloud interviews and process log files.
The data were found to be inherently noisy as the standard errors for the regression model were high and yet this could not be explained by multicollinearity. There may be demographic factors other than those measured in this study that affect the predictor variables and directly or indirectly also affect teachers’ ability to apply CT skills that this study has not been able to capture. Thus, the study needs to be repeated in various settings and more factors that might affect teachers’ application skills should be measured while doing so to establish a better model for predicting teacher performance.

5 Conclusion

This research provides insight into factors that might predict teachers’ ability to apply CT skills and identifies essential considerations for designing teacher professional development. Teachers’ ability to recognize when and how to apply the CT skills taught in professional development trainings is likely to translate directly into better CT integration in their classroom teaching. Professional development providers need to consider these factors while designing continuous professional development for CT. This study establishes that CT professional development improves teachers’ ability to apply CT skills compared to teachers with no CT training. However, it also finds that taking a higher number of professional development trainings may not necessarily be beneficial and may in fact negatively impact teachers’ ability to apply CT skills if the training is not appropriately designed. Thus, simply providing continuous professional development in CT may not be an appropriate strategy for training in-service teachers to integrate CT in classrooms. Further planning is necessary around the nature of the professional development provided to teachers vis-à-vis their classroom practices and existing PCK.
It was found that teaching experience and teaching CT specifically have a positive impact on teachers’ ability to apply CT skills. School administrators and teachers need to take this into account while planning teaching schedules and curriculum plans. Care needs to be taken to provide teachers with time and resources to be able to gain experience in teaching CT.
This study also questions the suitability of Bebras tasks to assess the ability to apply CT skills among teachers. There is a need for the development of appropriate CT assessment instruments for teachers that assess not only conceptual understanding but also the ability to identify when and how to apply CT skills, and their ability to engage in problem-solving using CT.

A Research Survey - Section 1

1.
Mother Beaver bought ten balloons of three colours with the numbers as shown:
0 – Green, 1 – Yellow, 2 – Red, 3 – Green, 4 – Yellow, 5 – Red, ... etc.
If Mother Beaver was born in the year 1983, can you pick up the balloons in the correct order to show Mother Beaver's year of birth?
A. Yellow, Red, Green, Red
B. Yellow, Green, Green, Green
C. Yellow, Red, Red, Green
D. Yellow, Green, Red, Green
2. Girish was playing in the woods. He used nuts and sticks to create four nice animals. His sister managed to bend the animals around without removing any of the sticks. Girish was very upset, because he really loved the figure of a dog.
Question: Which of the following figures can be bent back to make the figure of the dog again?
3. Beavers build rafts. For river traffic control, all rafts should be registered. This means that each raft should have a license plate with unique text. The text is made up of letters and digits using the diagram below. The license must start with the letter B and end with the digit 0 or 1.
Which one of these plates cannot be registered?
A. BB0100
B. BSA001
C. BE0S01
D. BBB100
4. Sara the beaver loves to draw stars. She has devised a system for labelling her stars according to their shape. She uses two numbers:
A number of dots for the star.
A number indicating if a line from a dot is drawn to the nearest dot (the number is 1), the second closest dot (the number is 2), etc.
Here are four examples of Sara's labelling system.
Question:
How would Sara label the following star?
A. 10:3
B. 9:4
C. 10:4
D. 10:5
5. Three spotlights are used to light the theatre stage in the beavers' forest, a red one, a green one and a blue one. The colour of the stage depends on which of the three spotlights are turned on. This table shows the possible combinations of colours.
From the beginning of the show, the lights will be switched on and off in this pattern:
The red light repeats the sequence: two minutes off, two minutes on.
The green light repeats the sequence: one minute off, one minute on.
The blue light repeats the sequence: four minutes on, four minutes off.
Question:
What will the colour of the stage be in the first 4 minutes of the show? Drag the correct colour onto the block of the minute.
6. Beaver the magician can convert objects into new objects. He can convert:
Two clovers into a coin
A coin and two clovers into a ruby
A ruby and a clover into a crown
A coin, a ruby, and a crown into a kitten.
After an object has been converted into another object, it disappears immediately.
Question:
How many clovers does Beaver the magician need to create one kitten?
A. 5
B. 10
C. 11
D. 12

Acknowledgement

I am grateful to the Bebras Community (bebras.org) for making the tasks from the Bebras Challenge available under Creative Commons License and to Dr. Valentina Dagiene for approving the use of the tasks in this research study.
I also thank Ksheera Sagar from Purdue University for his support and guidance on the statistical data analysis for this study.

Ethical Declaration

I confirm that this research meets ethical guidelines and adheres to the legal requirements of the study country. This research has been approved by the Institutional Review Board at Purdue University. The approval number is IRB-2020-1817.

References

[1]
A. V. Aho. 2012. Computation and computational thinking. Comput. J. 55, 7 (July 2012), 832–835. DOI:
[2]
Abdulaziz A. Alfayez and Judy Lambert. 2019. Exploring Saudi computer science teachers’ conceptual mastery level of computational thinking skills. Comput. Schools 36, 3 (2019), 143–166. DOI:
[3]
Charoula Angeli, Joke Voogt, Andrew Fluck, Mary Webb, Margaret Cox, Joyce Malyn-Smith, and Jason Zagami. 2016. A K-6 computational thinking curriculum framework: Implications for teacher knowledge. Educ. Technol. Soc. 19, 3 (2016), 47–57.
[4]
Ana Liz Souto O. Araujo, Wilkerson L. Andrade, Dalton D. Serey Guerrero, and Monilly Ramos Araujo Melo. 2019. How many abilities can we measure in computational thinking?: A study on bebras challenge. In Proceedings of the 50th ACM Technical Symposium on Computer Science Education. ACM, New York, NY, 545–551. DOI:
[5]
Masiar Babazadeh and Lucio Negrini. 2022. How is computational thinking assessed in European K-12 education? A systematic review. Int. J. Comput. Sci. Educ. Schools 5, 4 (Oct. 2022), 3–19. DOI:
[6]
Valerie Barr and Chris Stephenson. 2011. Bringing computational thinking to K-12: What is involved and what is the role of the computer science education community? ACM Inroads 2, 1 (2011), 48–54. DOI:
[7]
Bebras India. 2022. Bebras India Computational Thinking Challenge. Retrieved June 15, 2023 from https://www.bebras.in/
[8]
Quentin Biddy, A. Gendreau Chakarov, J. Bush, C. Hennessy Elliott, J. Jacobs, M. Recker, T. Sumner, and W. Penuel. 2021. A professional development model to integrate computational thinking into middle school science through codesigned storylines. Contemp. Issues Technol. Teacher Educ. 21, 1 (2021), 53–96.
[9]
Stefania Bocconi, Augusto Chioccariello, Panagiotis Kampylis, Valentina Dagiene, Patricia Wastiau, Katja Engelhardt, Jeffrey Earp, Milena Horvath, Egle Jasute, Chiara Malagoli, Vaida Masiulionyte-Dagiene, and Gabriele Stupuriene. 2022. Reviewing Computational Thinking in Compulsory Education: State of Play and Practices from Computing Education. Publications Office of the European Union, LU. Retrieved June 9, 2023 from.
[10]
Kay-Dennis Boom, Matt Bower, Jens Siemon, and Amaël Arguel. 2022. Relationships between computational thinking and the quality of computer programs. Educ. Inf. Technol. 27, 6 (Jul. 2022), 8289–8310. DOI:
[11]
Matt Bower, Leigh N. Wood, Jennifer W.M. Lai, Cathie Howe, and Raymond Lister. 2017. Improving the computational thinking pedagogical capabilities of school teachers. Austr. J. Teach. Educ. 42, 3 (2017), 53–72. DOI:
[12]
Christian P. Brackmann, Marcos Román-González, Gregorio Robles, Jesús Moreno-León, Ana Casali, and Dante Barone. 2017. Development of computational thinking skills through unplugged activities in primary school. In Proceedings of the 12th Workshop on Primary and Secondary Computing Education, ACM, New York, NY, 65–72. DOI:
[13]
K. Brennan and M. Resnick. 2012. New frameworks for studying and assessing the development of computational thinking. In Proceedings of the Annual Meeting of the American Educational Research Association.
[14]
Karen Brennan. 2015. Beyond technocentrism: Supporting constructionism in the classroom. Construct. Found. 10, 3 (2015), 289–304.
[15]
British Council India. 2019. The School Education System in India. Retrieved from https://www.britishcouncil.in/sites/default/files/school_education_system_in_india_report_2019_final_web.pdf
[16]
Alan Bundy. 2007. Computational thinking is pervasive. J. Sci. Pract. Comput. 1, 2 (2007), 67–69.
[17]
Leonard Busuttil and Marquita Formosa. 2020. Teaching computing without computers: Unplugged computing as a pedagogical strategy. Inf. Educ. 19, 4 (Dec. 2020), 569–587. DOI:
[18]
Deirdre Butler and Margaret Leahy. 2021. Developing preservice teachers’ understanding of computational thinking: A constructionist approach. Br. J. Educ. Technol. 52, 3 (May 2021), 1060–1077. DOI:
[19]
Elisa Nadire Caeli and Aman Yadav. 2020. Unplugged approaches to computational thinking: A historical perspective. TechTrends 64, 1 (2020), 29–36. DOI:
[20]
Leslie Conery, Chris Stephenson, David Barr, Valerie Barr, John Harrison, Jayne James, and Carolyn Sykora. 2011. Computational Thinking Leadership Toolkit—ISTE. Retrieved November 17, 2022 from https://www.yumpu.com/en/document/read/43967234/computational-thinking-leadership-toolkit-iste
[21]
Isabella Corradini, Michael Lodi, and Enrico Nardelli. 2017. Conceptions and misconceptions about computational thinking among Italian primary school teachers. In Proceedings of the ACM Conference on International Computing Education Research, ACM, New York, NY, 136–144. DOI:
[22]
CSPathshala. 2016. CSPathshala Curriculum. Retrieved June 15, 2023 from https://cspathshala.org/curriculum/
[23]
CSTA. 2016. K-12 Computer Science Framework. Retrieved from https://dl.acm.org/citation.cfm?id=3079760
[24]
Paul Curzon, Peter W. McOwan, Nicola Plant, and Laura R. Meagher. 2014. Introducing teachers to computational thinking using unplugged storytelling. In Proceedings of the 9th Workshop in Primary and Secondary Computing Education, ACM, New York, NY, 89–92. DOI:
[25]
Valentina Dagiene and Vladimiras Dolgopolovas. 2022. Short tasks for scaffolding computational thinking by the global Bebras challenge. Mathematics 10, 17 (Sep. 2022), 3194. DOI:
[26]
Valentina Dagiene and Gerald Futschek. 2008. Bebras international contest on informatics and computer literacy: Criteria for good tasks. In Lecture Notes in Computer Science, Vol. 5090 (Springer, Berlin, 2008), 19–30. DOI:
[27]
Valentina Dagienė and Gabrielė Stupurienė. 2016. Bebras—A sustainable community building model for the concept-based learning of informatics and computational thinking. Inf. Educ. 15, 1 (Apr. 2016), 25–44. DOI:
[28]
Havva Delal and Diler Oner. 2020. Developing middle school students’ computational thinking skills using unplugged computing activities. Inf. Educ. 19, 1 (2020), 1–13. DOI:
[29]
Caitlin Duncan, Tim Bell, and James Atlas. 2017. What do the Teachers Think?: Introducing computational thinking in the primary school curriculum. In Proceedings of the 19th Australasian Computing Education Conference, ACM, New York, NY, 65–74. DOI:
[30]
Georgios Fessakis and Stavroula Prantsoudi. 2019. Computer science teachers’ perceptions, beliefs and attitudes on computational thinking in Greece. Inf. Educ. 18, 2 (Oct. 2019), 227–258. DOI:
[31]
Google Inc. and Gallup Inc. 2016. Trends in the State of Computer Science in U.S. K-12 Schools. Retrieved from https://services.google.com/fh/files/misc/trends-in-the-state-of-computer-science-report.pdf
[32]
Shuchi Grover and Roy Pea. 2013. Computational thinking in K-12: A review of the state of the field. Educ. Research. 42, 1 (2013), 38–43. DOI:
[33]
Shuchi Grover and Roy Pea. 2018. Computational thinking: A competency whose time has come. In Computer Science Education: Perspectives on Teaching and Learning in School, Sue Sentance, Erik Barendsen and S. Carsten (Eds.). Bloomsbury Publishing, London, 19–37.
[34]
Mustafa Serkan Günbatar. 2019. Computational thinking within the context of professional life: Change in CT skill from the viewpoint of teachers. Educ. Inf. Technol. 24, 5 (2019), 2629–2652. DOI:
[35]
Daniel Hickmott and Elena Prieto-Rodriguez. 2018. To assess or not to assess: Tensions negotiated in six years of teaching teachers about computational thinking. Inf. Educ. 17, 2 (Oct. 2018), 229–244. DOI:
[36]
Wendy Huang and Chee-Kit Looi. 2021. A critical review of literature on “unplugged” pedagogies in K-12 computer science and computational thinking education. Comput. Sci. Educ. 31, 1 (Jan. 2021), 83–111. DOI:
[37]
Cruz Izu, Claudio Mirolo, Amber Settle, Linda Mannila, and Gabriele Stupuriene. 2017. Exploring Bebras tasks content and performance: A multinational study. Inf. Educ. 16, 1 (2017), 39–59. DOI:
[38]
Ugur Kale, Mete Akcaoglu, Theresa Cullen, Debbie Goh, Leah Devine, Nathan Calvert, and Kara Grise. 2018. Computational What? Relating computational thinking to teaching. TechTrends 62, 6 (2018), 574–584. DOI:
[39]
Ugur Kale, Mete Akcaoglu, Theresa Cullen, Debbie Goh, Leah Devine, Nathan Calvert, and Kara Grise. 2018. Computational what? Relating computational thinking to teaching. TechTrends 62, 6 (Nov. 2018), 574–584. DOI:
[40]
Anita Karwal, Venkataramna Hegde, Vikram Tanwar, R. S. Verma, Sagar Choudhary, Chandertara Das, Santan Singh, Jagdish Kumar, Geetanjali, Vinay Kumar, Saba Akhtar, Abhishek Kundu, Ashwani Kumar, Prabhat Mishra, S. K. Tarun, B. J. Gosh, Vikas Jain, and Harish Singh. 2023. Unified District Information System for Education Plus. Retrieved from https://www.education.gov.in/sites/upload_files/mhrd/files/statistics-new/udise_21_22.pdf
[41]
Diane Jass Ketelhut, Kelly Mills, Emily Hestness, Lautaro Cabrera, Jandelyn Plane, and J. Randy McGinnis. 2020. Teacher change following a professional development experience in integrating computational thinking into elementary science. J. Sci. Educ. Technol. 29, 1 (Feb. 2020), 174–188. DOI:
[42]
Aycan Çelik Kirçali and Nesrin Özdener. 2022. A comparison of plugged and unplugged tools in teaching algorithms at the K-12 level for computational thinking skills. Tech. Know. Learn. 28, 4 (2022), 1–29. DOI:
[43]
Siu-Cheung Kong, Ming Lai, and Daner Sun. 2020. Teacher development in computational thinking: Design and learning outcomes of programming concepts, practices and pedagogy. Comput. Educ. 151, (July 2020), 103872. DOI:
[44]
Özgen Korkmaz, Recep Çakir, and M. Yaşar Özden. 2017. A validity and reliability study of the computational thinking scales (CTS). Comput. Hum. Behav. 72, (July 2017), 558–569. DOI:
[45]
Michael H. Kutner, Christopher J. Nachtsheim, John Neter, and William Li. 2005. Applied Linear Statistical Models (5th ed.). McGraw-Hill, Boston, MA.
[46]
Qing Li. 2021. Computational thinking and teacher education: An expert interview study. Hum. Behav. Emerg. Tech. 3, 2 (Apr. 2021), 324–338. DOI:
[47]
James Lockwood and Aidan Mooney. 2018. Developing a computational thinking test using Bebras problems. In CEUR Workshop Proceedings.
[48]
James J. Lu and George H. L. Fletcher. 2009. Thinking about computational thinking. ACM SIGCSE Bull. 41, 1 (2009), 260–264. DOI:
[49]
Richard E. Mayer. 2002. Rote versus meaningful learning. Theory Into Pract. 41, 4 (Nov. 2002), 226–232. DOI:
[50]
Ministry of Human Resource Development, Government of India. 2020. National Education Policy 2020. Retrieved from https://www.education.gov.in/sites/upload_files/mhrd/files/NEP_Final_English_0.pdf
[51]
Aidan Mooney and James Lockwood. 2020. The analysis of a novel computational thinking test in first year undergraduate computer science course. All Ireland J. High. Educ. 12, 1 (2020).
[52]
Bhagya Munasinghe, Tim Bell, and Anthony Robins. 2021. Teachers’ understanding of technical terms in a Computational Thinking curriculum. In Australasian Computing Education Conference. ACM, New York, NY, 106–114. DOI:
[53]
National Council of Teachers of Mathematics. 2023. Procedural Fluency in Mathematics: Reasoning and Decision-Making, Not Rote Application of Procedures Position. Retrieved from https://www.nctm.org/Standards-and-Positions/Position-Statements/Procedural-Fluency-in-Mathematics/
[54]
National Research Council, National Academies, and National Academy of Sciences. 2010. Report of a Workshop on the Scope and Nature of Computational Thinking. National Academies Press, Washington, D.C.
[55]
Sinan Ozdemir. 2022. Feature Engineering Bookcamp. Manning Publications Co, Shelter Island, NY.
[56]
Raghu Raman, Smrithi Venkatasubramanian, Krishnashree Achuthan, and Prema Nedungadi. 2015. Computer science (CS) education in Indian schools: Situation analysis using Darmstadt model. ACM Trans. Comput. Educ. 15, 2 (2015). DOI:
[57]
Emily Relkin, Laura De Ruiter, and Marina Umaschi Bers. 2020. TechCheck: Development and validation of an unplugged assessment of computational thinking in early childhood education. J. Sci. Educ. Technol. 29, 4 (Aug. 2020), 482–498. DOI:
[58]
Peter J. Rich, Stacie L. Mason, and Jared O'Leary. 2021. Measuring the effect of continuous professional development on elementary teachers’ self-efficacy to teach coding and computational thinking. Comput. Educ. 168, (Jul. 2021), 104196. DOI:
[59]
Marcos Román-González, Jesus Moreno-León, and Gregorio Robles. 2017. Complementary tools for computational thinking assessment. Retrieved from https://www.computacional.com.br/files/Gerais/ROM%C3%81N-GONZ%C3%81LEZ%20-%20Complementary%20Tools%20for%20CT%20assessment.pdf
[60]
Marcos Román-González, Jesús Moreno-León, and Gregorio Robles. 2017. Complementary tools for computational thinking assessment. Universidad Rey Juan Carlos.
[61]
Olgun Sadik, Anne Ottenbreit-Leftwich, and Hamid Nadiruzzaman. 2017. Computational thinking conceptions and misconceptions: Progression of preservice teacher thinking during computer science lesson planning. In Emerging Research, Practice, and Policy on Computational Thinking, Peter J. Rich and Charles B. Hodges (Eds.). Springer International Publishing, Cham, 221–238. DOI:
[62]
P. Sands, A. Yadav, and J. Good. 2018. Computational Thinking in K-12: In-service teacher perceptions of computational thinking. In Computational Thinking in the STEM Disciplines (1st ed.). Springer International Publishing, Cham, 151–164. Retrieved from
[63]
Cynthia Selby and John Woollard. 2014. Computational thinking: The developing definition. In Special Interest Group on Computer Science Education (SIGCSE).
[64]
Sue Sentance, Erik Barendsen, and Carsten Schulte (Eds.). 2018. Computer Science Education: Perspectives on Teaching and Learning in School. Bloomsbury Academic, London.
[65]
Vipul Shah. 2019. CSPathshala: Bringing computational thinking to schools. Commun. ACM 62, 11 (2019), 54–55. DOI:
[66]
Valerie J. Shute, Chen Sun, and Jodi Asbell-Clarke. 2017. Demystifying computational thinking. Educ. Res. Rev. 22 (Nov. 2017), 142–158. DOI:
[67]
Cary Sneider, Chris Stephenson, Bruce Schafer, and Larry Flick. 2014. Exploring the science framework and NGSS: Computational thinking in high school classrooms. Sci. Teach. 81, 5 (2014), 53–59. DOI:
[68]
Xiaodan Tang, Yue Yin, Qiao Lin, and Roxana Hadad. 2018. Making Computational Thinking Evident: A Validation Study of a Computational Thinking Test. AERA Online Paper Repository.
[69]
Ann D. Thompson, Denise L. Lindstrom, and Denise A. Schmidt-Crawford. 2020. Computational thinking: What went wrong? J. Digit. Learn. Teach. Educ. 36, 1 (Jan. 2020), 4–5. DOI:
[70]
Joke Voogt, Petra Fisser, Jon Good, Punya Mishra, and Aman Yadav. 2015. Computational thinking in compulsory education: Towards an agenda for research and practice. Educ. Inf. Technol. 20, 4 (2015), 715–728. DOI:
[71]
Jeannette M. Wing. 2008. Computational thinking and thinking about computing. Philos. Trans. Roy. Soc. A 366 (2008), 3717–3725. DOI:
[72]
Jeannette M. Wing. 2006. Computational thinking. Commun. ACM 49, 3 (March 2006), 33–35. DOI:
[73]
Jeannette M. Wing. 2011. Research Notebook: Computational Thinking—What and Why? Retrieved November 17, 2022 from https://www.cs.cmu.edu/link/research-notebook-computational-thinking-what-and-why
[74]
[75]
Aman Yadav, Hai Hong, and Chris Stephenson. 2016. Computational thinking for all: Pedagogical approaches to embedding 21st century problem solving in K-12 classrooms. TechTrends 60, 6 (2016), 565–568. DOI:
[76]
Aman Yadav, Christina Krist, Jon Good, and Elisa Nadire Caeli. 2018. Computational thinking in elementary classrooms: Measuring teacher understanding of computational ideas for teaching science. Comput. Sci. Educ. 28, 4 (2018), 371–400. DOI:
[77]
Aman Yadav, Christina Krist, Jon Good, and Elisa Nadire Caeli. 2018. Computational thinking in elementary classrooms: Measuring teacher understanding of computational ideas for teaching science. Comput. Sci. Educ. 28, 4 (Oct. 2018), 371–400. DOI:
[78]
Sabiha Yeni, Natasa Grgurina, Felienne Hermans, Jos Tolboom, and Erik Barendsen. 2021. Exploring teachers’ PCK for computational thinking in context. In Proceedings of the 16th Workshop in Primary and Secondary Computing Education, ACM, New York, NY, 1–10. DOI:
[79]
Baichang Zhong, Qiyun Wang, Jie Chen, and Yi Li. 2016. An exploration of three-dimensional integrated assessment for computational thinking. J. Educ. Comput. Res. 53, 4 (Jan. 2016), 562–590. DOI:

    Published In

    ACM Transactions on Computing Education  Volume 24, Issue 1
    March 2024
    412 pages
    EISSN:1946-6226
    DOI:10.1145/3613506
    Editor: Amy J. Ko
    This work is licensed under a Creative Commons Attribution International 4.0 License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 14 January 2024
    Online AM: 21 November 2023
    Accepted: 02 November 2023
    Revised: 25 October 2023
    Received: 21 November 2022
    Published in TOCE Volume 24, Issue 1

    Author Tags

    1. Computational thinking skills
    2. teacher professional development
    3. pedagogical content knowledge
    4. 21st century abilities
