Video-based Training for Meeting Communication Skills

Background: Discussing and sharing information in development teams is part of any software project. Therefore, software engineers spend significant time in meetings with their team. Communicating effectively and efficiently in those meetings is essential. However, software engineers often do not possess the right communication skills. At the same time, training face-to-face meeting communication skills in university settings is resource- and time-consuming. Aims: Our goal is to develop and evaluate a method to support the training of face-to-face meeting communication skills. Method: We develop a method based on active video-watching. Active video-watching supports deep learning by systematically engaging students with video-based learning material. We also implement this method in an online platform for classroom use. Furthermore, we empirically develop a new measurement instrument to assess face-to-face meeting communication skills. To evaluate the training method, we used it in three instances of a second-year software engineering project course. To assess learning gain, we assessed (a) conceptual knowledge about face-to-face meeting communication, and (b) skills based on our newly developed measurement instrument, both before and after the training. Results: Both conceptual knowledge and skill measurement scores based on our instrument increased. These increases are statistically significant. Conclusions: We show the effectiveness of active video-watching for training face-to-face meeting communication skills, one specific soft skill relevant for software engineers. The measurement instrument that we developed can also be used as a stand-alone tool to assess the skills of students and potentially practitioners.


INTRODUCTION

Background and Motivation
Soft skills, i.e., skills that enable a software engineer to interact with others effectively and efficiently (e.g., the ability to negotiate and collaborate), are essential for professionals, highly desired by employers [15] and often required by accreditation bodies of computer science and software engineering programs (e.g., ABET in the United States, the Accreditation Board for Engineering Education of Korea, Engineering New Zealand, Engineers Canada) [23]. One specific soft skill is the ability to communicate verbally. In short, we define verbal communication skills as the "ability to exchange ('send' and 'receive') information in different forms orally (e.g., in formal presentations or informal conversations)" [18]. Software professionals communicate in different settings, e.g., with customers as well as technical and non-technical colleagues within and outside their team. A significant amount of that communication happens in meetings [27,28,53]. While distributed and online meetings were common during COVID-19 lock-downs, face-to-face meetings are generally preferred [5,49]. Face-to-face is typically the most effective mode of communication, and better problem specifications and solutions emerge from teams that communicate face-to-face [37]. As Schwaber and Beedle argue, meetings are essential for executing a project and ensuring a smooth flow of information [54]. However, not all meetings are useful and enjoyable, and behaviour and interactions in a meeting impact individuals, a team, and a project as a whole [53]. Moreover, even though meeting skills impact productivity [51,55], many software engineers are not aware of the importance of their behavior in meetings [53]. Also, software engineers, and in particular new graduates, may not possess the right skills to effectively and efficiently communicate in meetings, or only gain those skills slowly over time [4,31].
However, training meeting communication skills is time- and resource-intensive [3]. Even established global firms such as Google acknowledge the difficulty of training those skills [14]. In higher education, those skills are typically taught in software project courses, group projects, workshops or through coaching [38]. Effectively practicing meeting communication skills requires a real project setting with teams that work together over a longer period of time, with various roles, as well as coaching and frequent feedback from instructors (in university settings) or senior colleagues (in industry settings). Academic institutions, software organizations and teams often do not have the resources (time, budget, etc.) to systematically support such training.
Video-based training has the potential to provide scalable and less resource-intensive training that can also partially replace teacher-centered training [50]. "Active video-watching" aims at overcoming some of the limitations of video-based training that stem from passive consumption of videos, which leads to shallow learning [17]. In short, active video-watching enhances passive watching of educational videos with activities to increase engagement, motivation and, eventually, learning. Furthermore, active video-watching allows learners to engage with the learning material at any time and at their own pace (see also Section 2.3). Previous research shows that active video-watching can positively contribute to the learning of soft skills (e.g., verbal communication and presentation skills [12,17]).

Paper Goal and Contributions
In this paper, we aim to develop and evaluate a method based on active video-watching to support the training of face-to-face meeting communication skills. The target audience of this paper includes software engineering educators as well as researchers in the area of software engineering education. Our work may also benefit providers of educational tools that support the training of soft skills in the context of software engineering. We make the following contributions:

The rest of the paper is organized as follows. In Section 2 we present related work. Section 3 describes our video-based training method, while in Section 4 we design a new measurement instrument for face-to-face meeting communication skills. We evaluate our training method using that instrument in Section 5 and discuss our findings in Section 6, including limitations and threats to validity. We conclude in Section 7.

Face-to-Face Meeting Communication
The majority of software engineers work in teams rather than as individuals [1]. Software engineers communicate with a wide range of stakeholders and also within their team. In fact, communication skills are amongst the top three critical skills for software engineers [19]. One particular instance of communication is verbal face-to-face communication. Software engineers frequently communicate with technical and non-technical co-workers and other stakeholders about requirements, design, implementation, etc. to complete their job. This is because information sharing in teams is one of the most important aspects of successful software development [47]. For instance, project guidelines, requirements and design decisions need to be communicated with the whole team or specific team members. Poor communication between developers significantly increases the chances of software project failures [7]. Face-to-face meetings are an effective way to communicate with team members [28]. In the scope of our work, we define meetings as formal instances of team discussions (e.g., planning meetings, review meetings, stand-up meetings), rather than random or casual discussions of team members during lunch or over a coffee.
Furthermore, in our work we focus on communication in meetings within a team, rather than communication with clients or other external stakeholders outside a team. We define a team as those who are responsible for designing, implementing and delivering the software product. This is what Kruchten refers to as the "internal focus" of software engineers (i.e., communication regarding the design, implementation, documentation, etc.), unlike the "external-inwards" focus (getting input from the outside world by communicating with other stakeholders, such as customers, users, product managers) or the "external-outwards" focus (i.e., communicating to share information or help other stakeholders, e.g., communicating the design and the product or project in general to others) [32].

Teaching Soft and Communication Skills
Some common methods for training soft skills, including communication, in educational settings include debates, role playing and demonstrations, as well as case studies, field visits that allow students to observe certain behaviours, and mock interviews [6,56]. Another common approach for teaching soft skills in software engineering education is to develop dedicated courses [2]. However, students do not always value such courses [52]. Therefore, a frequently used approach in software engineering education is project-based learning [9,46], where students work as a team to build a software system in an educational setting [22]. In project-based learning students also practice soft skills [36]. Furthermore, Caeiro et al. identified game-based learning as a method to teach soft skills [6]. Examples include game-based learning using physical environments such as board games and educational escape rooms, but also electronic environments and software-based gamified teaching methods. Such methods have been used to train soft skills like problem solving or team work. For example, Lala and colleagues developed a serious game and "simulation" of interactions and applied it over three semesters to improve the learning of communication skills [34].

Active Video-Watching
Watching videos is a powerful tool for learning soft skills [10]. Video-based learning is particularly helpful when learning topics that require contextualization based on the learner's personal experience. Well-designed, assessment-focused, and easy-to-use video tutorials improve student satisfaction and learning, because they enable students to learn how and when they want [57]. Video-based learning allows students to learn independently of time and location, enables sharing of teaching resources amongst educators, supports independent and self-guided learners, and offers the opportunity to balance pedagogical needs and resource constraints.
However, only watching videos is a passive activity and leads to low engagement and shallow learning [8]. A successful learning experience requires students to actively engage with video content. This can be achieved by integrating interactive activities into videos [16]. Interactive activities could include commenting on videos, answering questions about videos, or discussing videos with peers [12]. Previous research has shown that active video-watching can support the training of soft skills, e.g., presentation skills as one specific communication skill [17].
An example platform that supports active video-watching is AVW-Space [12]. AVW-Space is a web-based platform that facilitates learner engagement during video-watching through interactive note-taking [17]. In our work we create our own instance of the AVW-Space platform to implement our training method (Section 3).

PROPOSED TRAINING METHOD
In this section we first outline several design principles that we considered when developing our face-to-face meeting communication training method. We then discuss how we implemented the method on the AVW-Space platform.

Design Principles for Training Method
When developing the training method, we followed several principles to ensure its relevance and rigor:

• Grounding in educational pedagogy: Any "intervention" in an educational context should be firmly grounded in educational pedagogy. Our training method is based on active video-watching, a technique with a sound theoretical foundation that has been evaluated in previous studies (e.g., [12]). The overall goal of our training method is to improve engagement with video-based material, since engagement increases learning [8]. Also, activities involved in the method allow students to reflect, since reflection contributes to learning [33,45]. We include further references to educational pedagogy below in Section 3.2. Moreover, the training method should support learners with meaningful activities. Therefore, we include activities that students are familiar with (e.g., watching and reviewing videos, see Section 3.2).

• Relevancy to software engineering: While communication is a general skill relevant across many professions, the training method needs to consider the particular context of software engineering and the context in which software engineering students learn. Therefore, the training method must be firmly rooted in the software engineering domain (e.g., consider the types and nature of meetings that software engineers and software engineering students would typically engage with).

• Practicality: To be successfully applied in a software engineering course, the training method should not disrupt students' learning workflows or their typical way of working [41]. The training method must feel to students like a cohesive activity within a course setting. Therefore, our training method can be integrated into courses as an activity that complements lectures or becomes part of lab activities. Furthermore, the training method should not add significantly to the workload of course teaching staff [13]. We believe that once the method has been set up, it can be reused with minor adjustments.

Implementation of Training Method
We created an instance of AVW-Space (see Section 2.3) specifically for face-to-face meeting communication skills. AVW-Space utilizes learners' familiarity with viewing and commenting on videos on social networks such as Facebook and video platforms such as YouTube (this also addresses the design principles outlined above).

Activities. Micro-scaffolds facilitate two types of activities:

(1) Commenting on videos to further engage with their content. When students write a comment, the video is paused. Students can enter their comment in a box and also need to select one aspect. An aspect specifies what a comment relates to and stimulates learners to reflect on their experience. Since our method utilizes two types of videos (tutorial videos and example videos, see Section 3.2.2), we provide two sets of aspects. For tutorial videos we adopted the same aspects as in previous studies with AVW-Space [12]: (1) "I am rather good at this", (2) "I didn't realise I wasn't doing it", (3) "I did/saw this in the past", (4) "I like this point". For example videos we defined the following aspects, corresponding to the concepts introduced in the tutorial videos: (1) "verbal communication", (2) "giving feedback", (3) "receiving feedback", (4) "active listening", (5) "meeting contributions".

(2) Reviewing comments made by other learners to reflect on the experience and insights of others. When reviewing comments, we asked students to rate them using the following categories [12]: (1) "This is useful for me", (2) "I hadn't thought of this", (3) "I didn't notice this", (4) "I don't agree with this", (5) "I like this point".

Furthermore, two types of nudges are provided adaptively to learners: (a) nudges to students who are not active (e.g., who do not comment on videos); these nudges encourage activity and engagement with videos, and we used nudges defined in the pedagogical literature [43]. Nudges are shown when students have watched 30% and 70% of a video without commenting, or if aspects are underutilized. (b) Nudges triggered by low-quality comments, based on comment quality criteria defined in the literature (e.g., just repeating the video content, very short comments) [44].
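The nudge-triggering rules described above (prompts at the 30% and 70% watch points for students without comments, and prompts for underutilized aspects) can be sketched as a simple rule check. The function name, message wording, and data shapes below are our own illustrative simplification, not AVW-Space's actual implementation:

```python
def pending_nudges(watched_fraction, comment_count, aspect_counts):
    """Return nudge messages for an inactive or one-sided commenter.

    watched_fraction: fraction of the video watched so far (0.0-1.0)
    comment_count: number of comments the student wrote on this video
    aspect_counts: dict mapping each aspect label to how often it was used
    """
    nudges = []
    # Encourage commenting at the 30% and 70% watch points.
    for checkpoint in (0.3, 0.7):
        if watched_fraction >= checkpoint and comment_count == 0:
            nudges.append(f"You have watched {checkpoint:.0%} of the video - try writing a comment.")
    # Encourage use of aspects the student has not touched yet.
    unused = [a for a, n in aspect_counts.items() if n == 0]
    if comment_count > 0 and unused:
        nudges.append(f"Consider commenting on other aspects, e.g. '{unused[0]}'.")
    return nudges
```

A student half-way through a video with no comments would receive only the 30% nudge, while an active commenter who ignores some aspects would be pointed to one of those aspects.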
Figure 1 shows a screenshot of a student writing a comment. The colored dots above the timeline indicate timestamps of comments made by other learners, including the aspects assigned to them. The screenshot also illustrates a nudge (yellow box) to encourage the learner to use an underutilized aspect for a comment.

The use of freely available videos is a requirement for cost-efficient and continuous updates to video-based learning systems (in particular for organizations with limited budgets) [17].
We selected videos based on several criteria: (1) Suitability of the content for the skill taught: while we aimed for videos that illustrate communication in software engineering-related meetings, the content of videos is not exclusively applicable to software engineering. (2) Pedagogical value: videos need to present content in a manner accessible to learners (e.g., we excluded videos from domains that learners may not be familiar with or videos with highly opinionated content). Also, we aimed for videos that explain concepts (tutorial videos) as well as videos that illustrate concepts (example videos). (3) Length: each video is fewer than six minutes long to ensure that learners do not lose focus or get distracted. (4) Quality: video and sound quality must be acceptable.
Four members of the research team searched for videos on YouTube. We then reviewed and discussed the candidate videos to select the final set. Table 1 shows the selected videos: six tutorial videos on various elements of face-to-face communication, and four example videos with real (or acted) meetings.

SKILL ASSESSMENT INSTRUMENT

Overview
To assess the level of skills and to evaluate the training method described in Section 3, we needed a valid and reliable assessment instrument. However, there are currently no appropriate measures of face-to-face meeting communication skills. Assessment of soft skills like face-to-face meeting communication is typically more complex than assessing technical skills, because it is challenging to assess them in a realistic and pragmatic way [20]. Soft skills can be assessed either through self-assessment or assessment by others (e.g., via observations). Since we aimed for a scalable training method that could potentially integrate skill assessment into the learning path, we opted for a self-assessment instrument. We therefore developed a psychometric scale for this purpose.
We followed a typical measurement instrument development process, including the phases of (1) construct definition, (2) item development, (3) content validation, and (4) exploratory factor analysis [11,21]. This scale allows learners to self-assess different aspects of their face-to-face meeting communication skills with multiple items (i.e., questions), using a 7-point Likert-type scale ranging from "1 = Never" to "7 = Always".
In this instrument, items are the questions given to those who are assessing their skills using the instrument. Instruments can also have different dimensions, or factors, that together represent the unobserved construct under investigation; in our case, face-to-face meeting communication skills. Factor analysis is the method that groups items that "go together" to form these factors [21]. As Graziotin et al. explain, "The fundamental idea behind psychological testing is that what is being assessed is not a physical object, such as height and weight. Rather, we are attempting to assess a construct, that is, a hypothetical entity [...] constructed by humans to represent concepts referring to various, concrete entities that are perceived in the moment, such as behaviors, experience, and attitudes [...]." In our work, those constructs relate to self-assessed behaviour regarding face-to-face meeting communication. Hence, the analysis of any learning is conducted at the factor level, rather than at the individual item (i.e., question) level.
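Factor-level analysis, as described above, means a respondent's answers are aggregated over the items that load on each factor before any comparison is made. A minimal sketch of that aggregation step, where the item-to-factor grouping shown is hypothetical rather than our final 17-item scale:

```python
def factor_scores(answers, factor_items):
    """Average a respondent's Likert answers (1-7) per factor.

    answers: dict mapping item id -> response (1..7)
    factor_items: dict mapping factor name -> list of item ids loading on it
    """
    return {factor: sum(answers[i] for i in items) / len(items)
            for factor, items in factor_items.items()}

# Hypothetical grouping of five items into two of the factors.
grouping = {"empathetic_engagement": ["q1", "q2"],
            "self_regulation": ["q3", "q4", "q5"]}
responses = {"q1": 6, "q2": 4, "q3": 5, "q4": 5, "q5": 7}
print(factor_scores(responses, grouping))
```

Pre/post learning comparisons then operate on these per-factor averages rather than on single questions.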

Construct Definition
We defined meeting communication skills as personal knowledge, perceptions, and assessment of verbal communication, giving and receiving feedback, active listening, and meeting participation or contributing behaviours during face-to-face meetings [24].

Item Development
We developed an initial item pool for our instrument from two existing scales: Jackson's verbal communication scale [24] and Mishima et al.'s Attitude of Active Listening scale [42]. Jackson's scale consisted of three sub-dimensions that were relevant to our research (we excluded the written communication dimension): verbal communication (three items, e.g., "I express technical ideas clearly, so that every meeting participant can understand them"), giving and receiving feedback (six items, e.g., "I am mindful of other meeting participants' feelings when providing feedback"), and meeting participation (six items, e.g., "When other meeting participants are hesitating to contribute their ideas, I encourage them to contribute their ideas and suggestions"). In addition, we drew seven items from Mishima et al.'s scale (e.g., "I listen to the other meeting participants, paying attention to her/his body language.").
In total, we drew 22 items from these existing scales.

Content Validation
We conducted content validation by having the items reviewed by subject matter experts. We consulted software engineering industry experts (n = 2) and experts with research experience in computer science, software engineering and scale development (n = 3). We made minor wording amendments to items based on this feedback, including simplification of wording and dividing double-barrelled items into separate items.

Exploratory Factor Analysis
We administered the scale to 147 second-year software engineering students at the University of Canterbury, over three years. Using the data from these three years, we conducted an Exploratory Factor Analysis (EFA) to investigate scale dimensionality. The Kaiser-Meyer-Olkin measure indicated adequate sampling adequacy (0.73), and Bartlett's test was significant (p < .001), indicating adequate correlation between items to conduct EFA. We used Principal Axis Factoring to extract common factors [26], oblique rotation to allow for correlation between factors, and the Kaiser criterion of eigenvalues > 1 to retain factors [25], resulting in a four-factor solution. We adhered to DeVellis' criteria for item reduction [11]. That is, items which cross-loaded onto more than one factor (> 0.3) were removed, and only items with loadings of 0.4 and above were retained (note: one item had a loading of 0.39, but no cross-loadings). Loadings here indicate the correlation between the item and the factor. The final scale consists of 17 items (Table 2).
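The factor-retention step can be illustrated with a small numpy sketch of the Kaiser criterion: count the eigenvalues of the item correlation matrix that exceed 1. The data here is synthetic (two latent factors driving two blocks of items), purely for illustration, and not our survey responses:

```python
import numpy as np

def kaiser_retained_factors(responses):
    """Count factors to retain by the Kaiser criterion.

    responses: (n_respondents, n_items) array of Likert-scale answers.
    Returns the number of eigenvalues of the item correlation matrix
    that exceed 1.
    """
    corr = np.corrcoef(responses, rowvar=False)  # item-by-item correlations
    eigenvalues = np.linalg.eigvalsh(corr)       # symmetric matrix -> real eigenvalues
    return int(np.sum(eigenvalues > 1.0))

# Synthetic data: six items driven by two independent latent factors.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
items = np.hstack([latent[:, [0]] + 0.3 * rng.normal(size=(200, 3)),
                   latent[:, [1]] + 0.3 * rng.normal(size=(200, 3))])
print(kaiser_retained_factors(items))
```

For this synthetic data the criterion recovers two factors; a full EFA additionally needs factor extraction (e.g., principal axis factoring) and rotation, which dedicated libraries provide.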
We renamed the dimensions, with feedback from a focus group discussion with content experts, to reflect the new dimension content. Empathetic engagement (previously Giving and Receiving Feedback) had five items, with an internal reliability of 0.69. Self-regulation (previously Active Listening) had five items, with an internal reliability of 0.78. Expression of feelings (previously Meeting Participation) had three items, with an internal reliability of 0.82, and Contribution of ideas (previously Verbal Communication) had four items, with an internal reliability of 0.78. Below we provide the definition of each dimension (construct):

• Empathetic engagement: ability to give and receive feedback constructively and respectfully, to individual or multiple meeting participants.

• Self-regulation: ability to exhibit superior listening skills and to ensure that all meeting participants can contribute to discussions.

• Expression of feelings: ability to take an active and productive role in meetings.

• Contribution of ideas: ability to communicate orally, clearly and compassionately, depending on the audience and topic discussed in a meeting.

EVALUATION OF TRAINING METHOD

Teaching Context
We used active video-watching in an undergraduate project course for second-year software engineering students at the University of Canterbury in 2020, 2021 and 2022. Below, we briefly detail the context in which the teaching method was applied, specifically the students in the course, the content and format of the course, and the role of meetings in this course.
• Students: Students have already passed an introductory course to software engineering and are familiar with software engineering principles, practices, techniques and processes.
• Content: The course is purely project-based, and the project includes business analysis, requirements analysis, architecture design, implementation, testing, deployment, etc. for a non-trivial software system over twelve weeks.

• Format: Students work in teams of five to six and deliver increasingly comprehensive versions of their design and product in three deliverables. Students follow an iterative and incremental spiral model and adopt some practices from Scrum, e.g., stand-ups (weekly), iteration planning and review. Weekly tutorials and seminars are used for the stand-ups, for teaching topics relevant to the project, as well as for students to present their work. This enables students to develop communication skills as early as possible [39]. Weekly quizzes complement the material covered in tutorials.
We assessed students and teams at the end of each iteration.

• Role of meetings: Teams have at least one weekly face-to-face team meeting outside scheduled course sessions. Teaching staff do not attend those meetings, but students need to register their regular meeting times to ensure that these meetings indeed happen outside scheduled course sessions. The purpose of those meetings is for students to have additional scheduled and regular times to jointly work on the project. The content of a meeting depends on the status of the project (e.g., teams may use those meetings to review work or plan future activities). While there is no formal training regarding communication skills, at the beginning of the course all students receive a short general introduction to meetings and what to do before, during and after a meeting.
Students used AVW-Space in their own time to learn about face-to-face meeting communication skills as one component of the communication skills of a professional software engineer. Learning about face-to-face communication was part of the learning experience of the course, and students received 5% credit if they completed all exercises. We received approval from our Human Ethics board to also use the data for analysis and research purposes.

Activities Performed by Students
Table 3 shows the activities performed by students:

• Week 1: Teaching staff formed teams and informed students about the learning activity.

• Week 2: (1) We conducted an initial assessment where students completed a survey that included demographic questions and questions about training and experiences with face-to-face meetings (Survey 1). Furthermore, we included the measurement instrument for face-to-face meeting communication skills (see Section 4), as well as a question to assess conceptual knowledge about face-to-face meeting communication skills (see Section 5.3). (2) Students watched and commented on videos (and selected an aspect for each comment). We instructed students to first watch and comment on tutorial videos, and later use what they had learned to critique the example videos.

• Week 3: We made anonymized comments available to the whole class for rating.

• Week 4: Students recorded one face-to-face meeting of their team outside the scheduled sessions. We uploaded this recording as a new video to AVW-Space for each team.

• Week 5: We asked students to watch and comment on the recording of their team meeting (similar to what students did in week 2 for tutorial and example videos). Members of a team could only watch the recording of their own team's meeting, not the recordings of other teams.

• Week 6: We asked students to rate each others' comments on the video of their team meeting (similar to what students did in week 3 for tutorial and example videos).

• Week 7: We asked students to complete a final survey (Survey 2). This survey included the same instrument and conceptual knowledge questions as Survey 1.

Conceptual Knowledge
We included an open question as a proxy to test students' conceptual knowledge related to face-to-face meeting communication skills. We asked students to "Write all words/phrases (one per line) that you associate with effective communication in software engineering meetings." We marked student responses automatically, using text analytics methods and a vocabulary of face-to-face meeting communication skills developed by the authors (see below).
The conceptual knowledge score is the number of the vocabulary concepts that appear in a student's response to the conceptual knowledge question.
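The scoring step can be sketched as counting how many vocabulary concepts appear among the normalized lines of a response. The vocabulary entries below are illustrative stand-ins (our actual marking also applied lemmatization and synonym matching, which this sketch omits):

```python
def conceptual_knowledge_score(response, vocabulary):
    """Count distinct vocabulary concepts mentioned in a response.

    response: free-text answer with one word/phrase per line
    vocabulary: set of lowercase concept words and phrases
    """
    lines = {line.strip().lower() for line in response.splitlines() if line.strip()}
    return len(lines & vocabulary)

# Illustrative vocabulary and student answer, not our actual data.
vocab = {"active listening", "eye contact", "agenda", "feedback"}
answer = "Active listening\nagenda\ntyping fast\nFeedback"
print(conceptual_knowledge_score(answer, vocab))  # -> 3
```

Because the score counts distinct concepts, repeating the same phrase on several lines does not inflate it.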
To develop the vocabulary, we first generated a corpus from the transcripts of the tutorial videos. Tokens were extracted and lemmatized after lowercasing texts and removing punctuation and stop words. Next, using collocation statistics [40] implemented in the Phrases module of the Gensim library, words and bigram phrases that appeared more than twice in the corpus were extracted automatically. In addition to collocation statistics, we extracted the most relevant and similar words using Global Vectors (GloVe) word representations [48]. We extracted a total of 225 words and phrases along with 225 synonyms. Three independent expert coders verified whether the extracted words should be in the domain vocabulary. Each word was coded with 1 or 0, depending on whether it was relevant or not. Pairwise Cohen's Kappa tests revealed moderate (0.55), substantial (0.61) and nearly perfect (0.91) agreement between the coders [30]. Fleiss' kappa (0.69) also showed substantial inter-coder agreement [35], but Krippendorff's alpha coefficient (0.31) was low [30]. The three coders then reviewed their codes with a fourth coder to resolve differences, using majority vote to achieve agreement. As a result, eleven words were excluded, and ten new words were added.
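Gensim's Phrases module scores a candidate bigram (a, b) with a collocation statistic of the form (count(a, b) − min_count) · |vocab| / (count(a) · count(b)). A dependency-free sketch of that scorer on a toy corpus (the sentences are illustrative, not our transcripts):

```python
from collections import Counter

def score_bigrams(sentences, min_count=2):
    """Score candidate bigram phrases with a Gensim-style collocation
    statistic: (count(a,b) - min_count) * |vocab| / (count(a) * count(b)).

    sentences: list of token lists. Returns {(a, b): score} for bigrams
    appearing at least min_count times.
    """
    unigrams, bigrams = Counter(), Counter()
    for tokens in sentences:
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams)  # scaling factor, as in Gensim's default scorer
    return {
        (a, b): (n - min_count) * vocab_size / (unigrams[a] * unigrams[b])
        for (a, b), n in bigrams.items() if n >= min_count
    }

corpus = [["active", "listening", "matters"],
          ["practice", "active", "listening"],
          ["good", "active", "listening", "skills"]]
scores = score_bigrams(corpus)
print(scores)  # only ("active", "listening") recurs often enough
```

Bigrams scoring above a threshold are then promoted to phrases ("active_listening"); in our pipeline this ran on the lemmatized transcript corpus.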

Results
The goal of our study was to understand whether video-based training made a difference to students' learning.

Demographics.
In Table 4 we provide an overview of participants. Note that not all students enrolled in the course participated (the course typically has 50-60 enrolled students). We only show the number of students who completed all tasks and whom we included in the analysis. The sample of students was rather consistent across the three years, with most students being male, 18-23 years old and native English speakers (typical demographics of computer science and software engineering programs at our institution).
Furthermore, in Table 5 we provide an overview of the participants' background regarding communication training and video-based learning. Most participants did not have any software engineering experience outside university, and only a few had received training in face-to-face communication in meetings. Those who had received some training received it in high school or through extra-curricular activities. Regarding familiarity with YouTube, we asked a Likert-scale question about how often they watched YouTube (0: never, 1: occasionally, 2: once a month, 3: every week, 4: every day). Finally, we asked how often they watched YouTube videos for learning, using the same Likert scale. We show averages and standard deviations (in parentheses) in Table 5. As can be seen, students were familiar with YouTube in general and some used YouTube for learning. No participant indicated that they never used YouTube for learning.
Given the similarity of participants between years, in the following analysis we pooled all participants into the final dataset of 80 students.

Skill Assessment using Instrument.
In Table 6 we show the overall comparison of meeting communication skills before and after the video-based training using our scale. As can be seen, for all factors except self-regulation the improvement is statistically significant (Wilcoxon's test). Future work may explore why there was no improvement in self-regulation. One possible reason could be that at the beginning of the course, before starting the training, students may have thought that they had good self-regulation skills, but through the training they became aware of their interruptions when another participant spoke, and thus scores for this construct decreased. Note that we did not find a difference before the training between the three years. When digging deeper into the individual years (not shown in Table 6), we found that the difference between before and after the training was not significant for only one year, 2021. This could be due to the small set of students who completed Survey 2.
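The per-factor pre/post comparison can be sketched with SciPy's Wilcoxon signed-rank test, which suits paired ordinal scores like ours. The scores below are made-up illustrative values showing consistent gains, not our study data:

```python
from scipy.stats import wilcoxon

def pre_post_change(before, after, alpha=0.05):
    """Wilcoxon signed-rank test on paired pre/post factor scores.

    Returns the p-value and whether the change is significant at alpha.
    """
    statistic, p_value = wilcoxon(before, after)
    return p_value, p_value < alpha

# Illustrative paired factor scores for ten students (not our data).
before = [3.0, 3.5, 4.0, 2.5, 3.0, 3.5, 4.5, 3.0, 2.0, 3.5]
after = [4.0, 4.5, 4.5, 3.5, 4.0, 4.0, 5.0, 4.0, 3.5, 4.0]
p, significant = pre_post_change(before, after)
```

Since every student improved in this toy sample, the test rejects the null hypothesis of no change; with mixed gains and losses, the p-value would rise accordingly.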

Conceptual Knowledge.
To complement the skills assessment using our face-to-face meeting communication scale, we also used conceptual knowledge. As can be seen in Table 7, using the paired samples t-test we found a statistically significant difference between conceptual knowledge scores before and after the training. This complements the findings from the face-to-face meeting communication skills instrument, which also shows an increase after the video-based training.

Active video-watching stimulates the interaction of students with videos to keep them engaged in the learning process. At the same time, once set up, active video-watching requires little intervention from teaching staff and can therefore be a scalable tool for learning. On the other hand, the learning experience using active video-watching may depend on the background, experience, motivation, and attitudes of learners. However, this is not much different from what educators need to consider in more "traditional" classroom settings. While our training method focuses on face-to-face meetings and "internal meetings", our findings may also apply to other meeting settings. However, developing guidelines for adapting and applying the training method to other types of meetings is subject to future work. Similarly, applying a similar training method to teach other soft skills is also part of future work.
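The paired comparison of conceptual knowledge reported above can be sketched with a paired samples t-test. The scores below are hypothetical, not the study's data.

```python
from scipy.stats import ttest_rel

# Hypothetical conceptual-knowledge scores (e.g., number of correct
# concepts listed) for the same students before and after the training.
before = [2, 3, 1, 2, 4, 2, 3, 1]
after = [4, 5, 3, 3, 5, 4, 4, 2]

# Paired samples t-test: tests whether the mean of the per-student
# (before - after) differences is zero.
result = ttest_rel(before, after)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A negative t with a small p indicates scores increased after training.
```

Unlike the Wilcoxon test used for the scale factors, the t-test assumes approximately normally distributed paired differences, which is a common choice for knowledge scores.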
We assessed the skill level based on self-assessment using our measurement instrument. Alternatively, we could have assessed students through observation. However, this raises several issues. First, the behaviour of students in meetings may have changed if teaching staff were present. Second, any observational technique is subjective, so we could not guarantee that assessment through observation is consistent, in particular if multiple assessors are involved (we may use observations to triangulate the results, as we outline in Section 6.2). Third, considering the design principles outlined in Section 3.1, our training method should not increase the load on teaching staff. Observing students in large project courses is simply not feasible.

Practical Challenge when Developing Training Method.
A practical challenge for creating the training method is creating or selecting appropriate videos. This is time-consuming and requires a well-defined process and criteria to select videos which meet the specific learning goals [50]. In our work, we selected publicly available videos rather than creating new videos. One reason was to provide a flexible and cost-efficient solution, also for organizations with potentially limited resources for developing new learning material.
6.1.3 Other Factors that Contribute to Meetings. One big issue hindering productive and satisfying meetings is inappropriate behavior such as complaining. In particular, talking about problems without at least trying to solve them decreases the motivation and mood of the team [27]. In our work, we considered inappropriate behaviour in the choice of videos (e.g., videos where meeting participants behave rudely towards other participants). Furthermore, our instrument partially covers inappropriate behavior, but assumes that students are aware of such behavior. Integrating recordings of their own meetings into the platform aims at facilitating reflection on behavior from the perspective of an outsider, to increase the probability of spotting one's own inappropriate behaviour.
Moreover, our work only focused on the communication between individuals. Other factors that contribute to successful and useful meetings (such as planning and meeting facilitation) were not considered. However, such factors are more "mechanical" and can be taught in lecture-style settings (e.g., by providing guidelines for how to prepare meetings).

Face-to-face Meeting Communication Training in Practice.
As argued by others, video-based learning has the potential to also support training in professional contexts [17]. Therefore, we envision that our technique (which has already been used successfully in a classroom setting) may also be transferred to professional training programs. However, this would require several steps: (1) The videos need to be reviewed to match the level of expertise of professionals, considering their experience and background. (2) Aspects may need to be adjusted to also allow learners to reflect on their professional development (e.g., by adding aspects like "I have experienced this when leading a team." or "Given my experience, I do not think this is relevant."). (3) Nudges need to be adjusted based on the difference in the quality of comments that practitioners may produce (one would expect that a practitioner has stronger conceptual knowledge about meeting communication skills). Finally, if our instrument is used to assess the skill level of professionals, we also need to collect data from professionals following the same process as outlined in Section 4, to ensure that items and constructs represent aspects of face-to-face meeting communication that are relevant for practitioners and those with industry experience.

Threats to Validity
Our study is subject to various threats to validity:

• External validity: We conducted the evaluation of the video-based training with 80 students over three years in a second-year software engineering course. The training may have different effects for students in different years or students in different programs. However, based on our findings, we are confident that our results apply to at least junior software engineering students. Additional research is needed to determine the effect of the online training on larger populations of students. Also, our study only includes one educational institution from one country. Since communication (and perceptions about communication) depends on culture, our findings are contextualized and may not necessarily generalize to other regions.

• Internal validity: Our results may be subject to various confounding factors. For example, students may have used other resources that contributed to their skill development outside the video-based learning. Furthermore, students may have learned as part of working on their course project. However, we did not explicitly teach meeting communication in the course, and the training happened over a relatively short period of time. Finally, we implemented nudges. Some students might have made a comment simply out of "obligation". We did not control for this (e.g., by checking the quality of comments that followed from nudges).

• Construct validity: We used one question as a proxy for assessing the students' conceptual knowledge of the face-to-face communication skill. This is not ideal for several reasons. Answering surveys can be annoying to students, and their responses do not necessarily represent their true knowledge. Furthermore, listing concepts related to face-to-face meeting communication skills does not necessarily mean that the students know how to perform the skill. Ideally, the extent of learning would also be assessed by a human expert observing the students in real meetings before and after the training. This would allow us to triangulate the results from other forms of assessment. However, this requires substantial resources and is not always practically possible in a learning environment. Therefore, we complemented the knowledge question with a measurement scale. We systematically developed that measurement scale to make sure we measure what is intended. However, more evaluations of the instrument are needed to increase its validity.

CONCLUSIONS
The growing importance of soft skills in software development calls for new methods to educate future and current software professionals. We presented active video-watching as a method to integrate the training of communication skills into academic education. Our paper contributes to the literature by showing the effectiveness of active video-watching in teaching face-to-face communication in software development meetings. We acknowledge that our findings may not be specific to software engineering; however, the chosen soft skill was selected based on its relevance for practicing software engineers. Also, study participants were chosen based on the target audience of our learning platform: software engineering students.
In future work, we will repeat the study at other institutions. Furthermore, we will evolve the measurement scale to also consider practitioners and test the instrument for communication skills assessment in focus groups with industry partners. Also, we will explore automated forms of skills assessment (as already being explored for communication skills [29]), which can be integrated into our learning platform to provide "real-time" feedback to learners and adaptations of activities as learners progress through activities.

DATA AVAILABILITY
Raw data from the study presented in Section 5 (questions used in the study and anonymized data from participants) are available on Zenodo: DOI 10.5281/zenodo.10408454 (https://doi.org/10.5281/zenodo.10408454).

(1) We design a video-based training method for one specific soft skill related to communication, i.e., face-to-face meeting communication skills. (2) We empirically design a new measurement instrument to self-assess face-to-face meeting communication skills. (3) We present an experimental study that uses the video-based training for meeting communication skills in three consecutive years of a second-year software engineering project course.

Figure 1: Screenshot of training platform

6.1.1 Summary of Findings. We found that video-based training contributes to the development of students' face-to-face meeting communication skills.

Table 1: Videos for Face-to-Face Communication

Table 2: Face-to-face meeting communication skills (FFMCS) scale ('R' in the "Self-regulation" dimension indicates reverse-coded items)

Table 3: Active video-watching activities

Table 6: Overall comparison based on skill assessment instrument

Table 7: Comparison of conceptual knowledge