Human Understanding and Perception of Unanticipated Robot Action in the Context of Physical Interaction

Anticipating a future in which robots initiate their own actions and behave voluntarily when collaborating with humans, our research focuses on human understanding and perception of unanticipated robot actions during physical human-robot interaction. While the current literature examines key factors that make human-robot collaboration successful, the question of how people experience a robot's unanticipated action as cooperative or uncooperative remains open. We designed a game-based experiment (N = 35) in which participants played a "catch-falling-coins" game by moving a robotic arm. The experiment introduced unanticipated robot actions in an "active session", where the robot targeted higher-valued coins without first informing the participants. Through semi-structured interviews and statistical analysis of questionnaires (Big Five Personality Test, SAM, NARS, and CH33), we examined the participants' understanding of the robot's "intention" and their perception of the robot as cooperative or uncooperative. Among the participants who understood that the robot's "intention" was to catch the higher-valued coins, the majority reported a positive perception of the robot (cooperative or helpful), whereas this was not the case among those who did not understand the robot's intention. We also observed relevant relationships between some personality traits and a person's understanding of the robot's intention. Qualitative analysis of the interviews allowed us to structure the process of perception change during the game into three phases: confusion, investigation, and adaptation. We believe that our research contributes to the study of human perception, and particularly to the relationship between a human's understanding of unanticipated robot actions and their positive or negative perception of the robot.


INTRODUCTION
Through further development of robot autonomy, we anticipate that robots will be able to initiate their own actions and behave voluntarily during interaction with humans. Such robot actions may be perceived as unanticipated or unexpected and may trigger certain reactions and feelings in the human interactant. Unanticipated robot action may be perceived as curious or interesting if we would like to design personalized robots [30]. Conversely, spontaneous robot actions may have to be controlled carefully for safety reasons in scenarios such as manufacturing [31]. We can imagine a future scenario in which humans work together with an autonomous robot that might initiate its actions in an unexpected way. We believe that it is essential to take human perception at such a moment into account in order to ensure successful collaboration. However, the question of how humans experience the robot's unanticipated action as positive or negative during a joint task remains open.
In our studies [21, 22], we investigated "active physical interaction", defined as a type of interaction in which a robot may take an unanticipated action. By unanticipated action, we mean an action that the robot may initiate without first informing the human partner during a task shared with the latter. Therefore, the human partner might not know when or how the action is being executed. Further clarification of the specific nature of these actions is given in the description of the experiment in the methodology section and can also be found in [21].
Following our previous studies, the present article focuses on the user's understanding of the unanticipated robot action and their perception of the robot. Our primary question is to what extent a person who is physically interacting with a robot can understand the robot's "intention" without prior knowledge. Secondly, we examine how an understanding of the robot's intention influences the person's perception of the robot as cooperative. We suppose that there may be a relationship between the person's understanding of the robot's "intention" and the perceived cooperativeness or uncooperativeness of the robot. In addition, we explore the relationship between this understanding of the robot's intention, a positive or negative perception of the robot, and a variety of factors such as age, personality, performance, and emotional and psychological attitudes towards the robot, based on existing studies that have investigated these factors [14, 18, 26, 33, 36, 39].
Our research goal is twofold: to explore the human interactant's interpretation (i.e., what they notice, think of, and understand) of a robot's "intention" that is concealed from them during experiments, and to search for real-time measurable interaction factors related to this interaction. The latter was addressed in [21], while in this article, we focus on the former. To this end, we designed a game-based experiment in which participants "play a game" by directly and physically controlling a robot that initiates its own actions in the course of the interaction without first informing the participant. In the context of the experiment, the actions of the robot are intended to help the participants during the game in a collaborative way. We used both qualitative and quantitative methods for the analysis of the experiment. We collected qualitative data by semi-structured interviews, and quantitative data via questionnaires (Big Five Personality Test [6], Self-Assessment Manikin (SAM) [9], Negative Attitude Towards Robots Scale (NARS) [36], and CH33 [26]).
The former allowed us to investigate in depth what the participants were thinking or experiencing at the moment when the robot took its action, while the latter allowed us to systematically analyze the relationships in the collected data. Our study is exploratory in nature, and the combined qualitative and quantitative method brought us to a holistic understanding of human perception of the robot's unanticipated action.

RELATED WORK

Unanticipated Robot Action
In current studies of human-robot interaction, the word "unanticipated" is often used to describe an "unanticipated environment" or "unanticipated human behavior" to which the robot is required to adapt. The word is rarely employed to denote "unanticipated robot behavior". By "unanticipated robot behavior", we mean that the robot initiates actions during human-robot interaction without first informing the human user of its imminent actions. In general, research on robot-initiated or proactive action aims to design an effective and meaningful human-robot interaction by increasing the level of robot autonomy and taking into account human expectations and preferences [32, 34]. Initiation is also linked to the question of decision-making in collaborative work or teamwork, such as which agent (i.e., robot or human) initiates action at the appropriate moment in response to the actions of others [24, 25]. In the context of a collaborative task, Baraglia et al. study a robot that can initiate assistance for the human partner or even proactively perform its action taking into account the human's actions [5]. In their research, however, the experiment participants were informed beforehand that the robot would take an action to help them, although the details of the action were not disclosed. In the experiments of Willemse and Van Erp [44], which investigated the beneficial role of robot-initiated touch and human-robot social bonds under stressful conditions, the participants were not informed that they would be touched by the robot while watching a scary movie. Although [44] demonstrated the positive effect of robot-initiated touch in attenuating physiological stress responses and increasing the perceived intimacy of the human-robot bond, they did not focus on the effect of the robot's initiative as unanticipated action. Indeed, these researchers did not consider a situation in which a human user encounters an unexpected robot behavior and is to some extent surprised by the robot's action.
The study by Chen et al. [11] in a nursing context is somewhat similar to what we aim to investigate. In their research, a robotic nurse could take an action towards a participant in a "no warning" condition, where the participant was not informed of the robot's action of touching them. Participants' perceptions of and preferences for the robot's action in the "no warning" condition were compared with those in the "warning" condition, where the robot verbally informed the participant before touching them. Interestingly, the "no warning" condition obtained more favorable responses from participants, contrary to Chen et al.'s hypothesis that the participant would "have higher feelings of positive affect and lower feelings of negative affect when they receive a warning". Investigating the meaning construction of collaborative robots by human users, Abe et al. [1, 2] designed an experimental setting in which human-robot interaction was disrupted by the robot's motion to simulate an "unexpected" or "uncertain" situation for the human. There is a similarity between what they call an "uncertainty of the robot motion" during the interaction and what we focus on in the present research. As Levillain and Zibetti [30] state, a robot's unanticipated behavior is beneficial for human-robot interaction because it prevents the human from getting used to the robot's behavior and renews their interest. However, in the context of manufacturing scenarios, a robot's unpredictable behavior should be avoided in order to ensure safety [31]. Our aim is to explore in detail how participants feel when they encounter unanticipated situations during collaboration with a robot, and how they interpret the robot's action. According to our literature review, these questions remain open.

Physical Interaction
In the field of physical interaction, robot touch is often investigated in different contexts. For instance, robot touch that aims to mimic a human touch has been shown to increase human users' trust and comfort with robots [35, 40]. However, as shown in [28], robot touch can also lead participants to perceive the robot as inappropriate. As mentioned above, the studies by Willemse et al. [43, 44] have shown that robot-initiated touch can attenuate physiological stress responses (e.g., heart rate) and increase the perceived intimacy of the human-robot bond. Robot hugging has also attracted increasing interest from the research community as it can reduce human stress and increase comfort, with active hugging by the robot being shown to provide positive user experiences [7], although the hugging action alone is not always perceived as positive [42].
Physical interactions have also been studied in the context of exercising with robots [16], where social-physical exercises improved user experience and engagement. Rehabilitation robots are also used to perform exercises [4, 29]. In these cases, the tasks to be performed by the users are mostly game-based exercises designed to address specific rehabilitation goals, and the users' perceptions are not always analyzed in the research. Granados et al. [37, 38] investigated the effect of robot-initiated actions and users' perceptions in the context of dance learning with a robotic dance partner, where the robot acted as the physical dance partner. Their long-term analysis showed that the adaptation of the robot to the user increased the user's comfort.
In the current literature on human-robot physical interaction, physical contact and/or actions, even when initiated by the robot, do not necessarily generate an unanticipated moment. Even when robot-initiated actions are not formally announced to the participants of an experiment, the robot's actions are still predictable due to the experimental setting of the exercise or dance. Therefore, the question of how people would react to and perceive an unanticipated physical action from the robot during an interaction remains largely unexplored.

Collaborative Experience/Perception of Cooperation
In addition to our specific focus on unanticipated robot actions and physical interaction, our research addresses the process of interpreting robot behavior as cooperative or uncooperative during collaborative tasks. A successful collaboration can be analyzed along different dimensions. For example, mutual understanding, shared knowledge, or a common strategy to achieve the goal are central to a collaboration [12, 20]. Fluency in a shared activity, i.e., well-synchronized movement between a person and a robot, is also an important factor in ensuring collaboration [19]. Similar to our approach, Zuckerman et al. [45] reported that the "Adaptive condition", where a robotic device signals non-verbal cues, led to improved teamwork quality, a better sense of control, and a more positive perception of the shared-control experience.
Law et al. [27] showed that even when people perform the same collaborative task with a robot, their perception of cooperativeness or partnership with the robot differs. For example, in a turn-taking task between a person and a robot, people can interpret the robot's role as subordinate, leader, adversary, or colleague. Zuckerman et al. [46] reported that the collaborative experience depends on the interaction style between the person and the interactive device. Their experiment showed that participants felt collaborative with the device when they had a 'joint' interaction, where the interactive device and the participant performed a task simultaneously, while they experienced the task as competitive when they had a 'turn-taking' interaction, where the task was performed alternately by the participant and the device.
Based on a theory of interactionism from sociology [10], we demonstrated that uncertainty created by an unanticipated robot action during the interaction confuses human interactants, affects their perception of the robot, and leads to a negative interpretation (uncooperativeness) of the robot [3].
Thus, observing how people experience cooperativeness or collaboration in a shared task with a robot is not straightforward, even when the goal of the shared task is well defined. In the case where a person encounters an unexpected or unanticipated robot behavior, the experience of collaboration is not ensured. The question of the process of interpreting the robot's action as cooperative or uncooperative remains open.

HYPOTHESIS AND CONTRIBUTIONS
Our objective is to explore the moment when a human encounters an unanticipated robot behavior (action) during physical interaction and to investigate the process of interpreting the robot as cooperative or uncooperative. To do so, we focus on people's responses at this moment: their comprehension, perceptions, feelings, and/or thoughts toward the robot's unanticipated behavior. In addition, we also determine whether there are relationships with their personal background (e.g., age, gender, personality) and performativity that may further explain their interpretations and perceptions. In Suchman's terms [41], we analyze a "situated action" of the user by simulating "surprising moments" in an experimental setting. The underlying questions of our research are "What does the human think or feel towards an unanticipated robot action?" and "How does their perception or opinion of the robot change before and after the unanticipated moment?".
We hypothesize that the positive or negative evaluation of the robot by the participant is related to the participant's understanding of the robot's action. Specifically, in this paper, we call the understanding of the robot's action "correct" when the participant is able to infer what the intention of the robot is, i.e., the meaning of the robot's action(s); a positive evaluation means that the participant perceives the robot as cooperative, and a negative evaluation means that the participant perceives the robot as uncooperative.
We consider that the situation studied here is similar to what we termed "uncertainty" during an interaction, where the human cannot understand the robot's action [3]. We refer to that study to establish our hypothesis as follows:

Hypothesis: Participants who correctly understand the robot's intention of targeting higher-value coins (refer to Section 4.2 for the definition of the robot's intention) will have a positive perception of the robot (and negative otherwise), i.e., the robot is perceived as cooperative (and uncooperative otherwise).
The main contributions of this paper are the following:
- A framework tool in the form of a classification of the evolution phases of participants in a physical human-robot interaction (pHRI) scenario where robots can take unanticipated physical actions.
- A detailed combined qualitative-quantitative analysis of unanticipated physical actions during pHRI scenarios.
To the best of our knowledge, this is the first article that explores the effect of unanticipated physical actions on participants and proposes a method for classifying their perception phases, whose outcomes are further supported by the analysis conducted on the quantitative data collected on perception, affective state, personal information, and performativity.

Game-based Experiment: "Catch-falling-coins Game"
We designed a game-based experiment in which the participant is asked to "play the game" using a robotic arm (Figure 1). The game, which is displayed on a screen in front of the participant, consists of catching virtual coins that fall from the top of the screen by moving a virtual catcher tray left or right along the bottom of the screen. The participant must hold the end-effector of the robot and move the arm to move the catcher tray left or right and thereby catch the coins. If the participant releases the end-effector, the robot stops moving. Direct physical contact is therefore required to play the game. The coins can have four values: 1, 5, 10, and 20. To maintain the participant's motivation and engagement in the game during the experiment, the sum of the caught coins determines the value of a bonus payment made to the participant, with the virtual coins worth 1, 5, 10, and 20 Japanese Yen. The coins fall at regular time intervals of 1 s but at different speeds. Coins of value 1 and 5 fall at different speeds in the range 3-36 cm/s, and coins of value 10 and 20 always fall at the maximum speed of 36 cm/s (screen size 110 cm wide × 60 cm high). We use a pre-generated sequence of coins and speeds so that each participant plays the same game, allowing for a better comparison of the data collected across all participants.
The game was implemented using the PyGame package and was developed by the authors specifically for this experiment. For the robot, we used a Rethink Robotics Sawyer collaborative manipulator with 7 degrees of freedom. The robot was position-controlled when not in contact with the participant, and torque-controlled otherwise. The torques are generated using an optimization-based scheme to constrain the velocity and forces to ensure physical safety. The Cartesian position of the end-effector was not constrained to move in only one direction, to allow the participants to freely adjust the height of the end-effector based on their comfort, but only the horizontal direction (x) was projected into the game to move the catcher tray. In this article, we omit further details of the robot control and invite the reader to refer to our previous article [21].
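The pre-generation of the coin sequence can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the coin counts and the speed rules come from the experiment description, while the seeded shuffle and the (value, speed) tuple layout are our own assumptions.

```python
import random

# Coin counts per value and speed rules as reported in the experiment;
# the seeded shuffle and data layout are illustrative assumptions.
COIN_COUNTS = {20: 15, 10: 16, 5: 36, 1: 19}   # 86 coins in total
MIN_SPEED, MAX_SPEED = 3.0, 36.0               # cm/s

def generate_coin_sequence(counts, seed=0):
    """Pre-generate one fixed (value, speed) sequence so that every
    participant plays exactly the same game."""
    rng = random.Random(seed)
    values = [v for v, n in counts.items() for _ in range(n)]
    rng.shuffle(values)
    sequence = []
    for value in values:
        # 10- and 20-valued coins always fall at maximum speed;
        # 1- and 5-valued coins fall at a random speed within the range.
        speed = MAX_SPEED if value >= 10 else rng.uniform(MIN_SPEED, MAX_SPEED)
        sequence.append((value, speed))
    return sequence

coins = generate_coin_sequence(COIN_COUNTS)
```

Fixing the random seed makes the sequence reproducible, which is what allows the same game to be replayed identically for every participant.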

Robot Unanticipated Action and Robot's Intention
When the participant held the robot's end-effector during the game, the robot could display two behaviors (modes): a compliant mode and an active mode. In the compliant mode, the participant could move the robot freely and the robot would simply comply (the participant is fully in control of the robot's movements). In the active mode, the robot has its own "intention" and does not comply with the participant; rather, it "forces" the participant to comply with its intention (i.e., it may move against the participant's desired movements).
More specifically, the active mode is implemented so that it is felt as a physical action from the robot, i.e., the participant feels a force as their hand is pushed or pulled by the robot. This robot action was programmed to be activated only when higher-valued (10 and 20) coins appeared on the screen. As soon as a 10- or 20-valued coin appeared and started falling on the screen, the robot's end-effector acted to move the catcher tray towards that coin until it was caught or fell off the screen. In other words, in the active mode, the robot behaved as if it "intended" to catch higher-value coins. We call this robot action of seeking higher-value coins the "robot's intention". This action is implemented as a position regulation task in the controller of the robot, which regulated the position of the end-effector towards the locations of the higher-value coins, generating higher torques against the human action. The participant is then "forced" to follow the robot towards the higher-value coins. Note that the generated torques are strong enough for the participant to feel the active movement of the robot, but given the torque-control nature of the controller, the participant could also apply sufficient force to the robot to act against its direction of movement. However, the robot does not perform a fine regulation to actually catch the coin (i.e., it does not position the catcher tray at the exact point where a high-value coin would fall), so the participant has to refine the positioning if they want to catch the coin. This implementation is meant to prevent the participants from relying fully on the robot's actions and to keep the relationship between the robot and the participant a collaboration in which the robot is helping, rather than performing the task in their place (which would leave the participants with no control of the situation). When there are no 10- or 20-value coins on the screen, the robot is in compliant mode. The participant was not informed beforehand of the robot's active mode (in the instructional video that the participant watched, only the compliant mode is shown) and experienced it as an "unanticipated action" during the interaction.
This robot action is sequential: that is, if multiple high-valued coins are present on the screen, the robot will aim at the first one that appeared; once it has disappeared, the robot will aim in sequence at the second one that is still on the screen, and so on. A single game lasted 150 seconds. There were 86 falling coins in each game, comprising 15 coins of value 20, 16 of value 10, 36 of value 5, and 19 of value 1. The robot, therefore, acted with "intention" 31 times in a game of 86 falling coins. The composition of the coins was computed so that a sufficient number of coins could be displayed within a sufficiently long game session. The number of actions was frequent but not overly continuous, and the total value of the coins would not exceed 1,000 Japanese Yen.
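The mode-switching and sequential-targeting rule described above can be sketched as follows. This is a hedged illustration of the behavioral logic only, not the actual torque controller; the dictionary keys ("value", "appeared_at", "x") are our own assumptions about the data layout.

```python
# Illustrative sketch of the mode-switching rule: the robot is active only
# while a 10- or 20-valued coin is on screen, and it targets high-valued
# coins sequentially, in order of appearance.
def select_mode(coins_on_screen):
    """Return ("compliant", None) when no high-valued coin is on screen,
    otherwise ("active", x) aiming at the earliest-appearing 10/20 coin."""
    high = [c for c in coins_on_screen if c["value"] >= 10]
    if not high:
        return ("compliant", None)
    # Sequential targeting: the first high-valued coin that appeared wins.
    target = min(high, key=lambda c: c["appeared_at"])
    return ("active", target["x"])
```

In the real system, the returned target position would feed the position regulation task of the torque controller rather than being used directly.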
Participants were exposed to two different types of game-play sessions with the robot: one where the robot displays only the compliant mode (Trial sessions), and one where the robot displays the active mode (Active sessions). Each participant first performed the Trial sessions and then the Active sessions. This means that during the first sessions of the game, the participants experienced only a compliant robot, and since they had not been informed about the robot's actions, they would not expect the robot to behave differently in the following sessions. Thus, when the active mode was displayed, the participants would experience the unanticipated action as a "surprising" event. Given that the event is repeated throughout a single game session (31 times, as explained above), participants could then come to "expect" the event to occur, but not necessarily understand the meaning of these actions, nor when they would occur, as the actions were not formally announced before they occurred. Thus, the action remained unanticipated in nature according to our definition. Further details and analysis are provided in the remainder of the paper.

Participants
A total of 40 people participated in this study. All participants were recruited via a recruiting company to avoid any conflict in the process and to obtain a diverse group of participants. The participant eligibility criteria are described in [21]; we briefly describe them here: not to have previously participated in similar studies, to be in good health (specifically, no heart disease and no movement disorders), to be younger than 50 years of age, to be between 150 cm and 180 cm tall, and to weigh between 50 kg and 80 kg. The physical constraints were imposed due to limitations of the available sensor suite (e.g., the size of the clothing for motion-capture markers) and to obtain a more uniform perception of the physical interaction. No constraints were imposed on profession or familiarity with robots/technology. All participants were born and raised in Japan, were between 20 and 50 years old, and had no prior experience with similar experiments. We chose only Japanese participants to avoid possible culture-dependent variations and because of the difficulty of recruiting participants of different nationalities in Japan.
Among the 40 participants, 5 were removed because some data were not acquired during the experiment, so 35 were retained for this study. Of the 35 participants, 17 were female and 18 were male; 12 were in their 20s (5 female and 7 male), 12 were in their 30s (6 female and 6 male), and 11 were in their 40s (6 female and 5 male). No participants identified themselves as "other" in the gender category. The average age of all participants is 36.4 (std 10.6); the average age of the participants in their 20s is 22 (std 1.6), in their 30s 34.3 (std 3.5), and in their 40s 46.5 (std 3.0). The participants received a bonus payment determined by their game result, as described in Section 4.1, in addition to a reward based on the time that they spent doing the experiment.
Approval of all ethical and experimental procedures and protocols was granted by the ethics committee at the National Institute of Advanced Industrial Science and Technology (AIST) in Tsukuba, Japan, under application 2019-0544. Before the experiment, participants received proper information and gave their informed consent to participate in the study.

Questionnaires
All participants were asked to fill in a questionnaire on their background information: age, gender (female, male, other), and familiarity with robots. This information was collected to analyze whether it influenced the participant's understanding and interpretation, as other studies have shown that gender [39] and age [33] can influence the perception of a robot in interaction scenarios. We used well-established validated questionnaires to assess the participants' personality traits, attitudes towards robots, perceptions of the robot, and their affective state during the experiment. Before the experiment, participants were asked to complete a simplified version of the Big Five personality traits questionnaire [6], consisting of 15 questions that project into five personality traits, and the Negative Attitude Towards Robots Scale (NARS) [36], which reflects different aspects of the psychosocial attitudes of people towards robots. During and after the experiment, we used the visual Self-Assessment Manikin (SAM) [9] on a 9-point Likert scale to assess changes in the participants' emotions in the affective dimensions "pleasure", "arousal", and "dominance". We used the CH33 questionnaire [26] to evaluate the participant's perception of the robot, both with and without robot actions. This questionnaire was established via an extensive study carried out by a psychologist and contains 33 questions that identify six factors: "performance", "acceptance", "harmlessness", "toughness", "humanness", and "agency", which reflect the psychological safety of robots. For the details of the Big Five and CH33 questionnaires the reader may refer to [22], while for the SAM and NARS please see [9] and [36], respectively.
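As a purely hypothetical illustration of how a 15-item short Big Five questionnaire projects into five trait scores, items can be averaged per trait. The item ordering and the absence of reverse-keyed items below are our own simplifying assumptions; the actual instrument in [6] is not reproduced here.

```python
# Hypothetical scoring sketch: three Likert items averaged per trait.
# Item order and keying are illustrative assumptions, not the real instrument.
TRAITS = ("openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism")

def score_big_five(responses):
    """responses: 15 Likert ratings, assumed grouped by trait in order."""
    assert len(responses) == 15
    return {trait: sum(responses[3 * i:3 * i + 3]) / 3.0
            for i, trait in enumerate(TRAITS)}
```

Real short-form instruments typically also reverse-score some items before averaging, which this sketch omits.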

Semi-structured Interviews
We conducted semi-structured interviews with participants immediately after their experiment. The semi-structured interview is one of the most effective and frequently used methods of data collection in the social sciences [8]. The method consists of exploring subjective viewpoints [17] on what the participant was feeling, thinking, and experiencing [15] during an experiment, with a researcher asking them predetermined open-ended questions. The semi-structured interview allows participants to recall and reflect on their experiment during the interview and to express their experience in their own words. It can reveal elements that are difficult to find via questionnaires and complements the information established by questionnaires in a holistic way. The interview contains 11 questions (Table 1) designed to examine the following analysis points:
- The participant's perception changes during the game. (How does the participant's perception of the robot change when the robot initiates an unanticipated action?)
- The participant's understanding of the robot's unanticipated action. (Does the participant correctly understand the "intention" of the robot in aiming towards higher-value coins?)
- The participant's interpretation of the robot's unanticipated action. (Does the participant perceive the robot as cooperative or uncooperative?)
Interviews were conducted in Japanese by a Japanese-speaking researcher, remotely via Microsoft Teams through a laptop computer placed in the experimental room. The interview recordings were analyzed by two experts (a sociologist and an HRI researcher). We computed Cohen's kappa to measure inter-rater reliability; the score between the two experts was above 0.8, indicating strong agreement.
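For reference, Cohen's kappa for two raters over nominal categories can be computed with the standard formula, as in the sketch below; the actual coding categories used by our raters are not reproduced, and the example labels are illustrative only.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from the raters' marginal label frequencies.
    expected = sum(freq_a[l] * freq_b[l] for l in set(freq_a) | set(freq_b)) / n ** 2
    return (observed - expected) / (1 - expected)
```

Kappa corrects raw percent agreement for the agreement expected by chance, which is why a score above 0.8 is conventionally read as strong inter-rater reliability.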

Experimental Procedure
The experiment was held in an experimental room at the National Institute of Advanced Industrial Science and Technology (AIST) in Tsukuba, Japan, during September and October of 2020. The experiment followed a strict protocol and was conducted by two experimenters: one explaining the participant's rights by reading the same document to all participants and obtaining their consent, and one setting up and carrying out the game session. When necessary, a Japanese-speaking translator joined the experiment because the experimenters were non-Japanese speakers. An interviewer conducted an interview remotely via Teams at the end of the experiment.
To reduce the interaction between the experimenters and the participant and to maintain the most similar conditions possible for all participants, each participant was asked to read a document containing the full experiment protocol and then watch an instructional video that explained the game and how to use the robot to play it [21]. The instructional video contained the minimum information necessary to play the game, and only the compliant mode of the robot was shown. No further instructions about how to approach the robot, how to grasp the robot, or in general how to behave during the interaction were given, although the participant had an opportunity to ask questions during the trial session. The experiment was organized as follows:
Welcome and preparation
(1) The experimenter welcomed the participant, and the participant read the instruction document and watched the video about the game.
(2) The participant completed the preliminary questionnaire (on background information), the Big Five personality trait questionnaire, and the NARS questionnaire.
(3) The participant was fitted with equipment for physiological measurements meant for the investigation of measurable interaction factors related to unanticipated robot actions during pHRI (please refer to [21] for details); these factors are out of the scope of this paper.
Trial session
(4) The participant played the game twice in the trial session, during which the robot did not take any action. One game session lasted about 3 minutes. The coins caught by the participant during the trial session were not paid to the participant as a bonus payment.
(5) The participant completed the CH33 and the SAM questionnaires.
Active session
(6) The participant played the game three times in the active mode, in which the robot initiated its actions, assisting the participant to catch higher-value coins. The participant was not informed that the robot would initiate such actions. The sum of the coins caught by the participant during the active session was paid to the participant as a bonus payment.
(7) The participant again completed the CH33 and the SAM questionnaires.
Interview
(8) The participant removed the equipment for physiological measurement.
(9) The participant was interviewed by the interviewer. The interview lasted about 10-15 minutes.
(10) The experimenter announced the end of the experiment to the participant.
The experiment required approximately 1.5 to 2 hours to complete. Since this article focuses on the analysis of the participants' perceptions as measured by the questionnaires and semi-structured interviews, the analysis of the physiological measurements has been omitted. Please refer to [21] for the results and analysis of the physiological measurements.

RESULTS AND ANALYSIS
Data from 35 participants of the 40 recruited were retained and analyzed. For the analysis, we considered the second trial session (here designated "T") and the three active sessions ("A1", "A2", and "A3"). The data from the first trial session were discarded, as this session was intended to allow the participants to become familiar with the robot; during the first trial, participants could ask any questions about the experiment, so those data are not meaningful for the analysis.

Table 2. Understanding of the robot's intention and perception of its cooperativeness. The number of participant responses analyzed was 35; the number of people in each category is shown in parentheses.

- Perceived some kind of robot's intention (33)
  - Correctly understood the robot's intention (22)
    - "The robot is cooperative." (15)
    - "The robot is sometimes cooperative." (3)
    - "The robot is not cooperative." (3)
    - No answer (1)
  - Did not correctly understand the robot's intention (11)
    - "The robot is cooperative." (1)
    - "The robot is sometimes cooperative." (3)
    - "The robot is not cooperative." (6)
    - "I don't know." (1)
- Did not perceive any robot's intention (2)
We performed both qualitative and quantitative analyses. The former is based on the semi-structured interviews conducted at the end of the experiment, and the latter is based on the background information of the participants, their answers to the questionnaires, and their performance in the game.

Comprehension of the Robot's Intention and Perception of Robot's Cooperativeness.
From the analysis of the semi-structured interviews of 35 people (Table 2), we determined that 33 people understood that the robot had its own intention in the game, while 2 people did not perceive any intention on the robot's part. Among the 33 people who perceived that the robot had an intention to influence the game, 22 correctly understood the robot's intention to catch higher-valued coins (they noticed that the robot was targeting higher-valued coins), while 11 did not (they did not notice that the robot was targeting higher-valued coins). Furthermore, among the 22 people who correctly understood the robot's intention, 15 answered that the robot was helpful or cooperative, 3 answered that the robot was sometimes helpful but not always, 3 answered that the robot was neither helpful nor cooperative, and 1 person did not answer. Among the 11 people who did not correctly understand the robot's intention, only 1 person answered that the robot was helpful or cooperative, 6 answered that the robot was not helpful or cooperative, 3 answered "sometimes cooperative but not always", and 1 person responded, "I don't know".
During the interviews, participants were encouraged to describe their experience of the experiment in general. We identified common recurrent points in the discourses of both categories of participants: those who correctly understood the robot's intention and those who did not. This allowed us to structure the process of participant perception change during the interaction with the robot into three distinct phases: confusion, investigation, and adaptation (acceptance), as follows.

(1) Confusion. In the "confusion" phase, participants realized that the robot was active during the game. They perceived a force exerted by the robot which they had not perceived in the trial session and felt confused by this change. In the interviews, participants expressed their feelings at this moment as "I am perplexed", "I am surprised", and "I thought that there was a malfunction in the robot."

(2) Investigation. Once the participants perceived the robot's action and the force it sometimes exerted, they tried to investigate ways of controlling the robot. The participants tried different strategies to adapt themselves to the force exerted by the robot and to regain control of the robot's motion. These strategies consisted mainly of physical adaptation, such as reducing the force they exerted, putting more effort into controlling the robot, or trying to move the robot in different directions. Participants typically described this phase with phrases such as "I try to move the robot in different ways. If pushing does not work, I pull it." and "I try to do different movements and observe the robot."

(3) Adaptation (acceptance). In this phase, participants recognized that the robot's force was too strong and that they could not take full control over it once the robot started to initiate the action. Participants therefore tried to adapt their movements to the robot's motion. They readily controlled the robot's motion when the robot's force was not activated, i.e., when the robot was not aiming at higher-valued coins; otherwise, they followed the robot's motion or let the robot move without intervening. This phase was described as "I cannot communicate well with the robot. I follow it.", "I am thinking a lot, I should not force the robot too much, or I should try to do small movements to adapt to the robot's motion.", and "I let it go. I adapt to the robot motion because the force is much bigger."

The Participant's General Impression of the Robot.
From the interviews, we found a certain tendency of participants to form different impressions of the robot. Participants who correctly understood the robot's intention and answered that the robot was cooperative tended to have a positive impression of the robot. This was expressed by phrases such as "The robot supports me. To lead me to a better way.", "I thought at first it was malfunctioning, but after I understood the robot's motion. The robot's sudden movement makes me laugh.", "The robot assists my movement, it predicts.", and "It would be a playmate. It helps me." However, some of the participants who answered that the robot was partially cooperative or not cooperative expressed a negative impression: "The robot assists me in getting higher coins, but it makes me tired because of its force.", "It was like a pet. Sometimes it listens to me, sometimes not. I was surprised. I want to go in this direction, but the robot is going to another." Participants who did not understand the robot's intention correctly and answered that the robot is not cooperative expressed negative impressions of the robot: "The robot is like a kid, like my son. The motion is rough and lacks carefulness. It does not adjust to human motion. It is childish.", "The reaction of the robot is slow. It is just a tool for a game, I don't have friendly feelings towards it. I am frustrated.", and "The reaction of the robot is slow. Nothing stands out in this robot."

Change in Participant Emotions and Perceptions from Trial Session to Active Sessions.
Before carrying out the quantitative analysis for the categories obtained from the interviews, we aimed to understand whether participants showed any evolution at all across the game sessions (especially between the trial sessions and the active sessions) in terms of their emotions and perceptions of the robot. This allows us to support, with quantitative data, that the participants had indeed felt that the robot had some "intention" as described in Table 2, and that this feeling altered their emotional state and perception of the robot. To understand whether participants' emotions changed across the sessions, we compared a participant's affective state as measured by the SAM after the trial session with their affective state measured after the first active session ("T" vs. "A1"). We also compared their affective state after the first active session with that after the third active session ("A1" vs. "A3"). Specifically, we performed a one-way ANOVA considering all 35 participants to evaluate the changes in participant affect, ran a Tukey-HSD post-hoc test to verify the actual differences, and report Cohen's d for effect size, as shown in Figure 2. We can observe that participants had a statistically significant (p < 0.05) decrease in "pleasure" (F = 3.46, p = 0.0015, d = 0.95) and an increase in "arousal" (F = 3.31, p = 0.05, d = 0.5) after the robot actions started, while the feeling of "dominance" (F = 3.42, p = 0.0016, d = 0.43) increased significantly between the first and the third active sessions ("A1" vs. "A3").
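The analysis above can be sketched as follows. This is an illustrative example only, not the authors' analysis code: the SAM ratings below are hypothetical, and the post-hoc Tukey-HSD step is omitted for brevity. It shows a one-way ANOVA across sessions plus Cohen's d (pooled standard deviation) for one pairwise contrast.

```python
# Hypothetical sketch: one-way ANOVA on SAM "pleasure" ratings across
# sessions, with Cohen's d for the T vs. A1 contrast. All data invented.
import numpy as np
from scipy import stats

t  = np.array([7, 8, 6, 7, 8, 7, 6, 8, 7, 7])   # trial session (hypothetical)
a1 = np.array([5, 6, 4, 5, 6, 5, 4, 6, 5, 5])   # first active session
a3 = np.array([6, 6, 5, 6, 7, 5, 5, 6, 6, 6])   # third active session

# Omnibus test: do mean ratings differ across the three sessions?
f_stat, p_val = stats.f_oneway(t, a1, a3)

def cohens_d(x, y):
    """Effect size: mean difference divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                     / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled

d_t_a1 = cohens_d(t, a1)  # size of the "pleasure" drop from T to A1
print(f"F = {f_stat:.2f}, p = {p_val:.4f}, d(T vs A1) = {d_t_a1:.2f}")
```

In practice, a post-hoc test (e.g., Tukey-HSD) would follow a significant omnibus result to identify which session pairs differ.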
Participants were also asked to complete the CH33 questionnaire on their perception of the robot after the trial session ("T") and after all active sessions ("Exp"). This allowed us to analyze how the participants' perceptions of the robot changed between a "compliant" and an "active" robot. For this purpose, we performed a variance test between the CH33 scores obtained after the trial session ("T") and those after all active sessions ("Exp"), as shown in Figure 3. Participants had significant changes in their perceptions of robot "toughness" (F = 3.33, p = 0.002, d = 0.61) and "agency" (F = 3.15, p = 0.003, d = 0.59). From these results, we infer that participants perceived the robot as being less tough and having more agency after the robot initiated its actions.

Relationship between Understanding of the Robot's Intention and Perception of Robot's Cooperativeness.
We performed statistical analysis to understand whether there is a relationship between the results obtained from the analysis of the interviews and the quantitative data collected during the experiments: participants' personal information, their performance during the games, and their attitudes towards and perceptions of the robot. Specifically, we aimed to understand whether there are statistically significant differences between groups of participants.
Referring to Table 2, we focus on two major categories derived from the qualitative analysis: (1) understanding of the robot's intention and (2) perception of robot's cooperativeness, which are subdivided into the following subcategories:

Category 1 Understanding of robot's intention:
A. "Yes" (22 people): the participant correctly understood the robot's intention.
B. "No" (11 people): the participant did not correctly understand the robot's intention.
C. "Unclear" (2 people): the participant did not perceive any kind of robot's intention.

Category 2 Perception of robot's cooperativeness:
A. "Yes" (18 people): the participant expressed a perception that the robot is cooperative.
B. "No" (9 people): the participant expressed a perception that the robot is not cooperative.
C. "Sometimes" (6 people): the participant expressed a perception that the robot is sometimes cooperative and sometimes not.
D. "NoAns" (2 people): the participant did not give an explicit answer to the question on their perception of the robot.
The interview data were analyzed by two experts. We measured inter-rater reliability using Cohen's Kappa for the two categories: understanding of the robot's intention and perception of the robot's cooperativeness. The score was above 0.8 for both, indicating substantially high agreement between the experts.
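Cohen's Kappa corrects raw agreement for the agreement expected by chance from each rater's label frequencies. A minimal sketch follows, with entirely hypothetical labels (not the study's coding data):

```python
# Hypothetical sketch: Cohen's Kappa for two raters labelling the same
# interviews. Labels are invented for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items."""
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

rater_1 = ["Yes", "Yes", "No", "Yes", "No", "Yes", "Unclear", "Yes", "No", "Yes"]
rater_2 = ["Yes", "Yes", "No", "Yes", "No", "No", "Unclear", "Yes", "No", "Yes"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # 0.82 for this data
```

A value above 0.8 is conventionally read as near-perfect agreement.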
To analyze the relationships between the categories and subcategories identified in the semi-structured interviews and the quantitative data, we performed statistical analyses to test for significant correlations between the two. The interview categories are categorical variables, while the quantitative data are continuous variables; we therefore used point-biserial correlation, which accounts for this type of data. The disadvantage of the point-biserial method is that only dichotomous (binary) values can be considered for the categorical variable, so some of the subcategories had to be grouped together or discarded entirely. We considered only the subcategories "Yes" and "No" for both the understanding of the robot's intention and the perception of the robot's cooperativeness, discarding the other groups. This choice was led by the larger number of participants in these two subcategories compared to the others, for which a significant result can be obtained.
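The point-biserial setup described above can be sketched as follows; this is an illustrative example with invented data (not the study's dataset), pairing a dichotomized interview category with a continuous measure:

```python
# Hypothetical sketch: point-biserial correlation between a dichotomised
# interview category (1 = "Yes", 0 = "No") and a continuous measure such as
# a Big Five trait score. All values are invented.
import numpy as np
from scipy import stats

understood = np.array([1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1])  # "Yes"/"No" coded 1/0
extraversion = np.array([4.2, 3.8, 4.5, 2.9, 4.0, 3.1,
                         3.9, 2.7, 4.4, 3.6, 3.0, 4.1])      # trait score per participant

# pointbiserialr is mathematically equivalent to Pearson's r with one
# binary variable; it returns the coefficient and a two-sided p-value.
r, p = stats.pointbiserialr(understood, extraversion)
print(f"r = {r:.2f}, p = {p:.4f}")
```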
For the statistical analysis, we considered each of the subcategories listed above and the following parameters from the quantitative data: age, personality (each of the Big Five personality traits), game score (each of the sessions, and the aggregate), differences in the CH33 factors (each of the six factors), NARS (each of the three factors), and SAM (each of the three affective dimensions in each of the sessions).
To have a broader overview of the data, including the subcategories with smaller populations ("Sometimes", "NoAns", "Unclear"), we also show side-by-side boxplots of the listed parameters grouped by the subcategories of understanding of the robot's intention and perception of the robot's cooperativeness. While no statistical tests were performed on these, as the categories are dependent rather than independent variables, a visual comparison of the data, coupled with the correlations performed, can lead to interesting observations.
Results were considered significant when p < 0.05. Comparisons between the two major categories were not conducted because, as shown in Table 2, the two categories contain the same population and a direct comparison would not give meaningful results.
There was no significant difference in terms of gender (female vs. male) between the subcategories, nor was any meaningful trend observed, so this analysis is omitted. All participants selected 1 or 2 (on a scale of 1-5) as their level of familiarity with robots, so this information was not utilized for the analysis. In the following, results for the aforementioned tests are reported.

Age and Understanding of Robot Intention/Perception of Robot's Cooperativeness.
There were no statistically significant correlations between participant age and understanding of the robot's intention (Figure 4(a)), nor between age and perception of the robot's cooperativeness (Figure 4(b)). It was, however, possible to observe that the participants who did not understand the robot's intention were older on average than those who did. Likewise, those who did not find the robot cooperative were on average older than those who did.

Personality and Understanding of Robot's Intention/Perception of the Robot's Cooperativeness.
Participants' personality showed significant correlations with both understanding of the robot's intention and perception of the robot's cooperativeness (Figure 5). Specifically, from the point-biserial correlation results, the personality traits of "Extraversion" and "Openness" were positively correlated with understanding of the robot's intention, with correlation coefficients r = 0.37 (p = 0.03) and r = 0.34 (p = 0.04), respectively. "Extraversion" was also positively correlated with the perception of the robot's cooperativeness, with r = 0.63, p = 0.0004.

Game Score and Understanding of Robot's Intention/Perception of Robot's Cooperativeness.
The game score showed a significant correlation with the understanding of the robot's intention, but not with the perception of the robot's cooperativeness. However, this correlation was only significant in the trial session (r = 0.44, p = 0.009), where the robot did not perform any actions. It is also worth noting that, while the "Unclear" category was not considered for the correlation due to its low number of participants, its average game score was lower than those of the "Yes" and "No" categories. Although there were no significant correlations, Figure 6(b) shows that the participants who correctly understood the robot's intention ("Yes") achieved on average higher scores than those who did not ("No") across all the game sessions. A similar trend was observed for the perception of the robot's cooperativeness in Figure 6(a): participants who found the robot cooperative ("Yes") achieved higher game scores than those who did not ("No"), although this tendency was not statistically significant in any of the sessions.

Differences in CH33 Scores of Psychological Safety between Trial Session and Active Sessions, and Understanding of Robot's Intention/Perception of Robot's Cooperativeness.
To assess changes in participants' perception of the robot during the experiment, we used the percentage difference between the scores for the six basic dimensions of perception from the CH33 questionnaire after the trial session and the scores after the three active sessions. A positive difference means the score after the active sessions increased with respect to that after the trial session. There were no significant correlations between those who understood the robot's intention and those who did not (Figure 7(a)), nor between those who perceived the robot as cooperative and those who did not. However, as observed in Figure 7(b), there are notable differences in the change in perception of the robot's "performance", "acceptance", and "harmlessness" between those who perceived the robot to be cooperative and those in the "NoAns" subcategory. Specifically, between the questionnaire after the trial session and that after the active sessions, the two participants in the "NoAns" subcategory showed a decrease in their acceptance of the robot and an increased perception that the robot was harmless, while those who explicitly stated that the robot was or was not cooperative had little to no change in their CH33 scores.

NARS (Negative Attitude Towards Robots Scale) Scores and Understanding of Robot's Intention/Perception of Robot's Cooperativeness.
The NARS questionnaire quantifies the psychological attitude of a participant towards robots on three sub-scales: (1) Situations of Interaction with Robots (S1-Interaction), e.g., "I would feel very nervous just standing in front of a robot."; (2) Social Influence of Robots (S2-Social Influence), e.g., "I am concerned that robots would have a bad influence on children."; and (3) Emotions in Interaction with Robots (S3-Emotional Interaction), e.g., "I would feel relaxed talking with robots." For further insight into these factors, the reader may refer to the original NARS publication [36]. From our results, there were no statistically significant correlations between negative psychological attitudes and either understanding of the robot's intention (Figure 8(a)) or perception of the robot's cooperativeness (Figure 8(b)). However, we note that those who did not explicitly state their understanding of the robot's intention ("Unclear") had lower NARS scores than the others.

SAM (Self-Assessment Manikin) Scores and Understanding of Robot's Intention/Perception of Robot's Cooperativeness.
Affective state was measured in three affective dimensions, "pleasure", "arousal", and "dominance", in the second trial session and in the three active sessions. No significant correlations were found between these dimensions and the understanding of the robot's intention (Figure 9) or the perception of the robot's cooperativeness (Figure 10). However, although not statistically significant, the scores for "dominance" and "pleasure" of those who understood the robot's intention ("Yes" in Figure 9) and of those who perceived the robot as cooperative ("Yes" in Figure 10) were higher across the sessions "Trial", "A1", "A2", and "A3" than those of participants who did not understand the robot's intention or did not perceive the robot as cooperative. We also note that Figure 9 shows that those who did not answer explicitly whether they understood the intention of the robot ("Unclear") scored lower in the "dominance" dimension than participants in the other subcategories ("Yes", "No"), while those who did not answer whether they felt the robot was cooperative ("NoAns") scored higher in "dominance" compared to the other subcategories, as seen in Figure 10.

DISCUSSION
The findings from the qualitative analysis of the semi-structured interviews and the quantitative analysis based on the questionnaires allowed us to explore the participants' perceptions of unanticipated robot action, particularly their understanding of the robot's "intention" and its relationship to positive or negative perceptions of the robot during the active sessions, and thus to evaluate our hypotheses.

Hypothesis
Concerning the participants' understanding of the robot's intention, although almost all participants (33 of 35) perceived that the robot had some sort of "intention", only 22 of 35 (63%) correctly understood that intention, and 11 (31%) were not able to identify what it was. This result shows that human understanding of the robot's intention during physical interaction was not straightforward. Inferring or understanding another's intention from only the physical force and motion cues transmitted to the body is surely difficult or variable, because such an ability strongly relies on the sensibility of haptic or motion perception, which varies according to the personal and social experiences of an individual [23].
Our hypothesis, "Participants who correctly understand the robot's intention will have a positive perception of the robot", which addresses the relationship between an understanding of the robot's intention and a positive perception of the robot, is supported. The majority of participants who correctly understood the robot's intention had a positive perception of the robot, with 18 of the 22 (82%) answering "The robot is cooperative" or "The robot is sometimes cooperative"; the hypothesis is therefore verified with statistical significance (binomial test with n = 22 and k = 18). As reported in [27], knowing the robot's strategy, or feeling that the robot is mindful of the human partner's strategy, is central to gaining trust in the robot and to forming a positive perception of it.
It may be premature, however, to validate the reverse of our hypothesis: "Participants who do not correctly understand the robot's intention will have a negative perception of the robot." From the analysis of the semi-structured interviews, among those who did not correctly understand the robot's intention, only 55% (6 of 11 participants) had a negative perception of the robot, so this hypothesis cannot be verified with statistical significance (binomial test with n = 11 and k = 6). Further research will be required to examine the potential relationship between a lack of understanding of the robot's intention and a negative perception of the robot.
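The two binomial tests can be reproduced as a sketch. This is illustrative only: the counts come from the paper, but the one-sided direction against a 0.5 chance level is our assumption about the test setup.

```python
# Sketch of the two binomial tests, against a 0.5 chance level (assumed).
from scipy.stats import binomtest

# Forward: 18 of 22 participants who understood the intention perceived
# the robot as (at least sometimes) cooperative.
forward = binomtest(18, n=22, p=0.5, alternative="greater")

# Reverse: 6 of 11 participants who did not understand the intention had
# a negative perception of the robot.
reverse = binomtest(6, n=11, p=0.5, alternative="greater")

print(f"forward: p = {forward.pvalue:.4f}")  # ~0.0022, significant at 0.05
print(f"reverse: p = {reverse.pvalue:.4f}")  # 0.5, not significant
```

This matches the asymmetry reported above: 18/22 is very unlikely under chance, while 6/11 is exactly what chance predicts.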

Quantitative Analysis
Further discussion of the outcomes obtained from the qualitative analysis is supported by the quantitative analysis performed. We observed a relevant relationship between understanding the robot's intention and performance of a task in collaboration with the robot, with some significant correlations between the interview results and the quantitative data, especially for personality traits. As shown in Section 5.2.5, participants who correctly understood the robot's intention performed better (obtained higher scores in the game) than those who did not, although the correlation is not statistically significant. This result provides an important insight into the relationship between understanding the robot's intention and effective performance of a collaborative task. Noting that Dragan et al. [13] emphasize the importance of the legibility of a robot's motion to a human partner during collaboration, understanding another's intentions, or what they are attempting to do, may be an essential factor for effective collaborative performance. This is likely also related to the positive perception of the robot: while the positive correlation is not statistically significant, those who had a positive perception of the robot also scored on average higher than those who did not. Our research does not, however, prove any causal relationship between understanding the robot's intention and effectiveness in game performance, because participants who were able to perform well in the game might merely also have been capable of correctly inferring the robot's intention.
Also, as shown in Section 5.2.4, a significant correlation between some personality traits and participants' understanding of the robot's intention was observed. From our findings, we may speculate that people with extraverted or open personalities could more easily or more rapidly understand the robot's intention and thus achieve higher scores in the game, as shown in Section 5.2.5. Although our current research does not prove any causality between the three factors of personality, performance, and perceived robot cooperativeness, it does show that there is a relationship between them. This evidence contributes, from a human-centered perspective, to the study of how a human understands and interprets a robot's action or intention and reacts to unanticipated robot action.
Concerning the relationship between understanding of the robot's intention, positive (cooperative) or negative (uncooperative) perception of the robot, and the psychological factors measured by CH33, NARS, and SAM, we were unable to draw a conclusion from these outcomes. From the findings shown in Sections 5.2.6 and 5.2.7, we would broadly summarize that the participants who were identified as not having understood any kind of robot intention ("Unclear") or who did not give any clear answer ("NoAns") may have had different attitudes compared to the other participant categories. We may speculate that this arose from the semi-structured interview format, in which certain people were not comfortable expressing themselves and providing clear answers. However, the populations of the "NoAns" and "Unclear" subcategories are much smaller than those of the other subcategories, so those data may not be reliable, or may be too marginal to analyze. Therefore, further examination will be needed.
Many of the statistical analyses performed in Section 5, while showing interesting trends, did not show significant correlations. We believe that this could be due to several reasons, the most important of which are (1) the number of participants, (2) the complexity of the interaction task, and (3) the questionnaires used. The number of participants may not have been sufficiently large to capture the complex interactions involved in the experiment. The population also spanned three age groups, with only a dozen participants per group. Although age did not show significant correlations, we cannot exclude that it had a relevant effect, especially as there is a clear trend that more of the younger participants understood the robot's intention and perceived the robot as cooperative. The same applies to the remaining factors for which no significant correlations were obtained but trends could be observed. The experiment was designed to involve a complex task, as interactions in the real world are complex. However, this complexity may have impacted the obtained results, as many elements of the experiment may have contributed to the participants' perceptions. The questionnaires, especially CH33 and NARS, are commonly used and have been validated for human-robot interaction scenarios where there is little or no physical interaction. Therefore, the validity of these questionnaires for an experiment in which a physical task is a major component could be questioned. Nevertheless, the creation of new questionnaires was out of the scope of this exploratory study.

Summary
The three phases of perception change during the active sessions (Section 5.1.1) informed us of the evolution of the participants' thoughts and experiences from the moment the robot first acted. This classification could serve as a framework for analyzing and identifying the phase in which a human gains or loses the perception of collaborating with a robot. For example, to increase the ease of using a robot, a robot designer may design behavior that leads directly to the third phase of adaptation (acceptance), sparing the user the first and second phases. We may also consider a possible fourth "optimization" phase, in which participants investigate different strategies to find the "best" way of performing a task.
An interesting point that emerged from the answers to the semi-structured interview is that the participants seemed to relate their experience to their control over the robot. Even if it was not our original intention, the answers given by the participants seem to indicate that a good percentage of them "expected" to be fully in control of the robot and experienced a "loss of control" when the robot was in active mode. This could then be experienced either positively ("cooperative") or negatively ("uncooperative") by the participant. Further investigation with specific experimental setups would be needed to address this point, which is beyond the scope of this study.
Combining our present findings on the human perception of a robot's unanticipated action with the outcomes on measurable interaction factors from our previous research [21], we can also examine the relationship between the perception of unanticipated robot action and the physical and physiological factors of the participant. For instance, from the analysis of such factors previously reported in [21], participants who perceived the robot as having more positive qualities (higher scores in "acceptance" and "humanness" from CH33, lower scores in "toughness" and "agency" from CH33, etc.) seem to have had a more "relaxed" interaction with the robot, characterized by closer distances, lower forces, lower anxiety, and lower mental load. Furthermore, participants who understood the robot's intention used less force, had less mental load, felt more dominant, and kept both hands closer to the robot's end-effector; and participants who found the robot cooperative used less force and were also more extraverted compared to those who found it uncooperative or less cooperative. Thus, further research combining the factors obtained in [21] with the outcomes of this paper will contribute to the complete design of active physical interaction between a robot and a human.

LIMITATIONS
The present study has several limitations that should be taken into account when interpreting its results. One of the most notable is the absence of a control condition, which hampers our ability to assess the actual meaning of the observed relationships between the qualitative and quantitative data. This means that differences observed in the participants' responses, which may be due to variations in their understanding of the robot's intention, are inherently correlational in nature and not necessarily caused by their understanding of the robot's intention or their perception of the robot's cooperativeness. In future work, the inclusion of a control condition, in which the robot does not perform any actions and participants complete the same set of questionnaires, would provide a baseline for comparison and would allow us to ascertain the impact of the robot's actions on participants' responses.
Another limitation stems from the experimental design, which took place in a controlled laboratory environment with a relatively short duration of interaction between participants and the specific type of robot used in the study. This controlled environment does not fully represent real-world scenarios, and the short duration of the interaction may not capture the complexities and dynamics that may emerge in longer-term engagements. Therefore, caution should be exercised in generalizing the results to real-world settings involving different types of robots and longer interactions.
The decision to offer participants a bonus payment was driven by the need to increase engagement with the robot during the experiment. This approach was informed by previous studies that used a similar game-based experimental design and encountered issues with participant engagement. While the payment was intended to incentivize active participation, it is important to recognize that this may have introduced unintended biases into the results. The impact of the bonus payment on participant behavior and responses needs to be carefully considered when interpreting the study's outcomes.
It is important to emphasize that the study builds upon previous exploratory experiments in which interesting trends and relationships among the factors were observed. However, due to the exploratory nature of these previous studies, certain relevant factors may not have been fully explored or considered. Consequently, the interpretations and conclusions drawn from the current study are contingent on the insights derived from these earlier investigations. Moreover, the data collected for this study are not fully reflected in the existing literature, and various factors may not have been taken into account. The findings may therefore only reveal initial trends or associations, and it would be premature to draw definitive conclusions based solely on this study. Increasing the number of participants, refining the experimental design, and considering tailored questionnaires may lead to more comprehensive findings in future studies.
The limitations of the study highlight the need for caution when interpreting the results. This research should be viewed as a first step towards relating participants' understanding of the robot's intentions and their perceptions of its cooperativeness to quantitative data. Further research with improved methods should be pursued to build upon these initial findings and address these limitations more fully.

CONCLUSION
Our research investigated human perception and understanding in the presence of an unanticipated robot action in the context of physical human-robot interaction. We designed a game-based collaborative task in which the human participant played a "catch-falling-coins" game displayed on a screen by manipulating a robotic arm. Our experiment created a surprising moment in an "active session", where the robot took the action of targeting higher-valued coins without prior notification to the participant. Although 63% of the 35 participants came to understand the robot's "intention" of aiming at the higher-valued coins, there were still participants who either did not understand that the robot had any kind of intention or did not understand it correctly. Our research is particularly interested in the relationship between having an understanding of the robot's intention and a positive perception (i.e., of cooperativeness) of the robot. Our findings show a relevant relationship between a participant's understanding of the robot's intention and their positive perception of the robot, with statistically significant correlations with a few personality traits. Further research is, however, needed to identify the causality of this relationship, as the current study shows mainly correlational outcomes.
The originality of this research lies in including an unanticipated physical action by a robot in an experimental setting, in the context of a collaborative task requiring physical interaction. Given the current lack of literature addressing this type of interaction, we conducted an exploratory study, using semi-structured interviews to explore what the participants experienced and thought during the experiment. This approach allowed us to obtain in-depth information that complemented the quantitative data and promoted a holistic understanding of the interaction. We believe that this research contributes, even at its current exploratory stage, to advancing the understanding of human perception, especially how humans interpret unexpected robot actions and form positive or negative perceptions during physical interactions with robots.

Fig. 1. Examples of the experiment with two participants. The participant is playing the game displayed on the screen by moving a robotic arm which, in turn, controls the horizontal position of a virtual catcher tray (red) on the screen.

Fig. 2. Participant affective state as measured using the self-assessment manikin (SAM) after each session. An asterisk indicates a statistically significant difference (p-value < 0.05).

Fig. 3. Scores for the six factors of psychological safety measured by the CH33 questionnaire after the trial ("T") and the three active sessions ("Exp"). An asterisk indicates statistical significance with p-value < 0.05.

Fig. 7. Differences between the trial session and active sessions in the factors of psychological safety measured by the CH33 questionnaire, grouped by the interview subcategories. The red line indicates zero (no difference).

Fig. 8. The three sub-scales of NARS grouped by the interview subcategories.

Fig. 9. SAM scores after each session (Trial, Active sessions 1, 2 and 3) grouped by the interview subcategories of understanding of the robot's intention.

Table 1. Semi-structured Interview Questions
8. Do you think that the robot has its own intention or strategy in the game?
9. Do you think that the robot is helpful or cooperative in the game?
10. What do you think is the best game strategy for you now?
11. To conclude, could you please explain your motivation to participate in this research?

Table 2. Participant Understanding of the Robot's Intention and their Perception of the Robot's Cooperativeness