Effects of Transparency in Humanoid Robots - A Pilot Study

Transparency is recognized as a vital feature for understanding and predicting robot behavior. Another feature that affects interaction with robots is their anthropomorphism. The relationship between these remains under-explored but is postulated to be negative. We present a pilot study investigating the effects of robot transparency in human-robot interactions, where the robot has an anthropomorphic appearance. We asked participants to evaluate and interact with the humanoid robot Pepper to examine whether visualizing the robot's goals and behavior affects perceived intelligence, anthropomorphism, and robot agency. Our preliminary findings suggest that users may attribute higher ratings of agency when interacting with a robot visualizing its goals. In this late-breaking report, we propose our experiment on the interplay between transparency and anthropomorphism in human-robot interaction and summarize insights from our preliminary pilot study.


INTRODUCTION
This paper describes an experimental setup for a study on transparency and anthropomorphism in human-robot interactions. The work is inspired by the experiment described by Wortham et al. [34], investigating robot plan transparency in a non-humanoid robot. Transparency is an important interactional property in humanoid robots. While humans and our culture have evolved to facilitate inferring each other's behavior, artificial intelligence is often perceived as a 'black box', making it difficult to understand, among other things, robot decisions. Observing robot behavior may present only a limited aspect of the decision-making process, depending on the extent to which robots have, and indeed can be designed to express, reliable signals [5,18,25]. Making robot behavior transparent is crucial to the development of social robots, and impacts how human users attribute agency and potentially build relationships with robots. Does anthropomorphism help, by harnessing social reasoning? Or does it harm through the same vector, since robots will fundamentally never be exactly human?
Transparency in robots has significant effects on interactions with humans, as it relates to how humans anticipate and predict robot behavior, as well as ascribe theory of mind (ToM) - understanding others' beliefs and intentions [6,24,27]. Transparency might also have a significant impact on making robots more explainable [20], trustworthy [26], and safe [3]. Transparency has been shown to increase understanding of the robot's control system, which in turn improves communication in human-robot interactions [12,34]. Interestingly, in the Hindemith et al. study [12], lower anthropomorphism scores relate to higher rates of reaching interaction goals. However, the relationship between different transparency conditions and anthropomorphism has not been analyzed. In the initial study by Wortham et al. [34], participants tended to ascribe anthropomorphic cognitive descriptors, such as thinking, to a fully mechanistic robot (wheels and exposed circuit boards) more often in the transparent condition. However, anthropomorphism was not assessed explicitly as a separate dimension. These intriguing observations motivate us to specifically investigate the interaction between anthropomorphism and transparency in HRI.
Investigating anthropomorphism within human-robot interaction reveals diverse and complex dimensions. Anthropomorphism has been shown to affect how people perceive robots when they fail [13,16,17]. Wykowska et al. [35] explore the dynamics of engaging with teleoperated versus autonomous robots, investigating the impact of observers' beliefs on sensory processing. Martini et al. [19] explore the physical characteristics essential for triggering mind attribution to non-human agents, examining varying levels of human likeness. Wiese et al. [31] looked into the activation of social brain areas when perceiving others as intentional agents. Inspiration was drawn from foundational works that introduced concepts like the Uncanny Valley theory [21] and specific psychological processes related to anthropomorphism [8,29]. Research has delved into the examination of cognitive conflict arising from categorically ambiguous stimuli [1,30,32]; some research has also questioned whether anthropomorphic appearance or anthropomorphic social behavior affects how humans perceive social robots [15].
In this work, we aim to take the study by Wortham et al. [34] further by investigating anthropomorphism as a central dimension of transparency [12]. Our core question is: does anthropomorphism contribute to or inhibit the impact of transparency information? While anthropomorphism can increase engagement and is therefore generally considered beneficial for human-robot interactions, various factors can interact with it [23]. We investigate whether a humanoid robot with human-like features such as a face, hands, and eye-gaze affects how people perceive its intentions, and whether transparency information on the workings of its AI impacts this [19,35]. We propose an experiment design and present preliminary results from a pilot study with human participants. Our work examines the following research questions:
• RQ1: Are there any interaction effects between transparent robot behavior and human-like form or features?
• RQ2: How do users perceive human-like transparent robots in the dimensions of intelligence, anthropomorphism, and robot agency?

EXPERIMENTAL SETUP
The humanoid robot Pepper [22] was used in the current human-robot interaction study to investigate the effects of transparency on perceived intelligence, likeability, anthropomorphism, and robot agency. The Pepper robot moves on a wheeled base, and its height of 120 cm allows direct interactions with humans. Figure 3 shows the Pepper robot in the experimental space. The robot possesses several cameras in its head and is able to detect and track humans and human faces. Pepper is equipped with tactile sensors in its head and both hands, which provide limited possibilities for tactile interaction such as stroking of its head. The robot is also equipped with several additional sensors, e.g., a laser scanner, which allows for safe navigation in human space. Despite its human-like appearance, and senses designed to roughly approximate (though by no means be identical to) those of a human, the behavior of the robot might appear completely unexpected to a human observer. Errors in perception might lead to behavior that is unexpected by a human, because the type of those errors differs significantly from the errors humans make. For instance, Pepper's visual detection of human faces is highly sensitive to the lighting conditions in the room. The robot might not detect a human reliably from one perspective, while confidently detecting a human from another. This could result in the robot appearing more attentive in some locations of the room while appearing more avoidant in others. More essentially, of course, the robot does not share a full human motivational system or behavioral repertoire.
Additional information on the robot's perception and internal behavior state might help a human understand the robot's intentions and goals. On the one hand, it could result in a higher degree of anthropomorphization by making the robotic senses more understandable and thus more relatable to a human. On the other hand, the displayed information might cause the robot to appear more as a technical device.

Experimental Robot Behavior
In this study, the behavior of the robot was designed to be strictly non-verbal. This was done to control for possible unwanted anthropomorphism effects that could result from speech-based interaction independently of the robot and possibly overshadow the anthropomorphism that results from the robot's appearance and behavior.
The behavior of the robot in our study consists of two main phases, solitary and interactive. In the solitary phase, the robot moves between three predefined relative locations in the room. At every predefined location, the robot waits and performs an arm animation resembling checking a wristwatch. After a brief period, the robot moves to the next location. During the entire phase, the robot actively checks for human presence.
The behavior during the solitary phase is interrupted when a human is detected, and the robot switches into an interactive phase. The interactive phase is limited by a time counter, which simulates a social battery. In the first 20 seconds after detecting a human, the robot pays full attention to the person by tracking the face and actively moving its head and body to follow the human. The robot actively keeps a close distance between itself and the human, i.e., the robot backs up or comes closer if the human is too close or too far, respectively. After 20 seconds, the robot stops adjusting the distance but still follows the direction of the human's face with its head and body. The robot also starts to perform a random selection of gestures. The interactive phase ends automatically after 40 seconds or if the human is no longer detected; either way, the robot then returns to the solitary phase. The interactive phase also ends if the robot's head is touched.
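The social-battery timing described above can be sketched as a simple control loop. This is a hypothetical illustration, not the study's actual code; the robot methods (`human_detected`, `head_touched`, `track_face`, `regulate_distance`, `random_gesture`) are assumed names standing in for Pepper's real control functions.

```python
import time

# Illustrative constants matching the phase timing described in the text.
FULL_ATTENTION_S = 20   # track the face and regulate distance
PHASE_TIMEOUT_S = 40    # total length of the interactive phase

def interactive_phase(robot):
    """Run one interactive phase; return when the robot should go solitary."""
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if elapsed >= PHASE_TIMEOUT_S or not robot.human_detected():
            return          # battery drained or human gone -> solitary phase
        if robot.head_touched():
            return          # touching the head also ends the phase
        robot.track_face()  # head and body follow the human throughout
        if elapsed < FULL_ATTENTION_S:
            robot.regulate_distance()   # back up / approach as needed
        else:
            robot.random_gesture()      # reduced attention: gestures only
        time.sleep(0.1)
```

The loop polls the senses on every tick rather than blocking, so losing sight of the human or a head touch interrupts the phase within one cycle.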
The high-level behavior was implemented using POSH (Parallel-rooted, Ordered, Slip-stack Hierarchical) plans, from the Behaviour Oriented Design (BOD) development methodology [4,33]. The different phases in the behavior of the robot are expressed as drives in the POSH behavior description. Figure 2 illustrates the POSH plan of the robot's behavior. The visualization was generated with a modified version of the ABOD3 visual editor for POSH plans [28]. The state of the robot's behavior, such as active drives and actions, as well as the robot's senses, such as human detected, can be displayed on the chest display of the robot as illustrated in Figure 1.
The code is open and can be found on the project's website [14].

Experimental Space
The experimental space was set up as illustrated in Figure 3. The room was divided into two parts - a quiet space and an interactive space - separated by a small barrier placed on the floor. This barrier served as an obstacle and could not be crossed by the robot.
The quiet space consisted of a table with a chair, a computer, and an additional monitor. The additional monitor displayed the static behavior graph from Figure 2, which participants could use to interpret the robot's behavior. The computer (a laptop) displayed the questionnaire. In the quiet space, a participant could reflect on the robot's behavior and fill out the questionnaire. Figure 3 (right) shows a participant filling out the questionnaire in the quiet space. As can be seen in the figure, the robot was still able to detect and observe the human, as well as attempt to engage through gestures, without crossing the separation barrier. The robot was located in the interactive space and was active during the entire duration of the session. When the participants entered the room, they saw the robot moving about the space as described in Section 2.1. The participants were free to enter or exit the interactive space.

Experimental Session
Each participant received a brief introduction. The participants had 20 minutes to interact with the robot and to fill out the questionnaire. After 20 minutes the participants were informed that the time had passed, and they were given the opportunity to spend a few additional minutes with the robot to finish the questionnaire.

Transparent vs. Non-Transparent Condition
The participants were divided into two groups. Participants in the transparency group had an additional display showing the robot's POSH behavior tree, as well as the robot showing its current sensing and behavior state on its chest display. Participants in the non-transparency group did not have access to the robot's sensing and behavior state (both displays were physically covered with black opaque cardboard).

Questionnaires
To assess the participants' perception of robots, we used the English version of the Godspeed questionnaire [2]. Unlike the smaller set of custom-made questions used in the previous studies [34], the Godspeed questionnaire is a validated measurement translated into a number of languages. It is widely applied in human-robot interaction research, facilitating comparisons of the participants' subjective experiences across a wide range of studies in different cultures. It consists of 24 items aggregated into 5 dimensions: likeability, perceived intelligence, perceived safety, animacy, and, importantly, anthropomorphism. On each item, the participants judged between two opposing options on a 5-point scale, for example, "calm <- 1 2 3 4 5 -> agitated".
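Scoring such a questionnaire amounts to averaging the 5-point item ratings within each dimension. The sketch below is a hypothetical illustration, not the study's analysis code; the item names and the item-to-dimension mapping are made up for the example, and reverse-coded items are omitted for brevity.

```python
from statistics import mean

def dimension_scores(responses: dict[str, int],
                     dimensions: dict[str, list[str]]) -> dict[str, float]:
    """Average the 1-5 item ratings belonging to each dimension."""
    return {dim: mean(responses[item] for item in items)
            for dim, items in dimensions.items()}

# Illustrative mapping: two items for one dimension, one for another.
dimensions = {
    "anthropomorphism": ["fake_natural", "machinelike_humanlike"],
    "perceived_safety": ["agitated_calm"],
}
responses = {"fake_natural": 4, "machinelike_humanlike": 2, "agitated_calm": 5}
```

Comparing per-dimension means between the transparency and non-transparency groups is then a straightforward between-subjects contrast.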
Next, participants completed the Mind Perception questionnaire [10] to assess their attribution of mental capacities and experiences to robots. It consists of 18 items, each addressing a feature or an ability, for example, "How much is the robot capable of conveying thoughts or feelings to others?". The participants indicated on a 5-point scale whether the agent is less or more capable of it. In contrast, the previous work that this study builds on (Wortham et al. [34]) used only a subset of 7 of the original 18 items and a continuous scale.
Finally, the participants submitted basic demographic information, their knowledge of STEM subjects, their experience with programming and with robots, and self-reported English language proficiency. We also asked the participants whether, broadly, they considered the use of robots a threat or a benefit.

PRELIMINARY TESTS AND PILOT STUDY
An initial study was conducted with 36 participants, 19 female and 17 male. The study was conducted with each participant individually. During the experiment, each participant spent between 20 and 25 minutes alone in a room with the robot. The task was to interact with the robot to find out its goals and aims; afterwards, participants were asked to complete the questionnaires. The survey aimed to assess how the perceived transparency of the interaction with the robot was affected by additional information regarding the robot's internal state, and how it related to the perceived level of anthropomorphism of the robot.

Initial Findings
Several observations were made, with some findings consistent with our hypotheses on transparency and anthropomorphism, and some going against our predictions.
The transparency group reported a higher level of understanding of the goals of the robot and a higher level of perception that the robot achieved its goals. The transparency group reported more often that the robot was "controlled by a human" and less often that the robot was acting autonomously. The non-transparency group reported a higher level of the robot appearing to be human, and a significantly higher perceived "communication skill" of the robot than the transparency group. Preliminary data analysis did not show a significant dependency between anthropomorphism and the transparency condition, at least at the current sample size.
Most participants reported that the visualization of the status of the robot's perception was helpful in understanding the robot's behavior. The majority of participants commented that the robot's behavior appeared repetitive. The most common suggestions for improving the robot's behavior were increasing the robot's responsiveness and adding speech capabilities.

DISCUSSION AND FUTURE WORK
The initial results are currently being evaluated and used to calibrate the experimental setup and to conduct a more extensive study.
In the previous study, the reactive planner had direct access to the robot's basic functionalities, and the robot's range of actions was limited. In the current study, due to the complexity of the robot, the reactive planner controls higher-level decisions, which are executed by dedicated control functions such as animating a gesture or moving to another location. Compared to experiments with the simpler robot, this resulted in limitations, such as the repetitive gestures subjects noted. The underlying challenge might stem from the fact that the human-like appearance of the robot suggests a significantly larger range of capabilities, which the robot is unable to fulfil, in contrast to the abstract shape of the simple wheeled robot, which inspires fewer expectations. The presence of such expectations is also supported by requests for more responsiveness and speech capabilities. We had deliberately excluded a speech interface after pre-pilot work, first because of its highly anthropomorphic nature, and secondly to reduce between-subject variability in speech recognition.
The transparency added by visualization of the robot's senses and its internal behavior state seems to increase the sense of being able to understand and anticipate the robot's behaviors. It also seems to result in participants perceiving the robot as less capable. This may be accurate, a success of transparency; or pejorative, as transparent intelligence is not perceived as 'real'. These possibilities will be difficult to disambiguate without a more meaningful and complex task for the robot to perform.
The preliminary analysis did not show a clear dependency between the transparency condition and the perceived level of anthropomorphism of the robot. One reason could be that the majority of participants had experience in interacting with robots and had a programming background. But this is potentially good news if transparency gives users a better model of robot behaviour and yet does not interfere with the social acceptability of robots.
In conclusion, we speculate that transparency, by providing access to internal decision-making processes, might facilitate human-robot interaction by engaging metacognitive mechanisms - that is, thinking about thinking. Metacognition plays an important role in human social interactions [9] and has been proposed as a key element of successful communicative strategies [11], on the part of both the receiver and the transmitter of the information [7]. It is possible that experiencing transparent humanoid robot interaction will impact human self-understanding as well.

Figure 1: The Pepper humanoid robot used in this study. Left - perceptual and behavioral information is displayed. Right - the display is covered up.

Figure 2: POSH plan for the behavior of the robot.

Figure 3: Scenes from an experimental session. Left: the robot is in a solitary state, and the participant attempts to catch the robot's attention; Middle: interactive phase, the robot pays full attention and keeps its distance from the human; Right: the participant fills out the questionnaire, while the robot is in the interactive phase paying attention to the human.