Social Robots in Healthcare: Characterizing Privacy Considerations

As healthcare robots gain traction, human-robot interaction (HRI) researchers are exploring the factors that impact user adoption and trust in these robots. Due to the sensitive nature of care, privacy concerns play a significant role in determining robot utility, usefulness, and adoption. In our work, we conducted a 3x3x3 online study (N=239) to explore people's perceptions of privacy and utility of 3 robots at varying levels of Human-Likeness (HL) across 3 realistic healthcare contexts. The results show that the context of care delivery is a key driver of perceptions of privacy and acceptable privacy-utility trade-offs. Interestingly, the HL of robot design may not significantly impact people's privacy perceptions of healthcare robots. We plan to leverage these key findings to develop privacy-aware robot behaviors that are context adaptable in order to improve privacy outcomes for healthcare robots.


INTRODUCTION
HRI research on social robots to support human health is growing rapidly [40]. Healthcare robots are serving roles in locations ranging from hospitals to homes, performing a variety of services such as triage and supply delivery, as well as providing companionship and telemedicine support [25-27, 34, 42, 45].
A unique challenge in using healthcare robots is that they are providing care in sensitive care spaces [50]. Like healthcare providers, robots bear responsibility to protect people physically, psychologically, and socially. HRI literature classifies these responsibilities under the umbrella of privacy for social robots [29], violations of which have been shown to negatively impact users' trust [1]. On the other hand, people may develop an increased trust in healthcare robots over time as a result of anthropomorphizing them [15, 29, 30, 37], and may even begin oversharing with the robot in certain contexts [22]. Therefore, healthcare robot designers need to balance various privacy concerns against the perceived usefulness of these robots, while being mindful of other factors such as HL robot design. Addressing these expectations could improve users' trust, and consequently the adoption of healthcare robots.
Due to the embodied nature of robots, research in the area of privacy-sensitive robotics characterizes privacy under four unique dimensions [29]. Informational privacy refers to the privacy of personal data, social privacy refers to privacy among social actors, psychological privacy refers to the privacy of an individual's thoughts and values, and physical privacy refers to the privacy of one's body and property. Prior research also suggests privacy-utility trade-offs in social robot use. A privacy-utility trade-off is a discrepancy observed in people's intent to use a social robot despite being aware of the privacy concerns it may pose [31], since people may be willing to trade off privacy in exchange for the utility they find in using the robot. This indicates that perceived utility may also be a determinant of user perceptions of healthcare robots. A holistic understanding of people's perceptions of privacy, utility, and the effect of HL robot design and anthropomorphism will enable us to develop privacy-preserving behaviors towards more trustworthy healthcare robots.
To explore this, we conducted a large-scale quantitative study (N = 239) to address three research questions related to people's perceptions of social healthcare robots. RQ1: How is human-like robot design related to privacy perceptions? RQ2: How is human-like robot design related to the privacy-utility trade-off? RQ3: How does context affect privacy perceptions of social healthcare robots?
We developed animated design fiction stimuli to ground our participants in three realistic healthcare scenarios (hospital waiting room, hospital patient room, and home) where healthcare robots are commonly used. We selected three robots (Moro, Pepper, and Geminoid) of varying degrees of HL from the ABOT database [39] and placed them in these scenarios. Based on these stimuli, we evaluated participants' perceptions of privacy and utility through validated questionnaires and free-form responses.
Our results showed that the robot's context of operation is a key determinant of people's privacy and utility perceptions, even more so than the robot's HL design. In particular, our results indicated that healthcare robots can best serve needs at home, while simultaneously being the most sensitive to a variety of privacy concerns.
These contributions can be leveraged to develop healthcare robots whose design is centered around end users' requirements for privacy while using these technologies.

RELATED WORK
Privacy in HRI Privacy-sensitive robotics is an emerging area of research that explores the unique dimensions of privacy associated with robots. While it expands upon the existing body of research on informational privacy from legal and ethical perspectives [12, 19, 49], this work is insufficient for HRI, as social robots are embodied agents [29] and require a broader definition of privacy. To address this, there are studies that explore the privacy concerns of robots based on their context of use, such as for telepresence robots [24, 43], healthcare robots [30], and household robots [16]. However, many of these studies are qualitative with small sample sizes, and their results may not be generalizable.
Human-Likeness in HRI Robot appearance is known to affect people's perceptions of a robot's intelligence [21, 46], credibility [8], trust in the robot [37], and acceptance [36], and can inform users' judgements of robots [17, 23]. Phillips et al. [39] developed a measure of HL of robots as a combination of their constituent human-like physical features. Other works explore anthropomorphism of robots as HL in appearance combined with HL in behavior [10, 47]. However, to our knowledge, no empirical study has explored how anthropomorphism and HL robot design could affect privacy perceptions of healthcare robots.
Design Fiction Many studies in HRI have used design fiction as stimuli to help participants imagine scenarios or technologies that may not yet exist [3, 13, 20]. Design fictions are most commonly used in the form of vignettes [32], static imagery [3], and storytelling [2, 7]. To our knowledge, few studies in HRI have explored the use of design fiction animations as prompts for participants [48]; however, design fiction animations may be worth further exploration for large participant samples, as they address a methodological gap between static imagery and the use of real-world robots.

OUR WORK
We conducted a 3 (HL) x 3 (context/scenario) x 3 (repeated measures) online study (N = 239) to determine if and how robot HL and context/scenario affected people's a) privacy perceptions and b) utility perceptions. We had 2 independent variables: Robot HL and Context/Scenario (describing the robot's task). HL had 3 levels: low (very robot-like), medium, and high (very human-like), based on HL scores selected from the Anthropomorphic roBOT (ABOT) database [39]. Context had 3 levels: hospital waiting room (where the robot helps with patient check-in), hospital patient room (where the robot helps with patient mobility), and home (where the robot helps with patient neurorehabilitation). We developed scenarios and stimuli within this framework for our study.
Participants and Power Analysis: We conducted a power analysis using the G*Power software [18] for a mixed between-within subjects F-test to detect effect sizes with power (1 − β) = 0.80 and α = 0.05. We sampled 120% of this number (N = 240) to account for potential participant attrition in online studies. We recruited 239 participants, all of whom passed a "bot check" and audio-video check procedures before viewing our stimuli.
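To illustrate the style of calculation, a sample-size analysis can be approximated in Python. The sketch below is our own stand-in, not the authors' G*Power procedure: it uses statsmodels' one-way ANOVA power solver with an assumed medium effect size (f = 0.25), since the effect size used in the study is not reported here, and it does not model the within-subjects factor of the mixed design.

```python
# Hypothetical power-analysis sketch. Assumptions (not from the paper):
# effect size f = 0.25, a plain between-groups F-test across the
# 3 robot (HL) groups, ignoring the repeated-measures factor.
import math
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
# Solve for the total N needed at power = 0.80, alpha = 0.05, k = 3 groups.
n_total = math.ceil(analysis.solve_power(effect_size=0.25, nobs=None,
                                         alpha=0.05, power=0.80,
                                         k_groups=3))
# Oversample by 20% to cover attrition, mirroring the authors' approach.
n_recruit = math.ceil(n_total * 1.2)
print(n_total, n_recruit)
```

A mixed between-within design generally requires fewer participants than this one-way approximation suggests, because repeated measures absorb between-subject variance; G*Power's "ANOVA: repeated measures, within-between interaction" option handles that case directly.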
To operate in the scenarios we developed, each robot needed to include two feasibly functional arms so that the robot could provide mobility assistance, and needed to be mobile. We were not concerned with whether each robot actually had these functional capabilities, but rather that a lay observer would find it feasible that the robot might serve these functions.
To support ecological validity, we developed 3 realistic scenarios depicting healthcare robots in the 3 most common uses of social healthcare robots: patient check-in, mobility assistance, and neurorehabilitation assistance. With the help of an animator, we developed animated videos of each robot in the 3 scenarios.
We designed each scenario with two characteristics: 1) we ensured there was at least one clear violation of privacy, and 2) each scenario ended with slight ambiguity, indicating the robot may or may not have gone through with the privacy violation. We created this ambiguity in order to capture the privacy-utility trade-off, where participants answered questions such as "If the robot did / did not do . . ., I would still find the robot useful / beneficial / fulfilling a need".
Manipulation Checks: We conducted two manipulation checks to estimate the visual and cognitive fidelity of our animated design fiction stimuli. We recruited 93 participants to complete the study. Participants were randomly assigned one scenario, and had to rate each of the three animated robots presented in that scenario video.
Placement on the HL Spectrum: To evaluate visual fidelity, participants were asked to rate each robot's overall HL by answering "Does this look physically human-like (HL)?" and "Does this look physically robot-like (RL)?" on a 0 (not at all HL/RL) to 100 (just like a human/robot) scale. The results of this check revealed that the conversion from photorealistic depictions to animation suppressed perceptions of each robot's HL, but the relative ordering of the HL scores stayed consistent with the ABOT database scores. These results were as anticipated, and confirmed the visual fidelity of our stimuli.
Perception of Scenarios: To evaluate the cognitive fidelity of the scenarios, participants were asked to 1) select all the privacy violations they perceived in the scenario (or select "No violations") and 2) provide a free-form response to "What do you think happened at the end of this video?" The results of this check revealed that participants perceived the type of privacy risk alluded to in each scenario. However, they also identified aspects of other types of privacy risks, indicating that privacy risks are not mutually exclusive. In the free-form response, all participants were able to anticipate what may have happened next, indicating that our stimuli elicited high cognitive fidelity.
Measures and Evaluation: We used the trusting beliefs, overall privacy, and physical privacy subscales from the Lutz and Tamó-Larrieux [31] privacy scale along with the health information disclosure scale [14] to measure privacy perceptions, and a custom 3-item utility subscale to measure utility perceptions. We showed each participant one robot (selected randomly) across all three scenarios. After each video, participants answered all questionnaires, and, at the end of the study, provided optional open-ended feedback about the overall study. We evaluated our data using mixed repeated-measures ANOVAs for each subscale of the dependent variable, with robot type as the between-subjects variable.
Procedure: After providing informed consent, participants were randomly assigned one robot from the selected robots and were presented the three scenarios in a randomized order. After each video, we presented the aforementioned measures, repeating this procedure for each of the three scenarios. We concluded the study by collecting demographic information, following the open-ended qualitative question "Please use this space to leave us honest feedback concerning the study". The study took about 12 minutes to complete and participants were compensated $4 for their participation.

RESULTS
Quantitative Results: We conducted statistical analyses using the software Jamovi version 2.3 [44].
Qualitative Results: We analyzed the responses to our open-ended study feedback question using Reflexive Thematic Analysis (RTA) [4, 5]. Two researchers first independently coded the responses via an inductive coding process, and then resolved inconsistencies and refined themes through discussion [11]. We did not calculate inter-rater reliability, as per current best practices in the RTA literature [6, 35].
A few participants said that the programming of such healthcare robots should reflect existing legal protections and social conventions (e.g., HIPAA). Some participants commented on the importance of not overstating the capabilities of robots: "I don't think robots can replace humans for situations that require noting body language, facial expressions, tone of voice. . . the robot may misinterpret the severity of a situation. However, they could be useful for collecting basic demographic and insurance information." Participants also raised concerns about healthcare worker displacement: "My concern is the fact that we have such high demand for these positions, yet instead of encouraging people to enroll in fields of study that we have such demand, we are replacing or trying to replace careers in these fields with robots. We have such high unemployment as it is, using robots will increase unemployment. Not help it."

DISCUSSION
Task and Context: Across all privacy and utility measures, the task and context of robot operation emerged as the most important driver of people's perceptions of privacy risk and utility. Particularly in the home scenario, people indicated perceptions of the highest risk to privacy while also indicating high trusting beliefs and a high likelihood to disclose private information to a robot. These results align with those of prior studies that emphasize context as a determinant of privacy and utility perceptions of household robots [9, 43]. Therefore, while healthcare robots may best serve needs in homes, homes may also be one of the more complex places to deploy assistive robots. Roboticists designing home healthcare robots should carefully balance developing useful functionalities with ensuring high standards of privacy protection.
Our qualitative results also raised important questions about worker displacement. However, healthcare robots may be able to bridge gaps arising from healthcare staff shortages by performing non-critical tasks. It is important to carefully contextualize the tasks that such robots would perform within the existing legal and ethical frameworks for care delivery.
Human Likeness: While we did not observe significant effects of HL across privacy or utility measures, we did see significant effects across scenarios.
This could be because the ABOT database scores are based on static imagery, as opposed to the dynamic animations we used. Adding context and movement could change the overall HL of the robots, indicating that further work is needed to reassess the HL of robots in the ABOT database when they are depicted with movement rather than static imagery.
Design Fiction: Our manipulation checks revealed that animated design fictions were effective at eliciting high cognitive fidelity, supporting the robustness of participants' experiences as they viewed the scenarios. Particularly in HRI, researchers are often forced to use static imagery or vignettes [33, 39, 41] because of a lack of access to physical robots or sensitive settings, or simply because certain technologies do not exist yet. Furthermore, design fictions allow us to leverage methodological techniques with large sample sizes without having to delve into the complexities of developing real-life robot tasks and behaviors.
Design fictions present an exciting opportunity in HRI and are starting to gain traction in the field [28, 38].
Future Work: Moving forward, we hope to use these findings to work alongside healthcare workers and patients to develop privacy-sensitive robot behaviors. Since context significantly impacts privacy and utility perceptions of healthcare robots, we will work towards developing robot behaviors that are adaptable to the privacy requirements of various healthcare contexts. We are currently collaborating with healthcare workers to identify spaces where healthcare robots can serve in hospitals. Some examples include performing tasks such as supply delivery and conducting triage assessments.
Through continuous collaboration with stakeholders, we aim to develop frameworks for privacy-aware robot behaviors that best improve privacy outcomes for healthcare robots.

Figure 1: We used animated robots placed in realistic healthcare scenarios to elicit high cognitive fidelity in our participants. Our robots range from very robot-like to very human-like, based on the real-life robots Moro, Pepper, and Geminoid. This storyboard shows animated Pepper helping a person with check-in procedures in a hospital waiting room.