Social Robots in Hospital Settings: An Initial Exploration of the Services Provided, Interaction Style and In-the-Field Evaluation

There is interest in using social robots in hospitals but little understanding of how they engage with users to fulfil their roles. Research in the field helps understand how HRI may occur naturally. We conducted a scoping review of literature on social robots in hospital settings, using the Arksey and O'Malley 5-step method. This report presents an initial synthesis of 19 studies. The robots performed various tasks, from greeting and educating visitors to social companionship, supporting healthcare delivery, carrying goods, and educating staff. Most were physically embodied, but three were embodied conversational agents (virtual robots). To engage with interaction partners, 89% used speech and 79% used motions, gestures, and facial expressions. Less commonly used were written text and tactile interaction. Further personalizing interactions, introducing creativity, and focusing on the in-the-wild aspects of HRI could help support the application of social robots in hospital settings.


BACKGROUND
The application of robotics continues to gain prominence in the healthcare sector, with healthcare robots being deployed to perform various tasks within hospital environments [1][2][3][4]. These roles encompass medical and clinical responsibilities, such as participation in surgical procedures, monitoring patients' vital signs, and dispensing medication. Additionally, healthcare robots have contributed to providing companionship [5], logistical operations involving the delivery of bio-samples, meals, and linen [6][7][8], and housekeeping responsibilities, such as the sterilization and disinfection of surfaces [9]. Moreover, there has been interest in using robots for administrative functions, such as acting as a receptionist [10].
Many of these roles require robots to engage in social interactions with patients, staff, or visitors, and social skills become important [11]. Social robots can be deployed to assist in hospital services traditionally requiring interpersonal encounters. For example, a hospital receptionist robot may elicit information from patients and/or their companions to complete the patient registration process. Social companion robots interact with humans in a manner conducive to building and sustaining interpersonal connections. Consequently, they must exhibit enough social competence to engage effectively with users. This is crucial in a hospital setting when encountering distressed patients or their companions needing support.
Socially competent behaviors in interpersonal interactions can be categorized as transactional, goal-oriented, and context-dependent [12]. In essence, individuals act with the objective of advancing their goals in a manner responsive to others' reactions and appropriate within a given context. If we apply this theory to robots, socially competent robots should adapt and personalize their behaviors based on the purpose of the interaction and overall goals, user relationships, user characteristics, and user feedback. Social robots designed for maintaining human-robot relationships are also expected to evolve their engagement behaviors in tandem with the progression of such relationships.

Gap in knowledge and aims
Previous literature reviews have primarily focused on summarizing the services provided by healthcare robots [1][2][3][4] without a distinct focus on social behaviors. In contrast, this scoping review aims to provide an overview of how existing social robots engage with users when delivering diverse services in hospital settings.
By examining how social robots engage with users to fulfil their designated roles, this review provides a more nuanced understanding of appropriate robot behaviors and identifies their strengths and limitations. Through research in the field, the review helps understand how human-robot interaction occurs naturally, outside the laboratory.

METHODS
We conducted a scoping review following Arksey and O'Malley's five-step process [13] and used methods similar to a previous review on robots and agents for combating loneliness in older adults [5]. This report presents an initial analysis of our review, as the data extraction is ongoing. Details on the methods can be found on OSF (https://osf.io/fhjr5/), where the protocol is registered.

Identifying the research question
In alignment with our review's objective, we identified questions to guide our work. These were: What tasks do social robots perform in hospitals? What types of social robots were employed in the hospitals, and how did they engage with users to fulfil their designated roles? What are the impacts of human-robot interactions on users? The last question will be the subject of the full review.

Identifying relevant studies
We searched SCOPUS, PubMed, Web of Science, CINAHL, PsycINFO, ACM Digital Library, and IEEE Xplore using keywords related to the population/context (e.g., Hospital*, acute care, Patient*, inpatient), technology (e.g., social robot*, digital agent*, virtual agent*) and focus (e.g., user experience, evaluation, interaction). Limits were applied to species (excluding animal studies) and publication type (excluding literature reviews).

Selecting studies
Selected studies had to be published in English and report on social robots (physical or virtual agents capable of social interaction). Articles had to provide information on the robotic platform/system used, be published in journals (to provide enough information on the evaluation) and include an evaluation conducted in natural rather than simulated hospital settings. The exclusion criteria were: studies not in English; studies reporting on robots not capable of social interaction (e.g., the da Vinci surgical tool) or used in other health settings (e.g., residential or nursing homes) or simulated hospital settings; grey literature; and conference proceedings and technical papers.
Search results were managed in Rayyan [14]. After removing duplicates, a two-step screening process was employed, involving title and abstract screening followed by a full-text review against the eligibility criteria. Two researchers were involved in this process.

Charting the data
An Excel charting sheet recorded the information extracted by the reviewers: article characteristics; the robot; interaction style and partners; the study (e.g., sample size, participants, setting and methods); and findings (with a focus on social interaction).

Collating, summarizing and reporting results
Results were summarized using a content synthesis approach, structured around the four research questions. The reporting of the full review will adhere to the PRISMA-ScR guidelines [15].
RESULTS

Study characteristics
The majority of the 19 included studies (74%, n=14) were exploratory or descriptive, often using mixed methods such as observations, surveys, interviews, and system-use data. Only five (26%) were experimental or quasi-experimental. The 17 studies that reported a sample size had between three and 318 participants, totaling 802 participants (M: 47.18, SD: 76.85).

Type of robots
Physical robots were most commonly used, with seven studies using Nao. Three reports of the Nao robot also used sensors to detect exercise parameters (e.g., step length, cadence, and blood pressure) and a bespoke graphical user interface that ran on a tablet, which allowed the user to interact with the system and respond to the robot's requests. One other humanoid (the Arash robot) was used, as were two animal-like robots: Paro (harp seal) and Nabaztag (rabbit). The remaining studies used the SCITOS robotic platform, an Automated Guided Vehicle, and the CLARC robot (with the CGAMed application to enable clinicians to monitor data). Three embodied conversational agents (virtual robots) were used. One was interacted with using an Oculus Quest 2 HMD [24]; it was created using Unreal Engine and DriftAI for speech. Another avatar (the Falls Risk Assessment Avatar, FRAAn) used the Kinect camera to detect the user's responses [29]; this avatar appears agender and ethnically neutral. Two other avatars were female in appearance, with one also appearing multi-racial.
Across the technologies, 68% (13/19) were autonomous, and only 21% (4/19) used Wizard-of-Oz (WoZ). WoZ is a technique whereby users believe they are interacting with an autonomous robot, but it is controlled by a hidden human operator (e.g., in another room). One study combined WoZ-controlled speech with automated facial expressions and motions [20].

Tasks performed
The robots performed various tasks and services, including greeting visitors and educating the public on preventing COVID-19 (e.g., how to wear a mask). Some focused on providing social companionship, including telling stories to children, conversing with patients, telling jokes and reading the news. Others helped to deliver healthcare, for example, delivering interventions to mitigate falls, loneliness and delirium for hospitalized older adults, providing rehabilitation therapies for outpatients in pediatric and cardiac wards, supporting cognitive stimulation sessions, and performing motivational interviewing to encourage women to breastfeed. A smaller number helped make assessments of the ability to complete activities of daily living, fall risk and addiction. Although assistive in nature, these robots had social skills (e.g., they were conversational). Only one was educational for healthcare workers, delivering verbal de-escalation skills training in response to a code black (violence or aggression) [24]. Another drove around the hospital carrying goods (e.g., medical equipment, food, and waste) but spoke to passersby, asking them to move out of the way [32].

Interaction partners and length of interaction
Interaction partners were inpatients, outpatients attending clinics, health and care workers (i.e., nurses, therapists, physiatrists), staff (i.e., administrators and service staff), support people, visitors and family members. The length and frequency of the engagement varied greatly and were often not reported clearly. In five (26%) of the studies, this was not reported at all. Interaction lengths varied from short one-off interactions lasting less than 5 minutes (storytelling to pediatric oncology patients) [20] to more frequent interactions over up to 18 weeks, where bi-weekly cardiac rehabilitation sessions lasted 35-55 minutes each [26].

Speech
Almost all (89%, 17/19) of the social robots used speech to communicate, ranging from speaking words to making noises like laughing, coughing, or sneezing. This interaction was mostly real-time (synchronous) and natural, whereby the robot responded to the interaction partner, for example by answering questions or stopping talking when the user interrupted. Two used WoZ to complement the speech, whereby the robot automatically used filler phrases while the human operator wrote responses. Speech was sometimes pre-programmed and one-way, e.g., warning people to move out of the robot's way.
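The responsive, interruptible speech behavior described above can be illustrated with a minimal turn-taking sketch. This is our own illustration, not code from any of the reviewed systems; the function name and the word-index model of interruption are assumptions for clarity.

```python
def speak_with_barge_in(words, interrupt_at=None):
    """Simulate a robot speaking an utterance word by word.

    If the interaction partner starts speaking (a "barge-in") at word
    index `interrupt_at`, the robot stops talking and yields the turn.
    Returns the words actually spoken.
    """
    spoken = []
    for i, word in enumerate(words):
        if interrupt_at is not None and i >= interrupt_at:
            break  # user barge-in detected: stop and listen
        spoken.append(word)
    return spoken

# An uninterrupted utterance vs. one cut short by the user
full = speak_with_barge_in(["Please", "take", "a", "seat"])
cut = speak_with_barge_in(["Please", "take", "a", "seat"], interrupt_at=2)
```

In a real system the interruption signal would come from voice activity detection rather than a fixed index; the point is only that the robot's speech loop checks for user activity and stops, rather than playing a fixed recording to the end.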

Gestures, expressions and behaviors
Most robots (79%, 15/19) used facial expressions, movements, and gestures to interact. Facial expressions included showing different emotions (e.g., laughing) and gazing at the interaction partner to show attentiveness. The LED lights on the Nao robot were often used to imply facial expressions and mood. For example, fast-blinking green or yellow LEDs expressed pleasant expressions, while slowly changing blue or violet LEDs expressed unpleasant expressions.
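The LED convention the studies describe can be sketched as a simple mood-to-pattern lookup. The colours follow the pattern reported above; the blink rates, the neutral fallback, and all names are illustrative assumptions, not values from any reviewed system.

```python
# Illustrative mood-to-LED mapping following the pattern reported in the
# reviewed studies (colours as reported; blink rates are assumed).
LED_MOODS = {
    "pleasant":   {"colours": ("green", "yellow"), "blink_hz": 4.0},
    "unpleasant": {"colours": ("blue", "violet"),  "blink_hz": 0.5},
}

def led_pattern(mood):
    """Return LED colours and blink rate for a mood, falling back to a
    neutral steady white light for unknown moods (an assumed default)."""
    return LED_MOODS.get(mood, {"colours": ("white",), "blink_hz": 0.0})
```

On an actual robot this table would drive the platform's LED API; keeping the mapping as data makes the emotional colour scheme easy to audit and adjust.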
Movements and gestures included waving at the interaction partner to greet them and role-modelling the exercises and poses the robots described during therapy. The Nao robot in Blavette et al. [18] also demonstrated behaviors like correctly putting on a mask, washing hands and simulated coughing into its elbow. Some robots, like Paro, did not have human-like faces or limbs to convey common gestures and expressions. Instead, Paro would look at the interaction partner and move its whiskers and body when touched.
Robots and embodied conversational agents also used touch sensors to interact: interaction partners used a touchscreen on the robot or a tablet, or directly touched the robot [19,22,23,27,34]. Carrillo et al. [23] exemplify this: Nao's head-based tactile sensors enabled users to initiate physical activities.

Personalization and sustaining relationships
Techniques to personalize interactions and sustain relationships were mentioned in some of the studies. Commonly, this included using speech and gestures/expressions that responded directly to the user, such as pacing the delivery of the intervention in accordance with the responses of the interaction partner. Irfan et al. [28] also used the patient's name and user recognition, where online learning of biometric data helped to adapt to changes in the user's appearance (e.g., hairstyles or wearing spectacles). The system also tracked their progress between sessions and drew on previous data to make personally motivating remarks.
Two studies explicitly mentioned proxemics. In one, the robot was initially placed at a social distance from the interaction partner (2.5 m) and moved to a closer personal distance when introducing itself (1.5 m) [18]. In another study, the robot was 1.5 m from the interaction partner to represent the social distance of interactions with friends and colleagues [19]. This was adjusted during the interaction in response to the session's dynamic.
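The proxemic strategy in the first of these studies can be sketched as a phase-to-distance lookup. The distances are those reported; the phase names and the default fallback are our own assumptions for illustration.

```python
# Target robot-user distances (metres) by interaction phase, using the
# values reported in [18]; the phase names are assumed for illustration.
PROXEMIC_DISTANCES_M = {
    "approach": 2.5,      # social distance when first placed near the user
    "introduction": 1.5,  # closer personal distance while introducing itself
}

def target_distance(phase, default=2.5):
    """Return the target distance for an interaction phase, defaulting to
    the more conservative social distance for unlisted phases."""
    return PROXEMIC_DISTANCES_M.get(phase, default)
```

Encoding distances per phase, rather than as a single constant, also accommodates the second study's approach of adjusting distance in response to the session's dynamic.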
DISCUSSION

Similar to findings in a previous review [4], the social robots presented in our initial synthesis performed various services and tasks, from greeting and educating visitors to providing social companionship, supporting healthcare delivery, carrying goods, and educating staff.
Interaction partners varied greatly. They represented the diverse hospital workforce (including administrators and health and care workers), patients (from new mothers and pediatric patients to older adults with cognitive impairment) and support people (e.g., families and visitors). Most robots were physically embodied (mainly using Nao), and only three were embodied conversational agents (virtual robots). To engage with interaction partners, 89% of the robots used speech and 79% used facial expressions, movement, and gestures. Tactile interaction was less commonly used.
Creativity was evident in the applications. LED lights and colors conveyed facial expressions and emotions. Other technologies, such as Virtual Reality headsets, were used to engage with conversational agents, and sensor data prompted social interactions while delivering physical therapies. Non-humanoid robotic forms like animals and Automated Guided Vehicles also unexpectedly became interaction partners.
The focus on personalized interactions was promising, with speech, gestures and proxemics used to maintain and sustain longer-term human-robot relationships. In HRI, personalization refers to robots that meet an individual user's needs and/or preferences [35]. When using social service robots, personalization may encourage interaction and engagement [36,37], cooperation [37] and overall acceptability [38,39]. Within healthcare, this may positively impact adherence to longer-term programs.
It was interesting to note the preference for physical embodiment of the interventions, with most of the studies using physical robots. Previous research has compared physically embodied agents (e.g., Nao) to virtual graphical bodies (e.g., animations), finding that participants perceive more social presence, helpfulness and enjoyableness when interacting with a physical robot [40,41]. This may lead to heightened motivation and emotional connection [40,41], which can be especially important when delivering healthcare interventions.
It was a strength that the studies were conducted in the field, as a previous review found that 75% of human-robot interaction research on social hospital robots was conducted in laboratories [4]. However, we found that the reporting was often unclear, with the frequency and length of interactions often not described. There is a need to provide more context and detail in future studies, as it was unclear how often the robot was actually used, why there may have been variation in use between interaction partners and how generalizable or transferable the findings were.
A focus on implementation learnings in future human-robot interaction research is also crucial, as many factors impact the adoption, uptake and sustainability of digital interventions in real-world settings. Researchers may consider applying existing frameworks, such as the Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability (NASSS) framework [42]. This can uncover challenges in adopting and implementing health technologies by considering the multifaceted nature of the technology itself, the context in which it is implemented, and interactions among stakeholders. Another important factor to consider is cost, as embodied robots are more expensive to purchase and maintain than virtual agents.
Ultimately, by personalizing interactions, introducing creativity into social interaction, and further focusing on the in-the-wild (uncontrolled) aspects of HRI (e.g., natural interactions, context, and implementation factors), we can help support the application of social robots in hospital settings.

Figure 1: Flow diagram of the literature search and screening process.