Beyond the Black Box: Human Robot Interaction through Human Robot Performances

Robots are becoming increasingly visible in our lives as AI technology goes through unprecedented transformations. This is an important moment to ask how the design features of these robots - both their physical and interaction design - shape their modes of relating, and correspondingly structure how we (humans) might relate to them. This paper describes work studying human-robot interactions through collaborative human-robot artistic performance. It introduces an initial project, Dances with Robots, and current work, Beyond the Black Box, which function both as artistic performances and as an intervention method for observing and studying changes in human perceptions of and feelings toward robots. Through this work, the authors have witnessed how human performers' feelings and attitudes toward their robot collaborators changed through time spent in rehearsal and in cooperative performance. Inspired by these observations, this paper introduces the research idea of studying human-robot interaction through human-robot performance artwork.


INTRODUCTION
The landscape of artificial intelligence (AI) has witnessed an unprecedented recent transformation, sparked by the public debut of ChatGPT in the winter of 2022 and fueled by rapid advancements in generative AI technologies. This development made AI, and generative AI in particular, a focal point of widespread curiosity and fascination among the public. Platforms that harness the power of generative models, such as chatbots and creative content generators, have made AI accessible and relatable to individuals without a technical background. Understanding the societal implications and potential benefits of these technologies has become paramount.
It is essential to recognize the potential avenues through which generative AI will shape our interactions with technology. Currently, the majority of AI technologies widely accessible to the public reside in machines such as computers and smartphones. In the future, however, AI will become increasingly embedded in our daily lives and embodied in the form of robots. This vision of robots as companions, assistants, and collaborators is inching closer to reality. Partner and social robots have been widely used in various contexts including education, entertainment, therapy, and human assistance [5]. NAO, an autonomous and programmable humanoid robot made available in 2008 by SoftBank Robotics (formerly Aldebaran Robotics, currently United Robotics Group), has been used in various educational settings, in the RoboCup Soccer league for the development of humanoid soccer algorithms, and in research with children with autism. Pepper, another humanoid robot developed by SoftBank Robotics, was unveiled in 2014 and has been used by various businesses, stores, and customers in Japan. Moreover, fast-growing demand for service robot applications in home and work spaces has contributed to the development of well-performing robots equipped with better localization and actuation control [1]. Advances in AI technologies such as generative AI will further accelerate the development of partner and social robots, letting us envision a future where human interactions with robots, facilitated by AI tools, redefine the way we experience and integrate AI into our daily lives. To prepare for such a future, it is crucial for developers of technology to understand how human perceptions of and feelings toward robots will shape our relationships with them: without the establishment of healthy human-robot interaction, the application of social robots in our everyday lives will not have a positive impact.
In psychology, attitudes define the state of mind of an individual toward a group or an object and are important in describing mental predispositions that might explain human behavior. Some studies indicate the importance of robot competence and the perception of warmth for the development of trust in HRI (e.g., [4]). However, these HRI studies were conducted via video. Other studies have reported an increase in negative attitudes toward robots perceived as competitors.
This paper reports our exploration of a different kind of research approach to how humans establish trust and affection in human-robot interactions, and to how the particular features of these bodies, their overall design, and their movements contribute to the types of relationships that develop between humans and robots. It also presents our preliminary observational findings from the development of human-robot performances. Moreover, our project will explore how the relationships that humans and robots establish further shape the ongoing relationship between them.
The research approach has grown out of our work over the past year, Dances with Robots - a collaborative performance between an anthropologist, an education scholar, and an emerging media artist - exploring new interactions between humans and robots. This approach studies HRI through movement art, especially performance involving humans and robots. It is unique in the sense that typical HRI studies are conducted in lab sessions or artificial situations. Our approach uses a more intuitive process in which human performers manifest implicit, unscripted attitudes during rehearsals or performances of creative work. During this process, we document these observations as a form of ethnographic research, and report implicit measures of attitudes toward robots, which may be a better predictor of behavior toward a robot than explicit measures of general attitudes toward robots.

BEYOND THE BLACK BOX -A HUMAN-ROBOT PERFORMANCE
The Beyond the Black Box project [9], a human-robot performance, is our intervention method for observing and studying changes in human perceptions of and feelings toward robots. It developed from a previous work, Dances with Robots, created for ON DISPLAY GLOBAL 2022 by Heidi Latsky Dance [8] [Figure 3]. ON DISPLAY GLOBAL 2022 was a virtually distributed movement performance that continued over 24 hours in the form of a living sculpture garden. The performers moved in a slowly shifting tableau while challenging viewers with stillness, direct focus, and unavoidable gaze.

Human Robot Performance Company
Our collective is composed of two robotic and three human performers [Figure 2]. PEPPER is a social humanoid robot from United Robotics Group (formerly SoftBank Robotics) capable of recognizing faces, facial expressions, and basic human emotions [10]. Pepper was optimized for human interaction and is able to engage with people through various sensor inputs (touch, voice/sound, and vision).
It can also carry out pre-scripted conversations based on human inputs using voice recognition technology. CRANE is an xArm7, a collaborative robot arm from UFACTORY [17], with a portable Robotics and Performative AI Workstation and a custom creative robotics software stack. This seven-degree-of-freedom (7-DOF) articulated robot arm has integrated force and temperature sensing and interactive programming modes.
Our HUMAN performers - authors Gerardo, Eguchi, and Twomey - have deep prior experience in performance and movement (Gerardo), creative robotics (Twomey), anthropology (Gerardo), and human-robot interaction (Eguchi and Twomey). The HUMAN performers embody two different roles: Gerardo deploys the prescribed choreography along with the two robots during the performance, while Eguchi and Twomey program their respective robots to "perform" the choreography, albeit via different methodologies. Eguchi programs PEPPER's movement prior to performance, so the choreography is scripted, but the facial recognition tracks in real time. Twomey programs CRANE's joints to move in real time during the performance; its facial recognition algorithm also responds with real-time tracking.
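The real-time tracking approach can be illustrated with a minimal sketch. This is our hypothetical reconstruction, not the production software: the function name, the proportional-control mapping, and the gain values are illustrative assumptions about how a detected face centroid might steer pan/tilt joint targets.

```python
# Hypothetical sketch of real-time face tracking for an articulated arm:
# map a detected face centroid (normalized image coordinates) to pan/tilt
# joint targets with simple proportional control. All names and gains are
# illustrative, not the actual CRANE software stack.

def track_step(face_xy, joints, gain=0.5, limit=1.2):
    """One control step: nudge pan/tilt joints toward the face centroid.

    face_xy: (x, y) in [0, 1] image coordinates, or None if no face seen.
    joints:  dict with current 'pan' and 'tilt' angles in radians.
    Returns new joint targets, clamped to +/- limit radians.
    """
    if face_xy is None:
        return dict(joints)  # hold position when no face is detected
    x, y = face_xy
    # Error relative to image center (0.5, 0.5): a face right of center
    # increases pan; a face below center increases tilt.
    pan = joints["pan"] + gain * (x - 0.5)
    tilt = joints["tilt"] + gain * (y - 0.5)
    clamp = lambda a: max(-limit, min(limit, a))
    return {"pan": clamp(pan), "tilt": clamp(tilt)}

# Example: a face to the right of center pulls the pan joint rightward.
state = {"pan": 0.0, "tilt": 0.0}
state = track_step((0.9, 0.5), state)
```

Run in a loop against a camera-driven face detector, a step like this yields the responsive, "aware" motion the performance relies on; the clamp stands in for the joint limits a real controller would enforce.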

Setting the Stage
With Dances with Robots, three performers (PEPPER, HUMAN, and CRANE) were placed in a linear formation. A live camera feed with video projection was placed in the middle, overlaying imagery of the scene in infinite regression. Over the course of a one-hour live-stream, the human performer shifted slowly to engage with PEPPER, CRANE, and her own embodied projected image [Figure 3].
The choreography for ON DISPLAY GLOBAL 2022 requires the performer to remain still with eyes open in a position that appears to be caught mid-action, as opposed to a classical pose. The performer can remain in this position for as long as they like, allowing viewers to observe them as though they were a living statue. Once they start moving, the performer's eyes must be closed as they move slowly and gradually toward another position and then freeze in place. The rules for this choreography were designated by Heidi Latsky in 2015 as a way to democratize engagement in the movement arts to include performers across a broad spectrum of shapes, ages, and abilities, and to de-stigmatize differently-abled and othered bodies. In this sense, the choreography was an ideal vehicle by which to investigate the evolution of an ongoing, durational, unscripted human-robot interaction in real time.

PRELIMINARY OBSERVATIONS
Our observations during rehearsal and performance became the inspiration for our research approach: they revealed the human performer's unanticipated preference for the humanoid robot - a desire to interact with it more than with the robotic arm. Although both robots, PEPPER and CRANE, followed patterns of movement similar to the human performer's, the robots' physical appearances gave different impressions. While CRANE was able to recognize the faces of both the human performer and PEPPER, neither the human performer nor PEPPER could find CRANE's face. Thinking about differing bodily morphologies and the role of the face as an interface, this is an interesting mismatch between perceptive capability and physical form. What could be perceived as a disadvantage for CRANE is its lack of a face: while technically capable of face recognition, CRANE has no corresponding humanoid visage to be read. In Dances with Robots, the human performer could connect with PEPPER, the humanoid robot, in a more intimate way, which was challenging with the face-less CRANE despite its capability of responding to human inputs. Moreover, the human performer unintentionally developed a desire for PEPPER's attention, placing her face within the range of PEPPER's gaze to trigger its face-tracking feature. This attempt made PEPPER follow her face during the performance.
The research question to be explored through the human-robot performance is how the particular features of these robot performers - their body morphology and physical design as well as the design of their interactions - shape their modes of relating and correspondingly structure how we (humans) might relate to them, and if and how human interaction and experience with robots contribute to changes in human feelings and perceptions toward robots. There have been various studies exploring people's (including young adults' and children's) attitudes, feelings, and opinions toward robots [3, 6, 7, 13, 14, 16, 18, 19]. These studies report positive attitudes toward robots. Some highlight that a robot's appearance contributes to people's perceptions: humans associate humanoid or animal-type robots with the ability to feel emotions or understand humans, and machine-looking robots with the inability to do so [2, 20]. Although the studies consistently reported positive perceptions of robots, they were conducted in laboratory settings disconnected from participants' real-life experience, using pseudo-settings specifically created to measure specific aspects of their perception of robots. This has led to the question: "if/how does human interaction and experience with robots contribute to changes in human feelings and perceptions toward robots (especially over a certain period)?"
Though both are styled with a pop/sci-fi clean white retro-future robo-aesthetic, PEPPER and CRANE exhibit vastly different body and facial morphology. PEPPER is designed to emulate a human, with the expected arrangement of facial features and limbs. Despite its simplified facial geometry and its lack of articulated legs (PEPPER has a single mermaid-like wheeled base rather than two separate legs), the approximation of a human form might be perceived by humans as an open invitation, or provide ease, to interact and connect with the robot. PEPPER's face is designed to evoke kawaii-ness (Japanese cuteness) to attract human attention, an important feature for partner robots. Humanoid robots capitalize on our innate preference for neotenized facial features [Figure 5]: frontal bossing, facial features appearing to aggregate in the lower two-thirds of the skull, and eyes appearing relatively large in comparison to the rest of the face (developed in utero in the first weeks of development) - a proclivity that has been harnessed in the design of social robots to foster connection and the attribution of trustworthiness [15]. CRANE, although it has the same seven degrees of freedom as the human shoulder-elbow-wrist [12], distributes those joints differently along the length of its body, connected with "links" that are distinctly non-human shapes. Additionally, CRANE cannot travel as PEPPER does. It is stationed in one location, and so presents as an indeterminate morphology somewhere between snake, tentacle, and athetoid worm. The camera mounted on the end of the arm further isolates it from humans, making facial recognition an inherently non-reciprocal interaction.
PEPPER excels in connecting with humans, as described above; CRANE, however, becomes endearingly relatable when in motion. CRANE's smooth movements can become animal-like, which contributes to its ability to connect with humans. Visual design features, physical mannerisms, and gestures characterize and express a kind of "life" or "awareness" in the robot, which could drive trust and intimacy in humans. Through its smooth, animal-like movements, CRANE can build a connection and shared presence with a human performer. This was evident in the human performer's comment about her developing favorable feelings toward CRANE toward the end of her Dances with Robots performance.
Despite the capabilities both PEPPER and CRANE display, the robots do not resemble human movements and interactions the way we envision (as in science fiction movies). This inability, or incomplete nature, which could be seen as a vulnerability, could make humans attracted to the robots and willing to engage in interactions. For example, PEPPER's kawaii (cute) face and voice are meant to attract humans to start interacting with it. In addition to the kawaii-ness, its incomplete human-like movements (like those of toddlers or young children), though not too bulky but not completely smooth like CRANE's either, could make it look vulnerable. Humans have a tendency to protect vulnerable things (e.g., babies and small animals) from harm. Vulnerability could provide a sense of belonging among some. Vulnerability is essential to human experience and existence. Khalifian and Barry report that vulnerability can increase the feeling of intimacy among people [11]. Beyond cultivating empathy toward vulnerable robots, further research will explore how perceptions of vulnerability could contribute to the development of trust toward robots.
We believe these questions - how and whether robots' vulnerability, sophisticated smooth movements, and values-motivated behavior influence human perceptions of and feelings toward robots - can be explored through human-robot performance. Do humans expect robots to show emotion toward them through kawaii-ness or through sophisticated movements and interactions? Will humans develop trust and affection toward robots, and if so, how? Will humans change their perceptions of robots - trust and affection - through observing human-robot performance (as audience members) or through participating in and interacting with the human-robot performance over time (as collaborators)?

FUTURE DIRECTIONS
Additional performances scheduled for the coming year will provide opportunities to formalize the study of changes in feelings and attitudes between human and robot performers. Our group is developing a new project in this research series to debut in May 2024. This evening-length performance will occur in four discrete segments (like the movements of a sonata), each juxtaposing a particular combination of human and robot performers, and each elucidating specific dimensions of these emerging human-robot relationships of intimacy/affection and trust. In addition to concentrating the focus and clarifying the creative aims of the performance work, the new event will incorporate a number of data collection methodologies. It will employ ethnographic observation (by the artists and research team, in rehearsal and performance), video observation of the performance event (including a visual structural analysis of movements and responses), and real-time audience sentiment recording and analysis (using a custom web-based "affect slider" on smartphones), together with pre-/post-surveys. All observing parties (artists, participants, and audience) will annotate degrees of connection and moments of trust in the performed movement work. The majority of data collected will be qualitative, since the study itself is ethnographic in nature, exploring human-robot interactions and experiences through human-robot performances.
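One way the affect-slider stream could feed analysis is sketched below. This is a hypothetical illustration, not the planned pipeline: the function name, the timestamped `(time, value)` sample format, and the four fixed segment boundaries are our assumptions, standing in for whatever schema the custom web tool ultimately uses.

```python
# Hypothetical aggregation of "affect slider" samples: each audience device
# reports timestamped slider values in [-1, 1]; we bin them by performance
# segment and compute a per-segment mean. Field names, the sample format,
# and the segment boundaries are illustrative assumptions.

def segment_means(samples, boundaries):
    """samples: list of (timestamp_sec, value) pairs with timestamp_sec >= boundaries[0].
    boundaries: sorted segment start times, e.g. [0, 600, 1200, 1800]
    for a four-segment, evening-length performance.
    Returns one mean per segment (None for segments with no samples)."""
    bins = [[] for _ in boundaries]
    for t, v in samples:
        # Assign the sample to the last segment that starts at or before t.
        idx = max(i for i, b in enumerate(boundaries) if b <= t)
        bins[idx].append(v)
    return [sum(b) / len(b) if b else None for b in bins]

# Example: five samples across a four-segment timeline (seconds).
samples = [(30, 0.2), (45, 0.4), (700, -0.1), (1300, 0.6), (1350, 0.8)]
means = segment_means(samples, [0, 600, 1200, 1800])
```

Per-segment means like these could then be read alongside the ethnographic and video annotations to ask whether audience sentiment shifts across the human-robot pairings each segment presents.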

CONCLUSIONS
This paper reports the development of research ideas for exploring human-robot interactions through human-robot performances, which has led to our research questions: how do the particular features of these robot performers - their body morphology and physical design as well as the design of their interactions - shape their modes of relating and correspondingly structure how we (humans) might relate to them, and if and how do human interaction and experience with robots contribute to changes in human feelings and perceptions toward robots? We are developing a research scheme to observe and measure changes in human perceptions of robots - trust and affection - through observation of and experience with human-robot performance. Experimental human-robot interactions and generative artistic performance could provide valuable ways to engage with these questions, potentially producing in-depth knowledge of how humans develop feelings and favorable perceptions of robots, which could contribute to an effective human-robot co-existing society.

Figure 1: PEPPER(s) and HUMAN Warming Up. Copyright the authors.

Figure 3: Dances with Robots, developed for ON DISPLAY GLOBAL 2022 by Heidi Latsky Dance. Copyright the authors.

Figure 4: HUMAN and CRANE interact face-to-face. Copyright the authors.

Figure 5: Neoteny in PEPPER, human baby, human model. Baby by Jonnelle Yankovich and adult by Tamara Bellis on Unsplash.