"Would I Feel More Secure With a Robot?": Understanding Perceptions of Security Robots in Public Spaces

Robots are increasingly being deployed as security agents helping law enforcement in spaces such as streets, parks, or shopping malls. Unfortunately, the deployment of security robots is not without problems and controversies. For example, the New York Police Department canceled its contract with Boston Dynamics in response to public backlash over its use of Digidog, an autonomous robotic dog, which sparked fears among the public. However, it is unclear to what extent affected communities have been involved in the design and deployment process of robots. This is problematic because, without input from community members in the processes of design and deployment, security robots are unlikely to satisfy the concerns or safety needs of real communities. To gain deeper insight into people's perceptions of security robots - including both potential benefits and concerns - we conducted 17 semi-structured interviews addressing the following research questions: RQ1. What characteristics do people ascribe to security robots? RQ2. What expectations do people have about the function and role of security robots? RQ3. What are people's attitudes toward the use of security robots? Our study offers several contributions to the existing literature on security robots.


INTRODUCTION
Robots are increasingly being deployed as security agents helping law enforcement in spaces such as streets, parks, or shopping malls [9]. Economically, security robots can offer a lower-cost, scalable approach to law enforcement [9]. Operationally, they can be deployed in dangerous scenarios in place of police officers and collect data that is often difficult to obtain. This explains why both public and private organizations are choosing to make use of security robots. Knightscope is a leader in providing security robots in the U.S., with its robots being deployed by clients in parking structures, airports, commercial buildings, hospitals, hotels, factories, and other contexts [38,53]. Its commonly used model, the K5 Outdoor Autonomous Security Robot, includes among its capabilities 360-degree HD video streaming and recording, people detection, thermal anomaly detection, a two-way intercom, and remote monitoring [37]. Security robot technology will continue to be used more widely, with a global market "[accounting] for $1.62 billion in 2017 and [which] is expected to reach $3.68 billion by 2026" [64].
The deployment of security robots is not without problems and controversies. For example, the New York Police Department canceled its contract with robotics company Boston Dynamics in response to backlash from the public when its use of Digidog, an autonomous robotic dog, sparked fears that it would become a "robotic surveillance ground drone" [54]. Knightscope security robots deployed at LaGuardia airport were reportedly perceived as creepy by women passengers and security personnel [53]. Further concerns and potential harms of deploying security robots include inequitable targeting of human populations based on biased algorithms [6,62], increased actual and perceived surveillance within communities [42], and creating or furthering social divides in public spaces due to adverse or discriminatory effects of increased policing [6]. However, it is unclear to what extent affected communities have been involved in the design and deployment process, or whether any efforts have been made to incorporate community members' feedback on the introduction of these robots into local environments. This is problematic because, without input from community members in processes of design and deployment, security robots are unlikely to satisfy the concerns or safety needs of real communities [29]. The implications of such design and deployment issues are further heightened by recent controversies around police brutality and a lack of accountability [17]. Therefore, it is critical to understand how community members perceive robots designed and deployed specifically for the purposes of security and law enforcement.
To gain deeper insight into people's perceptions of security robots-including both potential benefits and concerns-we conducted 17 semi-structured interviews addressing the following research questions: RQ1. What characteristics do people ascribe to security robots? RQ2. What expectations do people have about the function and role of security robots? RQ3. What are people's attitudes toward the use of security robots?
Our study offers several contributions to the literature on security robots. First, it extends the literature by identifying conflicting views on the use of security robots with regard to the wider public versus specific groups. On one hand, we identified new concerns about their use in marginalized and oppressed communities; on the other hand, we support previous findings that women felt there would be some advantages to their use. Second, this study extends the research on gender and the use of security robots. Previous researchers have found that women often prefer the use of security robots more than men, but have stopped short of identifying why [25,49,75]. This study highlights some underlying reasons why women seem to be more inclined to support the use of security robots. Third, this study identifies a new concern about security robots and autonomy-a lack of clarity on how much autonomy a robot possesses. In doing so, this study expands the conversation on security robots and autonomy by highlighting the need to communicate their degree of autonomy to the public upfront. Finally, this study extends the research on the use of security robots for surveillance. Previous researchers have identified public concerns with the use of policing technology to monitor them [48]. Yet, our study finds a much more nuanced view than has been previously discussed. Participants expressed a willingness to accept surveillance by security robots in exchange for crime prevention in certain contexts with heightened risks, with the exception of facial recognition technology, which they felt could be abused to personally identify individuals who have not committed a crime.

BACKGROUND AND RELATED WORK
Security robots can be classified by differences in their hardware and software implementations as well as their level of autonomy [67]. Overall, security robots can be autonomous or semi-autonomous. Specifically, teleoperated security robots are remote-controlled by an operator, typically from a safe distance. Distributed security robots work in a team of security robots and operate by sharing a network of information needed to complete a goal. Surveillance security robots are autonomous robots that follow a search-identify-alarm cycle, in which they continuously watch for abnormalities and alert personnel.
With regard to people's perceptions, we first discuss related work on robots in general, followed by research specifically focused on security robots.

Perceptions of Robots
Prior research on perceptions of robots in public spaces has looked at robots being used as tour guides or informational guides in museums or shopping malls [21,24,31,32], as receptionists [56,77], in schools and for education [2], for therapy and healthcare [70], as well as perceptions of drones [14,71,78]. As it becomes more common for people to interact with robots in public spaces, understanding people's perceptions and mental models of these robots is important for designing the roles robots play in society. People may experience an expectation gap regarding the capabilities of robots [40], leading to inaccurate understandings of what robots can do. Kiesler suggests that humans extrapolate their understanding of other humans to robots, using "social cues, humanlike movement, and anthropomorphism" to understand the functionality of a robot [35]. These attributes manifest through the appearance of the robots and the ways in which they move, talk, and respond to humans [18,56]. People's mental models of robots are also context-specific [35], meaning, for example, that a tourist speaking with an informational robot in New York City's Times Square will expect the robot to be knowledgeable about that location. Because of this, humans may perceive the functionality of security robots in a similar way as they do human security agents, since the context of patrolling security robots aligns with similar roles of human security agents.

Perceptions and Acceptance of Security Robots
Research on the acceptance of security robots has focused on understanding why humans will or will not accept and/or trust them. This research can be generally categorized into two broad areas, one focusing primarily on the human (user) and the other on the security robot itself. The first set of literature has paid particular attention to the user's gender. Three studies examined the effect of gender on the acceptance of security robots, with conflicting results. One study found that female participants had a higher intention to rely on robots and higher perceived trustworthiness [25], while another study failed to find significant results regarding trust or perceived competency [12]. Another study found that the perceived gender of the robot, and associated gender stereotypes, can affect people's level of trust towards the security robot [66], including that security robots that appear male-gendered are perceived as more useful and acceptable in their security roles. Other studies have found that the security and safety functions of the security robot were more important for female users than male users [49,75]. This in turn impacted positive attitudes toward security robots, which were strongly linked to trust [36,49,58].
Research on user characteristics has also examined factors such as personality, cultural influences, and mental models in the acceptance of security robots. Lyons et al. [44] found that the personality traits of extroversion and agreeableness were both positively associated with trust in and intention to use security robots. A study on cultural influence found that Americans living in Japan had significantly higher trust in security robots used for peacekeeping than Americans living in the U.S. or China [43]. Finally, research on mental models has found that an accurate mental model is significantly positively associated with trustworthiness and not significantly associated with robot power [55].
Another set of studies has focused on the security robot itself. As with robots more broadly, people's trust towards security robots depends on the robot's ability to perform its designated task. Robots that can successfully perform the intended task are viewed as more reliable and therefore more trustworthy [27]. A robot's level of perceived intelligence, as seen through the robot's "knowledge, responsiveness, and behavioral proficiency when humans interact with a robot," is also correlated with how much trustworthiness people ascribe to security robots [5,36,45]. More human-like robots are preferred over machine-like robots when the robot is completing a job that requires substantial social skills [26]. Notably, in this study, jobs such as customs inspector or security guard were perceived by participants as more fitting for machine-like robots. A robot's politeness, presented through verbal commands from robots to humans, has also been shown to be an important aspect of how people interact with security robots. Inbar et al. found that "polite [guard] robots were perceived as friendlier, fairer, and as acting in a more appropriate way. Polite robots were also perceived as less intimidating" [30]. Finally, people tend to trust security robots more when the robot's intent is transparent to the human [45].
Hoffman et al. found that humans may follow commands of a robot authority in a similar way as they do with human authorities [28], though participants in this study tended to cheat slightly more and felt less guilty when a robot authority was present compared to a human one. Another study found that half the participants followed the verbal instructions of a security robot [59]. Agrawal and Williams [1] found that people's perceptions of a security robot's aggression did not strongly alter their behavior in following the robot's directions, but people who chose to disobey the robot reported perceiving the robot as less safe. The authors suggest that security robots should exhibit behaviors that make humans feel safe so that humans can trust these robots more.
Privacy concerns may further shape perceptions and acceptance of security robots. Given the numerous sensors that can be attached to a mobile robot, the scope of data collected from humans "can lead users to underestimate the capabilities of the robot and misunderstand how and what kinds of data are being recorded" [41]. Lee et al. [41] found that people were unaware of the types of sensors a robot had and were also unconcerned about or unaware of the potential risks to their own privacy in relation to the robot's data collection capabilities. Vitale et al. found that a robotic system that is transparent about the data being collected improves people's interaction with the system compared to a nontransparent system [69]. However, they also found that a humanoid robot was able to collect more participant data, with participants raising little or no concern, than a disembodied kiosk system collecting the same data. This second finding aligns with similar results by Tonkin et al. [68]. Thus, privacy concerns appear to be affected by a robot's appearance and physical features.
The literature on the acceptance of security robots has provided useful insights, but several areas call for further attention. First, little attention has been directed at understanding how the robot's presence itself can change the perception of private and public spaces. Yet the literature on policing shows that the presence of law enforcement can have unforeseen implications for public and private spaces [57]. Second, more work is needed to understand how security robots' potential for surveillance is likely to impact the public's acceptance of them. The use of technology for surveillance is not new, but many surveillance technologies allow individuals to opt out at times by avoiding certain public places [51]. However, as is the case with the NYPD "Digidog," security robots are designed to actively patrol areas and even follow suspicious individuals [54]. We know less about how the public might react to this type of active surveillance embodied by security robots. Third, in an age of biased and discriminatory policing policies, it is not clear whether security robots might be perceived as a potential solution or as an extension of the existing problem. From a community perspective, this is essential to comprehending their broader acceptance and impact.

METHODS
We conducted a semi-structured interview study to answer our research questions: RQ1. What characteristics do people ascribe to security robots? RQ2. What expectations do people have about the function and role of security robots? RQ3. What are people's attitudes toward the use of security robots? Our study was reviewed and approved (exempt) by our institution's IRB.

Interview Protocol
We structured our interview protocol to elicit open-ended responses from participants and encourage them to speak in their own words [7]. The individual interviews consisted of four main parts (see Appendix A for the full protocol). First, participants were asked about their thoughts on robots in general. This served to establish a baseline understanding of their perceptions of robot technology. In the second part, we asked participants more specifically about their perceptions of security robots and their use in public environments.
In the third part, we showed participants a set of five visual prompts, which showed a Knightscope K5 robot [37] placed through digital manipulation in photos of different public locations, and asked them about their thoughts on having such a security robot present in these spaces. The goal of using photo elicitation here was to "gain deeper, richer, more complex understandings of people's lives" [19] by adding a visual component to the interview. We used photo prompts that placed a security robot in environments with different uses and potential sensitivity. These included an airport, where people might expect heightened security and surveillance, including encountering security personnel; a shopping mall and a grocery store, where people might expect the presence of cameras but the overt patrolling of security personnel may be less common; a playground, where security and safety for children are desirable but the presence of a security robot could be perceived as intrusive or raise surveillance concerns; and a residential neighborhood, where a security robot may suggest safety but could also raise surveillance and over-policing concerns. Figure 1 shows examples of the photo prompts; see Appendix B for all photo prompts. Together, the second and third parts of the interview served to elicit people's perceptions of security robots, their functionality and perceived benefits, and whether people perceive these robots to be threatening in any way or have other concerns.
In the fourth and final part of the interview, we asked more directly about participants' thoughts and concerns regarding privacy, security, and public surveillance in relation to security robots. These direct questions were purposefully placed at the end of the interview to see whether participants would bring up the respective issues on their own in prior sections of the interview.

Recruitment and Sample
Participant recruitment was conducted online through social media websites (i.e., Facebook, Twitter, Reddit), targeting groups and populations that engage in public spaces or outdoor activities (e.g., neighborhood groups, walking groups, tourist business groups). As public spaces are shared by a diverse cross-section of the population, we recruited individuals who take part in activities in public spaces, as these are the places in which security robots would be interacting with the general public. In the recruitment messages we described a study on "robots used in public places," not mentioning 'security' to reduce self-selection bias. We used a screening survey (see Appendix C) to collect demographic information from potential participants, including age, gender, race, sexual orientation, occupation, and income, as well as two scales: Knowledge on Technology Relevant to Robots [74] and the Attitudes Towards Robots Scale [33]. We used purposive sampling with the goal of recruiting a participant sample diverse in these demographics in order to include a broad range of perspectives. We continued interviews until we reached data saturation, i.e., no new topics or themes emerged in interviews. Participants received $25 USD as compensation.
In total, we conducted 17 individual remote interviews in the summer and fall of 2020. Table 1 provides an overview of our sample's demographics. All participants lived in the United States, which ensured that participants shared the same legal and sociocultural context regarding robots and policing. Participants were 18 to 64 years old, with the majority in the 18-24 and 25-34 age groups. Nine participants were men, seven were women, and one was non-binary. Participants were diverse in terms of race and ethnicity (10 White, 4 Asian, 2 Middle Eastern, 1 Hispanic), but there were no Black participants in our sample. Most participants had completed a college degree (6 Master's, 6 Bachelor's, 1 PhD). Participants held various occupations, and income levels were evenly distributed (2 less than $10k per year, 1 $10-19k, 2 $20-29k, 2 $40-49k, 2 $70-79k, 1 $80-89k, 2 $100-149k, 1 over $150k, 3 prefer not to say).
For the Knowledge on Technology Relevant to Robots (KTRR) scale [74], which asks about knowledge regarding computer programming, AI, and robotics on a 7-point Likert scale, the median score in our sample was 4.3 (range: 1.0-6.7), which suggests that while some participants were knowledgeable, the majority had little to moderate knowledge of robot-relevant technologies. The Attitudes Towards Robots Scale [33] measures three attitudes, also on a 7-point Likert scale. For our sample, the median scores were: 4.4 Robot-Liking (RL, range: 1.6-5.5), 3.5 Robotphobia (RP, range: 2.5-5.7), and 4.5 Cyber-Dystopian (CD, range: 2.0-5.5). These values suggest that our participants' attitudes towards robots were mostly neutral: not particularly biased against robots in general, but also not overly enthusiastic about them.

Analysis
Interviews were transcribed in full for qualitative analysis. We used inductive and reflexive thematic analysis following Braun and Clarke's six phases [10]: familiarization with the data, generating initial codes, searching for themes, reviewing themes, defining and naming themes, and producing the report. One of the authors developed an initial set of codes, which was then refined over multiple iterations together with the larger research team. This author then proceeded to code all interviews with frequent check-ins and discussions with the team. After coding, the analysis moved to finding, naming, refining, and writing descriptions of themes in the data through an iterative process led by two authors with frequent input and discussion from the team.
Researcher Positionality. Our thematic analysis surfaced themes about gender, race, and immigrant experiences (e.g., having an accent). Our research team consists of members with both insider and outsider perspectives on those themes, including women and men, members identifying as White, Black, and Asian, and U.S. natives and immigrants.

Limitations
As with any study, ours has certain limitations. First, we rely on participants' self-reports, which leave room for social desirability bias; however, we gained the impression that our participants were frank in expressing their perceptions and opinions regarding security robots.
Second, most of our participants had not encountered an actual security robot. We anticipated this: while there are increasing efforts to deploy security robots in public settings, deployments were still comparatively rare when we conducted our interviews. To account for this, we incorporated the photo elicitation prompts to help participants imagine and reflect on security robots in different public settings. People who have had actual interactions with security robots may have differing perceptions and experiences, which should be studied once such interactions are more common.
Third, while we strove for diversity in our recruitment and accomplished this along multiple dimensions (gender, ethnicity, occupational background), our sample did not include participants who were Black or older adults (>64 years old), possibly due to our online recruitment method, which may also have led to missing other potentially interesting groups (e.g., people with lower digital literacy). Our study is further limited to U.S. residents, and people in other jurisdictions, cultures, or countries may have different perceptions regarding security robots. Certain groups that may be most vulnerable to the effects of security robot deployments, such as undocumented immigrants, are also likely not represented in our sample. Future work should study to what extent the perceptions of these groups align with or differ from our findings.
It is also worth noting that around the time we conducted our interviews, in the summer and early fall of 2020, there were extensive protests against police brutality and racial injustice in the United States after a series of murders of people of color by police. These events were on the minds of many of our participants and are reflected in some of their answers and perceptions. We do not consider this a limitation of our study, as the histories of police brutality and racial injustice must be considered in the design and deployment of security robots; yet the recency of those events and protests may have resulted in a stronger focus on racial issues compared to other kinds of concerns with security robots.

FINDINGS
Table 2 provides an overview of our findings. As expected, participants had very little to no firsthand experience with security robots, so we first report on what they imagined these robots are capable of and intended for, such as their features, functions, and purpose. Section 4.1 describes the perceived characteristics that participants ascribed to robots generally, followed by security robots specifically. As participants discussed what purpose robots could conceivably serve in the environments depicted in our photo elicitation, and how they would feel about such applications of this technology, we identified themes regarding perceived benefits of security robots. These are described in Section 4.2. However, all participants also expressed concerns about the potential risks of deploying security robots, which we outline in Section 4.3.

Perceived Characteristics of Robots and Security Robots
Robots overall were perceived as intelligent systems that are intended for rote tasks, with many participants naming artificial intelligence as a key capability. When thinking of security robots specifically, participants imagined a range of capabilities related to detection and data collection, including facial recognition. Importantly, participants were unclear about how much autonomy a security robot may have in the moment, as opposed to carrying out pre-determined actions or having a human operating it to some extent. Below we detail the perceived characteristics of robots, and security robots, in turn.

Robots: Intelligent Systems for Rote Tasks. Participants generally thought of robots as independent systems that can act on their own to some extent, such as autonomous vacuum cleaners (e.g., Roomba), though they held varied perceptions as to what constitutes a robot. Examples of how our participants characterized a robot were "something that operates on its own" (P5) or "for the most part is relatively independent from people just walking by" (P6). Some participants mentioned the mechanical aspects of how a robot acts on its own: "Robots, to me, are generally some sort of a mechanical device that's powered by artificial intelligence" (P7). On the other hand, some participants mentioned voice assistants such as Amazon Alexa and Google Home as examples of robots they had encountered, even though these do not involve any motion. Therefore, perhaps the most common perceived characteristic of robots was artificial intelligence, broadly construed.
Self-checkout machines were also named by participants as examples of robots they had encountered, though these are neither mobile nor intelligent. This misconception speaks to the view that participants held of robots as task-oriented systems designed for rote tasks or for the convenience of humans. For example, P13 discussed such roles for robots both traditionally and in the future: "The first thing that comes to my mind is industrial robots on an assembly line in an auto plant. With the mechanical arms and doing spot welding and painting and things like that, the traditional robots. I've seen some videos of some more restaurant type robots, the kind that make hamburgers and do serve and things like that. I see a huge future in replacing people doing mundane, monotonous tasks like in restaurants, because I deal with restaurants now. Serving burgers and doing repetitive, easy tasks, just moving back and forth.... No critical thinking involved." (P13) Similarly, when asked what kind of robot they would be willing to use, P3 referenced convenience: "I would probably use it if it could make my shopping easier. So maybe if I could order something to be delivered to me or pre-order something from a robot, I would do that" (P3).
Perceptions of robots were further influenced by exposure to videos (e.g., on social media) of autonomous food delivery or cleaning robots, even though participants were less likely to have encountered these types of systems in their own lives. In addition, participants referenced what they had learned about applications of robots in Asian countries, where they are more widely adopted, as well as depictions of robots in the media (e.g., coverage of robotics company Boston Dynamics) and in science fiction films and TV shows (e.g., RoboCop, The Jetsons). For example, P1 suggested a good task for a robot was labor in the home by referencing The Jetsons, a cartoon about a family with a robot housekeeper.

Security Robots: Detection and Data Collection with Unclear Level of Autonomy. Participants attributed similar characteristics to security robots, and also ascribed specific potential features to them: facial recognition, cameras, motion detection, detection of substances, recording audio, sensing temperature, interacting with nearby smartphones/devices, looking for certain behaviors, scanning for weapons, and carrying weapons such as a taser or gun.
The polished and sleek design of the Knightscope robot in the images conveyed reliability to some participants: "I would feel pretty comfortable around it because it looks kind of expensive and well-made, so I feel like it would probably do its job pretty well" (P5).However, it was also important that the look and size of the robot was appropriate to the context.P5's comfort level with the same robot changed when comparing the different photo prompts: "I feel like that for some reason, having it indoor, something that big is kind of a little bit more intimidating, and I think airports tend to be busy and have a lot of people, whereas shopping centers, I feel like sometimes it can be relatively empty and so having a big robot like that there would feel a little bit intimidating because it feels a little unnecessary." (P5) Importantly, participants wanted to be able to understand why the robot was there and what its purpose was.As we will discuss later (in Section 4.3), concerns about the use of security robots increased the less clarity participants had about its purpose, its level of autonomy, who was controlling it, and what could potentially be done with data it was collecting.However, if there was a generally accepted, legitimate purpose for the robot to be in use within a particular context, then participants could envision being comfortable with its presence and even interacting with it: "I think some sort of public acceptance already in some form would help me ... if there is some legitimate purpose of that robot being there, I would have a better time with it.And I think that's a step towards using it as well.Like if I have a better time accepting that it's there for certain purposes that have some function, then maybe the first time I won't use it, but then maybe the second, I will.I think in my case, it might be more for curiosity than anything else." 
(P12) Primarily, security robots were expected to be used for passive surveillance and data collection: "It would be data on mostly people but it could also be vehicles or animals or air quality or whatever it might be that it's collecting on.... If it's trying to recognize faces then there'd be cameras that are trained to take pictures of people as they walk by.If it's calculating how many people or how much traffic there is for example, then it will have some kind of facial or some kind of recognition that whenever a car or whenever someone walks by ...There has to be some kind of component attached to it which will be able to sense that environment in whatever way that it's supposed to." (P6) Participants especially mentioned this type of functionality in relation to contexts for which they generally expected surveillance to be present, such as theft prevention in stores or airport security: "Definitely surveillance with the camera, make sure no one's stealing anything.Maybe they are scanning, or if they're in the shop then it's by the entrance, maybe they're also scanning, make sure no one's stealing anything, trying to go past them.If they're in the airport, then they're probably definitely maybe even like facial recognition and making sure that everyone that's in the airport is like safe and not on some government list or any of that stuff." 
(P17) To some participants, there seemed to be a "limitless" amount of information that could be scanned or detected, and then tracked and analyzed by security robots from individuals in public spaces: "One trend I've seen in security writ large is facial recognition technology where you could have these security robots that, if they're hooked into some sort of database for felons, things like that. Where if they identify somebody they could stop them. Or not even necessarily felons. Maybe people get banned from stores for certain things, you have a picture of the person's face. Like old school where that used to be done manually, maybe you just add that to a database. So it could be tracking people, your face. It could be tracking... You could collect a lot of analytics to those sorts of things. People passing by in a certain area, how busy is an area. Then you can use that to scout how many robots you might need. Could obviously capture how many times it needs to be activated. How many different hot spots of activity, things like that. The possibility when you have something like that, that has camera and sound in a public space to capture information is pretty limitless."
(P7) However, participants were not always clear on how much the robot itself would be in control of its actions: "I would assume that it just collects data on its own and for the most part is relatively independent from people just walking by. It would be mostly controlled by some operator that's working for whoever is trying to gather information" (P6). Such references to an "operator" indicated that participants had a vague notion of how autonomous a robot is. In this and other moments during interviews, we saw participants "assuming" or wondering aloud to what extent a robot's actions might be predetermined in advance, how much control a human operator might have in real time, and when exactly a robot is intelligent enough to make decisions and act on its own. In Section 4.3, we discuss how this lack of clarity about the level of external control versus autonomy regarding the robot's actions was one of the main sources of concern for participants.

4.2 Perceived Benefits of Security Robots
As expected, none of our participants reported having knowingly encountered a security robot. The photo elicitation prompts therefore enabled them to reason as to a security robot's likely purpose and capabilities within various contexts-as well as reflect on the implications and potential consequences of these different uses for security robots.
Below, we describe themes around the uses that participants could imagine for security robots. Participants viewed the potential benefits of security robots as: (1) serving as a deterrent to crime even through their presence alone, (2) intervening to prevent or respond to events such as crimes, violence, or natural disasters, and (3) posing a lower threat of violence against women as compared to human security personnel.

4.2.1 Robot Presence Serving to Deter Crime
Participants viewed the presence alone of a security robot as a deterrent to crime: "I think if you were a criminal looking for a house, then this [robot patrolling a residential neighborhood (see Fig. 1)] would be something you could see from far away and pretty easily avoid maybe" (P5). While this perception may be influenced in part by the novelty of a security robot, it also matches the traditional role of a police officer or security guard patrolling an area, or Neighborhood Watch signs displayed in a residential community. Therefore, participants were generally comfortable with the use of security robots in public areas where they would expect a need to deter crime: "I think some places that come to mind would be really busy areas like train stations, or airports, maybe malls, potentially schools just because of the issue of school shootings in the U.S. ... I guess like banks or areas that are at risk of being robbed." (P3) Another example suggested was the use of security robots to protect individuals in the public eye: "I think if we did have robots, especially security ones, they should also be around famous people. So either state representatives or government officials, or celebrities and stuff. It would just help with their security." (P17) The presence of a robot by a playground was generally understood to be for the protection of children, who are especially vulnerable. Interestingly, however, one participant distinguished between the need to protect children from strangers in public, versus abuse by family members and others known to them, which tends to occur in private spaces: "I would understand why it's there [at the playground]. Mostly, it's because of the children. Because that would also deter any child abductions or even child abuse. I guess with child abuse, it wouldn't prevent it because things could happen without cameras watching, but at the very least, the playground could be a safe place for kids to play."
(P2) The much-discussed myth of "stranger danger" refers to abduction from public areas, which is so rare it has been estimated to occur a few hundred times in one year across the U.S. [22,73]. In contrast, each year 1 in 7 American children are affected by child abuse and neglect that occurs with family members and other trusted adults [23]. As P2 points out, a patrolling security robot cannot protect children from this much more real danger. This example raises the issue of whether the presence of security robots may contribute to factual understanding of actual risks, or unnecessarily raise surveillance and vigilance in line with mere misconceptions.

4.2.2 Intervening to Prevent or Respond to Events
In addition to proactive, passive patrolling, participants envisioned security robots taking action to prevent or respond to events. For example, a robot could threaten to report someone as a way of protecting a parking lot: "Would I feel more secure? Maybe.... To know that it could possibly be rooting out the bad people and keeping people who are trying to do me harm away. It would do that by saying, 'I'm capturing your picture and I'm reporting you to the authorities, you're not supposed to be here, and that's not your car. Please get away.' " (P13) Participants also anticipated that security robots would enable faster response by human law enforcement, as well as described interactive features through which citizens could ask for assistance: "Just because if something does happen, especially if it's nighttime and there's increased chances of crime, that someone's there and near and... It kind of depends on what they're doing. So if there's any surveillance, if there's video footage, I'm assuming security robots would have some button you could press to call the police there faster or something. You would assume that there's a faster response to whatever's going to go wrong or could go wrong.... If it's casual and there's just one or two of them, normal, it would, again, make me feel safer, especially with all... You hear about human trafficking and people just stealing stuff from cars and all that."
(P17) This participant interpreted the presence of one or two security robots as reasonable, calling it "casual" as if it would not be out of the ordinary, going on to say: "If there is a lot of them, it would raise a little bit of suspicion." This reaction appears in line with how one might interpret the presence of one or two human security officers as compared to many. Similarly, participants likened a security robot to a human security guard in that they would only expect to interact with either in the event that something was wrong, whether they or the robot initiated contact: "I think if it was specifically designed for security, then I would try to avoid it, unless I was like actively being harassed or targeted by a person on the premises, then I might go to the security robot to either try to get the person to go away or to report them to the robot or something like that." (P3) Just as a large number of security robots seemed disproportionate for an everyday location, some participants felt that facial recognition would be a disproportionately privacy-invasive technology in their daily lives. However, they discussed its potential benefits when deployed at scale in large spaces with a great number of people: "I would [feel comfortable], I think it depends a little bit on the space because having facial recognition just out and about on campus seems a little bit much for me, but I think at a concert where you have such a greater amount of people in such a small space then that to me would be reasonable."
(P5) In fact, this participant had experienced the use of such a technology, and their reaction was a mix of hesitation at not having enough awareness of the fact that it was in use, with an ultimate understanding of its purpose and appropriateness for that context: "I don't understand too much about it, but I went to a concert once and I think I read later on the news that at that concert they had attached to the TV screen or as part of a TV screen they had there that kind of scanned your face and saw if it... matched up with any mugshots or criminal record. So that's what I've encountered.... Part of me was shocked that that was even possible. I didn't know that could even be done, but another part of me was okay with it because there's been violent incidents at concerts before and there's been people with violent tendencies who've gone to concerts for the purpose of trying to potentially do something violent, so to me it was understandable." (P5) In addition to potentially preventing violence, security robots were envisioned as useful for responding to de-escalate violence once it has begun: "One place, somewhat paradoxically, I think you could actually benefit a lot from a robot, would be places where you would need almost minimal force to intervene in a situation. Maybe in a high school, something like that. Where if you have fights that might break out in there, you might have robots in that sort of a situation to disarm things. Or, heaven forbid, if you get into a school shooter type situation, if you could have robots that don't have to-They're bulletproof, it doesn't matter if they get sort of electrocuted, shot, or anything like that. They could potentially swarm somebody, bring them down, incapacitate them in some way. I think that could be a good spot. It's also a relatively contained environment in some aspects. A school is very set parameters, most of the architectures are the same. It's not like you're out walking the streets of Boston where there's a whole
lot of different things that are taking place in there." (P7) Participants also mentioned natural disasters as dangerous situations in which robots could help respond, so that fewer human responders can be in harm's way, but more lives could potentially be saved: "Right now we're fighting fires in California, but humans can't really go close to that because we have temperature ranges that we can't just, we can't go close to the fire. But if it was a robot, it could go closer. It could potentially save those people. If you have flying robots, and they're smaller than helicopters, you could get closer to the ground to spray water. And so [robots could] help with natural disasters-also evacuate people in times of hurricanes." (P17) Overall, if the number of robots and their capabilities were proportionate to the perceived scale of security needs, participants felt there could be reasonable uses for security robots to intervene, and that if an event were to occur, they could even be a source of help.

4.2.3 Robots Posing Lower Threat of Violence Against Women
Women participating in our study stated that they would feel safer around robots deployed for security as compared to human security personnel. They discussed their safety in public spaces in very general terms, alluding to the common experience of worrying about the threat of sexual harassment and other forms of violence against women [34,46]. To these participants, the threat of violence extended to humans charged with security, but interestingly not to robots, who were not perceived as posing the same threat. For example, P15 reflected on robots as being focused on their task, and not having any potentially nefarious intentions: "[Seeing a security robot would make me feel] safer, honestly, because I would know that the machine won't really... Because sometimes having a human being honestly doing-especially with women-doing these security things, I would feel more scared to have a human in a dark parking lot doing his job rather than having the machine. So for me, in this case, which is interesting, it's the first time that I think about it, I would feel safer if a machine is conducting this work.... It's just doing its job. It doesn't have any intentions. Any negative intentions." (P15)
Along the same lines, P4 viewed robots as not having desire, and therefore felt more comfortable with them performing a pat-down, which typically involves a human running their hands across a person's clothing to check for weapons or other items: "I don't know because I'm thinking if that happened to me [a robot pat-down] I'd be like, I'm not thrilled about this but okay. But then I guess you wouldn't have to worry about a robot groping you. So, I don't know. Less intrusively or less negatively, probably wouldn't grope people, because I don't know why you'd program something with desire. That doesn't make a ton of sense." (P4) For some women, their physical safety was a greater priority than the security of their information, so they preferred surveillance if it meant lowering the risk of violence. P12's concerns for her safety were not only as a woman, but also as one who spoke with a foreign accent in the university town where she lived in the U.S., as well as in Brazil where she had recently spent some time: "I mean, it wasn't safe [when I traveled to Brazil]. So, a woman walking alone, a foreigner, foreign accent. I think sometimes, I mean, you know I have an accent, those are concerns that I have in [my university town in the U.S.] too. It's not only concerns that I have in Brazil, [just at a] different level. So, I'm always thinking that I feel more safe if there is some sort of camera, if there is some sort of surveillance, and I'm more concerned about my safety and somebody, in case I'm attacked, kidnapped or whatever, versus somebody having information about me. I think I'm more concerned about safety, more so than information."
(P12) Security robots were also suggested for protecting women who have already experienced violence, for example those seeking refuge from an abuser at a women's shelter: "[Robots should be used] to protect people, let's say a homeless shelter, a women's shelter, or a food bank or something like that. At a food bank, not necessarily to prevent stealing food or anything like that, but in case something happened or if at a women's shelter, someone's abuser came and that sort of thing, I would want it to be used so that it'll help people instead of incriminate people.... I guess more in a passive way, not necessarily intervention in that moment by a robot." (P2) Importantly, P2 noted the difference between protecting people's safety, and incriminating those who may jeopardize that safety. As we will discuss in the following section, participants raised concerns about equity in whose safety would actually be protected with the deployment of security robots, and whose safety may be threatened.

4.3 Perceived Risks and Concerns with Security Robots
When participants expressed concerns about uses of security robots, these centered around four themes: (1) the potential for deepening social inequity, (2) unease about an unclear source of control, (3) the risk of malfunction or hacking threatening physical safety, and (4) the likelihood of biased judgments and discrimination.

4.3.1 Potential for Deepening Social Inequity
As reported earlier, in Section 4.2.1, security robots were perceived as a deterrent to crime through their presence alone. Participants would assume a robot was there to do a certain job in order to keep a location such as a playground or parking lot safe. Some participants described this as a benefit because it made them feel either neutral or potentially safer within that location.
However, the mere presence of a robot caused concern for other participants, who drew the conclusion that a robot must be present because there has been crime in the past, which would have necessitated the deployment of the security robot. This interpretation made participants feel less safe because it caused them to wonder what kind of crime they should be concerned about in that location, especially if it was not a location with which they were familiar, or in which they would not have expected crime: "[If I saw a security robot patrolling a parking lot] I might be a little concerned. [It would make me think,] 'did something happen here? Is this a not safe part of the city? Am I in a dangerous part of the city and I didn't know?' I guess because, if they feel like they need that extra security, it's because something happened. That's an assumption that I have, I know that's not inherent, but it seems to me you wouldn't just put it there for no reason and just in case, because that seems like something that'd be really expensive, and have a lot of time and effort put into. It'd be like something happened and we want to make sure it doesn't happen again."
(P4) Participants alluded to navigating different parts of a city given the significant disparities in the levels of crime from one part to another. In the U.S., this phenomenon can especially be seen in segregated big cities, where socioeconomic status and crime are interrelated consequences of a long history of inequitable social policy and investment that has favored certain neighborhoods over others [11,13]. As a result, residents and visitors alike navigate levels of crime and safety that can sometimes change from one street to the next. Participants therefore drew the conclusion that if a neutral location such as a park required a security robot, it must be due to the dangers of the surrounding neighborhood: "Because usually when there are more security in a place like a park, that typically means there's a reason for that increasing security. My mind would be that there must be more crime in this area or something like that." (P6) This type of interpretation suggests potential implications for how security robots may contribute to the segregation of cities and the prejudice that deepens inequity, if the presence of robots leads to assumptions about the public space, local area, or people. Moreover, prejudice, segregation, and white supremacy lead to real physical danger and violence for people of color. With our interviews having taken place in 2020 amidst a string of high-profile murders of Black Americans erupting in widespread Black Lives Matter protests, some interviewees made references to these cases and racial unrest. For P2, the image of a security robot patrolling a neighborhood brought to mind the murder of Ahmaud Arbery. A young Black man, Arbery was jogging in a Georgia neighborhood when he was chased down and shot by three local white men [15], who felt they were protecting their neighborhood and were later found guilty of federal hate crimes [4]. P2 feared that robots could be misused to similarly target harassment or violence against Black bodies: "I
wouldn't like it [a security robot going up and down the sidewalk in my neighborhood] because, again, this could be used for targeted harassment. I'm just thinking about Ahmaud Arbery, and this could be programmed, if it is in the wrong hands could be programmed to target like Black people, if this was a white suburb, or any person of color pretty much. And if I lived in this neighborhood, I would feel really uncomfortable knowing that I'm being watched and my neighbors were being watched." (P2) Another example that highlights the potential for deepening inequity is in the marginalization and criminalization of citizens experiencing homelessness. P2, whose quote in a previous section (4.2.1) stated they would understand that a robot at a park could be there to protect children, went on to qualify that at nighttime, when children are no longer playing there, they would have a different perception of the robot's purpose: "But at the same time, I know that parks at night are common places where homeless people sleep, so I can definitely see the security robot being used to surveil these homeless people, that would sleep on the bench or something, and potentially alert law enforcement.
So again, if it was something that was only during the day or while the park was open, I would maybe be okay with it, but it's still not my favorite." (P2) This participant raises the critical question of whose safety is being protected by the deployment of security robots in public spaces. The surveillance and policing of people who sleep in parks is not often for their safety, but rather driven by others' desire for comfort and security. Likewise, racism and white supremacy can lead some to feel as though they are threatened and acting in the interest of protecting their neighborhoods, when in reality they are violating the safety and civil rights of others. These scenarios exemplify the important decisions that must be made about whether security is truly the issue, or whether a robot or other intervention might play an alternate role to ensure equitable and just community safety while enhancing community cohesion. For example: How might robots improve access to services for those finding themselves homeless? Or: Can security robots be anti-racist?

4.3.2 Unease about Unclear Source of Control
Participants' lack of clarity as to how much autonomy a robot may have, and who is really in control, was behind one of the main concerns they discussed. For example, P4's confusion regarding the autonomy of a food delivery robot led them to describe the robot's behavior as "spooky": "I know because of the pandemic, there have been those automated, I don't know if it's DoorDash or Instacart, but they have little automated robot things that will carry your food to your house. Like pick it up. I don't know if those are controlled by somebody or if it just follows a predetermined route. I don't know. Spooky. I haven't seen one of those in real life though, it's only on like Instagram."
(P4) A robot delivering food is relatively small and non-threatening, but when considering security robots, the idea of not knowing who or what is in control was not just unnerving, it led participants to envision various adverse consequences. While discussing how she believed security robots work, P17 was one of several participants to make a reference to their common depiction in dystopian films: "I mean, there's electrical engineering, there's definitely software engineering. They're definitely not thinking. They probably took commands for, if this happens then this, and this, and this. And there's just a lot of if-else loops and all those like different programming codes and stuff. I mean, at some point they would develop their own thinking a little bit, or at least learn. But I don't think we have the technology yet, or at least I don't know of technology that would allow a robot to think for themselves. And I hope not, because every movie that shows that, it always ends in a bad situation." (P17) When imagining the robot in control, participants were concerned about errors; when imagining the human in control, participants were concerned about agenda. In this section, we detail risks perceived by participants regarding various different human entities (e.g., companies, governments, cities, law enforcement) being responsible for deploying and controlling a security robot. The two sections that follow then elaborate on issues of the robot being in control: in Section 4.3.3 we outline the perceived risks of malfunction or hacking, which were seen as potential threats to physical safety; and in Section 4.3.4 we describe participants' awareness that robots would be imbued with the same biases as the humans creating them, and consequently be capable of the same discrimination, profiling, and racism.
Participants pointed to different potential misuses across the private and public sectors, with no agreement about who might deploy security robots most responsibly. For example, P4 would not trust the federal agency Immigration and Customs Enforcement (ICE) or local law enforcement with security robots because he saw them as guilty of racial profiling and violence: "I don't have a satisfactory answer to [who should be in charge of security robots]. That I don't know. I'm sure there are people who would be trustworthy but I don't, I just wouldn't want ICE agents to have security robots, cause then they might just be like, 'oh you're brown, show me your passport, right now.' People don't carry their passports around with them.... Local law enforcement? No. Because I live in Tacoma, the city [in Washington state] where Manuel Ellis was murdered [in police custody], and local law enforcement is not. No." (P4) Similarly, P6 felt that the 'wrong hands' for security robots to be in would be governments that violate civil liberties, particularly when considering capabilities like facial recognition: "There's a lot of fears about facial recognition being in the wrong hands. Especially if the government has that capability, there'd be a lot of privacy concerns.... A lot of people think for example that China is spying on its citizens. And China is known for mistreating a lot of its citizens. I'm not going to go too much into the politics in that, but yeah. If it's in the wrong hands, if it's under a government that might not care as much about civil liberties, then it could be problematic. I think there's a lot of intricacies to that....
Spying on people that have different political beliefs, that would be a big no-no. I can understand it for maybe trying to find someone that is a dangerous criminal, but for something like maybe speeding, probably not the best application for it. And I feel like it has been used for things like speeding or running red lights in the past and it had a very negative reaction from citizens. And a lot of places in Europe stopped using it because of that." (P6) Within the public sector, therefore, applications of security robots would have to walk a fine line between perceived legitimate use (e.g., identifying dangerous individuals), while not being felt as over-surveilling innocent citizens in their daily lives (e.g., while driving). We also note, from P6's example, the influence of what people learn second-hand from uses of novel technologies in other countries, and how this may shape their considerations of the potential uses and implications within their own context.
Other participants, in contrast, were more trusting of government agencies than private businesses, because they viewed them as more likely to be ethical and accountable to citizens: "I would prefer that maintaining would be in government domain. Currently, obviously, law enforcement has its own issues that we're trying to work through as a society. But between the two, I would still say the government is probably more trustworthy on the whole than the private sector.... They're just after profits. They're not necessarily accountable to you as a citizen in a certain way. If you're just going to these public places and you know maybe a store wanted to install this because it has a 99% theft prevention rate or something like that. The onus of that sort of private security provider would not be to you as the customer or the citizen, it would be more to the people that are paying the money. So the businesses, the places they're securing.... Obviously they don't want to be creating giant death machines and things like that, that would not be good for their profits. But they don't also necessarily care the most about you as a consumer, how you're interacting with these things." (P7) P8 similarly felt that local (city) government was more likely to make decisions in service of the public, and when it came to private companies, their agenda was in question based on how they were looking to make money: "It sort of depends on the nature of the business. If it's a building you need to access, that's probably more okay than the company trying to sell something because there's personal information they can use to share email and send that to spam companies. And that's no good. Yeah, it's like the city, the reason they're there is to try to serve the public and then private companies, they might be good but they might not be. There's no way to kind of know that beforehand."
(P8) Participants were wary of how much their information is being used for advertising purposes. Multiple participants expressed concern with the general idea of "being profited off of," even if they did not specify direct adverse effects on their lives. For example, P2 alluded to their employer obtaining information about their behavior as a consumer, but primarily opposed the general use of targeted advertising on principle: "If I went to a certain store and then my job was able to-just anything that makes what I do available to other people. And especially if it's for profit and I'm being profited off of.... I don't think [we should have security robots at all]. I can maybe be supportive of it if it was protecting an unprotected class of people so that they're not being hurt or harassed. Or people who were harassing them would be documented and recorded. But if it was, again, for law enforcement or private company profit reasons, then I would oppose it.... An example of this is, I've seen, I think it was either in Japan or Taiwan, and there being a targeted billboard advertisement based on how it reads your face, and then... Even if it's not personal information but it being, 'Oh, you look like a 21 year old female.' This is the targeted advertisement. That makes me uncomfortable, even if it's not collecting data, even if it's just being used for that." (P2) In summary, to some, humans being in control of a robot and deploying one in support of a corporate agenda was most concerning: "It's not so much the robots themselves, I think it's just how they'll be used specifically by corporations is my biggest concern" (P1). In contrast, the next section describes why others were more concerned about the robot itself, its physical size and features, and the idea of it being in control and making its own decisions.

4.3.3 Risk of Malfunction or Hacking Could Threaten Physical Safety
In an earlier section (4.2.3), we discussed the perceptions of women participants who thought they would feel a greater sense of physical safety with a security robot than with human security personnel. The other participants in our study were likewise more concerned with their physical safety than with information security; however, they tended to highlight the risk to physical safety that could come from the robot malfunctioning or being hacked. The Knightscope robot we used in our photo prompts is approximately five feet tall, close to the size of an average human. Therefore, its size alone was viewed as having the potential to accidentally cause injury, as P2 described after having personally encountered a cleaning robot at Walmart: "It just cleans the floor and it just goes around. And it has a sensor, if there's someone then stops ... I think I've been crossing paths with one at Walmart and, not really [wanting to interact with it], kind of moving away from it, not really wanting to be around it because I'm afraid it'll just go crazy.... If it somehow malfunctioned and started going really fast or if it just didn't stop. If there was a toddler walking around and then-It wouldn't be a serious injury or anything, but it just freaks me out." (P2) When it came to security robots, participants also imagined they would be tasked with intervening, which posed the potential physical threats of being detained or unintentionally injured through force (as well as facing police violence, particularly for people of color, if police are called to the scene): "I think for me personally, my biggest fear would just be, what if it malfunctions and targets me for no particular reason?...
[It might] I guess maybe like record people and report their behavior to a police department or call the presence of human police. I guess it could be possible for a robot to physically detain you, but I would hope that it wouldn't get to that point until like any issues are really like ironed out. But I would definitely be more afraid if it was capable of physically detaining people or doing anything, but I would still be a little concerned about the implications of it reporting people to the police or something.... I also think about autonomous vehicles or like self-driving cars, that I know have like struck and like seriously injured people unintentionally. So I feel like the biggest margin of error would just be like really harming someone with unneeded force." (P3) Participants compared threats to their physical security with threats to their information security, and many decided that physical danger would be a greater concern, not only through malfunction but also in the event that a robot was hacked: "I think that they [security robots] would be useful, but it depends how hackable they are. Because depending on what they can do or what privileges they have, like if they're carrying any-Hopefully they aren't carrying weapons. But if they're carrying anything that could be used as a weapon, it would have to depend on how hackable they are. Especially today's day and age, mostly everything can be hacked and people are very careful about that.... So if they're carrying any sharp objects, they could hurt someone if they were being hacked. Or they could break things or just hurt the environment, hurt the people around them. If they're not carrying anything, but they have a camera and they are hacked, then it's easier to stop people. It's easier just to have privacy breaches, someone could almost look into your life, especially if you're being targeted and the robot's near you, follows you home, something like that."
(P17) Physical security threats were perhaps a greater concern because they present more imminent and obvious danger, whereas information security threats are less visible and often delayed in their direct impact on someone's life. Some participants specified that the physical form of the robot was not in and of itself perceived as threatening, unless the robot were to be put in the position of making calculated life-or-death decisions, akin to the ethical dilemma of the trolley problem [52]: "I mean to me the whole physical aspect of the robot is not that threatening. The more threatening part to me is the whole AI thing and the critical thought. The whole autonomous vehicle [threat]. If you're coming up to an intersection and a bus comes at you, and the car has to make a decision whether to either hit the bus, or go onto the sidewalk and hit a woman with a baby stroller? What decision is going to be made?" (P13) If security robots are tasked with physical security, participants felt this would inherently include some ethical decisions, which they did not view robots as capable of handling: "If we're talking about physical security, that's more concerning to me [than information security] in a lot of ways, having a robot for that kind of work [physical security guards, police]. So the thing about machines in my estimation, is they do sort of what they're designed to do very accurately and without any sort of consideration. There's not a lot of ethical judgment in there, and there's not a lot of, in a sense, room for ambiguity a lot of the time, in my experience. If you are empowering these machines to do a lot of things, I don't think we are sufficiently advanced in our understanding of AI algorithms to do that sort of thing safely, where it's just getting to be very... There's a big risk there, and I don't think we're quite far enough where we can trust it to handle that kind of a system."
(P7) Empowering security robots to make decisions was viewed as a risk because of the consequences of mistakes. Some pointed to the need for human supervision or oversight to ensure that innocent citizens would not face consequences as a result of error or misjudgement: "I think there always has to be a human supervising it, or looking at tapes and reviewing the results, and reviewing whatever happened, because the robot can't do everything. There still needs to be humans in charge... If the robot messes up, if you're just nervous to be around a robot, you wouldn't necessarily be punished. I don't know if that's the right word. There's just someone else that's making sure that the robot actually detected all of the things that it did, and it was for the right reasons." (P17) However, as we discuss in the next section, humans themselves make biased judgements, and many participants highlighted that robots will unavoidably act with the same biases as the humans who design them.

4.3.4 Biased Judgements and Discrimination. The idea of a robot being in control of making decisions, and then acting on them, was a key concern for some participants. The role of security robots would involve making judgements about what is safe, which is a nuanced concept susceptible to the bias of the humans who design and train them to make decisions. As P3 suggests, this could easily lead to discrimination such as racial profiling: "I think the first thing that comes to mind with security robots is, I just would be a bit fearful of how they decide what is, or isn't, a safe situation. Or what kind of person might be there to cause harm. I vaguely remember reading something about how search engines and different programs internalize the biases of people who design them. So I would be a little bit worried that security robots might make like specific people feel needlessly uncomfortable, that like the security robots might profile different people." (P3) The concept of safety often also involves intuition, which P17 pointed out that robots would be incapable of: "Also intuition is something that a robot can never really be taught. Sometimes people have a sixth sense. Like they feel like something bad is about to happen and there's no justification, it's not like there's a reason they feel that, they just do. And I don't think a robot can ever, say-Because they don't have feelings. So they can't really feel that something bad is going to happen." (P17) Therefore, even the possibility of robots being neutral or objective could be viewed as a downside, because it would render them incapable of such instinctual abilities to detect and react to security situations. On the other hand, P4 argued that robots cannot be objective, and that the unconscious bias of humans is likely to make its way into their programming: "You know how there are AI algorithms that will detect things in pictures that will be like, 'Oh, were your eyes open? Somebody was squinting. Why were their eyes closed?'
No, that's just an East Asian person. I would just worry about whoever designed or built this, their biases would sneak in. And then, I don't know, Islamophobia is pretty bad in the United States, and someone might be like, 'Oh, let's have them just detect abnormalities and head coverings.' And be like 'Ah, suspicious.' No, it's just a lady living her life.... Everybody has a bias, and it's just, you can't escape it even if you think you're writing the software that would be objective visual processing, there's no such thing as objective, and even if you're not a hateful person, you might just accidentally slip something in there that causes trouble for some innocent person." (P4) When asked whether they think robots should be used for security purposes or public safety, P1 noted the challenges of split-second decisions that can arise in such contexts, and questioned whether humans would be able to create effective robots for this purpose when it is so difficult for humans to do it themselves: "I don't think so, but then again I can't really make an argument that humans are really perfect for that job either. Especially with everything that's going on right now. All the police brutality, just had the [police killing of] Breonna Taylor happen yesterday, so obviously humans are imperfect in those regards, but since humans are programming robots to do this, I'm not sure how they would be able to judge a situation. I don't really know if humans are able, when they're put in a certain scenario, to make that split-second decision. How are they calculating, is it an algorithm? How does that work?" (P1) Given the timing of our interviews, difficult questions about policing and racial reckoning were top of mind for participants. With these complex issues unresolved, the concerns raised in our study suggest some progress should be made toward resolving them before adding robots to the challenging work of security.

DISCUSSION
Our goal in this research was to gain deeper insights into people's perceptions of security robots, including both perceived benefits and concerns. Overall, our study found that many attributes of security robots elicited both positive and negative reactions from participants. This quandary parallels many of the discussions around policing in general and policing with technology specifically [65]. These paradoxical views may make it difficult to build a consensus on the development and deployment of security robots, as we outline in this section.
One caveat we highlight with our findings is that we relied on photo elicitation rather than actual interactions with robots (as we noted in our methodological limitations in Section 3.4). While future work ought to take a positivistic stance in seeking out objective, representative reactions to security robots, this was not the purpose of our study. Our approach was interpretivist: to understand how people subjectively conceptualize security robots, how they form attitudes toward them, and what potential benefits and concerns arise about the imagined possibilities of their use. Our contribution regarding these perceptions is important for informing ongoing work designing and deploying security robots, so that by the time people are more likely to encounter them in real life (or studies with actual robot interaction become easier to execute), the issues raised by our study can be considered and accounted for in the many design and deployment decisions made along the way regarding the coexistence and collaboration of humans and robots. Moreover, future studies can be designed to evaluate robot interactions using the themes we have identified. As we discuss the key takeaways from our study below, we caution that we do not know the degree to which actual interactions with a security robot would have changed our participants' perceptions. For example, the physical design attributes and presence of a security robot in real-world spaces and situations may have changed participant attitudes. Our photos showed only one design, the Knightscope K5 security robot, but participants may have been more or less concerned about the use of security robots around them if they had interacted with other robot designs, such as small and cute versus large and intimidating.

Tensions Between Perceptions of Safety and Concerns over Injustice and Abuse
Our study extends the literature on security robots by identifying conflicting views on their use with regard to the wider public and specific groups. We found mixed or conflicting opinions on the use of security robots in public spaces. On one hand, many participants indicated that the presence of security guards would deter crime. Security robots were viewed in the same manner as police officers and security guards patrolling neighborhoods. Participants also viewed them as ways to contact actual police for help if needed. In this view, security robots were stand-ins for, or extensions of, existing policing approaches. This view seems to reflect a sense of trust and comfort with existing policing approaches in certain contexts, such as airports or events with many people, and sees security robots as potentially new forms of their embodiment.
On the other hand, participants were also concerned with the potential targeting of marginalized communities and the perpetuation of existing biases. This view sees security robots as vehicles to further segregate and target racial and economic minorities, seemingly reflecting a sense of distrust and discomfort with existing policing approaches with regard to social equity and justice. As such, this view sees security robots as a potentially new form of existing oppressive policing policies. This may come as a surprise and disappointment to many who thought "RoboCops" might provide an opportunity to alleviate racial bias in policing [47]. This indicates that design efforts should consider existing perceptions of and interactions with human security officials, as well as policing expectations for the specific context of deployment, to inform how these might extrapolate to services involving security robots. Future research should investigate in greater depth where perceptions of human and robot security converge and diverge, in order to shape the design and deployment of security robots. These issues are likely to be vital to informing local and national policy on security robots for policing. For example, [76] studied different stakeholder views on the use of robotic technology by metropolitan law enforcement in public spaces. Their study yielded several policy recommendations on when and where police departments should or should not be allowed to deploy security robots in public.
This study supports prior research finding that women are more inclined to accept security robots, but also goes beyond it by identifying the reasons why. Several studies have provided some evidence that women are more likely to accept security robots [25,49,75]. According to our study, this was because women thought they would feel safer with security robots than with their human male counterparts. For women, who are more likely to face violence in public spaces [25], our findings indicate that robots may be viewed as posing less risk of violence than human security personnel. While the complexity of human behavior includes abuses of power, paradoxically by those charged with protecting safety, our participants perceived robots as focused only on their programmed task(s) and lacking the socially constructed power dynamics and sexual behavior that underlie gender-based violence. More specifically, the women in our study felt they were less likely to be sexually harassed by security robots, which made them more comfortable with pat-downs. Other scenarios mentioned were women wanting surveillance to reduce the likelihood of being attacked in public spaces, and protection for women seeking shelter from domestic violence. The implication of this perception is that robots may provide a greater sense of physical safety for women, and potentially other populations who carry a disproportionate risk of violence.

Who is in Control? Need for Clarity About a Security Robot's Level of Autonomy
Autonomy, either too much or too little, emerged as a major concern throughout the study. This study therefore highlights the concerns humans have when security robots have either a high or low degree of autonomy, which may have much to do with the authority allocated to them. For example, prior studies found that humans tend to either respect the authority of security robots by following their directions or, if they do not respect their authority, report feeling unsafe around them [1,28].
This study identifies a new concern with a high degree of robot autonomy: the lack of clarity on how much autonomy a robot possesses. In the human-robot interaction literature, high robot autonomy has been shown to engender both positive and negative reactions from users. From a positive perspective, high levels of robot autonomy have been linked to perceptions of more intelligent robots, ease of use, and adaptability [16,60,63]. From a negative perspective, robots with high levels of autonomy can also be viewed as a physical threat to human safety [79]. We found similar reactions, with our participants indicating concern about their physical safety around highly autonomous security robots. In addition, there seems to be a genuine concern about potential malfunctions of the robot leading to physical harm to the public. This is problematic because, unlike other robots, security robots may have both the authority and the need to physically engage with the public to perform basic policing functions like detaining suspects [50]. Despite this, security robots are already being developed and deployed around the globe to use force against suspects or control crowds by employing tasers and dispensing tear gas [3,20]. This suggests that security robots should include effective safeguards to prevent them from physically harming the public. What was new in our study was that participants wanted to know upfront the degree of autonomy a robot has. There is a need to clearly communicate to the public how much autonomy a security robot has; the lack of clarity seems to be as problematic as the degree of autonomy itself, and led participants to describe robot behavior as "spooky."
Security robots with a low degree of autonomy also engendered concerns from participants. According to many participants, teleoperated robots represented the lowest-autonomy security robots. This study identifies a deeper concern about who is actually in control of teleoperated security robots. Participants were most concerned about not having any knowledge at all regarding who was in control of the security robot. For example, some participants wanted to know if security robots were being operated by profit-seeking companies or non-profit public organizations. Research on healthcare robots demonstrated that humans trusted teleoperated robots more when they were allowed to see the human operator [39]. Teleoperated security robots may also engender more acceptance from the public if people are provided with information about the individuals who operate them. This also provides an opportunity to let the public see the qualifications of the operator, which could further engender trust. Members of a marginalized community seeing that someone from their own community was an operator may also engender more trust and cooperation.
However, this may do little to address another of our participants' concerns: the potential of the security robot being hacked or hijacked. This concern is not unfounded, as shown by the Dallas police department's use of a teleoperated robot to kill an armed suspect [61]. This was accomplished by strapping plastic explosives to the robot and remotely guiding it to the suspect's location. One could easily imagine a remote-controlled robot being hijacked, which would leave the general public in a compromised position [8]. For example, members of the public would not know whether they should obey the commands of the robot and risk physical harm, or disobey and risk legal action.

Context- and Use-Dependent Acceptance of Security Robots and Surveillance
There is a growing concern about the use of surveillance technology and individual privacy, especially in policing [48]. These concerns are only growing as such data is being used in criminal trials [72]. These concerns also emerged during our study. Some participants were realistic about the use of security robots for surveillance and in some cases discussed the benefits of such surveillance within specific contexts with heightened security needs, such as airports, concerts, stores, or even schools. Participants cited the deterrence of crime and reduction in theft as benefits, for example. Participants also acknowledged that security cameras were already collecting similar data. However, this acceptance of surveillance was strongly context-dependent. Participants expressed discomfort with security robots patrolling and surveilling residential neighborhoods or public spaces that were not at a heightened risk of incidents. An illustrative example is the discussion about the appropriateness of security robots on playgrounds or in public parks: participants felt that during the day a security robot might provide safety for children, but that if security robots were deployed at night the main motivation might be to displace people who are homeless.
An interesting aspect of participants' thoughts around surveillance was that they seemed to clearly differentiate between the use of security robots for public surveillance of non-identifiable data, and their use for collecting identifiable data which could be used to personally track and monitor specific individuals. In particular, participants mentioned the use of facial recognition technology by security robots as a major concern. This seems to highlight a perceived difference between technology that is used to detect and deter bad behavior, and technology that is used to target and track specific individuals regardless of whether they are engaged in criminal behavior. There was also explicit concern about the biased use of facial recognition technology. Such concern is urgent to address in light of several instances in the news where Black citizens were misidentified as wanted criminals due to biases in facial recognition systems that incorrectly identify people of color [48]. Therefore, people have real and warranted concerns that the use of security robots for surveillance can lead to the unjust arrest of people of color.
These findings underline the importance of carefully considering the context in which security robots would be used, and the need to seek community input before security robots are deployed. Such input should shape deployment parameters and patrolling protocols, as well as rules for engagement and robot autonomy, in ways that align with safety needs expressed by affected communities rather than merely automating and expanding policing and surveillance infrastructures. An important aspect of this would be to clearly define the purpose of a security robot and what kinds of behaviors, rather than individuals, a security robot is intended to deter or detect.

CONCLUSION
With the increasing use of security robots in various contexts, this study fills an important gap in understanding public perceptions of what this technology can do, what its benefits might be, and what concerns people have about its use. We conducted 17 semi-structured interviews with people in the United States, involving photo elicitation prompts to help them envision environments in which they might encounter a security robot. First, the presence of a security robot evoked a sense of increased safety for some, but to others it signaled entirely new ways that marginalized and oppressed groups may be harmed by injustice. One exception was women participants, who move about the world with caution for their physical safety, and who felt a robot would pose less threat of harassment or violence than human security personnel. Second, participants felt unease about not knowing whether a robot was in control in a given moment: whether it was making decisions about its actions, merely executing pre-determined actions, or to some extent controlled remotely by a human operator. Moreover, participants had concerns about human agendas behind security robots, whether they might be deployed and controlled by government agencies or private for-profit businesses. Third, participants considered the presence of and surveillance by security robots appropriate in certain contexts with recognized heightened security risks, but were wary of the potential for security robots to be used to automate or expand surveillance infrastructures, with the detriment perceived to fall especially on already marginalized communities. In particular, the use of security robots with facial recognition to identify and track individuals was considered inappropriate, whereas identification and deterrence of certain behaviors was considered more acceptable. Our findings highlight the importance of seeking community input and involvement before security robots are deployed to ensure that their goals and operation are
clearly defined and constrained in ways that meet the community's safety needs.

Fig. 1. Examples of the prompts used in the photo elicitation part of the interview. A Knightscope K5 robot is placed in different contexts (airport, left; residential neighborhood, right) to elicit participants' thoughts about the presence of security robots in different environments.

Table 2. Summary of Findings

Proc. ACM Hum.-Comput. Interact., Vol. 7, No. CSCW2, Article 322. Publication date: October 2023.