Autonomy Acceptance Model (AAM): The Role of Autonomy and Risk in Security Robot Acceptance

The rapid deployment of security robots across our society calls for further examination of their acceptance. This study explored human acceptance of security robots by theoretically extending the technology acceptance model to include the impact of autonomy and risk. To accomplish this, an online experiment involving 236 participants was conducted. Participants were randomly assigned to watch a video introducing a security robot operating at a low, moderate, or high level of autonomy and presenting either a low or high risk to humans. This resulted in a 3 (autonomy) × 2 (risk) between-subjects design. The findings suggest that increased perceived usefulness, perceived ease of use, and trust enhance acceptance, while higher robot autonomy tends to decrease acceptance. Additionally, the physical risk associated with security robots moderates the relationship between autonomy and acceptance. Based on these results, this paper offers recommendations for future research on security robots.


INTRODUCTION
The advancement of robotics has paved the way for autonomous systems in various aspects of our lives, with one such domain being security [3,54,56,92]. In the context of this paper, an autonomous system is one that has the ability to act independently to accomplish its task. Security robots, equipped with sophisticated sensors, intelligent algorithms, and decision-making capabilities, have emerged as a promising solution to address the growing complexities of safeguarding private and public spaces [54,81]. However, their deployment is not without challenges. As security robots become increasingly autonomous, the intricate relationship among their autonomy, risk, and acceptance becomes a focal point of inquiry, necessitating an examination of the concerns that underpin their deployment.
The escalating autonomy of security robots presents a dual-edged sword: on one hand, it promises improved efficiency, adaptability, and precision in security operations; on the other, it raises intricate questions about liability, decision-making transparency, and the potential for unintended consequences [54,92]. These concerns are becoming all the more pressing in light of the increasing role security robots play in our lives [90,92]. As these autonomous agents become more prevalent in our surroundings, it is imperative to examine not only their technical capabilities but also the intricate interplay between their autonomy and the level of acceptance they garner from the broader society [78,80].
To address these issues, we focused on three levels of autonomy: low autonomy, with full human control; moderate autonomy, with hybrid control in which a human monitors and takes over when needed; and high autonomy, with a fully autonomous robot. We also focused on two levels of risk: low risk, with the robot observing and reporting only; and high risk, with the robot physically intervening to stop criminal activity. We propose a research model that extends the traditional Technology Acceptance Model (TAM) to incorporate the effects of robot autonomy and risk on acceptance.

Security Robot Acceptance
Robot acceptance has been an important topic in the field of human-robot interaction (HRI) [1,4,13,23,33,41,46,61,71]. It can be generally defined as the extent of people's intention or willingness to use a robot [12,18,59]. As a human-related variable, researchers adopt robot acceptance as an important measure of dynamic human-robot relationships. For example, Babel et al. [4] used human acceptance to measure the impact of conflict resolution strategies and compliance behavior in service robots. Choi et al. [13] investigated the impacts of intergroup relations and body zones on human acceptance of a vacuum cleaning robot. Lin et al. [46] explored the effects of robot designs on consumer acceptance. Heerink [33] focused on social assistive robots, examining how demographic factors like age, gender, and education influence robot acceptance among older adults. Additionally, Esterwood et al. [23] delved into the impact of human personality on robot acceptance. Overall, it is critical for technologies such as security robots to gain public acceptance in order to facilitate their deployment in public spaces and foster successful security operations.
Existing literature has explored specific factors that influence the acceptance of security robots, including factors related to humans and factors associated with the security robots and their application domain [27, 37, 50-52, 74, 92, 95]. Human-related factors include gender [27,54,95], personality [51], and affection [37]. For example, individuals who self-identify as female have a significantly higher intention to use a security robot than those who self-identify as male in hospital and college campus settings [27]. Lyons et al. [51] discovered that personality traits such as extroversion, agreeableness, intellect, and high expectations are significantly correlated with people's public use intentions and military use intentions of security robots. Jessup et al. [37] found that people who experienced positive affect were more likely to accept security robot technology.
Robot-related factors include autonomy [50], robot gender [74], robot type [92], and stated social intent [50]. Yet, to the best of our knowledge, these elements have only been examined in their respective individual studies, and most have been found to exert a non-significant impact on security robot acceptance. For example, robot autonomy was examined in only one study [52], which found that whether a security robot is fully autonomous or not does not significantly change people's desire to use the robot. This was also the only study [52] that reported the influence of a security robot's stated social intent on acceptance, finding it to be non-significant in affecting people's desire to use the robot. Little attention has been directed at examining these important security robot factors. This may partly be because of the complexity and ethical considerations associated with conducting such research [2]. Considering the profound importance of understanding security robot acceptance, it is vital to delve more deeply into attributes such as robot autonomy.
Simultaneously, it is important to consider the boundary conditions impacting security robot autonomy, with risk being a potentially significant factor. Risk can be defined as the potential negative consequences or dangers to humans that may arise from the deployment of, operation of, or interaction with a security robot. During actual deployments, security robots may pose varying degrees of risk to humans depending on their actual tasks. Some tasks might mitigate the risk by avoiding direct physical contact, thereby reducing the danger, while others involving direct contact could pose a greater risk. Marcu et al. [54] conducted a qualitative study to understand people's perceptions of security robots in public spaces and identified a primary concern as "the risk of malfunction or hacking threatening physical safety." Compared to information security threats, the researchers found that physical security threats were a more pressing concern to participants because of the more imminent and evident dangers. Participants also expressed particular apprehension about their physical safety in the presence of highly autonomous security robots. Given that the degree of risk could be an important moderator in human-robot interaction [94], it becomes necessary to explore the potential role of risk as a moderating factor in evaluating robot autonomy.

RESEARCH MODEL AND HYPOTHESES
In our study, we propose the Autonomy Acceptance Model (AAM), delineating how trust, perceived usefulness, and perceived ease of use increase acceptance of the robot, while robot autonomy decreases it. Furthermore, the relationship between autonomy and acceptance is moderated by the risk of danger. A condensed overview of these arguments is displayed in Fig. 1.

Technology Acceptance and Trust
Perceived usefulness and perceived ease of use are the two major determinants of whether technologies are accepted [19]. These factors are the primary components of the technology acceptance model (TAM) [18,19,32,39]. TAM has gained prominence across several disciplines for its parsimony [32,39], general validity, and predictive power [6,40]. Central to TAM is the idea that perceived usefulness and ease of use act as two beliefs that lead humans to establish a specific attitude toward using technology. This attitude determines whether humans will engage in actual use [18,19,32,39].
Perceived usefulness is defined as "the degree to which a person believes that using a particular system would enhance his or her job performance" [18, p. 320]. The perceived usefulness of a technology is theorized to significantly influence its acceptance, with higher perceived usefulness increasing the likelihood of a technology being accepted and used [19]. Perceived ease of use, on the other hand, is defined as "the degree to which a person believes that using a particular system would be free of effort" [18, p. 320]. Similar to perceived usefulness, the greater perceived ease of use a technology garners, the more likely said technology is to be accepted and used [19]. Together, these perceptions make up the core components of TAM, but they are not the only factors that can lead humans to accept and use technologies.
Trust is another factor associated with technology acceptance, often accompanied by perceived ease of use and usefulness [11,28,66,68,87,91]. Trust can be defined as the "willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party" [55, p. 712]. Trust has been observed as an important predictor of technology acceptance [28,83,96]. This is because trust allows humans to simplify the potential consequences of use by subjectively ruling out undesirable yet possible outcomes [28]. As a result, people who do not trust technologies tend to disuse them, which in turn indicates a lack of acceptance and prevents use [43,96].
In the context of security robot acceptance, the proposition that perceived usefulness, ease of use, and trust predict acceptance has yet to be directly assessed. Instead, scholars have primarily examined factors that lead to trust/trustworthiness and/or use intention directly [9,27,47,49,52,64,74,84]. This, however, assumes that TAM and related acceptance constructs are as consistent in the security robot domain as they appear in other technological domains. For example, [27] and [51] examined how human gender and personality can impact subjects' desire to use and their trust in security robots. Their findings indicated that both humans' gender and their agreeableness and intellect/imagination have significant impacts on trust and the desire to use security robots. In addition, [74], [9], and [84] examined how a security robot's gender may influence trust in and acceptance of said security robot. Generally, both [74] and [9] found that gender was not significantly influential. [84], however, showed that subjects had a higher perception of usefulness, ease of use, and overall acceptance of male-signaling security robots than female-signaling security robots. Neither this study nor any others we identified in the security robot literature assessed the implicit assumptions of TAM. Instead, these studies have mostly assumed that perceptions of usefulness and ease of use, as well as trust more broadly, are predictive of acceptance.
In general, this common assumption, that perceived usefulness and ease of use along with trust predict acceptance, is reasonable. This is the case because numerous findings across the technology acceptance literature support this theoretical framework [11,18,19,28,32,39,66,68,87,91]. Indeed, the results of a meta-analysis on TAM conducted by [91] indicated that, across 136 studies examining various technologies, the core constructs of perceived ease of use and usefulness are indeed significant predictors of acceptance. In addition to these constructs, trust also appeared as significantly predictive, lending additional support to its inclusion in technology acceptance frameworks [91]. Indeed, findings in HRI related to socially assistive robots for older adults also generally validate this approach, with perceived usefulness, ease of use, and trust each appearing as significant facets of acceptance [34]. Based on these works and broader support across the acceptance literature, we therefore hypothesized the following: H1: Greater (a) perceived usefulness, (b) perceived ease of use, and (c) trust increase security robot acceptance.

Autonomy and Acceptance
Based on prior literature, we assert that security robot autonomy should decrease robot acceptance. Increases in autonomy would decrease acceptance primarily because of concerns about potential errors and the human's diminishing sense of being in control [26,85]. In scenarios where human well-being is affected by unpredictable elements within the surroundings, such as highly autonomous robots, people's confidence in their power to control the environment would decrease [42]. Norman [62] also posited that the presence of highly autonomous agents can elicit negative emotions by fostering a sense of lost control. In addition, autonomous robots can be perceived as threatening, posing both realistic and identity-based threats [24,97]. Therefore, as security robot autonomy increases, so should the fears and concerns humans have over their use.
Previous literature has shown a strong negative relationship between autonomy and the acceptance of highly autonomous technology [16,30,35,73,97]. For example, research has found a general reluctance to accept vehicles as their degree of autonomy increases [35,73]. This is because humans are concerned about possible malfunctions and the lack of human supervision to prevent or handle these malfunctions. This issue is of particular concern for security robots. For instance, the New York Police Department canceled its contract with Boston Dynamics in response to backlash from the use of its Digidog, an autonomous security robotic dog that sparked fears among the general public [63]. Similarly, Knightscope security robots deployed at LaGuardia Airport led both passengers and security personnel to report concerns about their "creepy" use [60]. Therefore, we hypothesized: H2: Increasing the degree of security robot autonomy decreases security robot acceptance.

Risk, Autonomy, and Acceptance
The preceding hypothesis predicts that robot autonomy will decrease the acceptance of security robots and possibly increase negative attitudes toward them [97]. In this section, we propose that the potential risk imposed by the robot through observation (low risk) or intervention (high risk) can moderate the impact of robot autonomy on robot acceptance.
One critical factor that significantly influences our acceptance of robots is the concept of risk [22]. Robots, particularly those with varying levels of autonomy, introduce a spectrum of risks. The tension between the allure of technological advancement and the fear of unintended consequences lies at the heart of the risk-acceptance dynamic in human-robot interaction. Risks associated with robots encompass the potential for physical harm through either mechanical failures or accidents [93], including physical safety concerns such as the potential for accidents and collisions [58].
The interaction between robot autonomy and the actual risk of danger plays a pivotal role in shaping the acceptance of robots. This dynamic relationship can be explained by considering how humans evaluate and respond to robots with varying levels of independence.
When individuals face a low risk of danger in the robot's activities, they may be more accepting of higher levels of robot autonomy because they believe the consequences of errors are minimal or manageable. However, when the risk of danger is high, individuals become more cautious, and higher levels of autonomy are likely to reduce acceptance. In these cases, humans may prefer robots with lower autonomy levels that they can easily control or intervene with if necessary. This moderation effect underscores the significance of context and task-specific risk assessments in determining the acceptance of robots, particularly in applications where safety and security are paramount.

H3: The degree of risk moderates the impact of autonomy on security robot acceptance, such that the negative impact of autonomy on acceptance is stronger in higher-risk situations than in lower-risk situations.

METHOD
To examine our hypotheses, we conducted a 3 (autonomy level: low, moderate, or high) × 2 (risk of danger: low or high) between-subjects online experiment. Participants were randomly assigned to one of six conditions, watched a video clip about a security robot, and responded to survey items related to the robot. This study was approved by a university institutional review board.
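To make the design concrete, the sketch below illustrates one way to implement balanced random assignment to the six cells; the function and variable names are ours and are not taken from the study materials.

```python
import random

# The two manipulated factors and their levels (3 x 2 between-subjects design).
AUTONOMY_LEVELS = ["low", "moderate", "high"]
RISK_LEVELS = ["low", "high"]
CONDITIONS = [(a, r) for a in AUTONOMY_LEVELS for r in RISK_LEVELS]  # six cells

def assign_conditions(participant_ids, seed=0):
    """Randomly assign participants to the six conditions,
    keeping cell sizes approximately balanced."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Cycling through the six cells gives each cell ~n/6 participants.
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

assignments = assign_conditions(range(240))  # 240 recruited participants
```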

Participants
We recruited 240 participants from CloudResearch's Connect platform [15]. Participants were asked to finish an online questionnaire, which took 6-15 minutes to complete. Participants received compensation of $4.50 (USD) on average, and those who took longer to complete the survey received up to $5. All participants met the inclusion criteria: at least 18 years old, fluent English speakers, and based in the United States. Four participants were excluded from the final analysis because their overall questionnaire scores deviated more than 2.5 standard deviations from the mean. This left us with a valid sample of 236 participants, comprising 117 females, 113 males, 3 identifying as gender variant/non-conforming, 1 opting not to self-describe, 1 transgender female, and 1 transgender male. Participants' ages spanned 18-72 years (M = 37, SD = 11.82). Geographically, participants hailed from various US regions: 37% from the South, 18% from the Midwest, 24% from the West, and 21% from the Northeast. Ethnically, the sample was diverse, with 13% identifying as Asian or Asian American, 14% Black or African American, 8% Hispanic or Latin American, and 64% White or Caucasian.
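A minimal sketch of this exclusion rule, assuming responses sit in a pandas DataFrame with one row per participant (the column name overall_score is hypothetical):

```python
import pandas as pd

def exclude_outliers(df: pd.DataFrame, score_col: str = "overall_score",
                     threshold: float = 2.5) -> pd.DataFrame:
    """Drop participants whose overall questionnaire score deviates more
    than `threshold` standard deviations from the sample mean."""
    z = (df[score_col] - df[score_col].mean()) / df[score_col].std()
    return df[z.abs() <= threshold]

# Applied to the 240 recruited respondents, this rule removed 4,
# leaving the 236 participants analyzed in the paper.
```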

Stimuli, Task, and Procedure
During the experiment, participants first completed a preliminary questionnaire to provide demographic information. Following this, they watched a two-minute video about a security robot, Knightscope K5 (see Fig. 2). This video featured a news report where a reporter and a hotel manager introduced Robbie, a new security robot deployed in the hotel's parking lot. The video showcased Robbie's functions and daily operations.
We produced six distinct videos, which were edited from a news clip [38], each corresponding to a unique combination of variables (videos are provided: https://anonymous.4open.science/r/paper-C41E). Depending on the condition to which the participant was randomly assigned, the reporter's description of the security robot varied in the videos. Each participant viewed a video with one of six study scripts (provided in Table 1). The reporter's background audio was consistently recorded by the same researcher. Immediately after the video, participants were given a written description of the autonomy and risk associated with the security robot as presented in the video. They were then prompted to answer questions about Robbie, the security robot they had just observed. Participants were free to withdraw from the study at any time and for any reason.

Experimental Design
The study manipulated two independent variables: the autonomy level and the risk of danger associated with the security robot. Table 1 offers an intuitive display of the research design.

Autonomy.
Participants were exposed to one of three levels of robot autonomy. Each level was initially introduced in the video and subsequently reinforced in a written description provided after the video. In the high autonomy condition, the robot was described as fully autonomous. In the moderate autonomy condition, the robot was portrayed as semi-autonomous, with a human operator monitoring and potentially taking control of the robot. In the low autonomy condition, the robot was not autonomous and relied entirely on a human operator's control.

Risk of Danger.
Participants were presented with one of two risk conditions, both in the video's background narrative and in the subsequent written description. In the high-risk condition, the robot could intervene and engage in physical contact with humans if needed. Conversely, in the low-risk condition, the robot acted as an observer, reporting emergencies and requesting assistance when appropriate.

Manipulation Check Measures.
To assess the efficacy of the manipulation, each participant was questioned about the autonomy level of the security robot before answering video-related inquiries. All participants accurately identified the robot's autonomy, confirming the successful manipulation of autonomy. Furthermore, we evaluated participants' perceived risk to verify the manipulation of the risk of danger. Perceived risk was assessed using a 4-item scale adapted from [77] and [69]. Example items include "I believe that there could be negative consequences when using security robots." and "Security robots will have defects in technology and machines." The reliability of this 5-point scale was confirmed (α = 0.85). We employed analysis of variance (ANOVA) to study the influence of risk conditions on perceived risk. The alpha level was set at 0.05 for all statistical tests. Results indicated that the perceived risk among participants was significantly higher in the high-risk condition (M = 2.84, SD = 0.90) than in the low-risk condition (M = 2.47, SD = 0.99) (F = 8.77, p = .003, η² = 0.036).
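As an illustration, this kind of manipulation check could be reproduced with a one-way ANOVA in Python; the data layout and column names below are assumptions on our part:

```python
import pandas as pd
from scipy import stats

def manipulation_check(df: pd.DataFrame):
    """One-way ANOVA comparing perceived risk (the mean of the four
    5-point items) between the low- and high-risk conditions."""
    low = df.loc[df["risk_condition"] == "low", "perceived_risk"]
    high = df.loc[df["risk_condition"] == "high", "perceived_risk"]
    f_stat, p_value = stats.f_oneway(low, high)  # with two groups, F = t**2
    return f_stat, p_value
```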

Control Variables.
We gathered demographic data including participants' age, gender, region, and ethnicity.

Dependent Variables.
We assessed trust using a 3-item, 7-point Likert scale questionnaire adapted from [72]. Perceived usefulness was gauged using a 5-item, 7-point Likert scale derived from [88]. Perceived ease of use was evaluated with a 4-item, 7-point Likert scale adapted from [18].
We utilized an adapted version of the active use scales created by [67] to gauge participants' acceptance of the security robots. Participants rated each statement on a 7-point Likert scale, spanning from "strongly disagree" (1) to "strongly agree" (7). The phrasing of the items was modified to refer to robots instead of individuals. These items included: "I will interact with security robots in the future if possible," "I am not reluctant to interact with security robots if possible," "I will acquire a security robot if the opportunity presents itself," and "I am open to utilizing security robots as part of my security measures if possible."
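For readers implementing similar measures, the sketch below shows one common way to turn Likert items into construct scores, including reverse-coding on a 7-point scale (all column names are hypothetical; the paper's third ease-of-use item was reverse-coded, as noted under Measurement Validity):

```python
import pandas as pd

def reverse_code(item: pd.Series, scale_max: int = 7) -> pd.Series:
    """Reverse a Likert item, mapping 1 <-> 7 on a 7-point scale."""
    return (scale_max + 1) - item

def composite(df: pd.DataFrame, items: list[str]) -> pd.Series:
    """Average item columns into a single construct score."""
    return df[items].mean(axis=1)

# Hypothetical usage:
# df["eou_3"] = reverse_code(df["eou_3"])  # the reverse-coded item
# df["ease_of_use"] = composite(df, ["eou_1", "eou_2", "eou_3", "eou_4"])
```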

RESULTS
In this section, we detail the findings of our study. We employed Partial Least Squares Path Modeling (PLS-PM) using SmartPLS 4 [70] to evaluate the hypotheses. PLS-PM is a statistical technique within structural equation modeling (SEM). It serves as a path analysis model, depicting relationships among latent variables by illustrating directed dependencies [21,31]. This modeling approach not only traces the influence of an independent variable on a dependent one but also evaluates the relative strength of this impact [48,70].
The variable measuring autonomy was ordinal, with values ranging from 1 to 3. Age, gender, region, and ethnicity were found to be non-significant factors and were excluded from the final model. Figure 3 presents detailed results for the final model, including standardized path coefficients (β) for each respective path and the variance explained (R²) for the dependent variable.

Measurement Validity
We utilized factor analysis to assess structural validity and found that all items loaded at 0.7 or above on their corresponding constructs, except a reverse-coded item (the third item from the ease-of-use questionnaire) and a low-loading item (the second item from the acceptance questionnaire), which were therefore removed.
To evaluate both discriminant and convergent validity, we utilized the square root of the Average Variance Extracted (AVE) values. As proposed by the Fornell-Larcker criterion [25], an AVE value surpassing 0.5 signifies good convergent validity of the variables. Moreover, to establish discriminant validity, the correlations among constructs ought to be less than the square root of the respective construct's AVE. Our data illustrated that the AVEs for acceptance, perceived ease of use, perceived usefulness, and trust stood at 0.83, 0.79, 0.88, and 0.85 respectively, all exceeding the 0.50 threshold, thereby indicating good convergent validity. Concurrently, as depicted in Table 2, the correlations among the variables were below the square roots of their individual AVE values, which denoted adequate discriminant validity. Moreover, we evaluated the reliability of the measures and discovered that all the questionnaires' Cronbach's α values exceeded the recommended 0.7 benchmark [10], denoting high reliability: trust (α = 0.91), perceived usefulness (α = 0.97), perceived ease of use (α = 0.87), and acceptance (α = 0.90).
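These reliability and validity statistics follow standard formulas, sketched below for reference; the inputs are item responses and standardized loadings, and nothing here is specific to the paper's data:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances
    / variance of the total score), for k item columns."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def ave(loadings: np.ndarray) -> float:
    """Average Variance Extracted: mean of squared standardized loadings."""
    return float(np.mean(np.square(loadings)))

def fornell_larcker_ok(aves: dict, corr: pd.DataFrame) -> bool:
    """Fornell-Larcker criterion: each construct's sqrt(AVE) must exceed
    its correlations with every other construct."""
    for construct, value in aves.items():
        others = corr[construct].drop(construct)
        if not (np.sqrt(value) > others.abs()).all():
            return False
    return True
```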

Hypothesis Testing
Hypothesis 1 posited that (a) usefulness, (b) ease of use, and (c) trust would each contribute to an increase in the acceptance of security robots. Our results demonstrated a significant positive impact of perceived usefulness (β = 0.29, p = .001), a significant positive impact of perceived ease of use (β = 0.18, p = .039), and a significant positive impact of trust (β = 0.42, p < .001) on the acceptance of security robots. Therefore, H1 was fully supported.
Hypothesis 2 suggested that the degree of security robot autonomy decreases security robot acceptance. Our data indicated that autonomy does indeed have a significant negative impact on the acceptance of security robots (β = −0.12, p = .024). Consequently, H2 was also fully supported.
Finally, Hypothesis 3 predicted that the degree of risk would moderate the impact of security robot autonomy on its acceptance. Our model illustrated a significant interaction effect between the risk of danger and robot autonomy (β = 0.16, p = .038). Specifically, as delineated in Figure 4, the risk of danger significantly moderated the impact of autonomy on robot acceptance [20]. Simple slope tests indicated that in the low-risk condition, increases in security robot autonomy significantly decreased acceptance (t(116) = −2.25, p = .025), while in the high-risk condition, increases in security robot autonomy had no significant impact on acceptance (t(116) = 0.10, p = .92). Table 3 provides a summary of the results from hypothesis testing.
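The moderation and simple-slope analyses were run in SmartPLS; as a rough illustration of the same logic, here is a regression-based sketch using statsmodels. The column names are our assumptions, and an OLS approximation is not a substitute for the PLS-PM model:

```python
import statsmodels.formula.api as smf

def moderation_and_simple_slopes(df):
    """Fit acceptance on the AAM predictors plus an autonomy x risk
    interaction, then probe the autonomy slope within each risk condition."""
    model = smf.ols(
        "acceptance ~ usefulness + ease_of_use + trust + autonomy * risk",
        data=df,
    ).fit()

    # Simple slopes: regress acceptance on autonomy within each condition.
    slopes = {}
    for level, group in df.groupby("risk"):
        fit = smf.ols("acceptance ~ autonomy", data=group).fit()
        slopes[level] = (fit.params["autonomy"], fit.pvalues["autonomy"])
    return model, slopes
```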

DISCUSSION
In this study, we aimed to understand human acceptance of security robots by expanding upon the TAM and trust frameworks to incorporate autonomy and risk as factors. On one hand, our results indicated that perceived usefulness, ease of use, and trust in robots enhanced the acceptance of security robots. The more users trust these robots, perceive them as useful, and find them easy to use, the greater their acceptance of the security robot. This discovery aligns with prior findings in the TAM literature [18,77].
On the other hand, this study identified a significant negative influence of robot autonomy on the acceptance of security robots, moderated by the risk of danger. This contrasts with earlier research suggesting that security robot autonomy has a non-significant impact on human acceptance [50]. This conflicting result could be attributed to our additional consideration of physical risk as a moderator. Specifically, the study in [50] only considered high-risk security robots authorized to deploy nonlethal weapons against unauthorized individuals. Participants in that study were also exposed to a video depicting a scenario where unauthorized visitors were potentially harmed by these weapons. Our results imply that such non-significant findings regarding robot autonomy might stem from the robot's high risk of danger, as autonomy's impact existed only for low-risk tasks.
The following sections delve further into our contributions to the literature, theoretical implications, and study limitations.

Contributions
First, this study's findings theoretically extend the literature on security robot acceptance by augmenting the traditional TAM and trust constructs in relation to security robot acceptance. In doing so, this study not only confirms the importance of TAM and trust in the context of security robots but also goes over and above this by identifying the significance of autonomy and risk. As one of the most researched acceptance models [53], TAM has been validated and extended in different HRI contexts, such as autonomous vehicles [29,35,65], service robots [45,77,82], and collaborative robots [8]. However, no study had examined these constructs for security robots. This study examined and verified, for the first time, the significant antecedent role of perceived usefulness, perceived ease of use, and trust in the acceptance of security robots. Additionally, this study examined two crucial robot-related factors, autonomy and risk, emphasizing their significance in understanding the acceptance of security robots. Notably, the effect of the risk of danger was assessed for the first time in the context of security robots and found to have a significant moderating effect on the impact of autonomy on acceptance, while not directly impacting acceptance. Future research could further investigate the potential relationships in the security robot acceptance construct and explore additional potential influencing factors to enrich the current AAM.
Second, this study underscores the significance of physical risk in grasping the connection between security robot autonomy and acceptance. On one hand, we confirmed initial assertions that as security robot autonomy decreases, the acceptance of security robots increases. However, this held only for low-risk tasks. On the other hand, for high-risk tasks, the level of security robot autonomy had no impact on acceptance. In the domain of security robots, the impact of autonomy was examined in only one previous study, and it was found to be non-significant for human acceptance [50]. Our results offer new insights into the relationship between autonomy and acceptance with additional consideration of physical risk. This underscores the importance of considering the role of physical risk in identifying boundary conditions, especially in the security context. At the same time, previous TAM extensions' consideration of risk mainly focused on non-physical risks, such as the risk of financial loss in e-commerce-related activities [44,86]. However, risk associated with physical harm is a different concept. Such physical risk is becoming more prevalent and significant, considering the increasing future interactions between robots and humans in real life. Given that future security robots could be deployed in various settings, the potential physical risk associated with security robots could also vary based on their specific security task execution. The future design of autonomy to promote acceptance of security robots should consider the specific risk of danger. Future HRI studies should also consider the potential role of physical risk to better explore the underlying influencing factors.

Table 3: Results of Hypothesis Testing
H1) Greater (a) perceived usefulness, (b) perceived ease of use, and (c) trust increase security robot acceptance. Result: Supported.
H2) Increasing the degree of security robot autonomy decreases security robot acceptance. Result: Supported.
H3) The degree of risk moderates the impact of autonomy on security robot acceptance, such that the negative impact of autonomy on acceptance is stronger in higher-risk situations than in lower-risk situations. Result: Partially supported.
Finally, the study emphasized the importance of adapting existing theories to the context of robots. Take robot acceptance as an example: given the different circumstances and contexts of use, people's evaluations of robot acceptance can change as they derive from varying understandings and expectations of the robot's role [57]. Robots in different contexts may have unique and important factors that can lead to potentially varying results compared to general HRI theories. For instance, autonomy is generally believed to decrease acceptance [16,30,35], but we found it to have no effect on security robot acceptance under high-risk conditions. Further, [52] strengthened the importance of looking into HRI factors such as acceptance and trust of robotic systems in the security domain. Future research should consider generalizing existing HRI theories to accommodate specific robot types and task contexts, taking into account their respective unique characteristics, in order to gain a better understanding of HRI relationships.

Limitations and Future Research
Our study has several limitations. First, the entire study relied on an online experiment using pre-recorded videos to manipulate different conditions, which could constrain the study's external validity. Participants' experiences and attitudes toward robots could be different when they interact with a security robot in person. Therefore, future studies could consider conducting lab or field research involving real robots to study human-security robot interaction more comprehensively.
Second, our study only examined acceptance during short-term interactions. The entire experiment took only 8-14 minutes, allowing us to assess people's initial impressions of security robots. However, future studies should consider longitudinal research, deploying security robots in real-life situations, to measure people's long-term acceptance and gain a comprehensive understanding of human acceptance over time.
Third, this study focused solely on a single type of risk, physical danger. However, risk is usually identified as a multidimensional concept that includes other types of risk such as psychological risk, social risk, and time risk, among others [36,75]. Given the significance of risk, future studies could explore the potential influence of different risks in more detail, rather than only one facet. Additionally, other risk-related factors should also be examined, such as risk-taking propensity [7,89], which could help in better understanding people's reactions toward risk.
Fourth, our study exclusively recruited participants from the United States. Nevertheless, we acknowledge that culture could potentially be an influencing factor. Previous research has shown that individuals from diverse cultural backgrounds could exhibit different attitudes toward interaction with robots [17,49]. Therefore, further research could be conducted to investigate the impact of cultural differences and validate the findings of this study across a broader population.
Finally, our study only considered active use as a measure of acceptance. Although active use is one of the most common forms of acceptance, we believe it is also important for future research to assess another less common form of acceptance: passive use. People's interactions with security robots can be indirect; sometimes, individuals simply walk by a security robot and have no direct interaction with it, or they are aware that a security robot is deployed in the same location but never actually see or interact with it. This form of acceptance should include the measurement of people's acceptance of the robot being deployed in a specific area and their acceptance of others using the robot, even if it is not directly related to themselves. Abrams et al. [1] explored a similar concept, "existence acceptance," in the context of delivery robots. More research could consider incorporating this indirect acceptance to comprehensively understand HRI acceptance.

CONCLUSION
As the proliferation of security robots continues to reshape our society, it becomes increasingly imperative to gain deeper insights into their acceptance. The results of this study shed light on the complex dynamics of human acceptance of security robots. Nonetheless, future research is needed to build on these findings and expand our understanding of security robot acceptance.

Figure 2: Security robot in the experimental video

Figure 3: Results of PLS analysis

Table 1: Research design and study scripts

Autonomy (Low): Not autonomous; fully controlled by the human operator
Risk (Low: observe and report): "Robbie is not autonomous; instead, it is remotely operated by a human operator. The human operator controls the robot to detect threats and take action when they feel necessary. The robot is controlled to continuously monitor the surroundings for any potential risks. Anytime threats are detected, the operator controls the robot to promptly report the on-site situation to the security firm and request assistance. The operator determines when to act. The robot is incapable of completing actions independently."
Risk (High: physically intervene): "Robbie is not autonomous; instead, it is remotely operated by a human operator. The human operator controls the robot to detect threats and take action when they feel necessary. The robot is controlled to continuously monitor the surroundings for any potential risks. Anytime threats are detected, the operator controls the robot to intervene immediately and, if needed, even engage in physical contact. The operator determines when to act. The robot is incapable of completing actions independently."

Autonomy (Moderate): Semi-autonomous; the human operator monitors and takes over
Risk (Low: observe and report): "Robbie is semi-autonomous and monitored by a human operator. It has the capability to detect threats on its own and request the operator to take over control and initiate action. It continuously monitors its surroundings. Anytime it detects threats, the robot requests the operator to assume control, allowing it to promptly report the on-site situation to the security firm and request assistance. The operator can take control of the robot at any time and decide when to act, while the robot performs actions under the operator's guidance and supervision."
Risk (High: physically intervene): "Robbie is semi-autonomous and monitored by a human operator. It has the capability to detect threats on its own and request the operator to take over control and initiate action. It continuously monitors its surroundings. Anytime it detects threats, the robot requests the operator to assume control, allowing it to intervene immediately and, if needed, even engage in physical contact. The operator can take control of the robot at any time and decide when to act, while the robot performs actions under the operator's guidance and supervision."

Autonomy (High): Fully autonomous; independent from the human operator
Risk (Low: observe and report): "Robbie is fully autonomous. It has the capability to detect threats on its own and take action when it deems them necessary. It continuously monitors its surroundings for any potential risks. Anytime it detects a threat, the robot promptly reports the on-site situation to the security firm and requests assistance. The robot determines when to act and completes these actions independently from a human operator."
Risk (High: physically intervene): "Robbie is fully autonomous. It has the capability to detect threats on its own and take action when it deems them necessary. It continuously monitors its surroundings for any potential risks. Anytime it detects threats, the robot can intervene immediately and, if needed, even engage in physical contact. The robot determines when to act and completes these actions independently from a human operator."