Empirically Understanding the Potential Impacts and Process of Social Influence in Human-AI Teams

In the coming years, Artificial Intelligence (AI) will be applied as a teammate that works alongside and collaborates with humans. Prior research in teaming and CSCW has shown that teammates have the ability to change each other's thoughts and behaviors through simple interactions, a process known as social influence. However, to date, research has yet to identify the social influence that AI teammates could have in these human-AI teams, which has led to a limited understanding of how AI teammates will change the behaviors of their human teammates. To remedy this gap, we conduct a mixed-methods study (N=33) with young individuals to explore how humans could behaviorally adapt, and perceive their behavioral adaptation, due to interaction with an AI teammate. Qualitative results report that participants perceived three unique stages they had to experience for the social influence of their AI teammate to lead to adaptation (i.e., perceiving a sense of control, identifying a technological or performative justification, and gaining first-hand experience). Quantitative results validate and illustrate this perceived process, as they show that participants adapted their behaviors to complement the behaviors of different types of AI teammates. This study contributes to the CSCW/HCI field by developing an initial understanding of AI teammates' social influence in human-AI teams, which will be a pivotal design and research consideration in future efforts.


INTRODUCTION
In the coming years, the creation and implementation of non-human teammates driven by artificial intelligence (AI) will increase exponentially, with research already working to build these AI teammates and study human-AI teams [58,69]. Existing research in human-AI teaming has shown that the integration of AI teammates will allow humans the opportunity to adapt and transition into new roles that better utilize their potential skill sets [110]. Importantly, what separates good all-human teams from great ones is the ability of teammates to seamlessly adapt to one another throughout their day-to-day interaction [8,13]. Within teams, this adaptation can manifest as either a planned integration or as a reaction where humans change their thoughts and behaviors based on repeated interactions and collaborations with other team members, the latter of which is known as social influence [79]. Specifically, this social influence manifests as a reaction when individuals who are receptive toward social influence perceive newly presented information (informational social influence) or observe teammate behaviors (normative social influence) [1,40]. However, the social influence that AI teammates will have in the Computer Supported Cooperative Work (CSCW) environment of human-AI teams is still relatively unknown.
As such, it is critical to understand if and how AI teammates will be able to have social influence, as social influence is a critical facilitator of team outcomes. Moreover, even if AI teammates are in fact able to have social influence, its benefit is not guaranteed, as social influence can result in teammates adapting their own behaviors at the cost of their own performance [100]. As such, it is also important that teammates make intentional efforts to ensure they leverage social influence that benefits team outcomes [45,113]. For instance, ensuring that more knowledgeable members of a team leverage social influence benefits overall team and individual performance [75,95], and social influence leveraged by diverse team members can even reduce the biases of other teammates [23,71]. Given the above, even though there is a lack of understanding of how AI teammates can have social influence, it is clear that enabling them to have social influence, and designing said influence in a healthy way, could provide demonstrable benefits to teams and individual teammates.
Currently, our understanding of AI teammates' social influence in human-AI teams is still limited, as research has only shown the influence that AI tools can have on human behaviors. For instance, in performing analytical tasks such as chess, where AI is highly skilled, humans often conform to the recommendations of AI tools as opposed to human recommendations [19]. However, these and other previous studies on AI's influence as a tool pose three critical limitations. First, humans have fundamentally different perceptions of AI tools and AI teammates [114], making findings regarding the social influence of AI tools not readily applicable to that of AI teammates. Second, this past work has often denoted trust as the predominant reason for humans accepting influence from these AI tools [14], but decision-making in teams is driven by a multitude of other factors and perceptions in addition to trust, such as context, workload, team cohesion, and performance [57]. Finally, the understanding created by existing work predominantly leverages informational social influence by having AI tools provide suggestions and information to humans [50], but an understanding of AI (teammate or tool) social influence must also consider normative social influence, i.e., the observation of behaviors and team norms, which is a critical facilitator of social influence [1,40]. Synthesizing these three limitations of existing work on AI's influence, we identify the following research gap in human-AI teamwork: the existence of AI teammates' normative social influence when completing a team task, and the factors that facilitate its existence, have not yet been investigated by research.
To provide an initial exploration of this gap, this study focuses on understanding what social influence from an AI teammate could look like when humans are receptive and open toward experiencing said social influence. In particular, this study turned toward exploring the potential social influence AI teammates can have on younger adults, as they represent a digital native population more open to the social influence and use of modern technology [102]. This study leverages a mixed-methods design (N=33) where young adults (mean age of 18.61) were given the opportunity to work with and adapt around an AI teammate. Data collection and analysis primarily focused on the use of qualitative data to explore and identify the social influence process in human-AI teams, which is currently unknown, while quantitative data allowed for the exploration of the impacts of this process. Specifically, this study uses this mixed methodology to answer the following research questions:
RQ1: How do humans perceive the social influence process of their AI teammate?
RQ2: When humans perceive social influence from their AI teammate, how do they change their behaviors based on interactions with the AI teammate to complete a shared team task?
In answering these two research questions, this work makes a number of contributions. First, the limited prior exploration of the social influence AI teammates may have within human-AI teams has left researchers and practitioners without fundamental knowledge of what social influence could look like in human-AI teams, even though social influence is a critical construct within cooperative work and CSCW [59,103,112]. Therefore, this work provides vital insights into the manifestation of AI teammates' social influence within human-AI teams, as well as the effects of said influence on humans' behaviors and perceptions. Second, existing understandings of AI technology's social influence have largely focused on informational influence, or the exchange of information [19,76]. Collaboration, however, is a process that is facilitated by multiple factors beyond simple information exchange. As such, this work is the first to our knowledge to explicitly observe normative social influence, a critical component of collaboration [40], to understand how the behavior of human teammates may be linked to the behavior of AI teammates through collaborative interaction. Finally, AI teammates should be designed from the ground up to be beneficial, and the beneficial implementation of human-AI teaming will rely on researchers ensuring that said designs meet the needs of human teammates [32,114]. By extension, then, how AI teammates manifest their social influence can and should be designed in ways that meet the needs of their human teammates. This work thus provides several recommendations on how AI teammates should be designed to enable the use of social influence when humans are receptive to it, such as through the inclusion of override mechanisms or shadow periods where humans can observe AI teammates. In pursuing the above contributions, the results of this work demonstrate that, when receptive to AI teammate social influence, participants proactively adapted their behaviors to be complementary to their AI teammates. Moreover, participants perceived this social influence and adaptation process to contain three critical stages that have to be achieved for them to adapt to social influence: (1) perceiving a sense of control, (2) identifying a performative or technological justification, and (3) gaining a level of first-hand experience.

BACKGROUND
In this section, we present the existing literature pertinent to our research questions and how it motivates our inquiry. First, we discuss the phenomenon of human-AI teams and the primary factors that affect their operation. Second, we address the concept of social influence in teaming, the ways in which it is received, and its role in team success. Within the review of social influence in teaming, we review technology-mediated social influence and its relation to social influence in human-AI teams. Together, these areas provide further motivation for studying social influence in human-AI teams and the perspective that needs to be taken to study said social influence.

Human-AI Teaming
Human-AI interaction can manifest in a variety of forms. At the most basic level, humans and AI can cooperate, which entails them working, potentially independently, towards a singular goal. Alternatively, humans and AI can also directly collaborate and divide workloads that contribute toward a shared goal, which often enables them to achieve greater levels of performance due to a reduction in redundancy [4]. On the other hand, teamwork adds a layer of complexity to both of these types of interaction, as teams require an intelligent and interdependent division of workload and greater levels of autonomy from teammates [85]. In turn, human-AI teaming also requires a greater level of overhead to maintain. For instance, coordination and communication within teams have to simultaneously work towards task completion and healthy teamwork, as these two outcomes are related but distinct [86]. As such, teaming is not always an ideal methodology for interaction, and human-AI interactions have to determine if the imposed overhead is worthwhile.
In particular, interactions that can achieve and maintain teaming could lead to heightened capabilities and effectiveness in various areas, such as those requiring the collection and analysis of big data sets [108] and those that rely on the rapid execution of complicated computer code [53]. Despite these benefits, AI teammates also come with new challenges. When humans and AI teammates work together, the team dynamics are significantly different than in all-human teams [27,33]. It is important to note that what makes a human-AI team different from an all-human team is not just its composition, but the ways in which teammates interact to complete team goals. AI teammates will interact with their teammates through the lens and realities of technology. Thus, humans are likely to perceive and respond to AI teammates differently than they will their human teammates because of AI's unique machine nature. This fundamental difference is understudied and requires further exploration.
This discrepancy is especially evident when examining some of the key perspectives that affect teaming. Research in the Human-Computer Interaction (HCI) field has shown that when humans believe they are working with AI teammates as opposed to human teammates, their trust in the teammate drops dramatically [65]. Numerous HCI studies have examined factors that could mitigate this drop [28,55,56], including how transparent AI's actions are [17] and the degree to which humans understand AI teammates' actions [66,81]. How teammates express reasoning to one another is not the same in human-AI teams as it is in all-human teams, due to AI's limitations in natural language processing [21]. Specifically, previous research has shown that AI agents rely on the design of both explicit communication methods and implicit communication behaviors, but these designs can look different than those of all-human interaction [5,30]. Essentially, these differences in interaction methods form a barrier to humans and AI understanding each other in a teaming environment. In turn, these barriers to social interaction might also create barriers to the use of social influence; thus, understanding how to better design AI to enable these factors and remove these barriers would help teams benefit from AI teammates' social influence.
In particular, one of the more critical teaming processes in human-AI teaming is coordination; however, the concept remains somewhat under-explored. Although previous work acknowledges that humans have the ability to adapt and coordinate with AI teammates [90], the majority of coordination work is led from a computational perspective, where AI teammates are designed to adapt and coordinate with humans [78]. For instance, existing research has worked heavily to design AI teammates that adapt to new human teammates they may not have collaborated with before [42,115]. From a social influence perspective, these studies could be viewed as explorations of humans' social influence on AI teammates. Unfortunately, while this work is important for creating effective human-AI teams, high-complexity tasks still make it difficult for humans and AI teammates to coordinate effectively [15,88]. Moreover, these attempts may be less successful in real-world applications, as they discount the ability of humans to coordinate in response to AI teammates' social influence. The coordination and adaptation of humans could occur as a natural reaction to team interaction, as discussed below, since such reactions are often what create social influence, but research still needs to empirically explore this concept.
Regarding the methodologies of human-AI teaming research, studies have continuously shown that observing only the task-performance outcomes of human-AI teams does not yield insightful results; research has to turn toward both performance and perception data to holistically describe human-AI teams [52]. In particular, when performing novel explorations of existing teaming concepts within human-AI teams, research often turns toward qualitative methods, which enable teaming constructs to be described from a human-centered perspective [9,114]. In turn, the field of human-AI teamwork has also found continued utility in mixed-methods research, which enables performative quantitative task outcomes to be explored alongside inductive qualitative research [26,101]. Additionally, due to the current infancy of AI teammates, these research efforts often leverage either commercially available games [30,82] or in-house-built research platforms [87,107], as both enable the exploration of fairly capable AI systems designed to complete specific tasks. In regard to exploring social influence, a qualitatively focused effort would provide the capability to inductively explore the concept and describe it from a human-centered perspective, and quantitative data could be used to explore the outcomes of this influence as guided by the qualitative work.

Social Influence in Relevance to Human-AI Teaming
In all groups, members use internal and external resources to achieve certain goals, a concept known as influence [16]. Social influence theory posits that the power and ability to do this does not necessarily come from a formal position within the group; rather, it can come from other forms of de facto leadership [16], and AI is expected to be used in these leadership and management contexts [80]. Often, people follow cues and advice from individuals they feel possess a degree of power or expertise. This usually looks like an exchange of knowledge and information between group members [35], leading to a change in people's thoughts, feelings, attitudes, and behaviors [79]. It is important to note that social influence is not limited to verbal communication. Individuals exert social influence through a variety of nonverbal methods, including the exchange of artifacts, body language, and observable actions [91]. This is vital to our research, since an entity does not have to seamlessly speak and understand natural language to create social influence.
Within social influence theory, team members can exert direct and indirect forms of social influence on each other [35]. Direct social influence comes from a member purposefully convincing teammates to alter their own behaviors [35]. This is the kind of social influence associated with persuasive argumentation and the overt transfer of information. In comparison, indirect social influence is more difficult to identify, as it occurs by affecting how team members think, such as by expanding their worldview to include a new minority perspective [35]. In teams, this social influence is exerted with the goal of changing team beliefs and behaviors in a way that betters team performance [45]. For instance, teammates can be influenced to modify their behaviors to improve decision-making abilities [46], enhance creativity during brainstorming sessions [75], or even become more autonomous in their daily responsibilities. In fact, these benefits of social influence are so important that modern teams heavily train those in leadership roles to utilize social influence to help motivate and encourage their teams [92].
Importantly, within a CSCW context, the application and exertion of social influence have been significantly expanded by the implementation of technology into people's lives and the workforce. For instance, Perfumi and colleagues [76] identified two types of social influence: normative and informational. Informational social influence is based on individuals' genuine beliefs in and agreements with others' views, attitudes, and behaviors [1,40]. As a result of such agreements, individuals adopt those beliefs and behaviors because they consider them valid. The strength of such agreement with the source of influence can be enhanced when the individual lacks relevant knowledge. On the other hand, normative social influence refers to the influence of group norms on individuals' behaviors [1,77]. Through observing and interacting with other group members, individuals in the group discover and adopt the group norms as valid standards to shape their own attitudes, beliefs, and behaviors. Perfumi and colleagues [76] found that informational influence is generally much more effective than normative influence in computer-mediated communication in which participants were deindividuated. However, the ability for humans to apply social influence through technology does not stop at social media; it also heavily contributes to modern teaming. Because a large portion of teams utilize virtual technology platforms, an occurrence that has increased since the COVID-19 pandemic [108], teams are often required to exert social influence through digital channels [109]. Importantly, this ability has not only allowed teams to operate without being collocated, but it has also allowed teammates to exert social influence differently. For instance, lower-level employees can now more easily exert social influence on upper-level management due to the barriers of contact being removed via technology [93].
Critically, the strength of social influence is not just determined by how it is applied but also to whom it is applied. Indeed, for humans to change due to social influence, they must have a level of susceptibility to influence [83]. Historically, susceptibility is most apparent in the research field of hypnotism, where humans are only able to be hypnotized when they believe that they could be hypnotized [39,60]. In this way, while an individual could become highly skilled at social influence, said influence will only be impactful if used on teammates open to the social influence process.
Lastly, an important question about this research topic is how an AI teammate could exert social influence and how that exertion would impact teams, a topic that no research has yet investigated. From a theoretical perspective, past work has identified that AI teammates have a large degree of opportunity to apply social influence in human-AI teams due to their potential benefits to team leadership [31]. However, while AI teammates' social influence has not been empirically studied, AI agents have been shown to have a measurable social influence over humans through a variety of non-teammate implementations, such as social media actors [20]. Indeed, previous studies have documented how fake, artificial agents can have demonstrable levels of social influence, regardless of whether said agent simply represents a human or is a fully artificial agent [49,51]. Furthermore, while organizations have not leveraged AI teammate social influence internally in their own teams, they have shown some favor towards leveraging AI social influence in their use of virtual agents on social media platforms [29,47]. However, this past observed social influence is likely not globally effective. Rather, given the construct of susceptibility, these social influence attempts are likely most effective on individuals open and receptive to the influences of AI technology. As such, AI agents are no longer just part of the digital mediums of social influence but are themselves responsible for exerting it.

Summary of the Literature
The existing literature on human-AI teaming and social influence in teams reveals a need for research into social influence in human-AI teams. As AI agents assume the position of full team members, their ability to give and receive social influence becomes an extremely important factor in achieving effective and satisfactory teamwork. Current research on human-AI teams often skips over social influence and directly observes the impacts of teaming factors, or focuses on AI teammates reacting to human teammates' social influence, which may under-utilize the capabilities of humans and their ability to receive an AI teammate's social influence. This paper provides the CSCW community with a foundational understanding of the role of AI teammates' social influence in human-AI teams.

Research Context: Rocket League
To promote the presence of social influence, the study leveraged a digital gaming platform, which avoids the risks and difficulties present in real-world tasks. In particular, we selected Rocket League as the research context (shown in Figure 1), a digital game where players drive cars to play a soccer-like game in teams. Critically, Rocket League provides multiple benefits to studying teamwork and social influence. For teamwork, Rocket League employs self-organizing teams, as no defined roles are given to players. As such, teammates need to take on responsibilities by interacting with each other and developing their own team strategy, including the ad-hoc assignment of roles. Interestingly, such teamwork often relies less on communication and more on a constant awareness of teammates and the dynamic environment, as the use of voice communication harms computational performance due to a software bug. In turn, this process can be directly aided by social influence, as teammates can strategically influence each other into optimal strategies or roles based on expertise or preference.
In turn, Rocket League is also an ideal platform to explore normative social influence in teams, as the game can be efficiently played by simply observing and adapting to behavioral norms. In particular, these norms can present themselves through the gameplay loop (Figure 2), which contains three distinct phases for social influence to exist. Stage 1, kickoff, has teams line up and go for a stationary ball; norms are critical for kickoff as they are used to decide which teammate should go for the ball and which one should hang back to follow up after kickoff. Stage 2, shot setup, requires teams to move the ball toward their opponent's goal, and norms are critical here as they help dictate the positioning players need to take to move the ball toward the goal. Finally, stage 3, shot-taking, has players hit the ball toward the opposing team's goal, and norms are critical here as some teammates are often more proficient or inclined to take shots while others often hang back and follow up in the event of a miss or block. Across all three of these stages, players often self-organize into one of three roles (offensive, defensive, or balanced), and this self-organization is often based on the iterative interaction and social influence amongst teammates.

Manipulation: AI Teammate Playstyles
Defensive: This AI teammate was modified to play a more supportive and defensive role, where it does not go for shots on goal as often and stays in the backfield more.
Balanced: This AI teammate was unmodified, and it routinely rotated between offensive and defensive behaviors in forward and backward positions, respectively.
Offensive: This AI teammate was modified to play a more forward role and to move and shoot the ball more often.

Table 1. The three playstyles of AI teammates, which were the three condition levels for this study's manipulation.

AI Teammates & Manipulations
Fortunately, over the past few years, a Rocket League community (RLBot, https://rlbot.org/) has developed highly advanced bots that can play Rocket League as teammates. Traditionally, video game bots are extremely individualistic; however, this community has explicitly focused on building AI teammates that can play with a sense of team. For the purposes of this task, a selection process was conducted, and a singular expert system platform was chosen to serve as the AI teammate (i.e., Botimus Prime). This selection was made for two reasons. First, the particular teammate selected was highly rated in team play. Second, the expert system provides high flexibility in modifying the teammate's behavior. The selected AI teammate was then modified so that participants could team with AI that had different playstyles. In particular, the AI teammate's codebase, which has a balanced playstyle by default, was modified to make two additional AI teammates: an offensive and a defensive AI teammate. The offensive AI teammate played more forward in the arena, went for the ball more often, and took shots more often. The defensive AI teammate played further back and prioritized hitting the ball toward the front of the arena for the human teammate to retrieve and use. Thus, as a mixed-methods study, this work was able to quantitatively explore a 1x3 experimental design (summarized in Table 1).
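To make the manipulation concrete, the playstyle modification can be sketched as a small set of behavioral parameters. The following is a minimal, hypothetical illustration in Python (the language RLBot bots are typically written in); all names here (`PlaystyleConfig`, `aggression`, `shot_threshold`, `choose_action`) are our own assumptions for illustration, not identifiers from the Botimus Prime codebase.

```python
from dataclasses import dataclass

@dataclass
class PlaystyleConfig:
    """Hypothetical knobs for biasing a Rocket League bot's behavior."""
    aggression: float      # 0.0 = fully defensive, 1.0 = fully offensive
    shot_threshold: float  # minimum shot quality before attempting a shot

# Three configurations mirroring the study's 1x3 manipulation (Table 1).
PLAYSTYLES = {
    "defensive": PlaystyleConfig(aggression=0.25, shot_threshold=0.8),
    "balanced":  PlaystyleConfig(aggression=0.50, shot_threshold=0.5),
    "offensive": PlaystyleConfig(aggression=0.75, shot_threshold=0.3),
}

def choose_action(style: PlaystyleConfig, shot_quality: float,
                  ball_in_own_half: bool) -> str:
    """Toy decision rule: shoot when the opportunity clears the style's
    threshold; otherwise push forward or fall back based on aggression."""
    if shot_quality >= style.shot_threshold:
        return "shoot"
    if ball_in_own_half or style.aggression < 0.4:
        return "defend"
    return "push_forward"
```

Under this sketch, an identical game state yields different behaviors per condition: a mediocre shot opportunity is taken by the offensive variant, ignored in favor of backfield positioning by the defensive variant, and handled contextually by the balanced variant.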

Participants
Participants were recruited through a university recruitment system and received extra credit in a course for their participation in this study. Participants needed to be at least 18 years old to participate, but no other recruitment constraints were used. Participants signed up for available session times on a first-come basis. This less restrictive recruitment strategy was taken to recruit participants with varying levels of experience with video games and Rocket League without biasing the selection of participants. Video game experience varied, with most participants having minimal, if any, experience with Rocket League and video games, and a small number being highly proficient experts. In particular, two participants noted that they played Rocket League competitively and had high ranks within the game. A full list of participant demographics can be found in Table 2.
While the population examined skewed younger (M=18.61, SD=1.09), this younger population poses a strategic benefit to the conducted study. In particular, these younger populations, which are more open to adapting to novel technologies [10], would likely be more susceptible to AI teammate social influence due to their openness to novel digital experiences. In turn, the social influence of AI teammates is likely to be more apparent in this younger population, as their susceptibility to said influence is likely higher. Moreover, similar to the platform selected, this population provides the opportunity to more feasibly perform a foundational exploration of AI teammate social influence, which can be later expounded upon by future work that explores additional populations and contexts.
The study session began with a pre-survey for demographic information. Then, all participants were put through a dedicated training session created by the Rocket League developers, which taught the basic controls and objectives of the game in a hands-on way. After this guided training session, participants were provided a three-minute practice session where they could further learn the controls without any teammates or opponents present. Once done, the lead author outlined both the gameplay and interviews that would happen next.

3.4.2 Game Play. Participants played two sets, with three games per set, of 2v2 Rocket League in which an AI teammate was placed on their team. Performance metrics were collected during each game played. To reduce the pace of and learning required to complete the task, participants played against two non-human goalies that were responsible for guarding their own goal. Additionally, relegating these opponents to goalkeeping roles allowed the bulk of the perceptions participants formed during the task to focus on their AI teammates. Before starting these games, participants were told that they would be working with an AI that was programmed to play as a teammate, but they were not told the specific playstyle of the teammate they would play with in each game.
The purpose of separating these games into two sets was to allow qualitative data collection to better focus on iterative change rather than having participants recount all of their experiences once at the end. The AI teammates provided to participants varied across games. One set had participants play three games with either an offensive or a defensive teammate, and the other set had participants play with one of each AI teammate playstyle. This methodology ensured that participants were given time to adapt around a singular AI teammate playstyle across multiple games and to adapt around various AI teammate playstyles within a single set. Additionally, the order in which these sets were presented to participants was randomized and counterbalanced. This procedure provided participants with an iterative but diverse amount of experience in teaming with their AI teammate, which bolstered interview quality as it allowed participants to form comparative insights based on their experience, and it also allowed quantitative metrics to be collected for each AI teammate playstyle.
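The two-set structure and counterbalancing described above can be sketched as follows; this is our own illustrative reconstruction of the assignment logic under the stated design, not code used in the study.

```python
import random

PLAYSTYLES = ["offensive", "defensive", "balanced"]

def build_sets(consistent_style: str, rng: random.Random):
    """Build the two three-game sets a participant plays: one 'consistent'
    set repeating a single offensive or defensive teammate, and one
    'varied' set containing each playstyle exactly once."""
    consistent = [consistent_style] * 3
    varied = PLAYSTYLES.copy()
    rng.shuffle(varied)   # randomize playstyle order within the varied set
    sets = [consistent, varied]
    rng.shuffle(sets)     # counterbalance which set is played first
    return sets

# Example: one participant's schedule, seeded for reproducibility.
rng = random.Random(0)
schedule = build_sets("offensive", rng)
```

Regardless of the random seed, each participant's schedule contains one repeated-playstyle set and one set with all three playstyles, with set order counterbalanced across participants.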

Post Game Play Interviews.
As a mixed-methods study, qualitative interviews were used to partially answer the research questions, which is a suitable method for two reasons. First, social influence is an iterative experience that happens continuously in teams, and a qualitative approach allows researchers to examine participants' experiences more holistically and not as a snapshot. Second, research has yet to identify what factors contribute to social influence in human-autonomy teaming, which means an inductive qualitative approach provides a more robust method for identifying potential facilitators of social influence. Given these reasons, an interview protocol was developed using a semi-structured model and focused on the iterative changes humans made when interacting with their AI teammates. This interview protocol was piloted prior to running this study, and the interviews were updated based on pilot participant and researcher feedback.
Upon completion of each set of games, the first author opened each interview with a question regarding participants' impression of the interaction dynamics during the games and how that interaction translated into social influence on behavior and perception (e.g., how social influence was distributed in the team, whether social influence fluctuated, how that social influence changed their behaviors, which teammate they would prefer, and their goals and roles during the task, among other questions). Participants were first asked for their own definitions of social influence and how their AI teammate contributed to those definitions. For the set of games where different AI teammates were presented, additional questions focused on understanding how participants felt about and reacted to that change, how changes in an AI teammate's social influence impacted their ability to adjust to their AI teammate, and whether they felt that change was sustainable or would prefer their AI teammate to remain more consistent from game to game. During the interviews, the interviewer actively explored boundaries that might define and confine participants' perceptions [54] by asking them to identify contexts in which they would expect the social influence dynamic to differ (e.g., a physical environment versus a video game, a human versus an AI teammate, gaming versus a real-world office setting, etc.). Such comparisons and contrasts within and beyond the gaming context allowed various factors that may shape the perception of social influence in human-AI teams to emerge and provided insights into how these factors play out in the real world. All interviews were conducted in English and audio recorded. Each participant completed a 15-20 minute interview after each block of 3 games, for a total of 2 blocks and 2 interviews per participant, which allowed them to discuss both the short- and long-term impacts of social influence.

Qualitative Analysis
The interviews were transcribed by the first author within 72 hours of each interview's completion. During transcription, relevant prosodic information (e.g., hesitation) was marked, but speech disfluencies (e.g., fillers, stutters) were removed from the excerpts for ease of reading. The transcripts were manually coded using spreadsheets, highlighters, and affinity diagramming, following a thematic coding process [12,36]. Two authors participated in two initial rounds of coding. After a first round of open coding, a discussion was held on the codes found, and it was decided that a second round of coding was needed to better focus the analysis. After the second round of coding, a final discussion was held to finalize the codes and group them via affinity diagramming. Through this iterative coding process, initial codes were merged, broken down, or modified by identifying alternative interpretations and cases that did not fit [54]. Throughout each of these meetings, the two authors responsible for analysis were accompanied by a third author who was responsible for mediating disagreements and helping consolidate similar interpretations. A total of 16 codes and 368 quotes were finalized during iterative coding. Once codes were finalized, a separate meeting was held with all three authors to transform these descriptive codes into themes that pointedly address our research questions. The authors arranged a subset of the codes around the research questions and created specific themes from these codes that targeted this study's two research questions. These themes and their organization were iteratively refined before the results section was finalized. In total, two overarching themes were identified, with one theme requiring the inclusion of four subthemes.

Behavioral Measures
During the six games played, behavioral measures (Table 3) were collected as performance statistics derived from the task. In particular, three measurements that describe participant behaviors were taken from the task: (1) shots taken, (2) goals scored, and (3) the number of times the participant assisted a teammate's goal. Higher values for shots and goals denote a more offensive participant playstyle, and lower values denote a more defensive one. Conversely, a higher assist count denotes a more defensive playstyle, and a lower count a more offensive one. Each measure was taken per game played, starting at 0 and increasing as the corresponding actions were performed. These data were automatically gathered and reported by the Rocket League platform.

Metric    Description and Contributing Factors
Shots     The number of times the participant hit the ball accurately at the opposing team's goal per game.
Goals     The number of times the participant hit the ball and it went into the opposing team's goal.
Assists   The number of times the participant passed the ball to the AI teammate, and the AI scored a goal immediately after.

Table 3. Quantitative behavioral metrics derived from the Rocket League platform.

QUALITATIVE RESULTS
Given that social influence is an unexplored construct within human-AI teams, this work leverages the qualitative results to inductively identify and describe the social influence process participants experienced. As such, the qualitative data are used to describe this unfamiliar phenomenon, and the quantitative data are used to explore the impact of the phenomenon once identified. In turn, the qualitative results of this work have been arranged to provide an empirical answer to RQ1. The first theme identified within the data demonstrated that when participants were open and receptive to AI teammate social influence, either due to the task or individual differences, the social influence process went through three unique stages. In particular, the analysis revealed that participants intentionally adapted to become complementary with their AI teammates, and they did so rapidly. Moreover, the analysis revealed three critical stages that occurred during this social influence process when participants were receptive to said social influence, which this work has organized as the sub-themes discussed later. Critically, during these three stages, participants were able to adapt around their AI teammates. Interviews and qualitative analysis also revealed that participants made these adaptations intentionally based on their AI teammate's behavior throughout this social influence process. Once participants noted that they had adapted their own behaviors based on their AI teammate, they were asked how and why they did so, and p30, p29, and p27 were among the many who signaled that their intention was to adapt in a way that was complementary to their AI teammate:

The other two [later games] we did decent because I started to play more of a support role as opposed to an attack role, and that helped out better there. -p30, Male, 20, Caucasian

I understood that my teammate was the scorer. So it was me who had to disrupt the field. -p29, Male, 18, Caucasian

I think I learned what they did, whether they were like really defending or they're really trying to score it, and then I would do the opposite of what they did. -p27, Female, 18, Caucasian

Importantly, participants were also asked about the pace at which this complementary adaptation occurred, and they reported being able to rapidly adapt around their AI teammates into complementary roles. The rapid nature of this process is especially characterized by p28, who mentioned how quickly they were able to understand their AI teammate, and p23, who mentioned that this adaptive approach was even preferred over prior planning due to its ease and convenience:

I usually figured out the bots pretty quick... I would be able to adjust part of the way through the game. -p28, Male, 22, Caucasian

I feel like if you were to plan before, game plans change according to the people and it's easier for me on the fly to be able to adapt to them. -p23, Female, 18, Caucasian

As mentioned above, during this complementary and rapid social influence process, participants receptive to said process underwent three stages of social influence and adaptation. These stages each provided participants with a critical perception that further enabled social influence. As such, while these participants were already receptive to AI teammate social influence, due to their status as digital natives and the nature of the platform, these three stages still needed to be completed for social influence to occur in the human-AI teams. The codes related to these three stages have been grouped into three sub-themes, which are as follows:

Stage 1: Participants needed to perceive a comfortable environment to adapt, which requires a semblance of control.
Stage 2: Participants needed to justify their adaptation to an AI teammate, and they used either the limitations of AI technology or the skill level of the AI to justify that adaptation.
Stage 3: Participants needed to gather knowledge about their AI teammate before adapting.
4.1.1 Stage 1 - Participants needed to perceive a comfortable environment to adapt, which requires a semblance of control (RQ1). One of the most prevalent factors participants considered with regard to changing their behavior to accommodate AI teammates was whether or not they felt a sense of control within their human-AI team. Perceiving this sense of control was critical to participants, as it enabled them to feel that adapting would not come at the cost of their own influence over their team and task. When asked if they felt adapting reduced the amount of influence they themselves had, p07 defined influence in terms of having control, and p29 went as far as to directly link their own influence to this sense of control:

I think influence pretty much means the ability to have some control over what's going to happen. -p07, Male, 18, Caucasian

I mean, at the end of the day, I know I have the most influence over the game itself, because I can obviously turn off the console. -p29, Male, 18, Caucasian

Importantly, participants associated control and social influence with each other while still viewing the two concepts as distinct. For instance, they were fine with one teammate having more influence than the other, but losing a sense of control was not acceptable. This sense of control did not need to be highly complicated either; rather, it could be enabled by a basic on/off switch for the AI teammate or the task platform itself. In this way, one can see that the safety of the platform used also promoted the existence of social influence. For example, when asked about what gave them the confidence to adapt to their AI teammate, p29 explicitly noted that the ability to directly control their AI teammate provided this confidence:

But the human has influences. They can start the games, stop the game, quit the game. -p29, Male, 18, Caucasian

Participants' need to have an overall level of control over their AI teammate often boiled down to the fear that things might go wrong with AI systems. They believed that having that last semblance of control over the system could prevent things from going south. As an example, p01 was asked if they would be willing to give up a large degree of task control and allow their AI teammate to have a more influential role in the task, but they reiterated that they needed to have some level of control in case of potential errors:

But I don't know how completely I would trust it. Because then again, I also like to have control over it. So maybe like half and half, if I see that it's not really doing what it's supposed to, I'd like to have some control over it. -p01, Female, 19, Latino or Hispanic

Moreover, having a simple level of control may not be entirely enough; participants also needed to understand how to enact that control. Thus, it is not just about having control but about having a level of confidence in that control. As an example, when inquired further about this need for control, p09 used the analogy of a steering wheel within a self-driving vehicle, which he deemed a necessity. The following quote contains an excerpt from that discussion, where p09 repeatedly mentioned how familiar control mechanisms were especially needed with AI systems:

I feel like in the instance of a self driving car or something like that. I would completely trust it... That's why you have a steering wheel. Because sometimes it does need human input. -p09, Male, 19, Caucasian

The above results show one of the most critical findings of this research: for participants to begin the AI teammate social influence process, they needed a sense of control over their AI teammates. If they felt that adaptation would ultimately result in them feeling out of control of their teammate or their team, then that adaptation could not occur, which was indicated by p01's simple quote: "I'd like to have some control over it."

4.1.2 Stage 2a - Participants needed a justification to adapt, and their perceived technological limitations of AI often provided said justification (RQ1). Critically, the analysis revealed that even with a sense of control, participants needed a justification to motivate their adaptation. Interestingly, participants justified their adaptation to AI teammates through their perceptions of the general limitations of AI teammates and of AI's capability to adapt. Many participants alluded that adaptation is a uniquely human characteristic, making it the human teammate's responsibility to adapt to the AI teammate. For example, when asked about their comfort adapting to an AI teammate compared to a human teammate, p07 explicitly noted that human teammates have greater abilities to communicate through simple observation:

It feels a lot more natural and everything working with a human. Because you can communicate the ideas a lot better just through subtle things that you do. -p07, Male, 18, Caucasian

Similarly, multiple participants perceived themselves as having the unique ability to be aware of the AI teammate's situation and behaviors and to adjust their own behaviors accordingly. However, participants did not perceive the same to be true of AI teammates. As an example, p05 believed that AI does not yet have the ability to observe and process the human teammate's situational and behavioral information to its advantage the way humans do:

Because... -p05, Female, 18, Caucasian

Ultimately, participants noted that these impressions manifested because AI systems are a type of machine. This inherent impression of AI's machine nature led participants to the perception that AI is designed to do repeated and simple tasks, not tasks that require adaptability. However, the consistency of AI teammates was not actually seen as a bad attribute. In fact, participants felt adaptation was a core trait of humans while consistency was a core trait of AI teammates, and this benefit may be what further enables these limitations to serve as a justification for social influence. The following quotes provide examples of participants who felt they, as humans, were uniquely equipped to adapt to their AI teammate due to this machine nature:

Machines seem to tend to do the exact same task over and over again. They're programmed to do one thing. I feel like a human could adapt to changes in their environment. -p22, Female, 18, Caucasian

I think the person actually makes it hard, because machines are pretty consistent. Humans aren't as consistent as machines are. -p09, Male, 19, Caucasian

I think since humans are so inconsistent in their decision making, and they're not always making rational decisions I assume most AI would tend to make, because I mean, they're going to be trained for it. -p18, Male, 21, Caucasian

While this subtheme concerns the limitations AI teammates are perceived to have, the actual reason participants adapted to the social influence exerted by AI teammates was that they felt humans are best at doing so. Specifically, the above quotes illustrate that adaptation is a role potentially perceived as best suited for humans, while consistency is best suited for AI teammates.

4.1.3 Stage 2b - Participants needed a justification to adapt, and the comparative abilities of AI teammates also provided said justification (RQ1). In addition to AI's limitations being used as a justification, the comparative skill of AI teammates also motivated participants' adaptation. When a healthy skill gap existed within human-AI teams, participants were able to sustain a healthy level of adaptation with their AI teammates. In particular, participants were willing to let more skilled AI teammates have social influence and justified their adaptation through comparative skill levels, as p32 noted when asked why they adapted to the AI teammate:

I did feel like I was the one that needed to adapt just because I had never played the game before. -p32, Female, 18, Caucasian

This is a relevant finding, as adaptation is itself a skill, meaning it requires effort and a degree of thought to do effectively. Furthermore, it seems that participants equated adapting with actively assisting AI teammates. In fact, many justified their adaptation around their AI teammates by arguing that if they had better skills than the AI, they wouldn't mind the AI teammate adapting to them. Specifically, when asked whether they would always like to be the one adapting, p31 noted that they would want the AI teammate to start adapting to their own social influence as they gained greater experience with the task:

It's just like the experience I have with video games. If I was more experienced in this game, and I had played it multiple times before, I would feel more comfortable with them adapting to me. But just because I'm really inexperienced, and don't know much about the game. -p31, Female, 18, Caucasian

The notion of comparative skill is not just in reference to overall game skill but also considers individual teammates' different expertise and playstyles. Participants felt that those who were less skillful at skill "A" should adapt to those who are more skillful at skill "A". In particular, p30 explicitly noted that they looked at the specific skills and capabilities that their AI teammate had during each game:

It would change a lot depending on what their skills are. So for example, I was better at scoring than another teammate was, then I would probably want to focus more on scoring. Whichever one they're better at, adjust accordingly to who should do what. -p30, Male, 20, Caucasian

The above results reiterate a relevant finding in human-AI teaming: increasing AI teammate skill does not automatically increase human teammate skill. When considered in light of social influence, we see that participants intrinsically linked performance and influence, and that linkage can encourage them to adapt. However, if there is a perceived disconnect, either due to a lack of AI teammate performance or a greater level of participant performance, then this linkage prevents social influence from being justified. Thus, in addition to the limitations of AI teammates, the actual capabilities and utility of these teammates are similarly important.
4.1.4 Stage 3 - Participants needed to gather knowledge about their AI teammate before adapting (RQ1). Once participants felt comfortable adapting (i.e., after gaining a sense of control and justifying their adaptation), the actual adaptation and social influence began. The first step in this process involved participants waiting, observing, and learning from their AI teammates. Given that participants prioritized and took responsibility for adapting and coordinating when working with an AI teammate, the role of knowledge and experience ultimately became one of the most integral considerations they made.

Many... -p16, Female, 18, African-American

More than just being an important factor, knowledge and experience were essentially a prerequisite for most participants to even start the adaptation and coordination process. One common strategy participants took was to pause and wait until they could figure out the behavioral pattern of their AI teammate to determine their own actions, which participants p33 and p27 noted when asked when they started adapting:

I had a more relaxed approach to the ball as much, I just kind of waited to see what my partner was doing. -p33, Female, 21, Latino or Hispanic

I would wait and see if they were trying to score more. -p27, Female, 18, Caucasian

In regard to long-term teaming, this first-hand experience might also have increased participants' susceptibility toward AI teammate social influence. In particular, the repeated experience and observation of the positive performance of their AI teammate seemed to revise some participants' negative prior knowledge about AI and lowered barriers they may have put up in front of AI teammates, which in turn allowed them to become open to adaptation and collaboration with the AI teammates. As an example, p14 and p07 both noted that they were originally skeptical toward their AI teammate, but repeated first-hand experience made them more open to adapting:

So I guess I had that expectation going in. Then after the first two games, I kind of realized that I could use this to my advantage. It's much better than I thought. So that's why I feel like I lost most of the influence in third game, because then I started learning, almost like the AI take control and pass it to me, rather than me just trying to take complete control. -p14, Male, 18, Caucasian

I feel like towards the end, I definitely was getting a lot more in the groove and everything. Once I realized that, I think it was a little bit of growth on my own part where I stopped chasing the ball the entire time. And I kind of relied on that teammate a little bit more watched what they were doing, and tried to figure out how exactly I could interact with the game that was going on. -p07, Male, 18, Caucasian

The above results further contextualize what it meant for participants to "rapidly" adapt to AI teammates. Although the actual adaptation was rapid, that is because participants prioritized gaining a robust understanding of their AI teammates before attempting to adapt. For instance, multiple participants signaled their need to wait to gain this knowledge to ensure they were adapting accurately. However, this finding presents a double-edged sword in which the creation of this understanding ultimately leads to highly capable adaptation while also slowing down team processes.
The above subthemes and stages present the finding that participants reactively interpreted and adapted around the social influence exerted by AI teammates when they were open to said process. However, the stages that enable this interpretation and adaptation are numerous and varied. Once participants had a semblance of control, they could begin justifying their adaptation either through their perceived limitations of AI teammates or through potential skill gaps relative to AI teammates. Once this justification was created, participants simply waited, observed, and learned with the goal of planning out their adaptation. After these steps were completed, participants began to iteratively adapt to their AI teammates with the goal of accomplishing team goals and increasing their influence as a teammate.

While the above theme demonstrates how participants naturally interpreted AI teammates' social influence through healthy and reactive adaptation, an extreme type of social influence can occur if the interjection of AI teammates is disruptive. The second larger theme identified by this study details how participants occasionally became overwhelmed and frustrated with their AI teammates. In these instances, some participants fully conceded and surrendered to AI teammates by giving up on the task altogether. The below results demonstrate how some participants forwent healthy adaptation and opted to fully comply with AI teammates' social influence to the point where they no longer looked to leverage their own social influence. As an overall example of this negative impact, p27 and p15 directly mentioned that their reaction to the overwhelming social influence from the AI teammate, which was driven by an insurmountable skill gap, led to them giving up on the task:

I think they were influential, that I kind of just stop trying to get the goals much whenever they... Yeah, they're better than me. -p27, Female, 18, Caucasian

Most definitely, by that point I was kind of waiting for the timer to run out because I just, this is kind of annoying, this doesn't like low key, doesn't make me want to play more like that sort of situation. -p15, Female, 18, Asian

Unfortunately, just as they were quick to adapt, participants were also fairly quick to concede to AI teammates. This suggests that early imbalances in social influence may have long-term impacts on human-AI team dynamics. For instance, p27 reflected that they had only one instance of observing the AI's negative social influence, and that singular experience terminated their willingness to engage in further efforts:

One time I tried to hit the ball and then the AI came and knocked me out of the way so that was the first time I was like, I don't really need to do anything anymore. -p27, Female, 18, Caucasian

In particular, it is important to triangulate why some participants adapted in a positive way, while others negatively adapted to the point of concession. Indeed, concession occurred when the interactions AI teammates had with participants caused disruptions in participants' personal actions and goals. For instance, if a human was trying to move a shared resource or go for a goal and an AI teammate prevented that, then participants were quick to become discouraged and stop trying to have an impact on that shared resource. This sentiment is exemplified by the following quote:

My teammate kept doing it, taking the ball away from me. -p08, Female

Additionally, beyond this initial concession, these disruptive actions also had a negative social influence on the perceptions participants formed throughout the task. In particular, participants often formed perceptions of frustration with the AI teammate. In turn, these negative perceptions were also likely contributing factors to the concession participants noted. As an example, when they mentioned that the AI teammate had disrupted their actions, p27, p04, and p26 were all asked how said disruptions made them feel about their AI teammate:

It was kind of annoying when it would push me out. -p27, Female, 18, Caucasian

You're running into me taking this away from me, I didn't ask you for that. -p04, Female, 19, Black, Asian, Caucasian, Pacific Islander

I would be really annoyed with a teammate who would be taking the ball out of my possession every single time. I'd rather have someone probably game to this one where it was, they were still trying to get the ball in the goal, but they were letting me do some of the work. -p26, Female, 18, Caucasian

This theme provides an interesting point on how social influence can even manifest as an unhealthy form of adaptation, which was actually enabled by participants' susceptibility to said social influence. However, avoiding disruption provides a means of avoiding this unhealthy manifestation of social influence. Unfortunately, early imbalances in interaction that lead to disruption may be difficult to predict, as an imbalance was reliant on the personal ability of participants; but this imbalance, when not overt, also served as a justification for adaptation. Thus, understanding these potential imbalances, and the factors that lead to them, before they exist would be the most effective method of encouraging humans not to concede. These and other findings are summarized in Table 4.

QUANTITATIVE RESULTS (RQ2)
The qualitative results of this work identified that participants perceived a social influence process from their AI teammates that resulted in them adapting into complementary behaviors and roles (RQ1); however, it is important to explore whether this perception manifested as observable behavioral outcomes (RQ2). In answering RQ2, quantitative results are reported to demonstrate how participants modified their own behavior within the task based on the behavior of their AI teammate. Since participants played multiple games with each type of AI teammate, linear mixed-effects modeling was used for quantitative analysis, with random intercepts set for each participant, which controlled for repeated samples while being robust enough to handle uneven data sets from individual participants. For significant models, a Holm posthoc test was performed to ensure accurate analysis of the repeated measures.
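To make this analysis strategy concrete, a likelihood ratio test compares the maximized log-likelihoods of two nested models fit by maximum likelihood (here, a null model with only a random intercept per participant versus a full model adding teammate type). With a difference of 2 degrees of freedom, the chi-square survival function has the closed form exp(-x/2). The sketch below uses hypothetical log-likelihood values chosen only so the statistic matches the reported χ² = 10.30; in practice the models themselves would be fit with a mixed-effects package such as R's lme4 or statsmodels' MixedLM.

```python
import math

def lrt_pvalue(ll_null, ll_full, delta_df=2):
    """Likelihood ratio test for nested models fit by maximum likelihood.

    The test statistic is 2 * (ll_full - ll_null). For delta_df == 2,
    the chi-square survival function simplifies to exp(-x / 2).
    """
    chi2 = 2.0 * (ll_full - ll_null)
    if delta_df != 2:
        raise NotImplementedError("closed form shown only for 2 df")
    return chi2, math.exp(-chi2 / 2.0)

# Hypothetical log-likelihoods chosen so the statistic matches the
# reported chi-square of 10.30 for the participant-score model.
chi2, p = lrt_pvalue(ll_null=-1000.0, ll_full=-994.85)
print(round(chi2, 2), round(p, 3))  # chi2 ≈ 10.30, p ≈ .006
```

Note that the likelihood ratio test requires both models to be fit by full maximum likelihood (not REML), since REML likelihoods are not comparable across models with different fixed effects.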
A likelihood ratio test against a model including only a random intercept for participant ID revealed that teammate type significantly impacted three different dependent variables (Figure 4): participant score, the number of shots on goal participants had, and the number of times the participant received credit for assisting the AI in scoring a goal. For participant scores, the likelihood ratio test revealed that teammate type significantly improved the linear model (χ² = 10.30, Δdf = 2, p = .006). A Holm posthoc test revealed that participant score only significantly differed between working with the offensive and defensive AI teammates (t(163) = 3.14, SE = 20.8, p = .006). The estimated means showed that participants scored significantly higher in games with the defensive AI teammate (M = 235, SE = 40.7) than with the offensive AI teammate (M = 170, SE = 40.6).

RQ    Finding    Example Quote

RQ1   Participants had to feel comfortable and safe in their environment to perceive the social influence of an AI teammate as positive.    "I mean, at the end of the day, I know I have the most influence over the game itself, because I can obviously turn off the console." -p29, Male, 18, Caucasian

      Participants positively perceived the social influence from AI teammates when they felt humans are better at adaptation than AI technology.    "Machines seem to tend to do the exact same task over and over again. They're programmed to do one thing. I feel like a human could adapt to changes in their environment." -p22, Female, 18, Caucasian

      Participants positively perceived the social influence from AI teammates when the AI teammate was better than them at the task.    "I did feel like I was the one that needed to adapt just because I had never played the game before." -p32, Female, 18, Caucasian

      First-hand experience was required for participants to form a perception about the social influence of their AI teammate.

      Social influence perceived from an AI teammate perceived as disruptive led to participants reducing their behavioral involvement in the task.    "One time I tried to hit the ball and then the AI came and knocked me out of the way so that was the first time I was like, I don't really need to do anything anymore." -p27, Female, 18, Caucasian

Table 4. Summary of qualitative findings with example quotes.

For the number of shots participants took per game, the likelihood ratio test revealed that teammate type had a significant effect (χ² = 32.77, Δdf = 2, p < .001). A Holm posthoc test revealed that participant shot count significantly differed between games with the defensive and balanced AI teammates (t(163) = 3.47, SE = 0.31, p = .001) and games with the defensive and offensive AI teammates (t(163) = 5.87, SE = 0.270, p < .001). Estimated means show that participants took significantly fewer shots in games with the balanced (M = 2.21, SE = 0.57) and offensive (M = 1.71, SE = 0.54) AI teammates compared to the defensive AI teammate (M = 3.29, SE = 0.54).
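The Holm posthoc correction applied to these pairwise comparisons is a step-down procedure: sort the raw p-values, multiply the k-th smallest (1-indexed) by (m - k + 1), enforce monotonicity, and cap at 1. A minimal sketch, using hypothetical p-values (not the study's) for three pairwise playstyle comparisons:

```python
def holm_adjust(pvals):
    """Holm step-down adjustment for a family of m p-values.

    The k-th smallest raw p-value (1-indexed) is multiplied by
    (m - k + 1); a running maximum enforces monotonicity, and
    adjusted values are capped at 1.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):  # rank is 0-indexed
        running_max = max(running_max, (m - rank) * pvals[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

# Hypothetical raw p-values for three pairwise comparisons.
print(holm_adjust([0.001, 0.04, 0.03]))
```

Compared to a plain Bonferroni correction, Holm controls the familywise error rate at the same level while rejecting at least as many hypotheses, which is why it is a common default for posthoc comparisons like these.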
Regarding the number of assists participants had per game, the likelihood ratio test revealed that teammate type had a significant effect (χ² = 7.57, Δdf = 2, p = .023). A Holm posthoc test revealed that assist counts differed significantly by teammate type. Coupled with the qualitative results that illustrate the complementary social influence process in human-AI teams, the above three significant findings (summarized in Table 5) can be used to triangulate the effect that said AI teammate social influence process can have on human teammates who are open to said social influence. Results indicate that the roles and behaviors of humans shift as the normative social influence of AI teammates changes. For instance, participants performed significantly more assists when working with their offensive AI teammate. While the inclusion of an offensive AI teammate does make it easier to obtain an assist, the corresponding decrease in participant score and shots taken shows that participants strategized differently based on the AI teammate they worked with. In turn, whether the AI teammate applied a more offensive or defensive normative influence, participants adapted into either more defensive and supportive or more offensive roles, respectively. As such, these results signal that normative social influence can lead human teammates open to said influence to adapt and change their own strategy and role, which is further illustrated by the qualitative results of this work.

Ensuring Social Influence Creates Positive Change in Human Behavior (RQ2)
Within the fields of CSCW and teaming, designing AI can be somewhat complex, as AI teammates have to benefit the human teammates they work with, who are certain to have different wants and needs from team to team [114]. While the quantitative results of this work demonstrate that humans can adapt into complementary roles to AI teammates through the three social influence stages detailed in the qualitative results section, these results do not speak to whether or not said social influence actually benefits a human teammate. Unfortunately, the complementary roles observed, while potentially beneficial to team performance, may not be ideal for individual humans, as they may not learn from or enjoy these roles. From the perspective of human-centered computing, this social influence would thus have the potential to not be human-centered in that it does not benefit human collaborators [111]. Moreover, the results of this study demonstrate that humans may also feel obligated to respond to this social influence due to technological and performative justifications, meaning they may feel they have to adapt in a way that is not optimal for them but rather complementary to their AI teammates, as evidenced by participants adapting into more offensive roles when working with a defensive teammate. Rather, for the social influence of AI teammates to be human-centered, it needs to enable humans to act in a way that is both beneficial and comfortable for them [6]. Thus, social influence should be used to encourage humans to adapt in whatever way they think is best for themselves and their team instead of forcing humans into accommodating AI teammates. Therefore, a challenge exists for researchers in understanding how human-AI teams and their members can be enhanced by AI teammates' social influence.
In tackling this challenge, it is first important to note that this challenge is not entirely unique to human-AI teaming, as social influence can be both negative and positive in all-human teaming [41,61]. Human teammates can often use social influence to benefit other teammates, with an example being how diverse teammates can have social influence that reduces bias and helps overall team decision-making [71]. On the other hand, teammates can negatively use social influence for personal gain, exemplified by the common occurrence of individuals driving personal and potentially selfish goals through social influence [63]. Shaping the quality of a team's social influence is often an explicit consideration for teammates and leaders to ensure that social influence is more often used for positive outcomes [67]. However, this challenge is made unique by two key differences in human-AI teaming that this study finds important to emphasize based on its results: (1) AI teammates and their roles are designed and manipulated, which means their social influence can also be designed and manipulated; and (2) humans use the technological limitations of AI as a strong justification for letting influence impact them, regardless of whether or not it is in their own personal best interests.
For (1), unlike humans, AI teammates can be fully designed before being implemented, meaning they should also be designed and verified to have positive social influence. Therefore, research efforts, such as this one, can continue to provide actionable ways to create AI teammate social influence that is beneficial. These potential recommendations may also benefit from an understanding of how humans ensure the social influence they have is positive, which is often the result of explicit training in understanding the needs of individual teammates [34]. While these methods may not fully translate to AI teammates, they can shed critical light on their design. For instance, quantitative results demonstrate that the role of AI teammates can lead humans to adapt into complementary roles when they are open to social influence. Thus, AI teammate behaviors and roles could be designed to encourage humans to adapt into roles they are skilled at and enjoy, leading to greater team satisfaction and motivation.
For (2), the results of this study demonstrate that AI teammates have a naturally strong level of social influence simply because they have technological limitations. Unfortunately, the presence and acknowledgment of these limitations may mean that humans are less concerned with how said influence impacts them and more concerned with who should bear the burden of adaptation. This is highly concerning, as it may lead to humans feeling as if they themselves have no social influence in their human-AI teams, which can lead to power imbalances and demotivation [43]. Thus, methods need to be put in place to ensure that humans do not forfeit their own social influence simply because AI systems possess limitations. These methods might include showing humans exactly how their own actions can change and impact their AI teammate, thus giving them a demonstration of their own social influence. Moreover, this demonstration of social influence would also help humans feel in control, which can further the impact of positive AI teammate social influence as per the results of this study.
As highlighted, the above two differences that separate human-AI teaming from all-human teaming also provide a trajectory for future research on and design of AI teammates' social influence. As an important note, while the above two considerations should exist in tandem with each other, it is also important that they exist in tandem with the stages required for social influence to affect human teammates. In other words, the social influence of AI teammates should be designed to be not only positive but also impactful, both of which can be informed by the results of this work. Thus, the above considerations derived from the results of this study, along with the explicit results of this study, need to be considered to make design recommendations for AI teammates.

Design Recommendations
Utilizing the critical interpretation provided by the previous discussion section along with the results outlined in this study, three specific design recommendations are put forth to ensure that AI teammates and their social influence are a positive benefit to the field of CSCW. The goal of these design recommendations is to ensure that AI teammates are able to have social influence and that said social influence benefits human teammates. These design recommendations are positioned as foundational recommendations for designing the social influence of AI teammates, but they should not be the last. Specifically, further research efforts should work to understand and increase the susceptibility that broader populations have toward AI teammate social influence, in turn bolstering the impact of these design recommendations.
6.2.1 Humans Should be Taught to Override AI Teammates. Based on this study's findings, it is recommended that methods of AI teammate override and control always be implemented alongside AI teammates to provide humans with a sense of control. While past research has already stated that humans need control in human-AI interaction [3], this design recommendation specifically requires that humans receive granular control over the actions of their AI teammates. Importantly, this design recommendation does not suggest that humans manually approve each individual action before an AI teammate takes it; rather, the results of this study suggest that humans should have a limited veto mechanism that provides a sense of control. As an example, human teammates could be provided a dashboard that lists the actions an AI has decided to take, with an available time gap between when each decision is made and when it is actually enacted, during which humans can veto the decision if they feel it is necessary. This dashboard and time gap may need to look different across contexts, as some contexts are time sensitive and control will need to take different forms. Moreover, given that participant p09 pointed out examples of control mechanisms that they have experience with, such as steering wheels, it is not enough for this dashboard to exist; humans should also be trained to use it.
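A minimal sketch of the veto mechanism described above could queue each AI decision and only enact it once a veto window elapses without a human veto. The class and method names below are hypothetical illustrations for this recommendation, not part of any existing system, and the clock is injectable so the window can be simulated.

```python
import time

class VetoDashboard:
    """Sketch of a veto dashboard: each AI teammate decision is queued and
    only becomes enactable after a veto window passes without a human veto."""

    def __init__(self, veto_window_s=5.0, clock=time.monotonic):
        self.veto_window_s = veto_window_s
        self.clock = clock          # injectable clock for testing/simulation
        self._pending = {}          # action_id -> (description, decided_at)
        self._next_id = 0

    def propose(self, description):
        """AI teammate announces a decision; returns an id the human can veto."""
        action_id = self._next_id
        self._next_id += 1
        self._pending[action_id] = (description, self.clock())
        return action_id

    def veto(self, action_id):
        """Human removes a pending decision before it is enacted."""
        return self._pending.pop(action_id, None) is not None

    def enactable(self):
        """Pop and return decisions whose veto window has elapsed."""
        now = self.clock()
        ready = [aid for aid, (_, t) in self._pending.items()
                 if now - t >= self.veto_window_s]
        return [self._pending.pop(aid)[0] for aid in ready]

# Usage with a simulated clock:
t = [0.0]
dash = VetoDashboard(veto_window_s=5.0, clock=lambda: t[0])
dash.propose("rotate to defense")
shot = dash.propose("take the shot")
dash.veto(shot)              # the human overrides one decision
t[0] = 6.0                   # the veto window elapses
print(dash.enactable())      # -> ['rotate to defense']
```

The design choice here mirrors the recommendation: the human never approves actions one by one, but always has a bounded window in which a veto is possible, preserving a sense of control without stalling the AI teammate.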
This design recommendation has increasing importance when considering the goal of raising the autonomy of AI teammates, which would in turn provide AI teammates with a greater level of control over their own actions and behaviors [70,84]. Thus, the results of this study demonstrate that ensuring humans accept increases in autonomy would ultimately necessitate the implementation of mechanisms that provide a perception of human control. Therefore, while the existence of override mechanisms may momentarily inhibit AI performance, their existence would increase long-term team performance by facilitating the acceptance of AI teammates' social influence. Moreover, the usage of these mechanisms by human teammates would ideally decrease over time as humans gain a greater level of experience with their AI teammates, but the comfort provided by their continued existence could contribute to that decrease.

6.2.2 Before "Hiring" an AI Teammate, Humans Should Shadow a Potential AI Teammate Working in Another Team. One of the most common reasons participants were able to adapt to AI teammates was that they observed and experienced AI teammate behavior, as illustrated by p16's quote that "it's just observation." Other research domains suggest the use of trial periods to encourage technology usage [72], but this would be more difficult for AI teammates, as their incorporation would change human roles, which would be too substantial of a change for a short-term trial period. Thus, the solution to this challenge would be to allow humans to shadow or observe potential AI teammates operating inside another team before they have to make a decision on whether or not to integrate and adapt to them. This opportunity would provide humans with the critical experience that a multitude of participants identified as being necessary. Unfortunately, this may also be difficult, as one team has to be the first team that everyone else shadows. Therefore, this design recommendation also posits that demonstrations of human-AI teams should be made with the explicit purpose of observation.
From the perspective of teaming, this concept of AI teammate shadowing could be viewed as an adaptation of the interview process for AI teammates. Interviewing is a critical and often social component of modern teaming, as teams need to get a feel for potential teammates before undergoing the process of integrating them into their team [62]. Generally, the methods used to interview human teammates would not be ideal for evaluating AI teammates, as interviews often center around hypothetical behavioral discussions that assess personal attributes such as culture and personality [74,94]. Unfortunately, unlike humans, AI teammates are programmed to do a specific job and not to participate in theoretical discussions tangential to their actual job performance. Thus, understanding their ability, performance, and behaviors within another human-AI team would allow humans to effectively "interview" the AI teammate and understand how it would fit within their team.

6.2.3 Team Norm Disruption Should be a Consideration of AI Teammate Actions. Disruptions caused by AI teammates' social influence were a key factor when humans were determining whether AI teammates' social influence was beneficial or negative, which means AI teammates should be designed to not disrupt existing team processes. However, this may not always be possible to guarantee when designing AI teammates, as team norms and processes can often be highly unique and personal to individual teams [73,99]. Thus, in addition to making an effort to design AI to not be disruptive, AI teammates should also elicit feedback on whether or not they are disrupting a team's norms and processes. As an example, an AI teammate might access a shared data resource in a way that prevents others from using it right before a daily team meeting, which would prevent the human teammates from getting the information they need from said resource. While this disruption may seem mundane, it may cause humans to reject the potential help or social influence an AI teammate is providing. Thus, if this happens, the humans should be able to provide feedback that the action was correct for the task but not for the team, leading the AI teammate to learn that adjusting that action (potentially by changing when they perform it) would be ideal for their team.
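The feedback loop described above, where an action is kept but retimed after "right for the task, wrong for the team" feedback, could be sketched as follows. All class and method names are hypothetical, and real systems would learn far richer schedules than this hour-shifting toy.

```python
class ConsiderateScheduler:
    """Sketch of an AI teammate that accepts team-level feedback and moves a
    recurring action out of hours the team has flagged as disruptive, rather
    than abandoning the action itself."""

    def __init__(self):
        self.schedule = {}          # action name -> hour of day (0-23)
        self.blocked_hours = set()  # hours the team reported as disruptive

    def plan(self, action, hour):
        self.schedule[action] = hour

    def team_feedback(self, action):
        """The team reports the action disrupts them at its current hour;
        keep the action but shift it to the next unblocked hour."""
        hour = self.schedule[action]
        self.blocked_hours.add(hour)
        new_hour = hour
        while new_hour in self.blocked_hours:
            new_hour = (new_hour + 1) % 24
        self.schedule[action] = new_hour

# Usage: the refresh collides with a 9:00 daily meeting, so the team objects.
ai = ConsiderateScheduler()
ai.plan("refresh shared dataset", 9)
ai.team_feedback("refresh shared dataset")
print(ai.schedule["refresh shared dataset"])  # -> 10
```

The key property is that feedback distinguishes the task from its timing: the action survives, only its disruptive scheduling changes.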
Additionally, given that participants predominantly wanted to adapt to their AI teammates, the consideration of these disruptions may be a means of creating AI teammate adaptation that humans perceive as beneficial. Importantly, group dynamics and processes have been noted as an important consideration for AI teammates [89]. However, the above design recommendation is specifically targeted not toward a general consideration of group dynamics and teammates but rather toward understanding the disruption of existing team processes, in order to reduce the frustration that is bound to accompany the introduction of AI teammates.

Contextualizing AI Acceptance and Ease-of-Adaptation for AI Teammates (RQ1)
The concepts of social influence and technology acceptance are not foreign to each other, as past work both in and out of CSCW has shown that social influence from external actors (such as coworkers or friends) can directly impact technology acceptance [24,59]. However, the findings of this work bring to light that AI teammates have an internal social influence that may itself impact their acceptance, and this influence needs to be critically considered to ensure that AI teammates become an accepted technology in CSCW domains. Consider these findings in relation to the Technology Acceptance Model (TAM), a canonical model that explains the acceptance of new technologies through two core components: the perceived utility of the technology and the ease-of-use of the technology [25]. In regards to perceived utility, this study demonstrates both its relevance to AI teammates and the importance of experience in facilitating perceived utility, which is in line with current understandings of technology acceptance [97,98]. However, the concept of ease-of-use becomes more complicated to pin down when discussing AI teammates, which are distinctly different from a technological tool that requires manual use by a human. Fortunately, the TAM has been updated in the past to provide more robust definitions of ease-of-use to increase its applicability to novel technologies [48,106], and the same may need to be done for AI teammates. Past research has identified that the concept of ease-of-adaptation is critical to the acceptance of workplace technologies [11], and the same appears true for AI teammates.
The findings of this study provide four critical factors that create an ease-of-adaptation in AI teammates and that should be used to extend the concept of ease-of-use in human-AI teams: (1) a sense of control; (2) a technological justification; (3) a teaming justification; and (4) experiential knowledge. This section elaborates on why these four factors need to become considerations of technology acceptance for AI teammates. For factor (1), control is a known component of technology acceptance [105] and has been critical in human-automation teams [68], but the control of an AI teammate is more complicated than that of other technologies. AI teammates are required to have a level of autonomy and independence to even be considered teammates as opposed to tools [70], which makes preserving human control more difficult. Thus, the potential for human teammates to lose a sense of control and reject an AI teammate is apparent when looking at the trajectory of AI development. Therefore, whatever control humans do have should be heavily emphasized, since it may not be initially apparent to them in human-AI teams. Potential methodologies for enabling this sense of control are further detailed in the design recommendations provided by this article.
Factor (2), technological limitation justification, and factor (3), teaming justification, on the other hand, provide an extremely fortunate and interesting dichotomy in human-AI teaming. As AI technology advances in the coming years, its capabilities and skills will similarly increase [37,64], and in turn, its limitations will shrink [22,38]. Thus, as human-AI teaming progresses, technological limitation justifications will become somewhat less relevant while teaming justifications become more relevant. While these factors are somewhat similar to perceived utility in that both of them refer to the benefit provided by the AI teammate [25], these factors uniquely enable adaptation. This is fortunate in that it allows adaptation to be justified at both the early and later stages of human-AI teaming, but the transition between these two justifications still needs to be designed. Specifically, ensuring that technological limitations can be overcome by human capability will be critical to the early stages of ease-of-adaptation in human-AI teaming, while ensuring that AI teammates provide a unique contribution that humans cannot will be more important to the later stages of ease-of-adaptation in human-AI teaming.
Finally, factor (4) clearly shows that first-hand experience is of utmost importance when humans make determinations of AI teammates' skills, competency, and motives. While research has heavily documented the importance of knowledge in creating technology acceptance [96,104] and enabling teamwork [7], this study emphasizes how important it is for that knowledge to come from first-hand experience when it involves AI teammates. Unfortunately, potential early adopters of AI teammates would find it difficult to garner this hands-on experience, so procedures need to be put in place that enable humans to watch and observe AI teammates without having to first adopt them into their own team, as that adoption process would require large degrees of adaptation of entire teaming processes. A methodology for providing this hands-on experience is elaborated on in this article's design recommendations, with the goal of creating a "hiring" process for AI teammates.
As a final note, the above additions to ease-of-use and ease-of-adaptation that need to be included for AI teammates should not replace but add to other considerations of ease-of-use. For instance, information transparency [2] and good user interface design [44] would still be pertinent, as they would help humans adapt to AI teammates. In particular, while the participants of this study adapted into complementary roles that do not use AI teammates in the literal sense, they could still use various pieces of information from AI teammates to adapt and perform better. For instance, transparency, a common benefit to ease-of-use [18], would still benefit ease-of-adaptation, as said transparency would aid informational, behavioral, and role-oriented adaptations, leading to performative and acceptance outcomes. Thus, moving forward, the consideration of this work alongside past work will ensure that AI teammates have good ease-of-use from a technology perspective while also having good ease-of-adaptation from a teaming perspective.

Limitations and Future Work
While the findings of this article are critical to understanding humans and human-AI teams, there are still limitations within this study that provide important avenues for future research to investigate. The core limitations of this study center around (1) the observation of dyad teams in a singular context, (2) the age range present in the sample interviewed, and (3) the lack of inclusion of individual differences. For (1), dyad teams are not wholly representative of the modern teaming landscape or of potential human-AI teams, which will often incorporate multiple human or multiple AI teammates. However, this study views this limitation as a necessary one, as the perception and effects of AI teammates' social influence must first be understood and then expanded upon to further consider other potential social influences in human-AI teams. Additionally, the gaming context presented a naturally low-risk and low-impact scenario, which likely helped participants accept social influence. However, social influence exists in other teaming contexts, and this work needs to be directly extended into additional contexts, such as the workplace, to understand how AI teammate social influence changes due to contextual considerations. For limitation (2), a younger population may provide uniquely different opinions on the acceptance of AI teammates' social influence than older populations, which generally have less acceptance of newer technologies. This future workforce is likely to experience the social influence of AI technology in their jobs, with the potential to experience the initial integration of AI teammates. For this reason, the perspectives and actions of this generation should not be discounted, as they provide a highly relevant opinion on modern teaming. However, future research should more critically examine the considerations that make individuals more susceptible to AI teammate social influence. Lastly, while age was a consideration of this study, openness to new technology is not strictly defined by age. In turn, future work should incorporate or develop measures for identifying individuals' openness to new technologies or even to social influence from AI teammates. This will allow future work to more explicitly identify the specific personal characteristics that might make humans susceptible to the social influence of AI teammates.

CONCLUSION
As CSCW research domains continue to march towards the integration of human-AI teams into the real world, a fundamental understanding of how humans are going to perceive and react to AI teammates is needed. Importantly, the bidirectional and collaborative nature of teaming indicates that AI teammates that go beyond being simplistic tools will ultimately change existing teaming landscapes, including the thoughts and behaviors of human teammates. While existing work has explored the impact AI teammates can have on both humans and human-AI teams through social influence, this study uniquely presents an understanding of the actual social influence process that exists between AI teammates and human teammates. The results of this study demonstrate that individuals receptive to AI teammate social influence, such as young individuals, perceive a social influence process from AI teammates with three distinct stages. This process, in turn, influenced participants to perform behaviors complementary to the AI teammate they were working with, as evidenced by the quantitative results. Ultimately, the results of this work portray what social influence can look like in human-AI teams while also detailing the stages of teamwork that facilitate said influence. Thus, as the social influence of AI teammates continues to grow in both human-AI teams and society, the results, design recommendations, and understanding created by this study will help ensure that said growth is accepted and facilitated by human teammates in human-AI teams.

3.4 Procedure (Represented in Figure 3)
3.4.1 Pre-Game and Training. Before playing Rocket League, participants were provided with informed consent, which required agreement to participate in the study. Participants were provided

4.1 When Participants Were Receptive, Social Influence from AI Teammates Creates Rapid Impacts Through Three Critical Stages (RQ2)

Fig. 4. Graphs displaying the significant effects of teammate type on Participants' (a) Score, (b) Shots Taken, and (c) Assists Performed. Error bars denote standard error.

Table 2. Participant List & Demographics

I can read the situation of what they're doing. But then they can't really understand what I'm doing and how that would affect what they should do. But I can understand what they're doing. I can kind of work around it better than they can.

Participants mentioned how they carried out their actions based on what they had observed the AI doing, as the following two quotes illustrate:

And then once you see what they do, it's easier to figure out what you should be doing. -p17, Female, 18, Caucasian

It wasn't hard. It was just observation. It's gonna help us, I should just do it.

It ran into me a couple of times, but it wasn't anything irritating like the first one. I was just too frustrated to actually properly adjust with the first one. -p04, Female, 19, Black, Asian, Caucasian, Pacific Islander

The AI was just whacking the ball out of my possession, which I didn't really care about because I was going in the wrong direction. But I think my feelings didn't change towards the AI. My actions didn't change either. -p26, Female, 18, Caucasian

Table 4. Summarized overview of qualitative results

Table 5. Summarized overview of quantitative results