Extrovert Increases Consensus? Exploring the Effects of Conversational Agent Personality for Group Decision Support

Conversational agent research has shifted from focusing solely on technical capabilities to emphasizing agents' social and conversational abilities. Previous research indicates that an agent's personality, as an essential characteristic of its social and conversational behavior, has a significant impact on user experience. However, the role of agent personality when an agent acts as a group decision facilitator has not been examined. To fill this research gap, we conducted a Wizard-of-Oz experiment with 40 participants to investigate differences in participants' decisions after discussing with agents of two distinct personalities (i.e., introverted and extroverted). Our data showed that the extroverted agent was more effective at facilitating group decision consensus, but participants perceived the introverted agent as more useful.

Figure 1: (A) The group members go through all discussion topics under the agent's guidance and ask for information if necessary. (B) The human wizard behind the agent monitors the discussion process and is always available to provide appropriate advice that helps the group members explore more options and reach an agreement. (C) To speed up the response process, the human wizard can only select responses from a corpus. (D) The human wizard uses a text-to-speech toolkit to convert the reply to synthetic speech and then transmits it to the online meeting.

INTRODUCTION
Conversational agents are becoming increasingly popular for interacting with humans. In recent years, researchers have concentrated on creating human-like agents by imbuing them with personality. Advances in artificial intelligence have sparked new excitement for designing agents with diverse personalities in many fields, such as healthcare [28], customer service [21], virtual shopping assistance [9], immigration [8], and hiring [2]. Brinda Mehra [23] concluded that the personality of an agent, particularly one that is engaging, witty, and funny, positively impacts users, influencing their perception and usage of the agent. Understanding the impact of agent personality in specific scenarios can help practitioners specify design directions.
However, the impact of an agent's personality may vary across fields. For example, Li et al. [22] developed two virtual interviewers with distinct personalities and found users are more willing to confide in and listen to a virtual interviewer with a serious and assertive personality. But M. Dibitonto et al. [12] found that giving a campus agent a sensitive personality may motivate some students to apologize for their rude interactions. It is therefore necessary to examine unexplored effects of agent personality on users, such as convincing humans to change their decisions. The key to designing an AI personality is to answer the question of what impact a given personality will have in a specific scenario. After reviewing prior work on agent personality [2,12,21,33] and AI-enabled collaborative systems [13,32,36], we found that the existing literature does not answer the following intriguing and essential question: does the personality of an agent affect the group decision-making process?
To address this research gap, we conducted a Wizard-of-Oz study [10] with agents embodying distinct personalities based on the Big Five personality model [15,16]. Following the Big Five model, we designed two personality types, introverted and extroverted, as these are among the most examined personality traits in the study of technology adoption and usage [21]. We conducted a between-subjects experiment with 40 participants, in which half of the groups experienced the introverted agent and the other half the extroverted agent. We concluded our study by quantitatively analyzing participants' performance and the subjective perceptions collected from the experiment.
The following research questions guided our study:
• RQ1: How do different agent personalities (introvert vs. extrovert) impact the group decision process?
• RQ2: How do different agent personalities (introvert vs. extrovert) influence participants' subjective perceptions?

RELATED WORK

Conversational Agent Personality
Researchers have investigated how a conversational agent's personality can influence its behavior in various ways. For example, many enterprises use agents to communicate with customers, and the personality of these agents has a significant impact on customer satisfaction [11]. Andrea et al. [9] created two agents with different personalities as virtual shopping assistants and explored their impact on customer satisfaction, aiming to understand how to align agent personality with human decision-making personality. In the job interview scenario, virtual agents with different personalities and genders also affect users [2]. Chen et al. [8] created a personality-driven conversational agent that supports the social integration of immigrants by taking the agent's personality [15,16] as the design direction.
Most designs of agent personality are based on the Big Five personality model. Ahmad et al. systematically developed a framework for instilling personality cues in conversational agents and for organizing the range of potential expression variation within the Big Five model, supporting other researchers in designing personality-specific AI [1]. Lannoy and Justin provided linguistic cues for setting up introverted and extroverted agents while pointing out their impact on customer satisfaction and emotional connection [21]. This provides the theoretical basis for the introversion- and extroversion-based agents in this paper.

Group Decision Support
Several previous studies have investigated group facilitators in meeting-style decision-making processes and group decision support systems (GDSS) that take the place of group facilitators. Research by Bostrom et al. [5,24] has demonstrated that group facilitators can improve group decision-making efficiency. Niederman et al. [26] surveyed 238 group moderators on how their experience and training related to their meeting facilitation. These studies have clarified that the role of the group facilitator is to help the group achieve its decision-making goals.
Many decision support systems employ anthropomorphic agents as group facilitators. EasyMeeting [7], an intelligent meeting room system built on earlier computing systems, provides services and information to meet participants' needs during meetings. With the development of computing and communication systems, interactive and integrated smart conference rooms gradually matured. Weibel et al. [34] introduced SMaRT, a smart conference room that requires no explicit human-computer interaction; its conference support services allow users to focus more on their decision-making process. Farnham et al. [14] customized a chat system to facilitate group decision-making, showing that group discussions using the system can lead to higher-quality decisions. The research above provides sufficient evidence for the effectiveness and acceptance of anthropomorphic agents as meeting hosts, and it motivates further technology-driven group decision support designs. However, no research has explored the personality traits of such anthropomorphic agents.

RESEARCH METHOD
Following the experimental design in [32], we designed two agents and conducted a Wizard-of-Oz experiment [10] to investigate how different agent personalities (introvert vs. extrovert) impact discussion processes and participants' perceptions in a decision-making group. In the following subsections, we first describe the experiment task and the design of the agent. We then present the facilitation protocol of the human wizard. Lastly, we lay out the details of the experiment design.

Experiment Task
We chose to simulate a travel discussion task. The experimental task design was inspired by previous research [17,20]. Many studies of decision-making discussion use the classic Lunar Survival Scenario task [17], which sets the scene on the moon, where participants must decide how to prioritize items that will help save their lives. In this study, we adapted the experimental scenario and chose a topic closer to everyday life: the travel task. Two participants were tasked with jointly deciding on a travel destination, mode of transportation, place of stay, and carry-on items for a 5-day trip. The AI agent acted as a group member and helped facilitate the decision-making process.
Although there were only two human participants in each discussion group, we still refer to it as a "group" rather than a "dyadic conversation" mediated by an agent. First, we follow the experimental design of Shamekhi et al. [32], who designed an agent that forms a discussion group with two human members and define it as a "group facilitation agent"; we used the same experimental setting. Second, we intended the agent to interact with the humans as an equal rather than as an assistant, meaning there are effectively three participants in the experiment. Our experiment task is a group decision-making process, in which we focus on how the three members interact and reach a consensus rather than on one-to-one conversations.

Conversational Agent Design
Following [25,28], we designed two agents to facilitate group decision-making among Chinese native speakers. The introverted agent (I-Agent) is designed to get straight to the point, paying little attention to pleasantries or emotions expressed by the users. The extroverted agent (E-Agent) is designed with a sociable, talkative, enthusiastic, and curious personality. Based on the research conducted by Asher [3], we believe that the personality traits of a bot are crucial in defining the overall user experience. Therefore, to improve the conversational experience, we developed the two personalities by adapting specific tones and using personalized corpora. In line with a study on how modality in Chinese indicates one's personality [37], we prepared corpora for the two personalities, considering modality (e.g., might, should, and must) and interjections (e.g., the E-Agent often used Chinese interjections to make itself sound more enthusiastic). In addition, we selected a synthesized voice of an enthusiastic female for the E-Agent's speech generation and that of a calm female for the I-Agent; both were offline voice models available in the text-to-speech toolkit. Table 5 and Table 6 show several example sentences used by the agents in the experiment. The agent took a non-physical form, since its appearance may influence the outcome [30,32]. The agent and participants held the conversation in an online meeting room, and everyone spoke only by audio without switching on their webcams.
Contemporary conversational dialogue systems typically consist of four key components [19]: a) speech recognition for converting speech input into text; b) dialogue understanding, which involves analyzing the user's intent and the context of the conversation; c) response generation, where the system formulates an appropriate reply based on the analyzed input; and d) text-to-speech conversion to render the system's generated text into spoken language.
However, in our experimental design we adopted a Wizard-of-Oz methodology, wherein human controllers substituted for some functionalities originally intended for the automated system (see Figure 1). Consequently, the system bypassed the need for the speech recognition, dialogue understanding, and response generation components, as the human wizard directly managed these. For instance, when a user inquired about vacationing in Beijing, the human wizard both comprehended the inquiry and generated the appropriate response, which was then converted into speech using a text-to-speech toolkit. This approach, while unconventional, allowed us to investigate user-agent interactions without the influence of automated system constraints, thereby shedding light on the intricacies of human interactions with conversational agents, as demonstrated in previous research [4,6,29,31].
The design of the agents is divided into two parts, referencing CASSY [31]. One part determines when the agent should respond to humans. In our design, the agent responded to group members in two situations: 1) group members directly asked the agent; 2) the agent actively guided group members through unexplored choices while monitoring the ongoing group discussion. The other part defines the rules and corpora the agent uses to generate replies. We also followed the restriction protocol proposed in [29] to generate a limited range of responses, thus allowing the agent to exhibit a realistic level of intelligence. For instance, if group members asked questions beyond the protocol, the human wizard would reply, "I don't understand," avoiding unpredictable distractions from the discussion agenda.
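The restriction protocol above can be sketched as a fixed keyword-to-reply lookup with a constant fallback for out-of-protocol questions. Note that the keywords and replies below are illustrative examples we invented, not the study's actual corpus, and in the experiment the matching was performed by the human wizard rather than by code.

```python
# Illustrative sketch of the restricted response protocol: replies may only
# come from a fixed corpus, and out-of-protocol questions always receive the
# same fallback. Keywords and replies are hypothetical examples.
RESPONSE_CORPUS = {
    "destination": "Why not travel in Beijing?",
    "transportation": "Have you considered taking the high-speed train?",
    "agree": "I agree. That sounds good.",
}
FALLBACK = "I don't understand."

def select_reply(utterance: str) -> str:
    """Return the first corpus reply whose keyword appears in the utterance,
    or the fallback if nothing matches."""
    text = utterance.lower()
    for keyword, reply in RESPONSE_CORPUS.items():
        if keyword in text:
            return reply
    return FALLBACK
```

This also illustrates why questions beyond the protocol could never distract from the agenda: they all collapse to the same noncommittal reply.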

Facilitation Functionalities
The agent's goal is to facilitate group decision-making in two respects: 1) guiding group members to fully explore all possible choices and 2) helping them reach an agreement on topics that require consensus. We conformed to these two objectives when designing the agent and conducting the experiments. In general, the agent was designed with the following four functions:
• (a) Social Interaction: The agent actively responded to the participants and answered their questions when asked. The agent initiated a greeting at the beginning of each experiment and actively responded during the discussion (e.g., "I agree" and "That sounds good" in Chinese).
• (b) Choice Exploration: The agent also recommended choices not considered by the participants, helping the group members thoroughly compare the benefits and drawbacks of all choices. To better control our experiments, we prepared a fixed list of choices for each topic; these choices were listed on the questionnaire filled out by participants. If the participants' discussion failed to cover all choices for a specific topic, the agent would guide the participants through the remaining choices. For instance, if the two participants preferred to travel to Shanghai without mentioning another choice, Beijing, the agent would ask, "Why not travel in Beijing?"
• (c) Agreement Facilitation: A complete decision-making session contains topics that require group members to reach an agreement. Specifically, our experimental setting required group members to decide on a travel plan, reaching a consensus on the destination, accommodation, and transportation. When the group members disagreed and failed to compromise, the agent randomly supported one of the participants and provided more reasons.
• (d) Agenda Facilitation: The agent rigorously adhered to the prescribed discussion agenda, leading the group through the designated topics. This kept the discussion focused and free from distractions, aided by effective task and self-introduction, ice-breaking, and time management strategies. The ice-breaking process involved both participants introducing themselves to each other and openly sharing their previous encounters with conversational agents.
With these functions, we devised strict control rules regarding when and how the agent intervened in the discussion. As shown in Figure 3, a flow chart regulates how the human wizard interacts with the participants, thereby better controlling all experiments. Unlike previous work [30,32] that conducted face-to-face experiments, all decision-making sessions were performed on an online meeting platform: according to our pilot study, an online meeting room allowed the human wizard enough time to generate responses and convert them to speech, allowing the agent to be more active during the discussion.
The wizard was not allowed to speak and was only responsible for transmitting the artificial speech generated by the toolkit to the online meeting. Group members heard only the generated speech, so they readily believed that the agent was a real conversational AI similar to Cortana or Siri.
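The intervention rules in this section (answer direct questions, break deadlocks, keep the agenda, surface unexplored choices) can be sketched as a small decision function. The field names and the priority ordering below are our reading of the protocol, not a reproduction of the actual flow chart in Figure 3.

```python
# Hedged sketch of when the wizard-controlled agent intervenes. State
# fields and their priority order are illustrative assumptions based on
# the four facilitation functions, not the study's exact flow chart.
from dataclasses import dataclass, field

@dataclass
class DiscussionState:
    direct_question: bool = False        # a member directly addressed the agent
    unexplored_choices: list = field(default_factory=list)  # options not yet discussed
    deadlock: bool = False               # members disagree and will not compromise
    off_agenda: bool = False             # discussion drifted from the topic list

def next_intervention(state: DiscussionState) -> str:
    if state.direct_question:
        return "answer_question"         # (a) Social Interaction
    if state.deadlock:
        return "support_one_side"        # (c) Agreement Facilitation
    if state.off_agenda:
        return "redirect_to_agenda"      # (d) Agenda Facilitation
    if state.unexplored_choices:
        return "suggest_choice"          # (b) Choice Exploration
    return "stay_silent"
```

For example, with no direct question, no deadlock, and "Beijing" still undiscussed, the sketch yields `suggest_choice`, matching the "Why not travel in Beijing?" prompt described above.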

Experimental Design and Procedure
We conducted between-subjects experiments to investigate how the two agent personalities (extrovert vs. introvert) facilitate group decision-making. Each experiment session was randomly assigned to one of the two conditions. We constructed the experimental scenario as an online meeting, as shown in Figure 2, so that participants would not be distracted by the physical context or the agent's physical attributes. The avatar of the agent is a robot icon. The three group members (two humans and one agent) were asked to communicate only by voice throughout the process, with no cameras on or screens shared.
Task Introduction: Participants were informed of three key aspects of the discussion task: (i) they would be participating in a discussion about a travel plan with both a human and an agent; (ii) the objective of the session was to reach an agreement on the first three topics (destination, accommodation, and transportation), with no time limit, while personal choices were made on the remaining topics; (iii) the response time and content of the agent were determined automatically based on the conversations with the participants. To make the participants believe that the agent was a real intelligent robot, the host claimed that it was generated using AI technology and asked the participants to consider the agent a member of the discussion group. Once the participants had a clear understanding of the experimental task, they were provided with a form (as presented in Table 3) to document their initial decisions. All participants agreed to have their conversations recorded for further analysis. They were not informed about the purpose of the experiment or the personality traits of the agent they were interacting with.
Discussion Facilitation: During the experiment, the agent adhered to the principles outlined in Section 3.3. To facilitate the human wizard's responses, a corpus of pre-prepared replies (some of which are listed in Tables 5 and 6 in our Appendix) was provided. This approach not only expedited the response process for the human wizard but also ensured comparability across all experiments.
Post-Experiment Interview: After the discussion, the agent concluded the whole process and thanked all participants. The experimenter instructed participants to fill out forms to record their new decisions, as shown in Table 3. To answer RQ2, participants were asked to complete a questionnaire (as shown in Table 4). We then conducted brief interviews with participants to elicit more feedback and insights about their experiences. We began by asking about their impression of the agent's personality, and also asked about their attitudes toward the social interactions and their reasons for changing decisions.

Participants
We recruited 40 participants through advertising within the researchers' university and on various social media platforms. These participants comprised a mix of full-time employees (15%) and graduate students (85%). All participants held at least a bachelor's degree and had prior exposure to AI-related systems. This selection criterion aligns with our research objective: since we aimed to investigate future human-AI collaborative work scenarios, we looked for participants who have a solid grasp of current AI systems and are willing to collaborate with AI.
For the experiment, we randomly paired the 40 participants to form 20 two-member groups. Ten groups engaged in discussions with the E-Agent, while the other ten interacted with the I-Agent. Demographic statistics of the participants, including age and gender, are recorded in the table below.

Survey Measures
Our measurements consisted of objective measures of participant decisions and subjective feedback from the questionnaire, answering RQ1 and RQ2, respectively. For RQ1, we recorded the travel task forms that participants filled out before and after the experiment. Decision performance was reflected by the change in participants' individual decisions before and after the experiment (individual shift) and the change in the agreement between the two members (consensus shift). For RQ2, we asked participants to rate their perception of the agent. In addition, we used an open coding technique to analyze the post-experiment interview data.

Decision Performance. The goal of the agent is to facilitate a consensus-building process, so we used consensus and opinion shift as measurements of decision outcomes. Before and after the group discussions, participants independently recorded their decisions for the candidate choices. Inspired by [32], we examined two kinds of decision shift: 1) Consensus shift, for which we used the difference between the intraclass correlation coefficients (ICC) of a pair's choices before and after the discussion, so a higher value indicates more consensus shift. 2) Individual shift, for which we calculated the ICC of each participant's pre- and post-ratings, so a higher individual ICC indicates less individual opinion shift. Because participants experienced the agent in pairs, their ratings may be influenced by their common experience (e.g., a group making a smooth decision versus a difficult one). We used linear mixed models (LMM) with condition as a fixed factor and group as a random factor (i.e., random intercept models) to account for such group effects [27]. 3) Time, which we used as a metric of discussion efficiency. We ran a t-test on the consensus shift and time, and a linear mixed model (group as the random factor) on individual shifts, to compare decision outcomes between the I-Agent and E-Agent groups.
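As a concrete illustration, an ICC can be computed from an items-by-raters rating matrix. The sketch below implements the two-way random-effects, single-measure form ICC(2,1) of Shrout and Fleiss with NumPy; the paper does not state which ICC variant was used, so this choice of form is an assumption.

```python
import numpy as np

def icc_2_1(data: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    `data` is an (n_items x k_raters) matrix, e.g. one participant's pre- and
    post-discussion ratings as two columns (individual shift), or the two
    members' choices as two columns (consensus). The specific ICC variant is
    our assumption; the paper does not specify which one was used.
    """
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

Perfect agreement between the two columns yields an ICC of 1, and disagreement lowers it, which is why a higher individual ICC corresponds to less opinion shift.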
Qualitative Analysis. We collected all the interview transcripts, which are in Chinese; only the quotations used in this paper were translated. In the post-experiment interviews, we asked the participants six questions, and we used these six questions directly as themes for our qualitative analysis.

RESULTS
In the following sections, we examine the differences in decision outcomes between the two conditions (RQ1) and then explore the participants' perceptions through questionnaires and interviews (RQ2).

Decision Performance
As shown in Table 2, the E-Agent group had a larger consensus shift, meaning the E-Agent increased the consensus of the group's decisions, and its effect was significantly stronger than the I-Agent's (p = .011). Moreover, the E-Agent group had larger individual shifts than the I-Agent group (p = .02). The total discussion time in the E-Agent group was shorter, but the difference was not significant (p = .58). We conclude that an agent with an extroverted personality is more likely to help discussion groups reach consensus decisions.

User Perception
As shown in Figure 4, on the usefulness dimension the I-Agent scored significantly higher than the E-Agent (p = .01). For the other dimensions, there were no significant differences between the two groups. These results show that agents with introverted personalities are more likely to be perceived as useful.

Qualitative Findings
The above results suggest that participants subjectively rated the introverted agent higher than the extroverted agent, whereas the quantitative results show that the extroverted agent was more effective in facilitating decision agreement. To further understand the discussion process, we interviewed participants from three perspectives.

Impression of the personality of the agent.
Before the experiment, none of the participants were aware of the study's goal or the agent's character setting. We questioned them about their perceptions of the agent's personality after the experiment. Two of the 40 participants attributed the opposite personality type to their group's agent; these two belonged to the introvert and extrovert experimental groups, respectively. During the interview, P15 (in the E-Agent group) mentioned, "I only see an avatar, so I'm not sure if it's extroverted. I can't read its expressions or movements." P30 (in the I-Agent group) said, "As there is only one agent, it is unclear if it is considered extroverted or introverted. Are there more serious agents?"

Attitudes on social interaction.
The agent is designed with a social interaction facilitation function: it actively makes comments instead of passively answering. Participants had different attitudes towards this behavior. Some felt that this proactive way of expressing opinions helped facilitate the discussion. For example, both P15 and P16 remained silent at the beginning of the session because they did not know each other and felt awkward, and they felt that the agent's intervention helped them escape the awkwardness. They agreed that "At the beginning of the experiment, we didn't know each other and didn't know how to start the discussion, but every time we got embarrassed, the agent was able to give suggestions to guide us deeper into the discussion. With a virtual facilitator leading the discussion, we could make decisions quickly, and the whole process flowed naturally." But some participants did not like the agent's social interaction, whether they were talking with the extroverted or the introverted agent.
P12 said: "I prefer the agent to provide suggestions when we ask it, rather than proactive suggestions." P37 said: "Although the agent's advice has made the discussion more efficient, I always feel like it's pushing us to decide quickly. I do not like that." P18 said: "I do not like a calm and stable personality, and I feel like this agent is trying to pressure me to change my mind."

Reasons for change of decision.
Many participants in both the I-Agent and E-Agent groups enjoyed the agent's voice and were receptive to its perspective. P2 said: "It appears to be calm and steady. I was completely convinced by the advice it provided." P40 said: "Its tone was very cute, and I was glad to listen to its suggestions." But P23 took the opposite attitude, even though he also changed his opinion; he felt he had to change it because he could not get the agent's approval. He said: "My partner and I couldn't agree on a travel destination, but the agent favored her side, so I decided to follow the majority." While the participants rated the introverted agent higher subjectively, the extroverted agent appeared to be more effective in reaching a decision agreement, as highlighted in the quantitative results.
In exploring this contradiction, it is essential to consider the influence of individual perceptions and the dynamics of group decision-making. The qualitative findings suggest that participants' initial expectations and perceptions of the agent's personality might have influenced their subjective ratings. For instance, the participants' lack of awareness of the experimental goal and the agent's character setting before the experiment could have led to biased perceptions. The comments from P15 and P30 in the interviews indicate significant ambiguity in perceiving the agent's personality, suggesting the need for more explicit characterization of the agents during the experiment.
Furthermore, the differing attitudes towards the agents' social interaction facilitation function further highlight the complexity of the participants' interactions with the agents. While some participants appreciated the agent's proactive intervention, which facilitated smoother discussions and decision-making, others expressed discomfort with what they perceived as the agent's attempt to rush the decision-making process. These varied responses suggest that the agents' facilitation style might have influenced participants differently based on their personal preferences and decision-making styles.
To address this contradiction comprehensively, future research should focus on refining the experimental setup to ensure a more precise delineation of the agents' personalities and a more nuanced understanding of participants' expectations and reactions. Moreover, follow-up studies that delve deeper into participants' cognitive processes and emotional responses during decision-making could provide valuable insights into the mechanisms driving the observed differences in the ratings and effectiveness of the two types of agents.

DISCUSSION
Our results are divided into decision performance and subjective user perception. The quantitative analysis of decision performance shows a stronger facilitation effect for the E-Agent (extrovert) on both consensus shift and individual shift. Still, participants rated the I-Agent higher on the usefulness dimension.
For decision performance, the results in Section 4.1 show that an agent with an extroverted personality is more likely to help discussion groups reach consensus decisions. The I-Agent is designed to get straight to the point, paying little attention to small talk or emotions expressed by the users [28]. This may be perceived as more professional, as illustrated by this quote: (P2) "It appears calm and steady. I was completely convinced by the advice it provided." However, whether consensus is reached may depend less on the perception of the facilitator and more on a pleasant atmosphere during the discussion, which the E-Agent appears to create more effectively. P3 commented on the E-Agent: "This agent is very responsive and lively; I can easily engage with it." In contrast, P18 said about the I-Agent: "I do not like a calm and stable personality, and I feel like this agent is trying to pressure me to change my mind." For user perception, participants rated the introverted agent as more useful. We speculate that the agent's tone may have contributed to this outcome: based on the participants' comments, the agent's tone attracted considerable attention and seemed to influence participants' judgments of the agent's usefulness. P33 thought the I-Agent's tone sounded trustworthy, while P40 thought the E-Agent's tone was charming and was glad to listen to its suggestions. This is contrary to the conclusion of the study of Hu et al.
[18], which found that "using different tones in a response does not affect the helpful level." They explained: "This is because helpfulness measures if the responses contain useful and concrete advice. In other words, to achieve a higher helpfulness score requires more background knowledge and extra information." We surmise that an AI-based agent diverts participants' attention away from tone due to AI model limitations, whereas the Wizard-of-Oz approach we adopted avoids this problem, so participants' attention shifts to tone.
Tone design of the agent. The tone an agent uses can impact group discussions, according to user feedback. We found that extroverted agents are better at facilitating travel tasks, even though a calm voice is perceived as competent. These quantitative results highlight the importance of considering the agent's personality when designing conversational agents. We attribute this to the relaxed nature of our task: people may be more interested in interacting with the agent when the task is less stressful, so an extroverted tone design might work better for a laid-back task. For a serious task, such as deciding on health care or policy, previous empirical studies [35] have shown that doctors prefer the system to deliver concise and direct professional advice, want the AI to play an assistant role in medical decision-making scenarios, and are rarely interested in interacting with the system. Designers should therefore consider the scenario's seriousness and the task's goal when designing an agent's character or tone. In serious situations requiring professional guidance, the time spent conversing should be kept to a minimum, and the agent should be given a calm tone. For easy and fun tasks, designers may consider adopting an agent with an enthusiastic tone and interactive functionality, which may improve the user experience.
Functional design of agent. Participants held differing attitudes towards the agent's unsolicited advice (the "Choice Exploration" function). Some expressed positive perceptions of the agent's intervention. For example, P30 said, "The agent consistently provided timely suggestions when we were unable to generate a good idea considering multiple factors." Others viewed the agent's active intervention negatively. P12 said, "I prefer the agent to provide suggestions when we ask it, rather than proactive suggestions." The agent's unsolicited-advice feature may thus have opposite effects on different people, even in relaxing tasks. We therefore intend to investigate in the future which types of individuals this function is best suited for.
Limitation. Our experimental design is limited to three-person groups. In some cases, one participant changed their decision when the agent expressed an opinion favoring the other participant's side, as illustrated in the following quote: (P23) "My partner and I couldn't agree on a travel destination, but the AI favored her side, so I decided to follow the majority." Participants may feel less pressure to defer to the majority in larger groups. In the future, we plan to examine discussions in multi-person groups.
This study is also limited to a single scenario, the Lunar Survival Scenario Task, which served as the sole means of assessing the conversational agent's impact. While this approach was instrumental in addressing our research questions, the findings may carry a degree of context-specificity, and their generalizability should therefore be interpreted with caution. To mitigate this limitation, future research may explore a broader range of scenarios or real-world contexts to enhance the applicability of the findings. This would strengthen the robustness of our conclusions and contribute to a more comprehensive understanding of the conversational agent's potential impact.
The conversational agents used in this study might not have fully conveyed the intended introverted and extroverted personalities, as indicated by some participants' difficulty in distinguishing the agent's personality. To address this limitation, future research could employ more advanced conversational agents or robots capable of expressing personality traits through facial expressions and body language. This would create a more immersive interaction environment and allow a more comprehensive investigation of how these personality traits affect group decision-making processes. Integrating such technological advances into future studies could enhance the authenticity of experimental simulations, deepen our understanding of the role of personality in human-agent interaction, and thus contribute to the development of more effective conversational agents for facilitating collaborative decision-making.

CONCLUSION
After analyzing the quantitative and qualitative data, we observed that the extroverted agent facilitated group decision consensus more effectively, whereas the introverted agent was perceived as more useful by the participants. However, our experimental design was confined to three-person groups, and the results might differ in larger multi-person settings. We plan to continue investigating how agent personality affects larger group dynamics. Specifically, we want to broaden the scope of our research to gain a more thorough understanding of how personality affects collective decision-making, with an emphasis on expressing personality traits through body language and facial expressions. This research direction will provide valuable insights for creating conversational agents that can successfully support decision-making in larger collaborative environments.

ACKNOWLEDGMENTS
This work was supported by the National Natural Science Foundation of China (Grant No. 62172397) and the Youth Innovation Promotion Association CAS (Grant No. 2020113).

Figure 1: A diagram of the discussion procedure. Two participants and the agent discuss a travel plan in an online meeting. (A) The group members go through all discussion topics under the agent's guidance and ask for information if necessary. (B) The human wizard behind the agent keeps an eye on the discussion process and is always available to provide appropriate advice to help the group members explore more options and reach an agreement. (C) To speed up the response process, the human wizard can only select responses from a corpus. (D) The human wizard uses a text-to-speech toolkit to convert the reply to synthetic speech, which is then transmitted to the online meeting.

Figure 3: The flow chart of the Wizard-of-Oz experiment procedure. The human wizard strictly followed the flow chart when interacting with the participants.

4.3.4 Qualitative Analysis of the Contradictory Findings. Upon analyzing the subjective impressions and the participants' attitudes towards the agent's personalities, it became evident that the discrepancy between the subjective ratings and the effectiveness in facilitating decision-making might stem from various underlying factors.

Table 2: Results of the decision outcome measures (LMM: linear mixed model). Columns: Decision outcome, I-Agent Mean (SD), E-Agent Mean (SD), Test, P. The LMM results show that this outcome is not influenced by the group (p = .51, p > .05).
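The decision-outcome analysis above uses a linear mixed model with group as a grouping factor. A minimal sketch of how such a model could be fit with `statsmodels` is shown below; it is not the authors' code, and the data, column names (`group`, `agent`, `score`), and effect sizes are all hypothetical, chosen only to illustrate the random-intercept-per-group structure.

```python
# Hedged sketch: fitting an LMM with a random intercept per discussion
# group and a fixed effect of agent personality (I-Agent vs. E-Agent).
# All data are synthetic; names and values are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for g in range(20):                      # 20 hypothetical discussion groups
    group_effect = rng.normal(0, 0.3)    # random intercept per group
    for agent in ("E", "I"):             # extroverted vs. introverted agent
        for _ in range(3):               # three hypothetical members
            base = 3.5 if agent == "E" else 3.0
            rows.append({"group": g, "agent": agent,
                         "score": base + group_effect + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# Fixed effect of agent personality, random intercept for group membership
result = smf.mixedlm("score ~ agent", df, groups=df["group"]).fit()
p_agent = result.pvalues["agent[T.I]"]   # p-value for the personality effect
print(f"agent effect p = {p_agent:.3f}")
```

Reporting the group variance component alongside the fixed effect, as the table does with its group p-value, indicates whether the outcome depends on which discussion group a participant belonged to.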