From "AI" to Probabilistic Automation: How Does Anthropomorphization of Technical Systems Descriptions Influence Trust?

In this paper we investigate how people's level of trust (as reported through self-assessment) in so-called "AI" (artificial intelligence) is influenced by anthropomorphizing language in system descriptions. Building on prior work, we define four categories of anthropomorphization (1. Properties of a cognizer, 2. Agency, 3. Biological metaphors, and 4. Properties of a communicator). We use a survey-based approach (n=954) to investigate whether participants are likely to trust one of two (fictitious) "AI" systems by randomly assigning people to see either an anthropomorphized or a de-anthropomorphized description of the systems. We find that participants are no more likely to trust anthropomorphized over de-anthropomorphized product descriptions overall. The type of product or system in combination with different anthropomorphic categories appears to exert greater influence on trust than anthropomorphizing language alone, and age is the only demographic factor that significantly correlates with people's preference for anthropomorphized or de-anthropomorphized descriptions. When elaborating on their choices, participants highlight factors such as the lesser of two evils, lower- or higher-stakes contexts, and human favoritism as driving motivations when choosing between product A and product B, irrespective of whether they saw an anthropomorphized or a de-anthropomorphized description of the product. Our results suggest that "anthropomorphism" in "AI" descriptions is an aggregate concept that may influence different groups differently, and they provide nuance to the discussion of whether anthropomorphization leads to higher trust in, and over-reliance on, systems sold as "AI" by the general public.


INTRODUCTION
Anthropomorphism, or the attribution of human characteristics or behavior to inanimate objects, is a common sensemaking practice for people. With the advent of more advanced technical systems, anthropomorphism is often used to describe technical products (e.g., "A.I. Shows Signs of Human Reasoning" [34]), and it appears to be a rising trend in news coverage [6]. This phenomenon - anthropomorphizing¹ technical systems - has been criticized for setting the wrong expectations and causing over-reliance on technology [21,45,46]. Emily Tucker, the Executive Director at the Center on Privacy & Technology at Georgetown Law, wrote in her 2022 Medium post Artifice and Intelligence a declaration of intention to stop using the words "Artificial intelligence", "AI", and "machine learning", for the purpose of exposing and mitigating the harms of digital technologies to individuals and communities, based on the underlying risk that the public will assume that "AI" technologies are more capable than they are [39]. Francis Hunger [21] also argues that "the use of anthropomorphising language is fueling AI hype. [It] is problematic since it covers up the negative consequences of AI use." The argument here is that by using personified language when referring to AI systems, we also implicitly attribute human-like properties to them, which both makes them seem more powerful than they are and obscures their potential negative effects.
Prior studies have investigated how conceptual metaphors influence people's perception of algorithmic decision-making systems more broadly [31], as well as how anthropomorphic cues influence people's trust in robots [9], voice assistants [15], and websites [42]. To our knowledge, no study has yet investigated how anthropomorphic descriptions of products and systems powered by "AI" - which we will refer to as probabilistic automation systems² - influence people's trust in and desire to use such systems. This motivated us to explore the overall research question: "What are the effects on trust of anthropomorphization of probabilistic automation systems?" We are specifically concerned with what we will call "anthropomorphization by description", rather than anthropomorphization by design - meaning we investigate the language used to describe systems, rather than the language (and other attributes) built into the systems themselves. While both types could have negative consequences, anthropomorphization by description is especially relevant in public discourse, where journalists, politicians, and copy-editors carry a significant responsibility for the use and spread of the metaphors and analogies that shape public perception.
We use a survey-based approach (n=954) to investigate whether participants believe themselves to be more likely to trust one of two (fictitious) probabilistic automation systems. Our investigation makes three contributions: First, we provide empirical evidence that people are not more likely to choose anthropomorphized descriptions of products over de-anthropomorphized descriptions. Second, we find that some product types in combination with different categories of anthropomorphizing language appear to have more influence on trust than anthropomorphizing language alone. Finally, we find that age is the only variable that seems to have a dependent association with preferences for anthropomorphized/de-anthropomorphized product descriptions.

Metaphors, anthropomorphism, and technology
Language shapes our interactions with technology. Even short textual descriptions can influence how humans meet and evaluate digital systems [17,26,27,29,31,47]. In the context of probabilistic automation systems, the conceptual metaphor [11,33] or "pitch" of the system's functionality might play an especially compelling role, given the complexity of such systems [31]. Accurately priming the user and adjusting their expectations to the system is difficult, and simply providing performance metrics is not meaningful to the average user, given their lack of familiarity with the inner workings of the technologies that they interact with [26,30]. In the absence of technical understanding, humans develop their own simplified mental models of how a system works - models that are not always consistent with the actual functionalities of the system, and of which inaccurate versions can lead to consequences from mundanely inconvenient to more severe [37].

¹ In this paper, we use the term anthropomorphization when describing the intentional act of 'putting anthropomorphic features into a product' or 'using anthropomorphic words to describe a product'. The creator of the product or the writer of the text is responsible for the anthropomorphization, whereas anthropomorphism denotes the process internal to the perceiver or user when human qualities are attributed to the system [44].

² The denomination "artificial intelligence" is poorly defined, and does not refer to a coherent set of technologies. In general, we find that discussions of technologies called "AI" become more lucid and thus productive when we speak about the automation of specific tasks. In the case of this research, the fictitious systems presented to our participants vary in their task domain, but they are all imagined to be built on statistical analysis of large datasets. Therefore, we will refer to these systems collectively as "probabilistic automation".
Research on human interactions with technological devices shows a clear tendency toward anthropomorphism. For example, humans are capable of engaging socially with machines [24,36,40]. This is especially true of robots and embodied assistants [16,25,49]. The more life-like a probabilistic automation application is in terms of embodiment (the physical form of the system), physical presence, social presence, and appearance, the more persuasive it can become [2,41]. For example, Vollmer et al. showed that robots could even exert peer pressure over children [51]. In their experiment, 7- to 9-year-old children had a tendency to echo the incorrect, but unanimous, responses of a group of robots to a simple visual task [51]. Smart voice assistants likewise lead children to overestimate the intelligence of these devices, trusting them and deferring to them when making decisions [14].

Risks associated with anthropomorphization
With the blight of publicly-available Large Language Models (LLMs) and generative probabilistic automation technology, numerous academic papers have appeared which warn about the risks of overusing anthropomorphic language to describe such technology [1,12,21,43,45,46]. Previous research has raised several categories of (interrelated) risks of anthropomorphization, detailed briefly below.

2.2.1 Misplaced trust and over-reliance. One direct consequence of anthropomorphization is misplaced trust, which in turn can lead to over-reliance on probabilistic automation systems [1,12,13,21,43]. While anthropomorphism may enhance user experience and trust (in fact, much of the literature on anthropomorphism and technology concerns using anthropomorphization to increase trust, e.g., [7,8,28]), it also risks creating a false sense of the system's capabilities.
Such misplaced trust can be particularly problematic in high-stakes scenarios, such as medical diagnosis or financial decision-making, where over-reliance on probabilistic automation can lead to significant consequences.

2.2.2 Spillover effect of cognitive overestimation. When probabilistic automation is perceived as having advanced cognitive properties, users may overestimate its capabilities in areas not directly demonstrated [1,13]. For instance, if a probabilistic automation system is adept at data processing and pattern recognition, users might erroneously assume it is equally proficient in complex decision-making or ethical judgments. This cognitive overestimation can result in the inappropriate application of probabilistic automation advice, potentially leading to harmful outcomes.

2.2.3 Transparency and accountability. When probabilistic automation systems are perceived as autonomous agents, complex questions about accountability arise [4,21,48]. In cases of error or malfunction, determining responsibility can be challenging, especially when users have been led to view these systems as 'intelligent' entities. Some research has shown that people are aware of the dangers of overattributing accountability to technology 'when harm comes to pass' [48], but the dynamics are not well understood.
Though there are many good arguments for not anthropomorphizing probabilistic automation systems and not many good arguments for doing so, there are few scientific explorations of the details of anthropomorphic language and its specific impact. Our goal with this research was to take a first step towards understanding the phenomenon of anthropomorphization better.

METHODOLOGY
We designed our experiment to address the following research questions: (1) Are people more likely to trust products that are described in anthropomorphizing language than products which are not described in anthropomorphizing language? (1a) Are people more likely to trust anthropomorphized products if imagining themselves as a user (personal trust) than to trust them in use for the general population (general trust)?
(2) Are people more likely to trust products when the products are described in different kinds of anthropomorphizing language?
(3) Are different groups of people more likely to trust products that are described in anthropomorphizing language? (We investigated the variables gender, age, socio-economic status, level of education, and level of computer knowledge.)

Defining anthropomorphic language
To investigate the influence of anthropomorphic language, we need to create a working definition of what that language is. In general, anthropomorphization is the assigning of human characteristics to non-human entities. Examining previous literature, we identified four general classes of anthropomorphizing language: (1) Using predicates that portray the machine as a cognizer [1,12,13,23,39,43]. The human characteristic that seems most salient in the context of probabilistic automation is cognition: the ability to perceive, think, reflect, and experience things - often expressed with the word 'intelligent' or 'intelligence'. Algorithms being anthropomorphized with Properties of a cognizer might know, believe, or decide.
(2) Describing the machine as an agent [21,23] of an action. Hunger [21] proposed a category of anthropomorphization she called 'Active verbs', but we specify this slightly to include some degree of intention or independence, since machines can actively process many things without being attributed human capabilities. We therefore called this category Agency. Those being anthropomorphized in this category collect, monitor, or choose.
(3) Using Biological metaphors [21,43] to describe computational concepts. Those being anthropomorphized through biological metaphors might comprise neural nets or have neurons and synapses.
(4) Finally, using verbs of communication [23,46]. Those being anthropomorphized via Properties of a communicator might be asked things by users and tell the user things in return.
These boundaries overlap somewhat: A computer being described as deciding is being cast both in an agentive role and as a cognizer. Similarly, if a machine is said to see something, that is both a biological metaphor and an attribution of cognition, and so on. We also do not expect these categories to fully cover all the ways that we use language to anthropomorphize algorithms. To get a sense of whether they cover a significant amount, however, we selected a text to annotate for anthropomorphizing language. Three of the authors independently annotated this text, and used the annotations as a source of discussion before writing our own product descriptions (to which all authors contributed).
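To make the categories concrete, the following sketch (our illustration, not part of the study materials) tags a snippet of pitch text against seed cue words drawn from the examples above; it also demonstrates the overlap just described, where a single snippet can cue more than one category at once.

```python
# Naive keyword-based tagger for the four categories of anthropomorphizing
# language. The cue lists are seed examples taken from the text above,
# not an exhaustive lexicon.
CATEGORY_CUES = {
    "cognizer":     {"know", "knows", "believe", "believes", "decide", "decides", "intelligent"},
    "agency":       {"collect", "collects", "monitor", "monitors", "choose", "chooses"},
    "biological":   {"neural", "neuron", "neurons", "synapses", "brain"},
    "communicator": {"ask", "asks", "tell", "tells"},
}

def tag_pitch(text):
    """Return the set of anthropomorphization categories cued in `text`."""
    tokens = {token.strip(".,!?\"'").lower() for token in text.split()}
    return {category for category, cues in CATEGORY_CUES.items() if tokens & cues}

# 'decides' cues cognition while 'brain' and 'neurons' cue biological
# metaphors, so this snippet is tagged with two categories at once.
print(tag_pitch("Judy's brain has digital neurons and decides on relevant law."))
# -> {'cognizer', 'biological'} (set order may vary)
```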
As one means of defining whether language is anthropomorphizing or not, we accessed the FrameNet database [3]. This resource describes words in terms of the frames they evoke and the frame elements that participate in those frames.
For example, the word imagine evokes the Awareness frame, with frame elements Cognizer, Content, Topic, and Element. We used the notion of the Cognizer frame element to look up words in the FrameNet resource which portray one of their arguments as a Cognizer. If the computational system is filling this role, then it is being anthropomorphized by having cognition attributed to it. Similarly, to assess words for the Communication category, we looked up words associated with a Communicator frame element.
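This kind of lookup can be reproduced with NLTK's FrameNet reader; the sketch below is our illustration (not the authors' tooling) and assumes the framenet_v17 corpus has been downloaded via nltk.download.

```python
# Query FrameNet for frames evoked by a lemma whose frame elements include
# a given role. Requires: import nltk; nltk.download('framenet_v17')
from nltk.corpus import framenet as fn

def frames_with_fe(lemma, fe_name):
    """Names of frames with a lexical unit for `lemma` whose frame elements include `fe_name`."""
    pattern = r"(?i)^%s\b" % lemma  # lexical unit names look like 'imagine.v'
    return sorted({f.name for f in fn.frames_by_lemma(pattern) if fe_name in f.FE})

# 'imagine' evokes the Awareness frame, which has a Cognizer frame element;
# the same call with fe_name='Communicator' covers the Communication category.
print(frames_with_fe("imagine", "Cognizer"))
```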

Participants and recruitment
Participants were recruited via the data collection platform Prolific, and compensated between £9 and £15/h (depending on their average time to completion) for their participation. This platform allowed us to create pre-screening criteria such as country of residence, self-assessed socio-economic status, and ethnicity, to reach as diverse a group as possible (see Section E in the appendix for demographics). All participants signed a consent form stating that their (anonymous) answers could be used for research purposes.

Experiment design
We imagined eight pairs of fictional products based on some form of (relatively vague) probabilistic automation technology, giving 16 products in total. For each product, we wrote a short "pitch" (less than 80 words), briefly describing the features of the product (the descriptions can be found in the appendix, Tables 4-7). The goal of these pitches was to give a sense of the functionality of the product without being more technical than one would expect in a news article or popular literature description of a product. The products were paired in genres, so they would be somewhat comparable (for instance, "recommender systems" or "online health diagnostics"), to enable apples-to-apples comparisons. The participant would always be asked to choose between product A and product B in one of the genres, and never between, e.g., an autonomous vehicle and a tutoring app. An overview of the products is shown in Table 1. For each product, we wrote an anthropomorphized short pitch and a de-anthropomorphized short pitch. The participants were randomly shown one of each (either product A anthropomorphized and product B de-anthropomorphized, or vice versa) and asked to choose between the two with one of the following questions:
• Thinking of yourself as a user, which of these systems are you more likely to trust? We ask you to think about how likely you would be to trust using this system for your own purposes, assuming you would like to use the service it would provide (personal trust).
• Which of these systems do you think would give better output for its users? Where "better output" means, for instance, more correct or more helpful output (general trust/reliance).
"Trust" is inherently difficult to evaluate independent of context, but giving participants two options to choose between ('joint evaluation') has been shown to make it easier for people to evaluate "difficult-to-evaluate attributes" [19,20].The questions were designed to reflect two essential questions for measuring trust identified by Hoffman et al. [18] with two modifications: (1) We could not ask the user to evaluate the system's output (question 2 in [18] addresses reliance of output), given that the system does not exist in reality.We therefore created a distinction between personal trust and general trust.(2) To make it more likely that participants would understand trust in a somewhat similar way, an introductory text as well as a short definition of trust was provided with each product pair (see appendix, Section A).
For each presentation of a product pair to a participant, we randomized which product of the pair would be presented in its anthropomorphized guise and which in its de-anthropomorphized guise, but there was always one of each, and all participants were presented with all eight product pairs. Under each choice of product pair, we included an optional open-answer text field where the participant could elaborate on their answer if they wanted to. A screenshot of the survey as it was presented to a participant is included in Section A, Figure 1 in the appendix.
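As a rough sketch of this assignment logic (our reconstruction for illustration; the actual survey was implemented in SurveyXact), each participant's trial list can be generated as follows, with product names taken from four of the pairs described in this paper:

```python
# Randomize, per participant, which product in each pair is shown in its
# anthropomorphized guise; the other is shown de-anthropomorphized.
import random

# Four of the eight product pairs used in the study (illustrative subset).
PRODUCT_PAIRS = [
    ("re-Commender", "IntelliTrade"),
    ("MonAI Maker", "Cameron"),
    ("AquaSentinel", "AI Scan Guards"),
    ("Judy", "JurisDecide"),
]

def assign_guises(pairs, rng=random):
    """For each pair, randomly choose which product gets the anthropomorphized pitch."""
    trials = []
    for product_a, product_b in pairs:
        if rng.random() < 0.5:
            anthro, de_anthro = product_a, product_b
        else:
            anthro, de_anthro = product_b, product_a
        trials.append({"anthropomorphized": anthro, "de-anthropomorphized": de_anthro})
    return trials

for trial in assign_guises(PRODUCT_PAIRS):
    print(trial)
```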

Survey design
The survey was created in the software SurveyXact. For the initial development of the pitches (as used in the Pilot and Study 1), we arbitrarily assigned the product pairs to one of the anthropomorphic language categories defined in Section 3.1. In Study 2, we arbitrarily "swapped" anthropomorphization categories between the product pairs, to avoid overinterpretation of results based on one study alone - see Table 1.

Pilot study.
We ran a pilot study with 37 participants recruited through personal networks. As a result of the pilot study, only minor edits were made to the product descriptions to clarify misunderstandings.

Study 1.
For Study 1, 333 participants signed the consent form; 313 participants completed the survey fully, while 20 participants partially completed the survey. We have included all partially completed survey responses in the analyses, as they provide valid answers to the questions. Excluding these participants has no statistically significant impact on the results. Participants were asked about both personal trust and general trust, meaning that for each product pair, they were asked to evaluate which product they would be more likely to trust for themselves as a user, and subsequently (but visible on the same page), which product they believed would be more likely to produce better output for most of its users.

Study 2.
In Study 2, participants were only asked about either personal trust or general trust. The purpose of this was to avoid a potential confounding factor of seeing the combination of two questions and deliberately being asked to reflect on both oneself as a user and users more generally. Group A, who were asked only about personal trust, consisted of 307 participants, of which 304 fully completed the survey. Group B, who were asked only about general trust/reliance, consisted of 314 participants, of which 300 fully completed the survey.

Data analysis
For the research questions about whether the proportion of people that chose a product in an anthropomorphic description (RQ1, RQ1a, and RQ2) is higher than a hypothetical 50/50 split, we used the Chi-squared goodness-of-fit test with the following hypotheses:
• H0: People are equally likely to choose a product when it is described in anthropomorphized language as when it is described in de-anthropomorphized language.
• H1: People are not equally likely to choose a product when it is described in anthropomorphized language as when it is described in de-anthropomorphized language.
In practice, this means we expect the proportion that chooses re-Commender to be the same no matter whether they see the anthropomorphized or the de-anthropomorphized re-Commender (but not assuming that the preference for re-Commender would necessarily be 50%). Because all participants have been asked to choose one of the products, we calculate this with the Chi-squared goodness-of-fit test.
For the research questions that investigate whether there is an association between different groups of people and preference for anthropomorphized/de-anthropomorphized descriptions (RQ3), we used the Chi-squared test of independence, with variables such as gender, socio-economic status, or education level on one axis and anthropomorphized/de-anthropomorphized as the variable on the other. For all statistical tests we adopt a confidence level of 95%. For the open text answers, we performed a thematic analysis [10]. This process is further described in the appendix, Section C.
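Both tests are standard; as a minimal sketch (our illustration using SciPy, not the authors' analysis code), the goodness-of-fit test can be reproduced from the choice counts reported in Section 4.1.1, and the independence test takes a contingency table of subgroup-by-choice counts:

```python
from scipy.stats import chisquare, chi2_contingency

# Goodness of fit against an equal split (RQ1): in Study 1, personal trust,
# 1292 choices favored anthropomorphized and 1252 de-anthropomorphized
# descriptions (Section 4.1.1).
stat, p = chisquare([1292, 1252])   # expected counts default to a 50/50 split
print(round(stat, 2), round(p, 2))  # 0.63 0.43, consistent with H0

# Test of independence (RQ3): rows are subgroups (e.g., two age bands),
# columns are counts of anthropomorphized vs. de-anthropomorphized choices.
# These counts are made up purely for illustration.
observed = [[120, 100],
            [95, 130]]
stat, p, df, expected = chi2_contingency(observed)
print(round(stat, 2), df, round(p, 3))
```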

RESULTS
4.1 RQ1: Are people more likely to trust products that are described in anthropomorphizing language than products which are not described in anthropomorphizing language?
The results of Study 1 and Study 2 per product pair are shown in Table 2.

4.1.1 Study 1, personal trust. 1292 choices were made of the anthropomorphized product descriptions, and 1252 choices were made of the de-anthropomorphized product descriptions. The Chi-squared goodness-of-fit test showed that the distribution of preferences for anthropomorphized descriptions was consistent with the H0 distribution (χ² = 0.63; df = 1; p = .43), meaning there was no statistically significant preference for either anthropomorphized or de-anthropomorphized descriptions overall. A Chi-squared test of independence shows a statistically significant association between the products as a variable and the anthropomorphized/de-anthropomorphized descriptions (χ² = 29.74; N = 2544; p = .01).
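For readers who want to check the arithmetic, the goodness-of-fit statistic follows directly from the reported counts, with an expected count of $E = (1292+1252)/2 = 1272$ per cell under the hypothesized 50/50 split:

$$\chi^2 = \sum_i \frac{(O_i - E)^2}{E} = \frac{(1292-1272)^2}{1272} + \frac{(1252-1272)^2}{1272} \approx 0.63,$$

which, with one degree of freedom, gives $p \approx .43$.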
Between individual product pairs, we see that the preference changes per product, sometimes leaning towards a preference for the anthropomorphized description, and sometimes leaning against it. The re-Commender/IntelliTrade (recommender systems) pair shows a significant preference for the anthropomorphized descriptions for both products (χ² = 6.84; df = 1; p = .009). Similarly, in the MonAI Maker/Cameron (personal assistant) pair, there is a significant preference for the anthropomorphized descriptions for both products (χ² = 4.29; df = 1; p = .04). In the AquaSentinel/AI Scan Guards (drones) pair, there is a significant preference for the de-anthropomorphized descriptions for both products (χ² = 6.17; df = 1; p = .01). Interestingly, for the Judy/JurisDecide (legal recommendations) pair, there was a significant preference for the de-anthropomorphized description of the Judy system, and a preference for the anthropomorphized version of the JurisDecide system (χ² = 5.12; N = 315; p = .02).

4.1.2 Study 1, general trust. 1295 choices were made of the anthropomorphized product descriptions, and 1249 choices were made of the de-anthropomorphized product descriptions. The distribution of anthropomorphized descriptions was consistent with the H0 distribution (χ² = 0.70; df = 1; p = .40), meaning there was no statistically significant preference for either anthropomorphized or de-anthropomorphized descriptions overall. A Chi-squared test of independence shows no statistically significant association between the products as a variable and the anthropomorphized/de-anthropomorphized descriptions (χ² = 23.23; N = 2544; p = .08).
Between individual product pairs, only the re-Commender/IntelliTrade pair shows a statistically significant preference for the anthropomorphized descriptions of both products (χ² = 6.27; df = 1; p = .01), and the Judy/JurisDecide pair reveals a preference for the anthropomorphized description of JurisDecide, but a preference for the de-anthropomorphized description of Judy, with the product as a dependent variable (χ² = 5.00; N = 315; p = .02).

4.1.3 Study 2, personal trust. 1252 choices were made of the anthropomorphized product descriptions, and 1189 choices were made of the de-anthropomorphized product descriptions. The distribution of preferences for anthropomorphized descriptions was consistent with the H0 distribution (χ² = 1.57; df = 1; p = .56), meaning there was no statistically significant preference for either anthropomorphized or de-anthropomorphized descriptions overall. A Chi-squared test of independence shows no statistically significant association between the products as a variable and the anthropomorphized/de-anthropomorphized descriptions (χ² = 13.52; N = 2441; p = .56).
Between individual product pairs, the only statistically significant result is a preference for the anthropomorphized descriptions of both products in the MonAI/Cameron (personal assistant) pair (χ² = 4.02; df = 1; p = .04).
4.1.4 Study 2, general trust. 1234 choices were made of the anthropomorphized product descriptions, and 1190 choices were made of the de-anthropomorphized product descriptions. The distribution was consistent with the H0 distribution (χ² = 0.80; df = 1; p = .37), meaning there was no statistically significant preference for either anthropomorphized or de-anthropomorphized descriptions overall. A Chi-squared test of independence shows no statistically significant association between the products as a variable and the anthropomorphized/de-anthropomorphized descriptions (χ² = 8.99; N = 2424; p = .88).
Within the individual product pairs, we see no statistically significant preferences for either anthropomorphized or de-anthropomorphized descriptions.

Aggregate results (Study 1 + Study 2), personal trust. Across both studies, 2544 choices were made of the anthropomorphized product descriptions, and 2441 choices were made of the de-anthropomorphized product descriptions. The general distribution does not differ significantly from the null hypothesis, meaning we find no statistically significant preference for either anthropomorphized or de-anthropomorphized product descriptions overall (χ² = 2.13; df = 1; p = .14). The Chi-squared statistic for the accumulated numbers shows a significant association between the product type as a variable and preference for either anthropomorphized or de-anthropomorphized description (χ² = 34.06; N = 4985; p = .003). Between individual product pairs, we see a significant preference for the anthropomorphized descriptions in the re-Commender/IntelliTrade (χ² = 8.47; df = 1; p = .004) and MonAI/Cameron (χ² = 8.31; df = 1; p = .004) pairs, and a preference for the de-anthropomorphized description of Judy, but for the anthropomorphized description of JurisDecide, with the product as a significant dependent variable (χ² = 8.50; N = 620; p = .003).

Aggregate results (Study 1 + Study 2), general trust. 2529 choices were made of the anthropomorphized product descriptions, and 2439 choices were made of the de-anthropomorphized product descriptions. The distribution is consistent with the H0 distribution (χ² = 1.63; df = 1; p = .09), meaning there was no statistically significant preference for either anthropomorphized or de-anthropomorphized descriptions overall. The Chi-squared test shows no significant association between the variable product type and preference for either anthropomorphized or de-anthropomorphized description (χ² = 9.02; N = 4968; p = .20). The only product pair that shows a significant difference from the H0 distribution is the re-Commender/IntelliTrade pair, where there is a preference for the anthropomorphized description of both products (χ² = 4.29; df = 1; p = .04).

4.2 RQ1a: Are people more likely to trust anthropomorphized products for themselves as a user (personal trust) than for the general population (general trust)?

4.3 RQ2: Are people more likely to trust products when the products are described in different kinds of anthropomorphizing language?
Personal trust. The choices of anthropomorphized/de-anthropomorphized descriptions per category are shown in the appendix, Section 3.1, Tables 9, 10, and 11.
For Study 1, a Chi-squared test of independence shows a statistically significant association between the categories as a variable and the preference for anthropomorphized/de-anthropomorphized descriptions (χ² = 14.41; N = 2544; p = .002). The Properties of a cognizer category is the only category with a distribution that differs significantly from the H0 distribution (χ² = 10.99; df = 1; p < .001). For Study 2, a Chi-squared test of independence shows no statistically significant association between the categories and the anthropomorphized/de-anthropomorphized descriptions (χ² = 4.96; N = 2441; p = .17), but it did show a significant preference for the anthropomorphized descriptions in the Agency category (χ² = 5.89; df = 1; p < .01). It is worth noting that the product pairs in the Cognizer category in Study 1 (recommender systems and personal assistants) were the same products as had been assigned the Agency category in Study 2 (as shown in Table 2). Hence, those specific products or product categories may be especially prone to preference in anthropomorphized descriptions (no matter the type of anthropomorphizing language).
If we aggregate the numbers from both studies, there is no statistically significant association between the language categories and the preference for anthropomorphized/de-anthropomorphized descriptions (χ² = 6.16; N = 2441; p = .10), but there is a significant preference for the anthropomorphized descriptions in the Cognizer category alone (χ² = 5.37; df = 1; p = .02). This is a carryover effect: in Study 1, the preference for anthropomorphized descriptions in the Cognizer category is so strong (56.5% and 55% for personal and general trust, respectively) that the preference persists in the aggregate, despite a very slight negative preference in personal trust in Study 2 (49.8%) and only a weak preference in general trust (50.9%).
General trust. In Study 2, there was no statistically significant association between the categories and the preference for anthropomorphized/de-anthropomorphized descriptions (χ² = 0.52; N = 2424; p = .91), and no significant difference from the H0 distribution in any of the categories. Aggregating the numbers, a Chi-squared test of independence shows no association between the categories and the preference for anthropomorphized/de-anthropomorphized descriptions (χ² = 4.21; N = 4968; p = .24), but there is a significant association in the Cognizer category (χ² = 4.50; df = 1; p = .03).

4.4 RQ3: Are different groups of people more likely to trust products when the products are described in anthropomorphizing language?
The values from the statistical tests are shown in Table 3. We refer to the appendix, Tables 12-26, for detailed results per study. We highlight that we focus on Chi-squared statistics for the entire variable, i.e., the association between a variable and the proportion of choices of anthropomorphized/de-anthropomorphized descriptions. There can be significant preferences within each subgroup (e.g., female vs. male), but due to space restrictions we only discuss the variables where the entire Chi-squared statistic is significant.
4.4.1 Self-described gender. The proportion of choices of anthropomorphized/de-anthropomorphized product descriptions did not differ significantly by gender in either study, neither in personal trust nor in general trust.

4.4.2 Age. In Study 1, a Chi-squared test of independence showed no significant relationship between the two variables age and proportion of choices of anthropomorphized/de-anthropomorphized descriptions. In Study 2, the same test showed a significant relationship between the variables, and this repeated for the aggregate results, meaning there was an overall significant association between different age groups and their preference for anthropomorphized or de-anthropomorphized product descriptions in personal trust.
Looking closer at the age groups individually, only the 61-65 year group shows a strong, statistically significant preference for anthropomorphized descriptions (χ² = 14.70; df = 1; p < .001). In some of the age groups, n is too small to draw meaningful conclusions within different categories, but we highlight a significant preference for the anthropomorphized descriptions for the groups 31-35 and 51-55 in the Cognizer category (χ² = 4.40; df = 1; p = .04 and χ² = 3.88; df = 1; p = .05, respectively); in the 36-40 age group, there was a significant preference for the anthropomorphized descriptions in the Communicator category (χ² = 6.92; df = 1; p = .01); and in the 41-45 age group, there was a strong preference for the de-anthropomorphized descriptions in the Biological metaphors category (χ² = 3.97; df = 1; p = .05). No statistically significant association between age and preference for anthropomorphized/de-anthropomorphized product descriptions could be found in general trust in either study.

4.4.3 Socio-economic status. A Chi-squared test showed that the proportion of choices of anthropomorphized/de-anthropomorphized product descriptions did not differ significantly by socio-economic status in either study, neither in personal trust nor in general trust.

DISCUSSION
The qualitative responses from the surveys substantiate and add detail to the quantitative results. Because the open answers were optional, we do not attempt to quantify their importance or weight in any way, nor would doing so be meaningful: some product pairs received around 30 elaborations while others received closer to 100, so some insights might be unfairly under- or over-represented. We use the open answers to shed light on a complex topic and study, and to provide insights that hopefully lead to fair and purposeful future investigation of the subject.
5.1 Observation 1: Overall, people are no more likely to choose anthropomorphized descriptions of products over de-anthropomorphized descriptions of probabilistic automation products.
Across categories, we do not see a clear preference for anthropomorphized descriptions of products over de-anthropomorphized descriptions of products. This is a conclusion that comes with numerous addenda, the most important one being "it depends" - for some product descriptions there was a significant preference for the anthropomorphized description, and for some systems there was a clear preference for the de-anthropomorphized description. The preference proportions changed between the two studies, after anthropomorphization categories were swapped. This points to the conclusion that both product genre and type of anthropomorphization influence how people immediately perceive a product based on its description. A few participants even highlighted linguistic differences in product descriptions as motivating their choice, albeit using different words than anthropomorphization: "Option B provides a more engaging and descriptive presentation" (Study 1, de-ant. AquaSentinel). We find the following main themes or clusters when looking at how participants motivate their rationale:

Lesser of two evils-motivation. A prevalent theme in the open text answers is that the participant has chosen "the lesser of two evils", meaning they express deep skepticism of both products, but were forced, through the survey design, to choose one. In this case, the motivation appears to be identifying which product has lower stakes, or less impact if the system somehow fails: "Lower stakes - only deals with hobbies/past times as opposed to finances" (Study 1, ant. re-Commender), and "I would trust AI more to transport goods than people" (Study 1, de-ant. HaulIt).
People attempt to evaluate shortcomings and strengths of using probabilistic automation for the particular context. A lot of responses express that probabilistic automation is more appropriate for some tasks than for others. For instance, most responses in favor of the MonAI system over the Cameron system highlight that "Computers are better with numbers than texts. I would trust more an app with numbers than one who manage texts." (Study 1, ant. MonAI Maker). However, many of these judgments are exactly that: assumptions about the system's functionality: "It will be more correct because it works with photos for comparison, so the chance of error is smaller" (Study 2, de-ant. DermAI Scan). This is hardly an objective truth, and broad assumptions like this emphasize the importance of conveying accurate expectations of the system's functionality, because people are prone to form beliefs even based on short descriptions.
The logic appears to be, of course, that the perceived benefits should outweigh the potential risks.

Human favoritism. A common theme in the responses was human favoritism: perceiving an output as higher quality if a human expert has been involved in the process of creating it [52]. This was visible as expressions of preference for the products where a human was assumed to be in control of the probabilistic automation product, even when this was not actually described in the product pitch, e.g., "There is both a person driving it and an AI in it" (Study 1, de-ant. Commuter). The potential of biased probabilistic automation training data was mentioned many times as a rationale for distrusting the system (e.g., "I dislike the idea of AI in the justice system when it is prone to making up information. How do we know that Judy would be free from bias?" (Study 2, de-ant. JurisDecide)). Human favoritism is an interesting notion, in that it could potentially introduce issues of over-reliance on human judgments and under-estimation of human bias and error-proneness.
Overall, our results show that people do not unequivocally trust technology just because it is linguistically anthropomorphized. People are critical about use context, risks, impacts, and human involvement, and although we confirm earlier research that demonstrates some influence of anthropomorphization on attitude (e.g., [29,31]), there is not a binary or simple relationship between anthropomorphization and trust.
De-anthropomorphization carries a risk of misunderstandings. A very interesting finding was that a few users simply did not understand the de-anthropomorphized (but more technically accurate) descriptions as examples of probabilistic automation products, e.g., "I'm not sure I would entirely trust Cameron not to miss any important/urgent emails. However when it came to my data I'd trust it more than any AI." (Study 1, de-ant. Cameron). This person appears to express a general aversion to the concept "AI", and has not picked up that "automatic pattern matching" is actually the same as "AI". The de-anthropomorphized description leads to a misunderstanding. Other examples are "I prefer this to AI" (Study 2, ant. MindHealth) and "this one doesn't use neural networks so it's most likely to be more accurate" (Study 1, de-ant. JurisDecide). This is a significant risk that we need to consider when describing probabilistic automation systems: how do we balance the advantages of using language and metaphors that people are familiar with against the risks of those analogies and metaphors leading to incorrect assumptions?

5.2 Observation 1a: Across the two studies, people are no more likely to trust anthropomorphized product descriptions when imagining themselves as a user than to trust them for the general population

In both studies, several trends in preference under personal trust were not present when participants were asked about general trust. This was the case both in Study 1, where participants were asked about both personal and general trust per product, and in Study 2, where each participant was only asked about either personal or general trust. For Study 1, we suspected there could be an ordering effect in the survey; the first question might elicit an immediate response, and the immediate invitation to reflect again on the product in relation to general trust could urge the participant to feel they should choose something different for the second option. This, however, does not explain the differences in Study 2, where the participant groups were different for the personal trust and the general trust questions.
In fact, we see for Study 2 that preferences (see Table 2) lean in different directions for several product pairs, and overall for the different categories (Cognizer, Agency, and Biological metaphors all elicit different preferences between personal and general trust in Study 2). The differences are small, however (e.g., 48.9% preference for anthropomorphized descriptions for personal trust vs. 50.8% preference for anthropomorphized descriptions in general trust for the Cognizer category), and none of them are statistically significant in the overall comparison, except for the Agency category, which elicited 55% and 49.9% preference for the anthropomorphized descriptions in personal and general trust, respectively.
We could not identify any obvious differences in the qualitative responses between participants' rationale for choosing products for themselves and evaluating their output in general.

5.3 Observation 2: The type of product or system in combination with different kinds of anthropomorphizing language appears to exert a greater influence on trust than anthropomorphizing language alone.
Since we saw a statistically significant association between product type as a variable and preference for anthropomorphized/de-anthropomorphized descriptions in personal trust in Study 1, we decided to change the categories of anthropomorphizing language between products and conduct the second study to explore this potentially confounding variable. The fact that the products in the recommender systems and personal assistants genres resulted in a preference for anthropomorphized descriptions in the Cognizer category in Study 1, and in the Agency category in Study 2 (at least in personal trust), indicates that certain products or systems might be more sensitive to anthropomorphized language than others. Interestingly, this goes in both 'directions': the 'Judy' and the 'AI Scan Guards' systems were generally more trusted in their de-anthropomorphized descriptions. We note that these systems were in the Biological metaphors category in Study 1 and Study 2, respectively - we hypothesize that this category of language may yield particularly contrived analogies which approach the uncanny valley [35] and, consequently, mistrust. This, however, does not explain the general preference for anthropomorphized descriptions of 'JurisDecide' and 'AquaSentinel' - the two products that 'Judy' and 'AI Scan Guards' were compared to, and which were in the same language categories (Biological metaphors).
Our findings advocate for a nuanced conclusion: the individual product or system is an important variable for people's preferences and attribution of trustworthiness. Some products might be more susceptible to anthropomorphization of one type, and certain types of anthropomorphization might highlight or obfuscate specific qualities in specific system genres. Our studies thus support the findings of [31].

5.4 Observation 3: Age is the only variable that seems to have a dependent association with preferences for anthropomorphized/de-anthropomorphized product descriptions.
When dividing participants into subgroups by age, some patterns emerge per category as well as overall. Interestingly, we see a strong preference for anthropomorphized descriptions in the 61-65 group, and a strong preference for de-anthropomorphized descriptions in the 66+ group. The subgroups are small, however (26 participants total for the 61-65 group, and 37 for the 66+ group), so we refrain from making general conclusions on the basis of this study. The groups 31-35 and 36-40 compose a larger proportion of participants, and these groups both show a strong preference for anthropomorphized descriptions, particularly in the Cognizer category. When looking at the open answers, these age groups do not seem to provide different rationales from other age groups; they (also) highlight factors such as personal usefulness ("I can grocery shop weekly [...] but I am always surprised by the fact that ALL my basics become [worn] out at the same time" (Study 1, ant. WardrobEase)), privacy ("I would never use my voice online" (Study 2, de-ant. DermAI Scan)), risk of failure ("I trust AI Scan Guards to give better output, due to its systems having less of a chance to be disrupted by enemy counter electronics warfare" (Study 2, ant. AI Scan Guards)), and impact in case of failure ("[AI] dealing with the jury can skew what their outcomes would be." (Study 2, de-ant. JurisDecide)) as the main motivations behind their choices. One hypothesis to explain these differences across age groups is that there could be age-related factors influencing computing literacy for different groups. A recent survey has indicated a generation gap in probabilistic automation acceptance [50], and potentially, using more familiar language to describe such systems (playing on anthropomorphizing metaphors and analogies) may make the systems more appealing to these groups.

LIMITATIONS
We acknowledge that this study only explores a small part of the overarching question "What are the effects of anthropomorphization of probabilistic automation systems?" This question could be explored in many ways that are likely to provide other results. Some of the most important limitations of the approach used in this study are listed below:

Contrived study setup rather than organic choice. Any controlled experiment can impose confounding factors.
"Trust" based on momentary, immediate choices, rather than long-term, more organic exposure to descriptions of a system.Conversely, one could argue that based on the qualitative answers, participants have relied heavily on their existing knowledge about probabilistic automation systems, so we are not exposing them to completely novel technology descriptions.Participants were also asked to choose based on only a short description and no examples of the system's output.We do not believe this was a confounding factor for the results, but it could mean that the results will not generalize to contexts where more information is given.
Contrived language. To emphasize the anthropomorphic language as a variable, we have loaded a lot of 'anthropomorphisms' into very little text. A few participants highlighted linguistic or semantic features of the descriptions as determining factors for their choice (see Section 5.1), so it is possible that this impacted the results to some degree. We have tried to mitigate this factor by creating descriptions that are directly comparable to actual products found "in the wild".

Not all categories were tested on all products. We only swapped the categories between two different products. Ideally, we would have tried all categories of anthropomorphization on all product types; however, this would have required an untenable number of different studies (and no page restrictions). The results provide enough insight for us to conclude that the matter is not straightforward, and that further investigation is needed.
Order effects bias. In the survey, product pairs were always presented in the same order, which could induce order effects bias. This should not have any effect on the primary variable (anthropomorphized versus de-anthropomorphized), as these choices were always randomized.

CONCLUSIONS
In this paper, we explored an overall question of the influence of anthropomorphized short descriptions of probabilistic automation systems on trust. We made three observations based on the results: 1. Across both studies, people were no more likely to prefer anthropomorphized products over de-anthropomorphized products. 2. The product type in combination with anthropomorphizing language appears to exert a higher influence on trust than anthropomorphizing language alone. 3. Age was the only variable (of those measured) which had a statistically significant association with preference for anthropomorphized vs. de-anthropomorphized products.
Our results show that anthropomorphized descriptions of systems do not automatically lead to favoritism or increased trust. The effect appears to depend on product category and type of anthropomorphization, as well as on the reader of the text. We highlight that this was an exploratory study which hopefully provides inspiration for further investigation by other researchers. We hope that the results are useful to those who write about probabilistic automation systems, whether they be scholars, policy makers, or journalists. Our future work will include further exploration of empirically founded taxonomies of anthropomorphization, as well as more detailed studies of the risks of "trust", investigating the different impacts of anthropomorphized descriptions of probabilistic automation systems.

IMPACT STATEMENT
In designing our online survey we adhered to the ethical guidelines in HCI methodology [5] to ensure participant anonymity and data privacy. Participants were recruited via the Prolific platform, and compensated for their participation.
To ensure that we reached a representative group, we created pre-screening criteria such as country of residence, self-assessed socio-economic status, ethnicity, and geographic location. We did not collect any identifiable information, and all the survey responses were stored temporarily on a secure server. To avoid confusion about the fictitious products, we added a statement at the end of the survey asserting that all products are 100% imagined, although some of them have been loosely based on existing products or services. We also stated that the goal of the research was to investigate whether the description of a product influences the way its trustworthiness and functionality are perceived, and provided contact info for the lead author.
The third author is a natural language processing scientist with a background in low-resource NLP and the digital documentation of resources for low-resource language communities. Their work also encompasses the field of human-computer interaction and the intersection between NLP and psycholinguistics. Their previous work in human-computer interaction and AI provides insights into how users perceive and interact with technology, contributing to a deeper understanding of trust dynamics in AI systems.
The fourth author is a computational linguist, with expertise in syntax, semantics, and sociolinguistics. They have long worked at the intersection of linguistics and natural language processing, specifically on how linguistic knowledge can inform the development and study of language technology. They have been doing public scholarship around the way that probabilistic automation technologies are sold and perceived, advocating for more accurate and less aspirational descriptions of this technology.
We acknowledge that while our study addresses a timely question of how people's trust in automation-driven systems can be influenced by different forms of anthropomorphism, it could also lend itself to dual use. For example, bad actors could use our findings to elicit unearned trust from people, in particular by describing technical systems' functionality in cognitive terms and by emphasizing their "intelligence". Bad actors could also use the observations from our study to target specific age groups that seem to be more susceptible to trusting systems with anthropomorphized descriptions.

B PRODUCT DESCRIPTIONS
In each product description, instances of anthropomorphization (4-5 per product) are highlighted to allow for easier comparison to the de-anthropomorphized version. Each of the anthropomorphic short pitches was written to fit its respective category, and each of the short pitches included 4-5 "instances" of the anthropomorphic category. For each de-anthropomorphized description of the product, the instances of anthropomorphic language were re-written so they did not reflect the specific category of anthropomorphization, but the rest of the short pitch could include examples of the other categories of anthropomorphization - thus isolating each anthropomorphization category as the independent variable. We were not strict about avoiding other categories of anthropomorphic language (especially the category of agency) in the pitches. However, we also did not de-anthropomorphize language outside the target anthropomorphic language type in the corresponding de-anthropomorphized product description. For example, in Study 1, MonAI Maker is described as identifying ways to save money, a cognizer description, and de-anthropomorphized as providing suggestions instead. This is still agentive language.

A driverless truck, HaulIT, is programmed to transport long-haul freight 24/7 without rest stops, and it never gets tired or distracted. The truck is designed for both city and highway, meaning it is always sent along the most optimal route for speed and efficiency, based on statistical predictions about current and projected traffic conditions as well as optimal battery charging points.
A sleeper bus, Commuter, drives people from their home to a long-distance destination overnight. The bus avoids other vehicles and obstacles on the road, and adapts to the weather conditions to navigate safely. It monitors traffic live and picks the best and safest routes.
A sleeper bus, Commuter, is used to transport people from their home to a long-distance destination overnight. The bus has algorithms for avoiding other vehicles and obstacles on the road, and the algorithms are adjusted to the weather conditions to navigate safely. Its systems are fed live traffic data for calculations of the best and safest routes.

Agency
An AI- and ML-powered drone, AquaSentinel AI-MAR, monitors enemy seas. Armed with cutting-edge technology, it autonomously patrols waterways, utilizing advanced algorithms to swiftly detect and analyze potential threats in real time.
An AI- and ML-powered drone, AquaSentinel AI-MAR, is programmed to monitor enemy seas. Armed with cutting-edge technology, it is positioned over waterways, equipped with advanced algorithms designed to detect and provide analyses of potential threats in real time.
The newest unmanned aircraft systems (UAS), AI Scan Guards, monitor a physical territory from the air. They use image recognition to analyze live video streams, seek out enemy targets, and alert the defense forces.
The newest unmanned aircraft systems (UAS), AI Scan Guards, are programmed to monitor a physical territory from the air. They are equipped with image recognition algorithms that are used to process live video streams. System outputs may be used to identify enemy targets and provide alerts to defense forces.

A software system for court juries, Judy, uses neural networks to inform jury members in court cases. Thousands of transcripts and outcomes from previous similar trials are fed to Judy's brain, whose digital neurons digest all data and determinants to provide information about relevant law and precedence in current cases.
A software system for court juries, Judy, uses weighted networks to inform jury members in court cases. Thousands of transcripts and outcomes from previous similar trials are input into Judy's CPU, whose algorithms process all data and determinants to provide information about relevant law and precedence in current cases.
A software application, JurisDecide, uses neural networks to enhance lawyers' decision-making in trials. Its digital brain continually evolves and rapidly processes extensive legal data, including precedent and case law, which JurisDecide digests to spit out information for legal professionals.
A software application, JurisDecide, uses weighted networks to enhance lawyers' decision-making in trials. Its algorithms continually self-update and rapidly process extensive legal data, including precedent and case law, which JurisDecide processes to output information for legal professionals.

Biological metaphors
A neural network system, MindHealth, is an online digital ear which senses indicators in spoken language that a person may be developing one or more early signs of dementia. Its digital synapses have evolved during thousands of conversations with healthy humans and dementia patients.
A weighted network system, Mind-Health, is an online digital recorder which classifies indicators in spoken language that a person may be developing one or more early signs of dementia. Its complex algorithms have been fine-tuned based on thousands of conversations with healthy humans and dementia patients.
A diagnostic tool, DermAI Scan, uses neural networks to diagnose dermatological conditions from your home computer. You feed it a picture and receive a suggestion for a diagnosis. Evolving neural networks mean that the system's neurons can instantly compare your picture to images of millions of previous diagnoses.
A diagnostic tool, DermAI Scan, uses weighted networks to diagnose dermatological conditions from your home computer. You upload a picture and receive a suggestion for a diagnosis. Fine-tuned weighted networks mean that the system's weights can instantly compare your picture to images of millions of previous diagnoses.

Properties of a communicator
A smartphone app, Lingua, is an interactive language learning tutor. You can talk or write to the app and it will speak back to you in real time. Lingua tells you about the accuracy and complexity of your speech, and it suggests areas of improvement.
A smartphone app, Lingua, is an interactive language learning tutor. You can input speech or text to the app and it will output speech to you in real time. Lingua indicates the accuracy and complexity of your speech, and it produces suggestions for areas of improvement.
MentorMe is an online chatbot, which you can talk to about specific academic topics (each based on different data sets). It speaks like a mentor, and proposes new ways to approach a problem, rather than just answering questions directly. It also asks you questions to enhance your learning about a given topic.
MentorMe is an online chatbot, into which you can input text about specific academic topics (each based on different data sets). It produces text in the style of a mentor, and outputs candidate matches for new ways to approach a problem, rather than just indicating answers for questions directly. It also outputs questions to enhance your learning about a given topic.

Properties of a communicator
A smartphone app, WardrobEase, is a service for effortlessly restocking essential clothing items such as jeans, socks, and underwear. It discusses your fabric and style preferences with you, you tell it your sizes, and it responds with pictures of choices. You can tell it when your clothes are starting to wear out, and ask it to place recurring orders for new items from your favorite stores ahead of time.
A smartphone app, WardrobEase, is a service for effortlessly restocking essential clothing items such as jeans, socks, and underwear. It allows you to record and specify your fabric and style preferences, you input your sizes, and it outputs pictures of choices. You can mark when your clothes are starting to wear out, and set up automatic, recurring orders of new items from your favorite stores ahead of time.
A smartphone app, Shoppr, helps you create meal plans by discussing your dietary wishes with you. You can tell the system about constraints of health, time, nutrition, and budget, and it responds with suggestions for meals, as well as writing a meal plan with recipes and ordering groceries online for you.
A smartphone app, Shoppr, lets you create meal plans based on your dietary wishes. You can input constraints of health, time, nutrition, and budget into the system, and it produces suggestions for meals, generates a meal plan with recipes, and offers an option to put in an online order for groceries.

A machine conditioning-based app, Lingua, is an automated language learning tutor. It processes both speech and text and produces answers in real time. Lingua encodes the accuracy and complexity of your speech, and it classifies areas of potential improvement in your spoken language.
MentorMe is an intelligent online chatbot, with extensive knowledge about specific academic topics (each based on different data sets). It understands topic-specific questions, and imagines new ways to approach a problem, rather than just answering questions directly. It also comes up with questions to enhance your learning about a given topic.
MentorMe is an automated online chatbot, with extensive data about specific academic topics (each based on different data sets). It processes topic-specific questions, and generates text suggesting new ways to approach a problem, rather than just answering questions directly. It also produces questions to enhance your learning about a given topic.

C THEMATIC ANALYSIS
A thematic analysis [10] of the open-ended text responses was conducted in the software Condens. All authors went over at least 100 responses and added tags (codes) and notes before a shared discussion about what appeared salient for respondents. All survey responses were read several times while initial codes were generated. The goal of the thematic analysis was to identify patterns that reflect the data for this context [38], meaning the goal was to create themes and codes covering all the different responses. The result of the coding was a list of more than 100 different codes at very different levels of abstraction (similar to the responses, which were also at different levels of detail and abstraction).
After this, the first author analyzed the remaining responses with codes based on the shared discussions.
The analysis was an open-ended, inductive treatment, focused on "identifying and interpreting key, but not necessarily all, features of the data, guided by the research question" [10]. In practice, each response was read with the overall question in mind: which reason does the respondent provide for being willing or not willing to trust the system? The codes can therefore be seen as 'answers' to the research question, such as 'accuracy', 'reliability', or 'risk of bias'. The 30 most prevalent tags are shown in Table 8. For an in-depth analysis of the qualitative responses, see [22].

D RESULTS
For all tables, statistically significant p-values are indicated in bold font and with a *-symbol.
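As a concrete illustration of the per-product test, the following is a minimal sketch in Python of a chi-squared goodness-of-fit test of the kind reported in Table 2 (this is not the authors' analysis code, and all counts below are hypothetical placeholders): for each product pair, the observed split of choices between the anthropomorphized and de-anthropomorphized description is compared against an equal distribution.

    from scipy.stats import chisquare

    # Hypothetical counts: (chose anthropomorphized, chose de-anthropomorphized)
    choices = {
        "HaulIT": (210, 267),
        "Commuter": (251, 226),
    }

    for product, observed in choices.items():
        # With no expected frequencies given, chisquare tests the observed
        # counts against a uniform (here 50/50) distribution.
        stat, p = chisquare(observed)
        pct_anthro = 100 * observed[0] / sum(observed)
        flag = "*" if p < 0.05 else ""
        print(f"{product}: chi2={stat:.2f}, p={p:.3f}{flag}, "
              f"% pref. ant.={pct_anthro:.1f}")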

4.4.4 Level of education. No significant association was found between level of education and preference for anthropomorphized or de-anthropomorphized descriptions in either study, for either personal trust or general trust.

4.4.5 Level of computer knowledge. The proportion of choices of anthropomorphized/de-anthropomorphized product descriptions did not differ significantly by level of computer knowledge in either study, for either personal trust or general trust.
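The tests of association between a demographic variable and description preference (Table 3 and Tables 12-26) can be illustrated with a chi-squared test of independence on a contingency table; the sketch below assumes this standard formulation and uses hypothetical counts rather than the study data.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical contingency table. Rows: levels of computer knowledge;
    # columns: counts of participants preferring the anthropomorphized vs.
    # the de-anthropomorphized description.
    observed = np.array([
        [40, 45],    # low
        [120, 131],  # medium
        [78, 63],    # high
    ])

    stat, p, dof, expected = chi2_contingency(observed)
    # A p-value >= 0.05 is consistent with the null results reported above.
    print(f"chi2={stat:.2f}, dof={dof}, p={p:.3f}")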

8.1 Positionality statement for the study authors. The expertise and lived experiences of our research team were an important part of the judgments and discussions in our analysis. We present our research team positionality according to the guidelines proposed by Liang et al. The first author has a background in digital design and positions themselves as an enthusiast of (mixed methods) research methodology. Their research career has focused on understanding how people interact with technology and how technology impacts human cognition. Their background shapes the work by increasing their attention to qualitative data as a primary resource for understanding quantitative results. The second author positions themselves primarily as an activist for better and more inclusive AI education. They worked for more than eight years on hands-on STEAM education in different communities worldwide as part of the organization they created called [Anonymized]. In the past four years, they have led multiple co-design sessions with families focused on AI literacy and created [Anonymized], one of the first platforms for AI education, which is free and open-source. This experience influenced their focus on critical understanding and use of probabilistic automation systems and informed their understanding of how the perception of technology can shape people's trust and use of it.

Table 1 .
Overview of the different probabilistic automation-based products and their genres.

Table 2 .
Results per product in Study 1 and Study 2. We indicate the χ²-values per product pair (as compared to an equal distribution between the anthropomorphized/de-anthropomorphized description of each product). The '% pref. ant.' column indicates if the preference leans towards anthropomorphization (>50%) or towards de-anthropomorphization (<50%). Statistically significant values are indicated with bold font and a *-symbol. This table also indicates statistically significant χ²-values in the categories Cognizer and Agency; these results are elaborated in Tables 9-11 in the appendix, section D.1.

Table 3 .
Results of Chi-squared tests for each variable. Statistically significant results are marked in bold font and with a *-symbol. The detailed results are provided in the appendix, Tables 12-26.

Table 8 .
Tags from qualitative responses.

Table 27 .
Demographics: Age, gender, race or ethnicity, and socio-economic status

Table 28 .
Demographics: Education level and computer knowledge