Profiling the Dynamics of Trust & Distrust in Social Media: A Survey Study

In the era of digital communication, misinformation on social media threatens the foundational trust in these platforms. While myriad measures have been implemented to counteract misinformation, the complex relationship between these interventions and the multifaceted dynamics of trust and distrust on social media remains underexplored. To bridge this gap, we surveyed 1,769 participants in the U.S. to gauge their trust and distrust in social media and examine their experiences with anti-misinformation features. Our research demonstrates that trust and distrust in social media are not simply two ends of a spectrum but can also co-exist, enriching the theoretical understanding of these constructs. Furthermore, participants exhibited varying patterns of trust and distrust across demographic characteristics and platforms. Our results also show that current misinformation interventions helped heighten awareness of misinformation and bolstered trust in social media, but did not alleviate underlying distrust. We discuss theoretical and practical implications for future research.


INTRODUCTION
In this digital age, where information spreads at an unprecedented speed and volume, misinformation poses a pervasive and challenging threat. As social media platforms have grown from mere social connectors to global influencers, they have also become major vehicles for spreading misinformation. This spread of misinformation can have profound implications, ranging from impacting public behavior during health crises to shaping political landscapes [5,20]. Governments, industry, and other stakeholders recognize the urgency of addressing issues of misinformation [29,68] and have begun to implement solutions aimed at tackling these issues, such as fact-checking or flagging misleading content using algorithmic-centered approaches [3,28,64,75].
However, misinformation isn't just a technical problem to be solved; it is a human-centric issue rooted in perception, cognition, and emotion [78]. As misinformation pervades social media, one important negative consequence is the erosion of trust in social media platforms and information sources [3]. The erosion of trust not only challenges the credibility of platforms, but also risks turning them into echo chambers, limiting their role as vibrant, diverse, and informative spaces. Therefore, it is crucial to examine to what extent people trust social media. Without a situated understanding of trust, interventions might fall flat or even exacerbate the issue [77]. It is also important to acknowledge that not all misinformation interventions inherently warrant increased trust. To mitigate the impact of misinformation, we need to examine if and how trust can be reconstructed in the wake of its breach.
Trust, the cornerstone of any relationship or interaction, is especially pertinent in digital ecosystems since it involves security, authenticity, and reliability in a world where connections are formed virtually [26]. In the context of social media information dissemination and communication, trust influences user behavior, from engagement with content to decisions based on information received from these platforms [30]. More crucially, the concept of "distrust," a term closely related to trust, further complicates our understanding of trust dynamics. Many existing works have either considered trust and distrust to be two extremes of the same dimension or did not explicitly examine distrust [77]. Meanwhile, some scholars debate whether trust and distrust are indeed two poles of a single continuum or whether they stand as distinct, independent concepts [9,38]. These contrasting perspectives highlight the need for conceptual clarity, particularly within the realm of social media, as these platforms are powerful vehicles for information dissemination. Thus, a nuanced understanding of not only trust but also distrust in the context of social media is essential.
Furthermore, different populations often bring with them unique historical, cultural, and socio-economic experiences that shape their interactions with and perceptions of social media [33]. These differences may influence how various groups view and respond to misinformation interventions, and consequently, how they trust or distrust digital entities. In parallel, each social media platform has its own culture [72]. For example, with its concise messaging format and rapid news sharing, Twitter may be perceived and trusted differently than Facebook's community-centric feeds. And yet, little work has explored both trust and distrust concerning diverse demographic groups across various social media platforms. Understanding these demographic and platform-specific nuances is crucial, as it allows for deeper insights into misinformation interventions and how they align with the perceptions of diverse audiences. Our work addresses these research gaps.
This paper investigates the complexities of trust and distrust in this heightened age of misinformation on social media. Specifically, the following research questions (RQs) guided our research: RQ1. Are trust and distrust in social media inherently linked, such that an increase in one means a decrease in the other? Or can they coexist independently? RQ2a. How do trust and distrust in social media differ across platforms? RQ2b. How do trust and distrust in social media vary across different demographic groups? RQ3. How do people's experiences with misinformation interventions associate with their trust and distrust in social media?
To answer these questions, we conducted a survey study with a nationally representative sample in the U.S. (1,769 participants) in March 2023. Our results offer empirical evidence supporting the idea that trust and distrust can be viewed as distinct concepts rather than merely opposite sides of a singular notion. This dual trust-distrust perspective enriches our comprehension of the complex dynamics of online trust. Our analysis further suggests that individuals can be grouped into different categories based on their trust and distrust levels. Our results also reveal that the levels of trust and distrust vary across platforms and show variations in how demographic factors relate to these levels on different social media platforms. Furthermore, our findings suggest that implementing misinformation interventions in social media has the potential to amplify individuals' awareness of misinformation while concurrently strengthening their trust in various social media platforms. However, these misinformation intervention features do not necessarily reduce underlying distrust in social media. Recognizing these nuances is essential, as it paves the way for addressing issues of trust and distrust and designing future misinformation interventions.
In this work, we contribute: (1) a comprehensive empirical study that investigates the relationship between trust and distrust in social media, along with an in-depth analysis of the variances in these dynamics across diverse platforms and among various demographic groups, collectively contributing an enhanced theoretical comprehension of trust and distrust dynamics; (2) new scales for measuring trust and distrust in social media that we validated in our study that benefit future researchers; (3) an understanding of people's use of, perceptions about, and trust in misinformation interventions on social media; and (4) theoretical and practical implications for future work.
Before further discussion, we first establish our operational definitions for trust and distrust. In this paper, we define trust as an individual's belief in the competence, benevolence, integrity, and reliance of social media [77]. We conceptualize distrust as a cognitive and emotional state stemming from perceived dishonesty, skepticism towards intentions or outcomes, fear of potential harm or deceit, and concerns of malevolence from another entity. We discuss the measurements in detail in Section 3 (Methods).

BACKGROUND & RELATED WORK

Trust and Distrust in Social Media
Trust is the foundational component that cements stable relationships, whether between individuals, organizations, or the intersection of information and technology where these connections are vital [22,25,35]. While a significant body of work has examined trust within the realm of social media, there remains a gap in understanding distrust in the same context [36]. The scholarly debate on this issue presents two dominant perspectives: one positing trust and distrust as opposite ends of a singular continuum [39], and the other conceiving them as separate, distinct entities [38]. For example, prior work has found that while trust may correlate with the frequency of Facebook usage, distrust doesn't necessarily mirror this trend [8]. To this end, scholars [66] have argued that failing to discern their interrelationship could yield incomplete insights and suggested that future work should further examine the dynamics between trust and distrust. As such, this ongoing debate highlights the need to clarify the discourse on trust and distrust with empirical findings.
Furthermore, trust and distrust are highly contextualized and are cultivated or eroded through specific tasks within particular situations [7,24,54]. For example, prior work has highlighted factors influencing trust, including service quality and the usability of a platform [60]. In another context, different factors were highlighted when examining trust in AI [31], such as the degree of automation in the AI system and its performance capabilities. These studies underscore the idea that trust is deeply rooted in its context. Therefore, trust within the context of social media deserves its own distinct analysis and attention [77]. Disentangling this complex relationship promises to deepen our insight and pave the way for creating more trustworthy social media spaces.

Demographic & Platform Differences in Trust and Distrust in Social Media
A substantial body of research has explored demographic differences in the establishment of trust across a diverse array of contexts, such as online commerce [32], health information websites [14], AI-supported tools [56], and social media [65]. These disparities in trust can span a range of factors, from psychological considerations to demographic attributes. For instance, some studies highlighted the association between demographic variables, such as gender and age, and trust levels [11,15,59]. Their findings revealed that women and older adults tend to approach online information with greater caution and trust that information less than their male and younger counterparts, respectively [15,41]. However, despite the considerable research on demographic differences in trust [44,51,69], research examining the interplay between demographic factors and distrust dynamics within social media is sparse. Addressing these multifaceted inquiries necessitates an integrated approach transcending isolated examination of demographic variables. Given the heightened emphasis on understanding trust, it is important to extend this focus to distrust. Furthermore, the rapid evolution of the social media environment, characterized by new platforms and features within existing platforms, poses a moving target. This dynamic landscape underscores the need for ongoing research to maintain the timeliness and relevance of our understanding of trust dynamics. Therefore, our work seeks to bridge this research gap by comprehensively exploring the demographic factors that underpin not only trust but also distrust in social media.
Likewise, each social media platform, inherently designed with unique features and user experiences, fosters its own distinct culture in the digital ecosystem [72]. Prior work comparing trust across these platforms indicates that user trust varies considerably [10,16,48,74], suggesting that there is no monolithic "trust" sentiment when it comes to social media; rather, user trust is fragmented, nuanced, and platform-specific. Furthermore, little research has examined platform differences in distrust in social media. However, the lack of understanding of how distrust varies across platforms may lead to misguided interventions and policy implementations, ultimately failing to address core issues. Our research seeks to address these research gaps.

Trust, Distrust, and Misinformation Interventions
Scholars have argued that trust should be understood and measured as a fluid state that evolves based on various situational factors [78].
From this perspective, both trust and distrust arise from specific interactions and contextual circumstances. Within the scope of our study, we focus on trust in the context of misinformation on social media.
Prior work has shown that repeated exposure to information leads people to perceive that information as more likely to be accurate, illustrating the persuasive influence of repeated misinformation [71]. Additionally, when confronted with information they previously believed to be false, individuals tend to develop negative emotions toward social media platforms and instead gravitate towards other platforms that elicit positive emotions [46]. Consequently, platforms face the challenging yet vital task of combating misinformation to retain users. In recent years, social media platforms have implemented various strategies to counter misinformation, such as fact-checking, warning labels, and content removal, anticipating that these would enhance trust and diminish distrust [50].
As misinformation interventions have continued to evolve, a body of research has investigated their effectiveness from various perspectives. For instance, some studies have used machine learning models to identify and flag misinformation surrounding crisis events [64,73]. Furthermore, recent work has investigated the effectiveness of misinformation interventions, such as fact-checking, time-lagged approaches, account banning, and combined approaches [4]. This study found that existing interventions were unlikely to be effective if implemented individually. The success of an integrated approach is contingent upon the characteristics of each intervention, their interplay, the pattern of misinformation propagation, the length of the event, patterns of user engagement, the number of followers users have, and the evolution of these elements during a disinformation campaign [4]. Another area of research examines how specific features of misinformation interventions influence people's attitudes. For example, Mena's experimental study highlights the significant impact of flagging misinformation in reducing intentions to share false news [40]. However, very little research has explored users' perceptions, trust, and distrust regarding misinformation interventions employed on social media platforms. Therefore, our work aims to understand the ways in which these interventions may be related to people's trust and distrust in the context of the misinformation age on social media.

METHODS
In this section, we describe our study procedure, including an analysis of the changes implemented in social media platforms to address the spread of misinformation (see subsection 3.1) and our survey study (see subsection 3.2). Figure 1 shows an overview of our study flow. This research was approved by the Institutional Review Board at our institution.

Characterizing Changes in Social Media Platforms to Combat Misinformation
Prior work has proposed a variety of approaches to examine the changes made to social media platforms, such as analyzing industry blog posts and screenshots using the Internet Archive Wayback Machine and examining the changes logged to the social media repositories [13,19,70]. Inspired by existing work, we first collected and exported all available blog posts from four major social media companies, including Facebook, Twitter, YouTube, and TikTok, about their platforms. These platforms were selected as they are among the most commonly-used social media platforms for gathering news and information at the time of our study [34]. In this step, we focused on "visible" misinformation intervention features that can be seen by users, as they serve as more direct, tangible touchpoints between the platform and its users than backend or algorithmic interventions.

[Figure 1. Overview of our study flow. Step 1 includes collecting blog posts regarding misinformation intervention features on social media and analyzing these blogs using the Rapid Qualitative Analysis method [45]. Step 2 includes designing and deploying the survey study (guided by the findings from Step 1), followed by multiple rounds of pilot studies and the final launch of the study.]
We specifically targeted the period from January 2017 to January 2023 in our collection of social media platforms' blog posts, aligning with the rise of widespread misinformation campaigns [42]. The blog posts were filtered based on keywords, such as "misinformation" and names of misinformation intervention strategies. These selection criteria were applied uniformly across all platforms. Each retrieved post was then manually reviewed by two researchers on our research team to confirm its relevance to misinformation interventions.
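The paper does not publish its collection scripts; as an illustration of the kind of keyword-and-date filtering described above, the following sketch (with hypothetical field names and an abridged keyword list) selects candidate posts for subsequent manual review:

```python
from datetime import date

# Hypothetical post records; real data would come from each platform's blog export.
posts = [
    {"platform": "Twitter", "date": date(2020, 5, 11),
     "title": "Updating our approach to misleading information"},
    {"platform": "YouTube", "date": date(2016, 8, 1),
     "title": "New creator tools"},
]

# Abridged, illustrative keyword list; the study filtered on terms such as
# "misinformation" and names of intervention strategies.
KEYWORDS = ["misinformation", "misleading", "fact-check", "false information"]
START, END = date(2017, 1, 1), date(2023, 1, 31)  # study window

def is_candidate(post):
    """Keep posts inside the study window whose title mentions a keyword."""
    in_window = START <= post["date"] <= END
    mentions_keyword = any(k in post["title"].lower() for k in KEYWORDS)
    return in_window and mentions_keyword

candidates = [p for p in posts if is_candidate(p)]
```

In the study itself, posts passing such a filter were then reviewed by two researchers to confirm relevance.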
Given the potential immediate and salient impact of these features, it is important to understand their role in fostering a trustworthy digital ecosystem. Characterizing the changes in social media platforms also reveals incremental adjustments that may have important implications in the fight against misinformation.
Then, we conducted a rapid qualitative analysis [23] of these posts. Rapid qualitative analysis is a method to obtain targeted qualitative data and comparative results when data collection targets and processes are highly structured [23]. Research has demonstrated the effectiveness and rigor of rapid qualitative analysis to be comparable to traditional qualitative analysis, despite the streamlined process of the former [45]. Two researchers first examined the data in its entirety to establish a general understanding of the problem space. Then, given the major overlap in changes made across platforms, the two researchers independently categorized the major features and changes that social media platforms implemented to combat misinformation into higher-level themes.
Overall, several major themes were identified across the existing misinformation interventions using our rapid qualitative analysis: 1) labeling/tagging, 2) credible information curation, and 3) actionable external source verification, as briefly illustrated in Figure 2. Specifically, (A) Labeling/Tagging Features include the labeling or tagging of potentially misleading or false information shared on social media platforms. Labels or tags can provide additional context and warnings to help users discern the credibility of the content. (B) Curation Features (e.g., Credible Information Centers) are dedicated spaces or sections in social media platforms that curate and showcase credible information from authoritative sources. (C) Verification Features (e.g., Clickable External Links) enable access to additional information associated with a particular post or content. These links can lead to additional external resources, fact-checking websites, or trusted sources of information, allowing users to verify the accuracy and credibility of the shared content. These themes also align with misinformation intervention approaches seen in existing literature [2].

Survey Study
3.2.1 Overall Study Design. Following our analysis of changes across social media platforms, we incorporated key findings into our survey design. Our survey aimed to investigate people's trust and distrust in social media in the context of misinformation countermeasures.
To ensure the validity of our survey, we conducted multiple rounds of pilot studies.First, we conducted three informal pilot studies with our research team and colleagues.During the pilot studies, we identified several areas for refinement.For example, we simplified the language in our questions for clarity to avoid academic jargon, ensuring they were understandable to a general audience.Additionally, we included clarifying examples (e.g., the screenshots shown in Figure 2) next to complex questions to reduce potential misinterpretation.
Subsequently, we conducted a formal pilot test with 100 respondents using the Qualtrics online research panel. The feedback and responses received during the pilot tests were invaluable in refining and developing the final survey instrument. Respondents were compensated based on the estimated time required to complete the survey.
We also included one attention-check question in the survey to further improve scale validity. This attention-check question was placed early in the survey and had an obviously correct response to identify inattentive respondents. This process allowed us to screen out these respondents prior to conducting any analyses.

Recruitment & Overview of Participants. We recruited survey respondents from a third-party service, the Qualtrics online research panel, in March of 2023. Qualtrics recruited participants from a nationally representative pool based on the following inclusion criteria: participants were adults (aged 18+) who had used at least one of the four social media platforms of interest (Facebook, Twitter, YouTube, or TikTok) within the last three months. These four platforms were chosen due to their significant impact on global information dissemination, their distinct modes of user interaction, and their important influence on the spread of misinformation at the time of our study [43]. In our study, participants were only asked about platforms they had engaged with in the past three months. This is because we were interested in answers from people who had recently experienced the platforms and their features, which should yield more accurate and reliable perceptions of misinformation intervention features and trust/distrust. Before participants were asked to assess their trust concerning misinformation interventions, we presented them with vignettes (e.g., as shown in Figure 2). These vignettes were designed to provide a concrete example of how the misinformation intervention functions, which can help minimize interpretation variability and enhance the accuracy of participants' responses.
To ensure data quality, we excluded participants who met one or more of the following exclusion criteria: 1) those who completed the survey in under 10 minutes, which was considered "speeding" based on our pilot test results; 2) those who provided gibberish or unrelated responses to the open-ended question on the definition of misinformation, including nonsensical words like "gllllsscc"; and 3) those who took part in any of the previous pilot tests.
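These exclusion rules amount to a simple filter over the raw responses. A minimal sketch under hypothetical column names, with a toy gibberish check standing in for the manual review of open-ended answers described above:

```python
import pandas as pd

# Hypothetical raw panel export; column names are illustrative, not the study's.
raw = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "duration_min": [18.2, 6.5, 25.0, 14.1],
    "misinfo_definition": [
        "False or misleading content shared as fact",
        "Information that is not true",
        "gllllsscc",
        "Inaccurate claims spread online",
    ],
    "in_pilot": [False, False, False, True],
})

def looks_gibberish(text):
    # Toy heuristic stand-in: the study identified gibberish via manual review.
    words = text.split()
    return len(words) < 3 or not any(w.isalpha() and len(w) > 2 for w in words)

keep = (
    (raw["duration_min"] >= 10)                       # criterion 1: exclude "speeders"
    & ~raw["misinfo_definition"].map(looks_gibberish) # criterion 2: gibberish answers
    & ~raw["in_pilot"]                                # criterion 3: pilot participants
)
clean = raw[keep]
```

Only respondent 1 survives all three criteria in this toy example; the study applied the same logic to the full panel before analysis.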
Participants Overview. In total, 1,769 participants from the United States were included in our final dataset for analysis. Detailed demographic characteristics of the participants are shown in Table 1. This study contained a nationally representative sample, meaning the demographic distribution closely aligned with that of the United States population.
The majority of respondents identified as women (59%), followed by men (40%) and non-binary or undisclosed (1%). The average age of respondents was 48 years (SD=17). Likewise, the largest proportion of respondents identified as Caucasian/White (50%), followed by African American or Black (26%), Asian (12%), American Indian or Alaskan Native (9%), and Native Hawaiian or Pacific Islander (1%). Additionally, 13% of the participants indicated that they were Hispanic or Latino. Furthermore, 36% of respondents had a high school diploma or less, 19% had an associate degree, 28% had a bachelor's degree, and 17% had a postgraduate degree. Moreover, 44% of respondents identified themselves as Democrats or Lean Democrats, 28% claimed to be Independent, and 26% saw themselves as Republicans or Lean Republicans. Our respondents also varied across all annual household income levels, which we categorized into low-income (27%) and moderate-to-high-income (73%) based on the 2022 U.S. Federal Poverty Level (185%) Guidelines [1].

Measures.
Overall, our survey questions focused on participants' trust and distrust of social media, their experiences with misinformation intervention features, and their demographic background.Below, we provide detailed measurements and scales used in this study.
Trust. Trust in social media was measured using four survey items with a five-point Likert scale, with responses ranging from strongly disagree (1) to strongly agree (5). This four-item measurement was adapted from prior work [18,77], which emphasized the importance of specifying the trustee (i.e., the object of trust) and the context of the study when measuring trust. In our research, we operationalize the context as the situations in which users encounter misinformation on social media. The items of the trust measurement correspond to four trust dimensions detailed in a systematic review, including benevolence, integrity, competence, and reliance [77]. These four items were utilized to create the following four statements that comprised the measurement of trust in social media in our study:

(1) "I believe that <social media platform> cares about helping me avoid misinformation." Rationale: This measure reflects the dimension of benevolence, which relates to people's perceptions about the intentions of social media platforms and their perceptions about platforms' levels of concern for users' well-being in the misinformation age.

(2) "<Social media platform> is reliable because it attempts to combat the spread of misinformation." Rationale: This measure corresponds to the dimension of reliability. In this context, reliability refers to the perceived effectiveness and commitment of a social media platform in combating misinformation.

(3) "I feel very confident about <social media platform>'s ability to address misinformation." Rationale: This measure aligns with the dimension of competence. Competence refers to the level of confidence that people place in <social media platform>'s ability to tackle misinformation effectively.

(4) "I am willing to act upon the information I get on <social media platform>." Rationale: This measure relates to the dimension of reliance. Reliance signifies the level of trust users place in the information received from <social media platform>, and whether or not they trust that information enough to take action based on it.
Note that in the actual survey study, the term "<social media platform>" was replaced with specific platform names, including Facebook, Twitter, TikTok, and YouTube. This customization allowed for a more targeted assessment of participants' trust perceptions towards different social media platforms, which aligned with findings from the aforementioned systematic review [77] in that trust in social media may differ depending on the platform.
Respondents also rated their level of trust in social media when it contains a particular type of misinformation intervention feature, from strongly disagree (1) to strongly agree (5). Specifically, we provided four-item measurements following the prompt "In your opinion, when a social media platform has this <misinformation intervention feature>...": 1) "it shows that the platform cares about helping me avoid misinformation", 2) "it shows that the platform is reliable because it attempts to combat the spread of misinformation", 3) "it makes me feel more confident in the platform's ability to address misinformation", and 4) "I am more willing to act upon the information I get on this platform". In this set of questions, the placeholder term "<misinformation intervention feature>" was replaced with specific feature names (i.e., the labeling, curation, and verification features shown in Figure 2).
Distrust. We measured distrust in social media using four survey items with a five-point Likert scale, ranging from strongly disagree (1) to strongly agree (5). This measurement of distrust was inspired by the systematic review of trust in social media [77]. Four items comprised the measurement of distrust in social media in our study, including skepticism, dishonesty, malevolence, and fear, which were utilized to create the following four statements:

(1) "I am skeptical about whether <social media platform> keeps my interests in mind when it makes decisions on addressing misinformation." Rationale: This measurement aligns with the dimension of skepticism. It reflects distrust in <social media platform>'s intentions among users, raising doubts about whether the platform prioritizes the user's interests when making decisions related to addressing misinformation.

(2) "<Social media platform> intentionally allows misinformation to stay on its platform." Rationale: This measurement indicates the dimension of dishonesty. If a user believes that <social media platform> knowingly permits misinformation to persist on its platform, it may lead to distrust in social media.

(3) "<Social media platform> transmits misinformation for its own interests." Rationale: This measurement relates to the dimension of malevolence. It implies the perception that <social media platform> propagates misinformation to serve its own interests, shedding light on the belief that it prioritizes its own agenda over providing accurate information.

(4) "The prevalence of misinformation on <social media platform> makes me fear using this platform." Rationale: This measurement maps to the dimension of fear. It reflects that a platform's failure to address misinformation may lead to fear or apprehension towards using it, due to the widespread presence of misinformation.
Additionally, respondents rated their level of distrust in social media when it contains a particular type of misinformation intervention feature, from strongly disagree (1) to strongly agree (5). The four-item measurements of distrust followed the prompt "In your opinion, when a social media platform has this <misinformation intervention feature>...": 1) "I am skeptical about whether the platform keeps my interests in mind when it makes decisions about addressing misinformation", 2) "it shows that the platform intentionally allows misinformation to stay on it", 3) "it shows that the platform spreads misinformation for its own interests", and 4) "it shows that misinformation is widespread on the platform, making me fear using it". Similarly, in this set of questions, the placeholder term "<misinformation intervention feature>" was replaced with specific feature names (i.e., the labeling, curation, and verification features shown in Figure 2).
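The paper does not detail how item responses were aggregated into scale scores, but four-item Likert scales of this kind are commonly averaged into a composite per platform. A minimal sketch under that assumption, with illustrative responses (not study data) for a single participant:

```python
from statistics import mean

# Hypothetical responses for one participant: four items per platform,
# each on the 1-5 strongly disagree .. strongly agree scale.
trust_items = {
    "Facebook": [4, 3, 3, 4],   # benevolence, reliability, competence, reliance
    "YouTube":  [5, 4, 4, 3],
}
distrust_items = {
    "Facebook": [4, 4, 3, 2],   # skepticism, dishonesty, malevolence, fear
    "YouTube":  [2, 2, 2, 1],
}

# Composite scores: mean of the four items, computed separately for trust
# and distrust, since the paper treats them as distinct constructs.
trust_score = {p: mean(v) for p, v in trust_items.items()}
distrust_score = {p: mean(v) for p, v in distrust_items.items()}
```

Keeping the two composites separate (rather than subtracting one from the other) is what allows a participant to score moderately high on both for the same platform, as in the Facebook values above.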
Frequency of exposure to misinformation intervention features. To understand how often participants were exposed to misinformation intervention features (i.e., Labeling/Tagging Features, Curation Features, and Verification Features), they were asked the question, "How often do you see the above kind of feature on social media?" Note that participants were shown example images of the individual features (see Figure 2) alongside this question for reference. Ratings ranged from (1) never to (5) always.
Relationship between experiences with misinformation intervention features and people's attitudes and behaviors. We also aimed to explore the potential influence that participants' prior experiences with the misinformation intervention features had on their awareness of misinformation, information-sharing intentions, and desire to receive information from that social media platform. For example, with respect to labeling/tagging features, we used the prompt "Thinking about your experiences with this feature, please indicate how much you agree or disagree with the following statements about this labeling/tagging feature." Participants were asked to rate their level of agreement, ranging from strongly disagree (1) to strongly agree (5), with three separate statements: (1) "Overall, these labeling/tagging features make me more aware of misinformation." (2) "I am more likely to share posts from social media platforms that have these labels/tags." (3) "When a social media platform has these labeling/tagging features, it makes me want to receive more information from the platform."
We replicated these questions for the remaining misinformation feature categories (i.e., "labeling/tagging features" in the above prompt was replaced with "curation features" and "verification features"). This approach enabled us to examine and compare the extent to which participants' experiences with each type of misinformation feature impacted their misinformation awareness, information-sharing intentions, and desire to receive information on platforms.
Demographic Background. Participants were asked a few questions about their demographic background, including age, sex, race and ethnicity, education, political ideology, and income. Participant demographic data is summarized in Table 1.
Age. Participants were asked to provide their age in the survey as a numeric value. Table 1 shows the grouped age distribution.
Sex. Participants were provided the options "Female", "Male", "Prefer to self-describe", and "Prefer not to answer".
Race & Ethnicity. Participants were first asked, "Do you consider yourself Hispanic or Latino?" Then, they were asked to choose one or more races with which they most closely identify. Response options included "African American or Black", "American Indian or Alaskan Native", "Asian", "Native Hawaiian or Pacific Islander", and "White", with the additional options of "Prefer not to answer" and "Self-describe".
Education. Participants were asked about their education level using the question, "What is the highest degree or level of school you have completed?" Response options included "Less than high school", "High school graduate", "Associate degree", "Bachelor's degree", and "Postgraduate degree".
Political Ideology. Participants were asked to describe the political viewpoint with which they most closely aligned. Response options included "Democrat/Lean Democrat", "Independent", "Republican/Lean Republican", and "Other, please describe."
Income. Participants were also asked to report their household income (summarized in Table 1).
Data Analysis.
We used a variety of statistical analyses to investigate the relationships between demographics, information practices, and trust and distrust. First, we calculated descriptive statistics for all variables, including means and standard deviations for continuous variables and frequencies with percentages for categorical variables. This gave us a general understanding of the distribution of our variables of interest.
To explore the relationships between variables, we used correlation analyses: Spearman's correlation matrices for continuous variables and chi-squared tests for categorical variables. To explore the relationship between trust and distrust, we employed factor analysis (using the R package nFactors [53]) to group correlated items into a small number of factors. For clustering, we used a Gaussian Mixture Model (GMM) [55] (using the R package mclust [61] and the Python package sklearn.mixture [49]). The GMM captures intricate distributions by accommodating multiple Gaussian components; since some data points sit ambiguously between distinct patterns, GMM's soft clustering assigns cluster-membership probabilities to these observations, effectively addressing their inherent ambiguity. We then analyzed the resulting groups in detail, employing descriptive statistics and a generalized linear model (GLM) to examine key demographic variables and to uncover insights into users' behavioral patterns and habits. We also conducted multiple regression analyses to examine the effects of demographic variables, such as age, gender, and education, on trust and distrust across different platforms and misinformation interventions.
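The soft-clustering behavior described above can be sketched with the Python package the paper cites, sklearn.mixture. This is a minimal illustration on synthetic two-dimensional scores, not the study's data or code:

```python
# Minimal sketch of GMM soft clustering with sklearn.mixture (cited as [49]).
# The "trust vs. distrust" scores below are synthetic, for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
scores = np.vstack([
    rng.normal([4.0, 2.0], 0.4, size=(100, 2)),  # high trust, low distrust
    rng.normal([4.0, 4.0], 0.4, size=(100, 2)),  # high trust, high distrust
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
probs = gmm.predict_proba(scores)  # soft assignments: one probability per cluster
print(probs.shape)                            # (200, 2)
print(np.allclose(probs.sum(axis=1), 1.0))    # True: each row is a distribution
```

Unlike hard clustering (e.g., k-means), `predict_proba` returns per-cluster membership probabilities, which is what lets ambiguous respondents sit partway between groups.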
To examine differences between groups, we performed independent-sample t-tests and Kruskal-Wallis rank sum tests (using the R package stats [52]). Where significant differences were found, we performed post-hoc comparisons using Dunn's test (using the R package FSA [47]) to determine which groups differed significantly. We considered results statistically significant at α = 0.05 and reported effect sizes where applicable to indicate the strength of the relationships between variables.
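A hedged sketch of this testing workflow: the paper used R's stats and FSA packages, but SciPy's Kruskal-Wallis implementation is a close analogue. The ratings below are synthetic:

```python
# Illustrative Kruskal-Wallis test on synthetic 1-5 Likert ratings
# (the paper ran this in R; this is not the authors' code).
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
group_a = rng.integers(1, 6, size=50)  # e.g., trust ratings from group A
group_b = rng.integers(2, 6, size=50)  # group B skews slightly higher
group_c = rng.integers(1, 5, size=50)

h_stat, p_value = kruskal(group_a, group_b, group_c)
print(h_stat, p_value)
# If p < 0.05, a post-hoc test (Dunn's test in the paper, via R's FSA
# package) identifies which specific pairs of groups differ.
```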

RESULTS
We first investigate the complex dynamics of trust and distrust (subsection 4.1). We then show results regarding platform and demographic differences in trust and distrust in social media (subsection 4.2). After that, we present results on people's use of misinformation intervention features on social media and how their use of these features relates to their attitudes and trust in social media (subsection 4.3).

Dynamics of Trust and Distrust in Social Media
4.1.1 Validity and Reliability of the Social Media Trust and Distrust Scale (SMTDS). One of the core components of this study was to evaluate trust and distrust in social media. Since our measurement combined items from previous work, we assessed its validity and reliability with factor analysis and Cronbach's α, tests commonly used for multi-item Likert scales in surveys [67].
The correlation graph (see Figure 3) presents four pairwise correlation matrices, one each for Facebook, TikTok, Twitter, and YouTube. These matrices reveal that within each social media platform, the different aspects of trust are positively correlated with one another; the same pattern holds among the aspects of distrust. (Note that we focused on answers from people who had recently used the platforms and features, as recent experience should yield more accurate and reliable perceptions of misinformation intervention features and trust/distrust.) However, despite these correlations within the trust and distrust dimensions, there is a noticeable separation between the trust and distrust items in the matrices. This separation implies that trust and distrust are related but distinct constructs: they are interconnected yet different, particularly in how they manifest across various social media platforms.
We used factor analysis [58], a technique that groups correlated items into a small number of factors, to evaluate our survey's construct validity. If trust and distrust, as we measure them, are two separate constructs, then we would expect the four trust questions to load onto one factor and the four distrust questions to load onto another. The Kaiser-Meyer-Olkin (KMO) measure was 0.85, and Bartlett's test of sphericity was significant (χ² = 15521.21, df = 28, p < .01), suggesting substantial correlation in the data that factor analysis could summarize. Our scree plot (see Figure 11 in Appendix A) suggests retaining two factors with eigenvalues greater than one; parallel analysis also indicates two factors.
We used maximum likelihood factor analysis, specifically the factanal function in R. For rotation, we used promax, as we expected the factors to be slightly correlated rather than fully independent, consistent with the correlation results. Table 2 shows the results for two factors: the four trust measurements load onto one factor, with loadings ranging from 0.81 to 0.89, and the four distrust measurements load onto another, with loadings ranging from 0.60 to 0.86. The total variance explained (TVE) is 0.67, which is acceptable based on Hair [21]. For the four-item instrument measuring trust in social media, we achieved an excellent Cronbach's alpha of 0.92 across all platforms tested, indicating strong internal consistency: α = 0.92 for trust in Facebook, α = 0.91 for TikTok, α = 0.93 for Twitter, and α = 0.90 for YouTube. Similarly, we achieved a good Cronbach's alpha of 0.84 for our four-item instrument measuring distrust in social media: α = 0.82 for distrust in Facebook, α = 0.84 for TikTok, α = 0.85 for Twitter, and α = 0.84 for YouTube.
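Cronbach's alpha, used above to assess internal consistency, can be computed directly from an item-response matrix. A minimal NumPy sketch (our own helper for illustration, not the authors' code):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Worked check: four perfectly parallel items yield alpha = 1.0
base = np.array([1, 2, 3, 4, 5], dtype=float)
alpha = cronbach_alpha(np.column_stack([base, base, base, base]))
print(round(alpha, 6))  # → 1.0
```

Values above roughly 0.9 are conventionally read as excellent and above 0.8 as good, matching how the per-platform alphas are characterized in the text.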
Collectively, these results suggest that our survey is valid and reliable in measuring trust and distrust in social media.

Relationship of Trust and Distrust in Social Media.
To examine the relationship between trust and distrust in social media (RQ1), we first tested the correlation between the two constructs. If trust and distrust are opposites of the same construct, they should be perfectly or near-perfectly negatively correlated; that is, we would expect a correlation very close to -1. A strong negative correlation would suggest they are opposites on a continuum, whereas a weak or absent correlation would imply they are separate constructs.
Correlation Analysis. Our results show a weak but statistically significant negative relationship between trust and distrust, with a Pearson correlation coefficient of -0.27 and a Spearman correlation coefficient of -0.23 across all platforms. By platform, we observed the following coefficients: TikTok (ρ = -0.12, p < 0.05), Twitter (ρ = -0.24, p < 0.05), YouTube (ρ = -0.25, p < 0.05), and Facebook (ρ = -0.39, p < 0.05). In other words, those who reported trusting TikTok, Twitter, YouTube, and Facebook distrust them less, and vice versa. Yet, while significant, these relationships are far from -1.
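As a hedged illustration of this correlation check, SciPy provides both coefficients; the data below are synthetic trust and distrust scores (with a deliberately weak inverse relationship), not the survey responses:

```python
# Illustrative Pearson and Spearman correlations on synthetic scores.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(2)
trust = rng.normal(3.5, 0.8, size=300)
distrust = 5 - 0.3 * trust + rng.normal(0, 1.0, size=300)  # weakly inverse

r, p_r = pearsonr(trust, distrust)
rho, p_rho = spearmanr(trust, distrust)
print(round(r, 2), round(rho, 2))
# A weak negative coefficient like this can be statistically significant
# while still being far from -1, i.e., far from "perfect opposites".
```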
The scatterplot in Figure 4 (A) shows a negative correlation, which suggests that for many users, high levels of trust correspond with low levels of distrust, and vice versa, in the context of social media. This supports the conventional wisdom that trust and distrust may be inversely related. However, the presence of data points in the upper-right corner, where both trust and distrust levels are high, indicates that there is a subset of the population for which trust and distrust coexist.
Furthermore, data points across the four platforms exhibit similar patterns, as shown in Figure 4 B-E. However, when looking at each platform individually, we observed that a greater number of people exhibit high distrust and low trust in Facebook (see the upper-left corner of Figure 4 (B)). On the other hand, people display a moderate level of both trust and distrust in YouTube, as shown in Figure 4 (E), evident from the concentrated points in the center of the graph. Collectively, these results suggest that while trust and distrust can be viewed as related constructs, they can also be distinct, with additional evidence provided by the clustering analysis below.
Clustering Analysis. As previously mentioned, our correlation results and factor analysis suggest that trust and distrust could be related but distinct concepts. To further elaborate on and validate this finding, we performed a clustering analysis, a process of grouping data based on the information describing them and their relationships within the data [12]. Through clustering, we can identify distinct groups of users who share similar levels of trust and distrust. To estimate the appropriate number of clusters, we used the Bayesian Information Criterion (BIC) [17]. Our analysis showed that the BIC curve remains relatively flat beyond four clusters, suggesting that a four-cluster model fit is appropriate (with more details available in Figure 12 in Appendix A). Therefore, we ran our clustering model with four clusters.
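The BIC-based choice of cluster count can be sketched as follows. This is an illustration on synthetic data with four planted clusters, not the study's data (the paper ran this in R's mclust as well as sklearn.mixture):

```python
# Illustrative BIC sweep over candidate numbers of GMM clusters.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
centers = [(1.5, 4.5), (4.5, 1.5), (4.5, 4.5), (3.0, 3.0)]
data = np.vstack([rng.normal(c, 0.35, size=(80, 2)) for c in centers])

bics = {k: GaussianMixture(n_components=k, random_state=0).fit(data).bic(data)
        for k in range(1, 8)}
for k, b in bics.items():
    print(k, round(b, 1))
# Look for where the BIC curve flattens: with four planted clusters it
# should drop sharply up to k = 4 and level off afterwards, mirroring
# the selection logic described for Figure 12.
```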
The clustering analysis with four clusters (Figure 5) yielded groups of users with low trust but high distrust and users with high trust and low distrust, indicating that trust and distrust exhibit diametrically opposing behaviors among some participants, consistent with the two extremes of a continuum.
However, two additional clusters also emerged: individuals with both high trust and high distrust (see the upper-right corner) and individuals concentrated around the center. The Spearman correlation between trust and distrust within the high-trust-high-distrust group is 0.82, indicating a strong positive association: when high distrust is present, high trust can also be present, which differs from the green cluster. Excluding the high-trust-high-distrust group, the Spearman correlation between trust and distrust is -0.52. While this is more negative than the correlation observed in the full respondent pool, it remains only moderately inverse, suggesting that trust and distrust, although related, do not constitute diametrically opposed constructs on a single continuum. In other words, trust and distrust do not only exist at the two extremes of a continuum; they can also be ambivalent, exhibiting a more complex relationship.
The demographic breakdown (see Table 3) of the high-trust-high-distrust group shows that males constituted the majority at 57%, followed by females at 42%, with a small segment (1%) preferring not to answer or identifying as non-binary. The average participant was 37 years old, with the largest age groups being 35-44 (37%) and 25-34 (35%). Ethnicity was predominantly non-Hispanic or Latino (87%), with 40% identifying as African American or Black and 36% as White. Educational attainment varied, with 35% holding a bachelor's degree and 30% having completed high school. In terms of political affiliation, 63% leaned Democratic. Household income levels were predominantly moderate-to-high (67%) per the U.S. Federal Poverty Level guidelines. To further discern the differences between the high-trust-high-distrust group and the other groups, we first focused on the demographic patterns within this cluster. To do so, we used a generalized linear model, specifically logistic regression, to estimate the probability of membership in the high-trust-high-distrust group from a range of demographic variables. After adjusting for multiple comparisons using the Bonferroni method (see Table 6 in Appendix A), we found that age significantly influenced group classification, with a coefficient (log-odds) of -0.06 (p < 0.001). This result suggests that the likelihood of belonging to the high-trust-high-distrust group decreases with age.
Educational attainment was also a significant factor: high school graduates were less likely to be in the high-trust-high-distrust group than those with less than a high school education (coefficient of -1.46, p < 0.05). Gender differences also emerged, with females showing lower odds of high-trust-high-distrust group membership than males (coefficient of -0.88, p < 0.05). Politically, individuals identifying as Independent or Republican were less likely to be in the high-trust-high-distrust group than Democrats, with coefficients of -0.93 (p < 0.05) and -0.64 (p < 0.05), respectively. The remaining variables examined did not significantly affect the probability of membership in the high-trust-high-distrust group.
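These log-odds coefficients become easier to interpret when exponentiated into odds ratios. A small illustration (our own, using the coefficient values reported above; the helper function is not from the paper):

```python
# Converting reported log-odds coefficients into odds ratios.
import math

def odds_ratio(log_odds: float) -> float:
    """An odds ratio below 1 means lower odds of high-trust-high-distrust membership."""
    return math.exp(log_odds)

coefs = {
    "age (per year)": -0.06,
    "high school vs. less than high school": -1.46,
    "female vs. male": -0.88,
    "Independent vs. Democrat": -0.93,
    "Republican vs. Democrat": -0.64,
}

for name, b in coefs.items():
    print(f"{name}: OR = {odds_ratio(b):.2f}")
# e.g., the age coefficient of -0.06 corresponds to an odds ratio of
# about 0.94 per year, compounding into much lower odds across decades.
```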
Our analysis also revealed a tendency for younger individuals to be part of the high-trust-high-distrust group. To test this observation, we conducted a non-parametric comparison using Dunn's test, which assesses differences between group medians without assuming a normal distribution. The results confirmed (see Figure 13 in Appendix A), with high statistical significance (p < 0.05), that the median age of the high-trust-high-distrust group is substantially lower, at 35.5 years, than the other groups' median age of 50 years. This finding suggests that high trust and high distrust in social media are particularly prevalent among younger individuals, notably within the 25-44-year age bracket. Such a demographic pattern emphasizes the need for more research into the social and psychological factors that foster these attitudes among younger populations.
In our analysis (see Figure 6), we examined intervention observation frequency and social media usage within the high-trust-high-distrust group compared to the other groups. Dunn's test revealed a significant difference at the 0.001 level between the high-trust-high-distrust group and the others in terms of intervention observation. Within the high-trust-high-distrust group, the median observation frequency was 3.33, suggesting that individuals in this group sometimes observe the interventions; the median for the other groups was 2.33, indicating that they rarely notice these interventions. Turning to social media usage, the median frequency within the high-trust-high-distrust group was 4.5, indicating that members of this group typically use social media multiple times a day, whereas the median for the other groups was 4. Overall, the high-trust-high-distrust group was more likely to engage with social media and, relatedly, with the misinformation interventions presented there, potentially affecting their perceptions and behaviors in this online environment.
Collectively, our results show that trust and distrust in social media are not simply opposites. Instead, they may co-exist, manifesting in complex ways among users who engage heavily with social media. These findings point toward a distinct demographic and behavioral pattern within the high-trust-high-distrust group, particularly among younger individuals, suggesting the need for further research into the factors driving these attitudes toward social media.

Platform & Demographic Differences in Trust and Distrust in Social Media

Platform Differences in Trust and Distrust in Social Media.
To answer RQ2a, we present the results of multiple comparisons of trust and distrust levels across the four social media platforms studied. Respondents' trust (χ² = 92.34, p < 0.05) and distrust (χ² = 95.82, p < 0.05) in social media differed significantly across platforms. Our post-hoc tests (see Figure 7) further reveal that respondents trusted TikTok, Twitter, and YouTube significantly more than Facebook (p < 0.001). The median trust levels for TikTok versus Twitter and for Twitter versus YouTube were not statistically different (p = 0.34 and p = 0.24, respectively). Moreover, respondents trusted YouTube significantly more than TikTok, with a median difference of 0.25 between the two platforms (p < 0.01).
As for distrust, our results showed significant differences between YouTube and each of Facebook, Twitter, and TikTok: participants exhibited lower distrust toward YouTube than toward the other platforms tested. More precisely, the distrust difference is 0.25 between Facebook and YouTube (p < 0.001), Twitter and YouTube (p < 0.001), and TikTok and YouTube (p < 0.001). Comparisons among the other platform pairs (Facebook-Twitter, Facebook-TikTok, Twitter-TikTok) were not significant. More details are available in Table 7 in Appendix A.
In short, when comparing social media platforms on trust and distrust levels, we found that Facebook was significantly less trusted and YouTube was significantly less distrusted.

Demographic Differences in Trust and Distrust in Social Media.
To answer RQ2b, we present the results of multiple regression models examining how respondents' trust and distrust in Facebook, TikTok, Twitter, and YouTube are associated with their age, education, gender, income, race and ethnicity, and political ideology.
Demographic Differences in Trust. Table 4 reveals a consistent pattern where age, coded as a continuous numerical variable, is inversely related to trust across all social media platforms at a high significance level (p < 0.001). For instance, with each incremental year in age, trust in Facebook diminishes by β = −0.016 ± 0.002, in TikTok by β = −0.020 ± 0.003, in Twitter by β = −0.019 ± 0.003, and in YouTube by β = −0.008 ± 0.002.
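To make these per-year coefficients concrete, a small illustration (our own arithmetic, using the β values reported above) of the predicted trust difference on the 1-5 scale across a 20-year age gap, holding other covariates fixed:

```python
# Predicted trust change over a 20-year age difference, per platform,
# using the per-year age coefficients reported from Table 4.
betas = {"Facebook": -0.016, "TikTok": -0.020, "Twitter": -0.019, "YouTube": -0.008}

for platform, beta in betas.items():
    print(platform, round(20 * beta, 2))
# e.g., trust in TikTok is predicted to be 0.40 points lower for a
# respondent 20 years older, versus only 0.16 points lower for YouTube.
```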
Regarding education, those with higher educational attainment reported lower trust in Facebook than those with less than a high school degree, with a bachelor's degree in particular associated with a significant decrease in trust (β = −0.662, p < 0.05). The association between education and trust did not reach significance for other platforms, indicating a nuanced impact of education on trust in social media.
In terms of race and ethnicity, the findings are mixed. Black participants showed significantly higher trust in Facebook (β = 0.354, p < 0.01) and YouTube (β = 0.413, p < 0.001) compared to White participants; other racial comparisons did not yield significant results. In terms of political ideology, Independents showed significantly lower trust in YouTube (β = −0.318, p < 0.001), and those identifying with political affiliations other than Democrat or Republican expressed markedly lower trust in YouTube (β = −1.061, p < 0.001). Gender differences in trust were not statistically significant, and no significant relationship was observed between income levels and trust in social media.
Demographic Differences in Distrust.Table 5 presents the results of multiple regression models, examining how respondents' distrust in Facebook, TikTok, Twitter, and YouTube is associated with their age, education, gender, income, race and ethnicity, and political ideology.
Age has a significant negative association with distrust in YouTube (β = −0.006, p < 0.01), indicating that older individuals tend to distrust YouTube less. No other significant age-related effects on distrust are observed for Facebook, TikTok, or Twitter. Education is not significantly associated with distrust in any platform at the p < 0.05 level.
Gender differences in distrust toward social media were not statistically significant, suggesting that distrust does not vary markedly between genders. In terms of income, no significant effects were observed, indicating that income levels do not play a substantial role in distrust of social media. When examining race and ethnicity, significant findings include that Hispanic respondents exhibited less distrust in Facebook (p < 0.05), and Black respondents showed less distrust in TikTok and Twitter (p < 0.05). Additionally, respondents from 'Other' racial groups indicated significantly higher distrust in Twitter (p < 0.001). We also found that Republican or Lean Republican respondents had significantly greater distrust in Twitter than Democrat or Lean Democrat respondents (p < 0.001). No other significant differences were noted across political ideologies for the remaining platforms. Collectively, our results show associations between demographic factors, such as age, education, gender, race and ethnicity, and political ideology, and levels of trust and distrust in social media, with the strength and nature of these associations varying across platforms.

Perceptions of and Experiences with Misinformation Interventions

Relationship Between Experiences with Misinformation Intervention Features and People's Attitudes. 91% of our respondents reported seeing misinformation intervention features on social media. Of these, 54% frequently saw the labeling feature, 56% the curation feature, and 56% the verification feature. Among those who had seen the misinformation interventions, we examined the relationship between participants' experiences with these interventions and their awareness of misinformation (Figure 8A), likelihood of sharing posts from social media (Figure 8B), and likelihood of receiving more information from the platform (Figure 8C).
Of those who had seen the misinformation interventions, 71% agreed or strongly agreed that labeling increased their awareness of misinformation on social media, while 8% disagreed or strongly disagreed. In addition, 61% and 55% agreed or strongly agreed that the curation feature and the verification feature, respectively, increased their awareness of misinformation. A small percentage (10% for curation and 14% for verification) disagreed that these features increased their awareness.
Furthermore, 31% of participants agreed or strongly agreed that they were more likely to share information from social media with the labeling feature, with a larger proportion disagreeing (22%) or strongly disagreeing (15%). In comparison, curation and verification features were more likely to make participants want to share information from social media, with 45% and 41% agreeing or strongly agreeing, respectively.
Finally, regarding the likelihood of receiving information from social media, 50% of respondents agreed or strongly agreed that curation features influenced them, while only 40% and 42% felt the same about labeling and verification features, respectively. Concurrently, 27% of respondents disagreed or strongly disagreed that labeling made them want to receive more information, 19% felt similarly about curation, and 22% about verification. In summary, while most participants agreed that misinformation interventions heightened their awareness of misinformation, many were neutral or disagreed that these interventions enhanced their likelihood to share and receive information. Nonetheless, individuals in our study were more inclined to share and receive information on platforms that employ curation features than any other misinformation intervention, whereas the labeling feature raised participants' awareness of misinformation more than the other features.

Relationship between Misinformation Intervention Features and Trust and Distrust in Social Media. The correlation matrices show the interplay between trust and distrust among social media users under the three intervention strategies: labeling, curation, and verification (see Figure 9). The trust dimensions (Reliance, Benevolence, Competence, Reliability) consistently demonstrate moderate to strong positive correlations with one another, underscoring a cohesive construct of trust; for example, users who perceive a platform as benevolent also tend to regard it as competent. The distrust dimensions (Skepticism, Malevolence, Dishonesty, Fear) are also positively correlated, particularly Skepticism and Malevolence, suggesting that users who are skeptical of the platform's intent may also view it as malevolent. Notably, the correlations between the trust and distrust dimensions are generally weak, indicating that these constructs may operate independently: an increase in trust does not necessarily equate to a decrease in distrust.
As shown in Figure 10, our respondents indicated their levels of trust in the labeling, curation, and verification features used to address misinformation on social media platforms. Participants' trust was assessed by their agreement with the platforms' anti-misinformation efforts. Labeling features had an average trust score of 3.50 (SD = 0.93), curation features scored slightly higher at 3.51 (SD = 0.92), and verification features came in at 3.43 (SD = 0.92). This indicates general trust in these features, with curation slightly more trusted than labeling and verification. The small standard deviations suggest that most respondents consistently agree that these features show the platforms' commitment to reducing misinformation and enhancing user engagement.

Figure 8: Participants' prior experiences with misinformation intervention features and their attitudes, including awareness of misinformation, likelihood of sharing information from social media, and intention to receive information from social media.
Conversely, distrust across all intervention features is moderate, with average scores indicating a neutral attitude, neither strong agreement nor disagreement. Mean distrust scores are 3.08 (SD = 0.89) for labeling, 3.04 (SD = 0.93) for curation, and 3.01 (SD = 0.94) for verification, showing consistent skepticism across interventions. This uniformity suggests that users are cautiously skeptical about the platforms' anti-misinformation efforts, questioning whether platforms are effectively combating misinformation or acting in their own interests.

Demographic Differences. Table 8 in Appendix A reports the factors related to respondents' trust in social media interventions, focusing on the labeling, curation, and verification features. Age is inversely related to trust across all three features; specifically, for each incremental year in age, trust decreases by β = −0.0102 ± 0.0013 for labeling, β = −0.0099 ± 0.0013 for curation, and β = −0.0088 ± 0.0013 for verification, all at p < 0.001. Gender differences are also pronounced: non-binary respondents exhibit significantly less trust in the curation feature (β = −0.7003, p < 0.05). Likewise, racial and ethnic disparities are evident: Black respondents show higher trust in the labeling (β = 0.1770, p < 0.05) and curation features (β = 0.1868, p < 0.05). Political affiliation also matters: Republicans demonstrate significantly less trust in the verification feature (β = −0.3669, p < 0.001). Education and income show no significant associations with trust, suggesting that education level and economic status do not play major roles in trust toward social media interventions.

Similar to trust, Table 9 (in Appendix A) shows that age is a significant factor in distrust, but with a smaller effect: each additional year in age is associated with β = −0.0069 (p < 0.001) for curation and β = −0.0055 (p < 0.001) for verification. Gender differences also emerged, with β = −0.1673 (p < 0.001) for curation, β = −0.1600 (p < 0.01) for verification, and β = −0.1322 (p < 0.1) for labeling, suggesting that female respondents in our study generally have less distrust of these misinformation interventions than males. Meanwhile, other demographic variables, such as education, income, and race, do not demonstrate consistent patterns of significance in influencing distrust.

DISCUSSION
Our analysis teased out the complex relationship between trust and distrust and how this relationship differs demographically and across social media platforms. We also contributed a set of validated survey scales for measuring trust and distrust that future research can use. Collectively, we contribute to further theorizing the dynamics of trust and distrust. Furthermore, we examined people's perceptions of and experiences with misinformation interventions and how their prior experiences may influence their trust and distrust in social media. Building upon our results, we discuss the significance of theorizing the relationship between trust and distrust, as well as the practical and design implications arising from our work.

The Significance of Theorizing the Relationship between Trust & Distrust in Social Media
Through an empirical study, we investigated the nuanced relationship between trust and distrust in the realm of social media. Our results showed weak yet statistically significant negative correlations between trust and distrust in social media platforms (e.g., Facebook, Twitter, TikTok, and YouTube). These results suggest that trust and distrust might coexist within our participants. This multifaceted interplay offers a fresh perspective for understanding these constructs.
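The weak negative trust-distrust association can be checked with a simple correlation test (e.g., Pearson's r). The sketch below uses synthetic stand-in scores constructed to show the reported pattern, not the survey data:

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic per-respondent mean trust/distrust scores (1-5 Likert means),
# built to exhibit a weak negative association; these are stand-ins for
# illustration, not the survey responses.
rng = np.random.default_rng(1)
n = 400
trust = rng.normal(3.4, 0.7, size=n)
distrust = 3.0 - 0.25 * (trust - trust.mean()) + rng.normal(0, 0.7, size=n)

r, p = pearsonr(trust, distrust)
print(f"r = {r:.2f}, p = {p:.3g}")
```

A weakly negative r with a small p-value, as produced here, is consistent with trust and distrust being related but far from mirror images of each other.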

Dynamics of Trust & Distrust in Social Media:
In this paper, we presented analyses to showcase the complex relationship that exists between trust and distrust, and further shed light on new research opportunities to explore the heterogeneity of "trust and distrust profiles" among users. For example, our clustering analysis shows that users have diverse trust dynamics, and this variation is crucial to understanding their attitudes and behavior towards social media usage. Specifically, our results show a clear cluster of people with high trust and high distrust. This finding may indicate that while users might trust specific aspects or functionalities of a platform, they simultaneously remain wary or distrustful of other elements of the platform. Such coexistence of high trust and distrust could arise from users discerning between the credibility of information sources (i.e., other users on the platform) and the reliability of the platform to continue providing trustworthy information (i.e., misinformation interventions). This pattern may also indicate that many users, rather than being universally skeptical or trusting, have a much more nuanced view of their online experience. Results from our analyses also encourage future researchers to reconsider the use of a single trust-distrust Likert scale, instead utilizing a set of valid and reliable questions that represent trust and distrust separately, as presented in our SMTDS (see Section 4.1.1). Nonetheless, our clustering analysis did not reveal a low-trust, low-distrust group. This may be attributed to our recruitment strategy, which focused on participants who currently use social media. We assume that those who use social media possess some trust in the platform and thus continue using it. If we were to recruit participants who do not currently use social media, we might be able to identify a low-trust, low-distrust cluster. Nevertheless, we suggest future work continue to dig into the nuanced characteristics of social media users and define their placement within specific trust and distrust clusters. Some additional user characteristics worth examining include content consumption preferences, external information sources [27], and psychological traits [6] (e.g., risk aversion or a propensity for skepticism can also influence trust dynamics). By dissecting user characteristics, we can better understand the diverse trust and distrust landscapes on social media platforms.
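The clustering described here used a Gaussian Mixture Model (see Figure 5). A minimal sketch with scikit-learn on synthetic (trust, distrust) points, using two components for simplicity (the actual analysis may use more), might look like:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic (trust, distrust) points: a main cloud along the negative
# diagonal plus a smaller high-trust-high-distrust group in the upper
# right corner; stand-ins used only to illustrate the fitting procedure.
rng = np.random.default_rng(2)
main = rng.multivariate_normal([3.2, 3.0], [[0.40, -0.15], [-0.15, 0.40]], 300)
dual = rng.multivariate_normal([4.3, 4.3], [[0.05, 0.00], [0.00, 0.05]], 60)
X = np.vstack([main, dual])

# Fit a two-component GMM and inspect each component's (trust, distrust) mean.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
for k, (mt, md) in enumerate(gmm.means_):
    print(f"cluster {k}: trust={mt:.2f}, distrust={md:.2f}, n={(labels == k).sum()}")
```

On data like this, one component's mean lands near the upper-right corner, which is how a high-trust-high-distrust group surfaces from the component means rather than from any pre-imposed grouping.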

Additionally, our findings suggest several trajectories for future theoretical work. First, the lack of a universally accepted definition of distrust in the academic community presents challenges in operationalizing this concept. Our study examines this understudied topic. While we aimed to explore the boundaries of distrust in the social media context, we recognize that this approach may limit the scope of our findings. Moving forward, we suggest that future research dive deeper into the nuances of distrust, exploring its various facets in different contexts. By expanding the definition and measurement of distrust, subsequent studies can offer a more holistic understanding of how trust and distrust coexist and interact in different environments. Such research will complement our findings and contribute to the discourse on trust dynamics.
Second, future work should develop theoretical models that encapsulate the various trust-distrust profiles. These models would serve as a foundational framework, elucidating how and why certain profiles form, their stability over time, and their responsiveness to external stimuli or platform changes. Another avenue for future research is to examine the temporal evolution of these profiles. Key questions to explore include: Do these profiles remain consistent over time? Or do they shift due to broader societal shifts, individual experiences, or changes made to the platforms? Examining the temporal dynamics of trust profiles provides academic insights and has practical implications for shaping users' experiences, informing platform design, and understanding societal shifts.

Situated Perspective in Understanding Trust & Distrust in the Social Media Misinformation Age:
What further adds a layer of complexity is the backdrop against which these relationships exist: the context of misinformation. Our findings underscore the growing relevance and importance of misinformation intervention features. A striking 91% of respondents reported encountering such features on their chosen social media platforms. This finding suggests the pervasive nature of misinformation on social media and the subsequent efforts made by these companies to combat it. Yet while these features are being encountered, encounters are infrequent: our results showed that many users still either rarely or never see these features. This might indicate that the features are not uniformly distributed or that they only activate under certain conditions or algorithms, suggesting areas for potential improvement in deployment.
Our results also empirically showed that these misinformation intervention features raised awareness of misinformation on the platform. While, in theory, this heightened awareness could decrease trust, our results showed that the features actually increased trust on average. A user who inherently distrusts a platform might view its fact-checking interventions skeptically, whereas one who trusts the platform may perceive them as a seal of authenticity. Simultaneously, those who inhabit the gray zone of both trust and distrust may weigh these interventions differently, oscillating between acceptance and doubt. Our results showed that the labeling features offer users a direct way of discerning potentially harmful information, and a significant 71% felt they increased their awareness. This suggests that immediate and visually recognizable cues are crucial for aiding user judgment in information consumption. However, it is noteworthy that curation, a proactive and possibly more nuanced approach to misinformation, is associated with a higher likelihood of sharing and receiving information. This might indicate that users appreciate and trust pre-vetted content aggregated for accuracy and relevance.
Collectively, by situating our understanding of trust and distrust within the context of misinformation and its countermeasures, we deepen our theoretical understanding and provide insights that could shape the future of digital information design and dissemination. Our findings highlight the importance of continuous efforts and innovations to preserve the integrity of digital information spaces, maintain user trust, and address issues of distrust. Looking ahead, specific avenues of investigation warrant attention in future research. For example, as social media platforms evolve, they will undoubtedly introduce a new generation of misinformation countermeasures. As such, a longitudinal study that tracks the efficacy of these interventions over time, alongside shifts in user trust and distrust, could be crucial.

Practical & Design Implications
Our work offers practical implications in real-world contexts, ranging from designing tools that help people navigate areas of skepticism and distrust to informing policy and regulation.
Our results provide initial evidence of different types of dual trust-distrust profiles, suggesting that understanding the feedback loop between user-generated content and trust and distrust dynamics could be valuable. How does the flagging or endorsement of certain types of content shape users' trust and distrust in the platform and their subsequent content creation or sharing behaviors? Answering these questions will shed light on the complex interplay between content consumption, content creation, and the evolving perceptions of users. To dig into the underlying reasons for the formation of the high-trust-high-distrust group, qualitative follow-up studies such as interviews can help unpack the nuances behind simultaneous trust and distrust in social media, providing rich, contextual insights that quantitative data alone cannot reveal. This knowledge can guide platforms in designing more effective interventions to combat misinformation and foster a more trusting and informed user community. Moreover, it could inform content moderation strategies, ensuring a healthier digital ecosystem and more responsible user engagement.
Additionally, given our findings concerning the nuances across demographics, design indicators may resonate with specific cultural or contextual sentiments, such as local endorsements, regional checks, or community-driven verifications. In our study, we found significant differences in trust in social media platforms across political ideologies. Specifically, Independents exhibited significantly lower trust in YouTube, while those with political affiliations other than Democrat or Republican showed even lower trust in this platform. Regarding Twitter, Republican or Lean Republican respondents showed significantly greater distrust than their Democrat or Lean Democrat counterparts. Additionally, Republicans demonstrated significantly less trust in the Verification features of social media platforms. Our findings contribute to the existing body of research on how political affiliations shape interactions with online content, particularly in the context of misinformation, moderation, and trust. For example, Sharevski et al.'s work [76] suggested that Republicans and Independents were more likely to perceive misleading tweets as "somewhat accurate" compared to Democrats, who viewed them as "not very accurate," which aligns with our observation of varying trust levels across political ideologies. In another work, Zannettou [62] found that most tweets with warning labels are shared by Republicans, while Democrats are more engaged in commenting on these tweets. While that work examined the relationship between user engagement (e.g., sharing and commenting) and political ideology, our work specifically focused on trust and distrust in social media. Collectively, these findings highlight the importance of acknowledging and engaging with the nuanced perceptions that characterize different subpopulations. These insights also suggest a tailored approach to designing and implementing platform moderation strategies, one informed by an understanding of the diverse and complex landscape of user trust.
Our findings also echo a recent review paper focused on misinformation interventions [2]. The authors argue that existing misinformation interventions have primarily focused on individualistic approaches, ignoring community factors such as the role of social norms [37,57]. Therefore, to ensure the efficacy of future intervention designs in the realm of misinformation, it is important to integrate both individual and community-based perspectives, anchoring them in the diverse sociocultural contexts of the user base.
Last but not least, policymakers and regulators might also benefit from our work. Instead of drafting policies that singularly focus on enhancing trust, it might be equally crucial to devise strategies that address sources of distrust. For example, a more comprehensive regulatory framework, one which promotes trustworthy practices while curbing elements that seed distrust, is essential for fostering a robust online information ecosystem. Another direction is collaborative policy drafting [63]. For example, policymakers could collaborate with social media platforms, content creators, and users to draft regulations. Such a collaborative approach ensures that policies resonate with real-world challenges and user sentiments. Additionally, we suggest that future research could pioneer the concept of "distrust audits." Similar to how platforms undergo privacy or security evaluations, these audits would systematically assess features or areas within a platform that might induce user skepticism. By identifying and addressing these potential pitfalls, platforms may proactively cultivate a trustworthy digital environment.

LIMITATIONS
While our empirical study provides valuable insights into the effects of misinformation interventions on people's trust in social media, some limitations should be acknowledged. First, our study was conducted in the United States, limiting our findings' generalizability to other countries or cultural contexts. Future work should expand the studied populations to include participants from other countries to better understand how misinformation interventions on social media influence people's trust and distrust across different cultures and societies. In addition, we acknowledge that the trust and distrust scales used in our study require ecological validation to ensure their reliability and effectiveness in other real-world settings (e.g., diverse social contexts and different populations). Moreover, our study focused primarily on a subset of visible misinformation features and several major social media platforms; we did not explicitly examine other types of interventions that social media platforms use. Future work can study the effects of these other interventions on people's trust and distrust and expand the scope of the platforms examined. Finally, we note a limitation in the study design regarding the evaluation of trust and distrust. Our survey captured respondents' perceptions of their experience with platforms' already-deployed misinformation intervention features but did not contrast these with a baseline from platforms without such measures. To further determine the effects of misinformation interventions, future work may consider experimental designs that allow for comparative analysis.

CONCLUSION
Our extensive research, conducted through a large-scale survey involving 1,769 participants in the U.S., has revealed several crucial insights into the dynamics of trust and distrust in social media. Our results show that trust and distrust can be two distinct concepts rather than two ends of a single spectrum. This dual lens enriches our theoretical understanding of online trust dynamics. Our findings further classify users based on varied trust and distrust intensities. Moreover, we highlight that both trust and distrust perceptions can shift depending on the platform and are influenced by demographic factors. Additionally, while misinformation interventions can elevate users' awareness of misinformation and bolster trust in platforms, they do not necessarily reduce distrust. Our research suggests that focusing solely on trust is insufficient; rather, distrust should be regarded as a distinct concept that requires dedicated attention in the future.

Figure 1 :
Figure 1: Overview of our study flow. Step 1 includes collecting blog posts regarding misinformation intervention features on social media and analyzing these blogs using the Rapid Qualitative Analysis method [45]. Step 2 includes designing and deploying the survey study (guided by the findings from Step 1), followed by multiple rounds of pilot studies and the final launch of the study.

Figure 2 :
Figure 2: Examples of social media features that seek to combat misinformation: (A) Labeling/Tagging Features, (B) Curation Features, and (C) Verification Features.

Figure 3 :
Figure 3:  Correlation matrix between all individual items for the trust and distrust scales across four social media platforms (i.e., Facebook, TikTok, Twitter, YouTube).Each matrix shows the correlation coefficients between dimensions of trust (i.e., reliability, reliance, competence, and benevolence) and aspects of distrust (i.e., fear, skepticism, dishonesty, and malevolence).

Figure 4 :
Figure 4: Distribution of trust and distrust among our participants across social media platforms: (A) all four platforms and their centroids. Data points inside the dotted black square at the upper-right corner deviate from the linear pattern, indicating elevated levels of both trust and distrust and signaling a particular group of users who simultaneously hold trust and distrust in complex ways. (B) Facebook, (C) TikTok, (D) Twitter, and (E) YouTube. The larger the data point, the greater the number of people it represents.

Figure 5 :
Figure 5: Clustering using the Gaussian Mixture Model with distinct color-coded clusters (i.e., each color represents a cluster).

Figure 9 :
Figure 9: Correlation matrix for the trust and distrust with different misinformation interventions.

Figure 10 :
Figure 10: Respondents' average trust and distrust in social media with different misinformation intervention features.

Figure 11 :
Figure 11: Parallel analysis to determine the number of components to keep in the factor analysis.

Table 2 :
Factor Analysis results on trust and distrust questions.

Table 3 :
Demographic characteristics of the high-trust-high-distrust group

Table 4 :
Multiple regression models explaining respondents' trust in social media (significance levels: * p < 0.05, ** p < 0.01, *** p < 0.001). Overall, our results show that older individuals generally trust social media less, and those with higher education also exhibit lower trust, especially for Facebook. Black respondents tend to have higher trust than White respondents, and political affiliation significantly influences trust levels in social media.

Table 5 :
Multiple regression models explaining respondents' distrust in social media (significance levels: * p < 0.05, ** p < 0.01, *** p < 0.001). The analysis suggests that distrust in social media varies less with age but is significantly influenced by race (with Hispanic respondents showing lower distrust in Facebook, and Black respondents displaying lower distrust in TikTok and Twitter) and political ideology (with Republicans exhibiting less distrust in Twitter).

Survey items: "<this feature> makes me more aware of misinformation," and "I am more likely to share posts from social media platforms that have <this feature>."

Table 8 :
Multiple regression models explaining respondents' trust in social media interventions, represented by the Labeling, Curation, and Verification tasks (significance levels: * p < 0.05, ** p < 0.01, *** p < 0.001). The analysis suggests that trust decreases with age and varies significantly with political affiliation; there are significant gender differences, with non-binary users exhibiting markedly lower trust, and Black respondents show more trust in the Labeling and Curation features.

Table 9 :
Multiple regression models explaining respondents' distrust in social media interventions, represented by the Labeling, Curation, and Verification tasks (significance levels: * p < 0.05, ** p < 0.01, *** p < 0.001). The analysis indicates that older respondents show decreased distrust, and females generally exhibit less distrust than males, especially in the Labeling and Verification features.