"The Headline Was So Wild That I Had To Check": An Exploration of Women's Encounters With Health Misinformation on Social Media

Misinformation has emerged as a significant threat to public health in recent years and has been observed across numerous health issues, the most prolific being COVID-19. Though increasing attention has been paid to women's health within the social scientific and HCI communities, very little research has holistically explored the unique challenges women face when navigating health misinformation. To address this gap, we conducted a qualitative diary and interview study aimed at investigating women's perceptions and lived experiences of health misinformation on social media, and how they respond emotionally and behaviourally to health misinformation encountered in their day-to-day lives. We found that participants perceived health misinformation as ubiquitous and poorly-managed by social media platforms, resulting in a lack of trust in current moderation and fact-checking interventions. We also observed that encounters with misinformation triggered negative emotional responses, which participants attempted to navigate through ad-hoc strategies such as drawing on personal experience and reading social media comment sections, which facilitated collective sensemaking. We discuss our findings in relation to the design of targeted interventions which empower women to engage constructively with health information on social media. In particular, we underscore the importance of trust, accountability, and intersectionality in future design and research practice, and encourage a holistic view of how women are impacted by misinformation.


INTRODUCTION
The use of misinformation to promote political causes, drum up military support, and stoke anti-scientific agendas predates the digital era, but social media provides opportunities for false information to spread at an unprecedented scale [26]. Though research focusing on the COVID-19 pandemic has dominated in recent years, female reproductive health issues continue to attract a steady stream of misinformation online [60,92,95]. The impacts of health misinformation were felt keenly by women throughout the COVID-19 pandemic, with pregnant women experiencing among the highest rates of vaccine hesitancy across the globe [11,27]. Furthermore, harmful narratives surrounding fitness and weight management pervade social media [23,78], with downstream consequences for the physical and mental health of women and girls [20]. Conversations around misinformation and women's health have been amplified further since the high-profile overturning of Roe vs. Wade in the United States, which marked a shift in the legal landscape surrounding reproductive freedom [50], and renewed fears of an 'abortion infodemic', amid an explosion of guides for dangerous self-induced abortion on social media [57].
The growing inventory of threats faced by women from health misinformation warrants a dedicated analysis of their experiences and needs with respect to cultivating health literacy and mindful engagement with information online. It is appropriate to view exposure to health misinformation through a gendered lens, since structural inequalities and stigmas shape people's everyday information consumption habits, potentially rendering them more vulnerable to certain types of misinformation [87,90]. Existing research into women's health misinformation has unpacked how influencers and commercial parties embed aspects of normative feminine identity into misinformation, such as appealing to maternal instinct [14], and leveraging cultural beauty standards to promote misleading weight loss narratives [23,78]. Similarly, research conducted following the overturning of Roe vs. Wade illustrated how political and anti-authority narratives are often embodied by misinformation focusing on female reproductive health [47].
While interview, survey, and focus group methods have been used to explore how misinformation shapes women's healthcare decisions [5,35,62,92], little research has investigated how women navigate health misinformation in their daily lives, or considered the wider social media ecosystems in which health misinformation is embedded. Meanwhile, HCI and CSCW research has unpacked how individuals and communities make judgements of credibility in socio-technical contexts [33,39,74] and explored user perspectives towards platform-led fact-checking interventions [4,72]. There is value in uniting these streams of research, to promote an understanding of misinformation that is sensitive to the cultural components of women's health, and recognises how social media platforms can influence how individuals engage with information. The aim of our study was to explore how women identify and define health misinformation, and to unpack their strategies for debunking and navigating this material. We addressed the following research questions:

RQ1: What are women's perceptions and lived experiences of health misinformation on social media in terms of its prevalence, impact, and existing countermeasures?
RQ2: What emotional responses and sensemaking gaps arise when women encounter health misinformation on social media?
RQ3: What resources or strategies do women use to navigate and/or debunk the health misinformation they encounter on social media?
RQ4: What factors influence women's decisions to share or report health misinformation encountered on social media?

To address our research questions, we conducted a diary and interview study with 19 women, to explore their serendipitous encounters with health misinformation on social media over a two-week period. We used Dervin's Sensemaking Methodology (SMM) as a framework for our data collection and analysis, finding that participants perceived health misinformation to be ubiquitous and poorly regulated by social media platforms, contributing to the development of ad-hoc strategies for debunking this content in their everyday life. We also observed a significant emotional overhead associated with encountering misinformation, and found that managing emotional distress and fatigue was often as much of a priority, if not more, than judging whether a post was objectively true. Our study contributes an empirical account of women's lived experiences of health misinformation, and design implications centred on cultivating trust and accountability among women, and embedding intersectionality in research and design practice. This includes calling for more responsible and accountable use of AI in misinformation detection, and exploring collective health education in diverse online communities.

Terminology
The terms 'misinformation' and 'disinformation' are separated according to criteria of veracity and intentionality [41], where disinformation refers to content which is intentionally designed to be misleading or false [80], and misinformation is used to imply a lack of intent to do harm [34], or to discuss false information regardless of creator intent [30]. We use the term 'misinformation' throughout this paper to remain neutral to the poster's intentions [73,77]. By extension, the term 'women's health misinformation' is used to refer to misinformation that is either related to health issues which are "unique to, more prevalent in, or manifest differently in women" [44], or have been identified by women themselves as health priorities. The latter category goes beyond specific diseases or conditions, and encapsulates the entire spectrum of health concerns perceived to be important by women, including broader wellbeing and gendered bias in healthcare provision [22]. Though some work has cautioned against using negatively valenced terminology such as misinformation to refer to women's health narratives [83], we use this term to remain consistent with existing HCI literature, and to emphasise the potential harms of this material, rather than to invalidate the beliefs of women.

Misinformation and Women's Health
The women's health movement has emerged in the last few decades to address the medical discrimination experienced by women over time, and the tendency for medical research to prioritise male physiology and needs [29]. Scholarly understanding of what constitutes women's health has shifted from an exclusive interest in reproductive health issues, to a more holistic understanding of women's wellbeing as a constellation of factors which evolve across the lifetime [22,45]. Work in HCI has explored how technology may support women with specific events such as pregnancy [8] and menopause [54], in addition to wellbeing in the workplace [85] and intimate care [1]. Broadly, health misinformation often plays on the specific fears and experiences of women [14], or appeals to oppressive beauty standards [23,78]. Feminist scholars have established that the spectre of medical oppression continues to nurture a mistrust of medical authorities among women, and influences decisions to adopt alternative forms of healthcare which emphasise separation from traditional medical institutions, a subculture informally dubbed the "new wellness" movement [42]. Despite its emphasis on female empowerment, several industries and figures associated with the wellness movement have been accused of commercially promoting pseudo-scientific health treatments which pose a risk to women's health, and leaning into conspiratorial or extremist narratives [10].
Furthermore, the impact of gender on body-image development is long-established within clinical and social scientific research; cultural norms and expectations contribute to increased psychological involvement with physical appearance among women and girls, with negative implications for their wellbeing and self-confidence [18]. Such norms are embodied by online weight loss material which promotes slimness, clear complexion, and physical markers of youth at the expense of medical accuracy [23,78]. Qualitative work has explored the perceptions, beliefs, and concerns of women towards aspects of reproductive health, and highlighted the impact of misinformation on women's healthcare decisions. For example, Payne et al. [62] showed that misinformation around infertility is a driver behind low uptake of IUD contraception among college-aged women in the USA, and a recent study of women living with endometriosis found that over three-quarters had encountered misinformation regarding the condition online [5]. Other work has applied an intersectional lens to women's health: Victoria et al. [92] found that low-income Latin American women often adhere to misinformation around the transmission and treatment of human-papillomavirus (HPV), and Hines et al. [35] explored the experiences of trans women, finding that clinicians themselves are often a source of health misinformation owing to widespread prejudice and misconceptions surrounding transgender healthcare.

Socio-Technical Misinformation Interventions
Fact-checking has emerged as a popular countermeasure to misinformation, and has been implemented across several social media platforms. Typically, the intervention involves informing users of a post's factual inaccuracy through a visual label or icon, and issuing a correction sourced from a verified fact-checking organisation [21,72]. Fact-checking has been found to correct false beliefs in several contexts and improve critical thinking skills online [28,38], but social media facilitates effortless sharing of content, which makes misinformation detection and correction a difficult problem to scale [98]. To address this, machine learning models which automatically detect misinformation based on lexical, semantic, and discourse-level features have been developed [34,41]. However, training data is often limited, and biases have been found to pervade several stages of the misinformation detection pipeline, resulting in unjust outcomes for certain user groups [55], and false positives which degrade trust in social media platforms [72]. Consequently, there have been efforts within HCI and CSCW to develop solutions which embody fairness and transparency, including community-based, decentralised fact-checking networks [33,40,43].
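To make the lexical-feature approach concrete, the following is a minimal illustrative sketch of a bag-of-words Naive Bayes classifier; it is not drawn from the systems cited above, which additionally use semantic and discourse-level features, and all post texts and labels here are hypothetical.

```python
import math
from collections import Counter

def train(posts):
    """Train a bag-of-words Naive Bayes classifier.
    posts: list of (text, label) pairs, e.g. label in {"misinfo", "reliable"}."""
    word_counts = {label: Counter() for _, label in posts}
    class_counts = Counter()
    for text, label in posts:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = set().union(*[wc.keys() for wc in word_counts.values()])
    return word_counts, class_counts, vocab

def classify(text, model):
    """Return the most probable label under the trained model."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    scores = {}
    for label in class_counts:
        # log prior plus log likelihoods with add-one (Laplace) smoothing
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical training data, for illustration only
posts = [
    ("miracle detox tea melts fat overnight", "misinfo"),
    ("doctors hate this one weird hormone trick", "misinfo"),
    ("nhs guidance on balanced diet and exercise", "reliable"),
    ("peer reviewed study on contraception safety", "reliable"),
]
model = train(posts)
print(classify("weird detox trick melts fat", model))  # → misinfo
```

The biases noted above arise in sketches like this one as well: whatever vocabulary dominates the training data determines what the model flags, so skewed or limited data produces skewed detections.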
It is widely recognised that fact-checking can backfire due to psycho-linguistic factors. The information deficit model, on which fact-checking interventions arguably rely, assumes that belief in misinformation stems from a lack of scientific knowledge, and can be corrected by the timely provision of facts [76]. However, the model may fail to account for the interpersonal and social factors underpinning people's truth judgements [26]; in particular, the attentionally demanding context of social media, where users frequently experience scattered attention, emotional burnout, and informational overload [30,34,38]. To cope with the perceived cost of critically evaluating web content, people often use cues and heuristics to make rapid, rule-based judgements of credibility in such a way that optimises information utility as a function of interaction cost [52]. Common factors influencing credibility assessments online include the source of information [39,75], its compatibility with one's prior knowledge or worldview [31], and visual fidelity [74]. As a result, there exists an incongruence between what is objectively thought to constitute misinformation by platforms, who prioritise the factual contents of posts, and users, for whom a breadth of affective and interpersonal factors can mediate judgements of information credibility.
More proactive, socio-technical interventions use digital nudges to support and enhance users' existing credibility judgement strategies. One browser-based intervention developed by Bhuiyan et al. [12] used a combination of nudging and checklists to successfully draw users' attention to sources of information, as well as the credibility opinions of other users when browsing news content online. Similar approaches have been adopted by social media platforms; Facebook and Twitter have recently deployed popups which appear when a user attempts to share an article they have not read, to encourage mindful sharing habits [32]. Though there is little empirical insight into the effectiveness of these specific interventions, similar lightweight nudges have previously been found to reduce the intent to share misinformation in large-scale experiments [39].
Very few HCI studies to date have evaluated these interventions with respect to gender, despite evidence that credibility judgement strategies and priorities may vary between men and women. Existing findings present a mixed picture, with some surveys showing that women consume social media content more critically than men [2,97], and others finding that women are less sceptical of misinformation and more likely to share it [19]. Additional factors including frequency of social media use, platform preference, and age can mediate the effect of gender on digital information literacy [68,94], demonstrating the value of holistic and exploratory research.

Sensemaking
Sensemaking refers to the cognitive and behavioural processes by which people come to understand information or complexity in their lives [53]. Though initially developed to analyse organisational practices, sensemaking metatheory has since been used to investigate how people managed the COVID-19 pandemic [66], and to understand the role played by platform affordances in the proliferation of conspiratorial content on social media [96]. Broadly, sensemaking theory is useful in guiding interpretations of qualitative results [96], and can make clear the behaviours, strategies, and tools people use to navigate complex information landscapes [71].
Dervin's Sensemaking Methodology (SMM) [24] consists of four key interacting elements: gaps, bridges, outcomes, and the situational context of the individual [65]. Central to the methodology is the metaphysical concept of a gap, defined as a discontinuity or disruption to a person's understanding that arises during unexpected and irregular situations, of which exposure to misinformation is an example [84]. Cognitive gaps mark a discrepancy between what is known, and what a person feels should be known, driving a pressing need to clarify, understand, and correctly 'frame' ambiguous data. Gaps are bridged using internal resources, or by engaging in information seeking strategies [67]. Bridges eventually produce an outcome, defined as a momentary clarity or understanding of a situation. This is not a static or stable state, as the sensemaking process continually sees people subjected to new situations and information requiring constant reassessment [53]. The key data collection method associated with Dervin's SMM is the flexible micro-moment timeline interview [24,65]. The core technique involves asking a participant to recount a specific situation step-by-step and using content-free, neutral questions to probe gaps in more depth. This method can be deployed in an interview context [65,66] and as an analysis framework in diary studies [24].
While sensemaking is often referred to and explored in contexts where this action is undertaken as a solitary activity, collective sensemaking can be a vital component of making sense of acquired information [61]. Collective sensemaking describes the processes where information or data shared by others influences or contributes to understanding and acceptance, with studies documenting how online communities collaboratively share insights to make sense of information [48,59,61]. There is also a body of work that explores how collective sensemaking makes up the backbone of online health communities [58], supporting information acceptance. These examples are in contexts where people openly and clearly share with one another. However, in our work we explore how collective sensemaking is utilised to garner trust through the sharing of content by others, where the social structure is more implicit.

Summary
In sum, there is a strong body of work within HCI which studies misinformation from a sociotechnical perspective, and attends to the psychological and affective factors which mediate people's information consumption habits. Similarly, work across disciplines has explored how misinformation impacts women's healthcare decisions, and illustrated how normative feminine identity is embedded into pregnancy, childcare, and weight management misinformation, for example. However, there still exists a gap in the literature relating to gender, and how health misinformation is experienced by women in-situ. Despite well-researched differences in digital behaviour and cognition across genders [2,19,94], meta-analyses have revealed that women are underrepresented as research participants in both classic and recent HCI studies [56]. Misinformation research often overlooks the dimension of gender [2], perpetuating a set of universalising tendencies which address the needs of the generic or 'average' social media user, at the expense of marginalised groups [51,56]. To address this gap, our study systematically attended to the complex interplay of factors which affect how women engage with information in their everyday lives. This involved analysing how women define, identify, and debunk misinformation on social media, and unpacking the emotional impacts of encountering this material, offering, to our knowledge, the most detailed analysis of its kind to date.

METHODS
The aim of our study was to explore women's perceptions of health misinformation on social media, and to unpack their strategies for identifying, navigating, and debunking serendipitously encountered misinformation. We chose to combine a qualitative diary study with in-depth interviews, to capture the naturalistic contexts in which participants encountered misinformation, and to later probe their sensemaking strategies in greater detail using Dervin's micro-moment timeline interview technique [24]. Though diary studies suffer the drawback of selective reporting, they allow researchers to capture important situational states and triggers which may be overlooked by participants recounting events in an interview [81]. Given that user interactions with information on social media are highly influenced by emotional, attentional, and contextual attributes [30,34,38], diary studies effectively capture a multiplicity of responses to misinformation, particularly when combined with follow-up interviews, which facilitate reflection and clarification.

Recruitment and Participants
We recruited participants on a rolling basis between June and July 2022. A convenience sample of four female participants (mean age=24.5 years) was initially recruited for a pilot study, to better define the study's target population, and refine the diary study prompts. Feedback from the pilot study suggested that participants who used at least one popular social media platform daily were more likely to serendipitously encounter health misinformation on a regular basis. As such, recruitment was focused on individuals who fit this criterion, and participants were recruited via social media with digital flyers posted to Facebook, Instagram, LinkedIn, and Twitter. We also employed snowball sampling, inviting participants to share the flyer with people in their social networks. Our call for participation included people aged 18 or over who identified as women, were daily users of social media, and had experiences of encountering health misinformation in the past. We limited participation to current UK residents, to ensure consistency in the news and media cultures participants were exposed to over the course of the study. Interested participants were invited to complete a screening survey which collected basic demographic information and data on their social media usage habits. Eligible respondents were contacted further via email, and invited to complete an informed consent sheet before progressing with the study. We recruited 19 women to participate in the study in total, of which two dropped out after submitting a few entries. The entries submitted by these participants were valid and sufficiently detailed, and so were included in our analysis without a corresponding follow-up interview. Participants were living in the UK and spoke English at the time of recruitment. Participants were mostly aged between 18 and 34 years (n=15), and were ethnically diverse. Participants' highest level of education varied from high school (n=1) to undergraduate (n=7) and postgraduate (n=11) degrees. All participants reported daily use of social media, and expressed some level of concern about the impacts of health misinformation on women's health and wellbeing. We summarise participant demographics, including any disabilities or health conditions disclosed to us, in Table 1.

Briefing Interview
We held a remote briefing interview with each participant lasting approximately 25 minutes, to introduce ourselves and the study. We began by querying participants' general attitudes towards health misinformation on social media, including how they defined misinformation, the health topics they were most concerned about, and their experiences with interventions such as fact-checking and content moderation. This interview gave us a snapshot of the assumptions and experiences that participants were bringing to the diary study, and provided an opportunity to brief them on the procedure.

Diary Study
We asked participants to keep a digital diary for two weeks, recording an entry whenever they encountered health-related content on social media that either contained misinformation as defined in section 2.1, or was provocative or difficult to believe. We used an intentionally broad definition of misinformation, to encourage participants to record not only material that was provably false, but also more nuanced posts which caused uncertainty or intrigue. This maintained the centrality of women's perspectives when defining health misinformation, rather than imposing an external definition which did not resonate with their experiences. Though we provided a list of example health themes to look out for, we encouraged participants to record posts relating to any health topics of personal significance to them [22]. We allowed participants to record material from any website or social media platform they used, to explore the range of information sources present in their lives [72].
In line with previous studies of serendipitous encounters with misinformation [72], we used a critical moment diary approach, as opposed to asking participants to record entries at fixed times and frequencies. This better captured the naturalistic and everyday contexts in which misinformation was encountered, and reduced the likelihood of participants feeling pressured to artificially elicit events by actively searching for posts to record. To allow participants to record detailed entries with minimal disruption to their daily lives, we suggested they employ a snippet technique [15]: a well-known system designed to minimise the overhead of keeping diaries on a regular basis [81]. Participants were instructed to screenshot or bookmark the misinformation they encountered and make brief notes using a medium of their choosing, before completing the full diary questionnaire at their earliest convenience.
We used a structured questionnaire hosted on the online survey platform Qualtrics as the diary medium, to standardise the ways in which participants reported events, and ensure core details relevant to the sensemaking process were included [37]. To collect basic contextual information, the questionnaire began with closed questions about the platforms and devices that participants were using, and their online activities at the time of the encounter. Due to privacy and ethical concerns, participants were not asked to upload screenshots of the posts they encountered, but were instead asked to describe the material in their own words, and justify its inclusion in the study. We then used the structure of sensemaking gaps, bridges, and outcomes to elicit a rich description of participants' strategies for processing information, and how their thoughts and feelings evolved as they interacted with this material [24]. We used the following prompts to explore these dimensions:
(1) How accurate or reliable did you perceive this post to be, and why?
(2) Did this post raise any questions or concerns that needed answering? If so, what were they, and how did you go about answering them?
(3) If you investigated the content further, please describe how you did so step-by-step, and specify any additional sources of information you used and whether they helped.
(4) Were there any specific features of the website/platform you were on that helped you answer these questions, or that you felt hindered you?
(5) Did you take any further steps, such as sharing the content with someone else or reporting it? Why/why not?
A target of five entries over two weeks was set, and we introduced an incentive for recording high-quality entries: participants were told that the three most detailed diary entries would earn an extra £5 in addition to the baseline study incentive (a £15 shopping voucher). Introducing elements of gamification into diary studies can motivate participants to continue with a study, and make the experience more interactive and enjoyable [79]. In our case, feedback from participants suggested that the competition was effective in keeping them engaged with the study, and mindful of the quality of their entries.

Follow-Up Interview
After two weeks of submitting entries, participants were invited to complete a semi-structured follow-up interview. All interviews were conducted remotely via video conferencing, and lasted approximately 45 minutes. Participants were provided with a copy of their entries at least 48 hours prior to the interview, and were allowed to re-read their entries during the interview as a memory aid. The interviews followed a structure of general reflections on the diary study process, followed by an in-depth discussion about specific entries. We used elements of Dervin's micro-moment timeline interview [65] to systematically explore participants' sensemaking strategies. The sensemaking gaps and concerns described by participants were probed using a '5Ws' paradigm (who, what, when, where, and why) [24] to explore how contextual and social factors shaped their encounters. Participants were asked about their relationships with the posters of the material they encountered (who), as well as the broader technical contexts in which misinformation was embedded (where). Further details were also commonly sought on the conclusions participants reached about the posts they encountered, and if/how they were affected in the days following. Following the interview, participants were sent a £15 shopping voucher, with the three participants who submitted the most detailed entries receiving an additional £5 each.

Data Analysis
The first stage of data analysis involved critically reading and synthesising diary entries as they were submitted. This revealed initial patterns and links between participants' pre-study interviews and the views expressed in their diaries, which were queried further in the follow-up interviews. Once participants had completed the study, we manually transcribed their interviews in a clean verbatim style to add a further layer of familiarisation with the data, before transferring all anonymised interview and diary data to NVivo, a software tool for qualitative data analysis. We used reflexive thematic analysis [16,17] to analyse our qualitative data, using a blend of inductive and deductive coding to organise participants' diverse experiences into coherent sensemaking dimensions. If they gave their consent, we occasionally contacted participants during the data preparation process to clarify details in their entries, and ensure that we captured their experiences accurately [13].
We followed a two-stage coding process, beginning with a round of inductive coding. We applied open codes to the data, which captured key activities, perspectives, and resources drawn on by participants. Example codes relating to information seeking behaviours included Types question into Google and Asks partner for opinion, and codes capturing emotional responses included Finds post upsetting, or Is overwhelmed by information. We then used Dervin's SMM as a deductive tool, and mapped our open codes to the key sensemaking dimensions of personal factors, situational context, gaps, bridges, and outcomes [65]. For instance, participants' information seeking strategies were collapsed under sensemaking bridges, whereas uncertainty about information validity and emotional overload were sensemaking gaps. We refined our codes through discussion, and created themes by identifying shared patterns of meaning across categories.
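The deductive mapping step can be pictured as a simple lookup from open codes to SMM dimensions. The codes below are the examples named in this section; the mapping itself is an illustrative sketch rather than our full codebook.

```python
# Illustrative sketch: map open codes (inductive) to Dervin's SMM dimensions (deductive).
# The four codes here are the examples given in the text; the full codebook was larger.
SMM_DIMENSIONS = {
    "Types question into Google": "bridge",
    "Asks partner for opinion": "bridge",
    "Finds post upsetting": "gap",
    "Is overwhelmed by information": "gap",
}

def dimension_counts(applied_codes):
    """Tally how many applied open codes fall under each sensemaking dimension."""
    counts = {}
    for code in applied_codes:
        dim = SMM_DIMENSIONS.get(code, "uncategorised")
        counts[dim] = counts.get(dim, 0) + 1
    return counts

entry_codes = ["Types question into Google", "Finds post upsetting"]
print(dimension_counts(entry_codes))  # → {'bridge': 1, 'gap': 1}
```

In practice this mapping was negotiated through discussion rather than applied mechanically, but the lookup captures the underlying structure: each open code is assigned to exactly one deductive dimension.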

Ethical Considerations
Our study was approved by our institution's ethics board. All participants read and signed an informed consent sheet which clearly outlined the study's procedure and the methods of data processing prior to participating. All interviews were audio recorded with the explicit consent of participants, and additional verbal consent for the use of direct quotes was obtained if the participant disclosed information related to their health during an interview. All data from the interviews and diaries was pseudonymised using a participant ID, and all identifiable details were removed from the transcripts and quotes unless necessary to provide context. To avoid ethical issues related to directly accessing social media posts [7], participants did not provide links to the posts they encountered, and instead described them in their own words.

Statement of Positionality
As researchers working at the intersection of HCI and women's health, we position ourselves within a feminist framework to critically examine the experiences of women within socio-technical systems. Our positionality is rooted in the recognition of gendered inequalities within healthcare, and an acknowledgement of how these bear on individuals' health and information literacy in the digital age. We recognise that women's experiences with health misinformation are complex and intersectional, influenced by a multitude of demographic, personal, and contextual factors. As such, we endeavour to be inclusive and intersectional in our approach, and aim to contribute to the development of more gender-inclusive and socially responsible HCI interventions.

General Overview
4.1.1 Post Characteristics. A total of 75 entries were submitted between June 1st and July 24th 2022. The posts described by participants mostly focused on dieting and weight loss (n=37), followed by mental health (n=11), physiological health (n=9), reproductive health (n=8), and COVID-19 (n=6). Other miscellaneous topics were also reported, including sleep health and skincare (n=12). Participants encountered misinformation across a range of platforms, the most common being Instagram (n=27), followed by Facebook (n=18), TikTok (n=12), and Twitter (n=7). A total of 11 posts were found on other platforms, such as Reddit, Quora, and Pinterest. The largest share of posts originated from members of the public who the participants did not know personally (n=29), and the second largest from accounts that participants identified as belonging to brands or businesses (n=22). A smaller proportion of posts originated from news channels (n=11) and public figures (n=9). Only four posts in the sample originated from a user the participant knew personally.
4.1.2 Overview of Qualitative Results. Our qualitative analysis yielded a number of themes surrounding participants' perspectives towards health misinformation on social media. We report on these themes in the following sections. First, we outline participants' broad perceptions of the prevalence of misinformation on social media, and the factors underlying their disillusionment with platform fact-checking interventions (see §4.2). Then, we describe the emotional responses and sensemaking gaps which arose from participants' encounters with health misinformation (see §4.3), and their strategies for navigating this material, both individually (see §4.4) and collectively, through utilising social features such as comments sections and engagement metrics (see §4.5). Finally, we describe participants' overwhelming tendency not to share or report posts, and their justifications for doing so (see §4.6).

4.2.1 Misinformation is Ubiquitous.
The first aim of our study was to understand women's perceptions of health misinformation on social media, and to explore their attitudes towards existing countermeasures. We found that most participants in our study held a general mistrust towards health information posted to social media, and believed misinformation to be highly prevalent in online communities. This perception was fuelled by the decentralised structure of the Internet, and the belief that "almost anyone" could post medical advice regardless of their credentials. Participants were aware of the low barriers to content creation on social media, and cited a tendency to maintain a baseline level of skepticism when interacting with information online as a result. However, despite harbouring concerns about the veracity of online information, a few participants recalled extensively using the Internet to learn about reproductive health during adolescence, due to cultural taboos around discussing menstruation with family members and healthcare providers. This nurtured a habit of managing reproductive health issues in secret, and using Google to inform the use of over-the-counter remedies for menstrual problems. In these cases, concerns about information accuracy were superseded by the anonymity and privacy afforded by search engines.
"Asian culture is more traditional so in a family, the father wouldn't talk about periods. It's very private, you wouldn't want to share it with your parents or your boyfriend, so usually the first thing you do is just Google it." (P8)

4.2.2 Misinformation is Poorly-Moderated. Feelings of mistrust often coexisted alongside a sense of disillusionment with social media platforms' ability and willingness to effectively moderate misinformation. While no participants directly encountered fact-checking labels and content removals during the study period, many were familiar with these interventions and had witnessed them previously. Perceptions towards fact-check labelling were ambivalent: most participants recognised the value in alerting users to the presence of misinformation, but were cynical towards existing implementations. For instance, some participants viewed fact-checking as futile due to the vast quantity of misinformation on social media, while others believed that social media companies could do more if they tried, but were more concerned about user engagement and profit than the quality of information circulating on their platforms.
Furthermore, some participants believed that health topics important to women were receiving little to no attention from platforms, in contrast to high-profile or politicised topics which received the "lion's share" of fact-checking. To illustrate, while a few participants recalled seeing fact-checking labels on COVID-19 vaccine misinformation or posts relating to the 2020 US presidential election, they had yet to see the same level of attention given to health issues they felt to be important, such as neurodiversity, menopause, or diabetes. Two participants concluded that this topically selective moderation style was likely an ad-hoc response to pressure from policymakers, rather than reflecting a genuine desire to protect users from misinformation: "If there is not much of a voice from the government or policymakers saying 'hey, this is a problem' I don't think [social media platforms] really care to fact-check at all." (P13)

Lastly, a lack of transparency at different levels of the misinformation detection pipeline contributed to feelings of mistrust among participants. Participants were often reluctant to take fact-checking captions at face value, as they were unclear as to how social media platforms could "really know" if a post was misinformation or not. This uncertainty was exacerbated by the involvement of AI in the fact-checking process. Several participants felt that the criteria for flagging content as misinformation were obscured from them, and were unclear as to which aspects of fact-checking were delegated to AI and which were managed manually by humans. Despite forming ad-hoc mental models based on observed system outcomes (e.g., posts containing specific phrases being deleted or captioned), participants found moderation algorithms unpredictable and prone to false positives. Here, a few participants reflected on the increasingly extreme measures taken by their friends to avoid their posts being unfairly flagged, such as using code-words to discuss sensitive topics, or setting up alternative accounts in case they were banned from the platform: "With the algorithm, you could only say one word and then you're banned [...] I've noticed that people have had to type certain words in a different way or come up with a code word, because people have been banned and their whole accounts have been closed down. It is quite extreme." (P12)

RQ2: Emotional Responses and Sensemaking Gaps
Our second research question focused on participants' emotional responses to encountering health misinformation, and the sensemaking gaps which arose. Many participants experienced informational gaps, which manifested as uncertainty around the validity of claims being made, and a desire to verify the information. Participants also commonly experienced gaps which were emotional in nature, characterised by confusion, overwhelm, and moral disapproval towards different aspects of the posts they encountered. Overall, the gaps experienced by participants fell under the broad themes of health anxiety and self-doubt, moral and safeguarding concerns, emotional burnout, and discomfort around social media platform surveillance.

4.3.1 Health Misinformation Causing Anxiety and Self-Doubt. The material encountered by participants sometimes caused them to experience anxiety about their own health. This related to their general wellbeing, or to specific themes such as reproductive health and neurodiversity. Often, this followed encounters with exaggerated information, or websites which offered 'fatalistic and extreme' interpretations of health events such as heavy periods, rashes, or headaches: "I had an issue with my periods, and I Googled it [...] the website said it was potentially a serious problem and I was really upset, but it turned out to be nothing." (P8)

In other cases, these concerns related to mental health and neurodiversity. A few participants encountered content on TikTok that they felt oversimplified conditions such as ADHD and dyslexia down to a few generic behavioural traits, presented in a bullet-point style. Participants identified the videos as misinformation on the basis that the traits listed were vague and arbitrary, and not specific to the conditions presented. Some participants felt that the content creators (who were not medical professionals) were attempting to armchair diagnose 2 their followers. Though these participants came to the conclusion that they were unlikely to be neurodiverse, they felt that a substantial portion of time had been spent wondering otherwise, and the material caused them to doubt aspects of their identity which they were previously confident in. "There's all these videos about 'five hidden signs of ADHD' and then you check them and you're like oh yeah I have ADHD, when in reality you don't. That kind of misinformation really made me question whether I was neurodivergent or not." (P10)

This style of content also caused anxiety for neurodiverse participants. One participant who identified as dyslexic described the video trend as annoying, reductionist, and ultimately unhelpful to the neurodiverse community. She reflected on past encounters with similar content, believing it to have been a significant source of stress during her diagnostic process. The participant believed that the conventions of short, informal content on TikTok encouraged brevity over nuance, and were a factor underlying the "simplistic and reductive" nature of health content on the platform. "It's a quick 30 second video that reduces a complex condition down into five bullet points [...] as someone who was recently diagnosed with dyslexia, seeing all these little things in videos indicating that I might have it but still being no closer to getting a diagnosis gave me a lot of anxiety." (P5)

4.3.2 Moral Disapproval and Safeguarding Concerns. Another important dimension of the encounters described by participants was the sense of moral disapproval they evoked. Several participants reported feeling a duty of safeguarding towards groups they perceived as vulnerable or likely to be harmed by health misinformation, such as children, the elderly, and those suffering from medical conditions such as eating disorders. Participants expressed strong disapproval towards misinformation with the potential to harm others, particularly if they believed it to be commercial, or motivated by profit, e.g., misleading adverts. Participants were also frustrated at news outlets which used fearmongering headlines to exaggerate public health issues, on the basis that this was likely to distress readers and contribute to anxiety in the general public.
"I think [this headline] is rather irresponsible coming back from the COVID pandemic as health anxiety is generally high and people are scared." (P15)

In particular, participants expressed dismay at the possibility of younger, more impressionable girls being negatively impacted by weight loss misinformation. They believed this content to be harmful to girls' developing sense of self, and a few made connections with their own experiences with body insecurity in adolescence, and the harm they had personally experienced as a result of exposure to dieting and body-shaming narratives. For instance, one participant encountered a satirical meme on Instagram which focused on female body types, and became concerned that it could mislead children and younger adolescents. Despite recognising the post as a joke, she maintained that a younger user would likely not pick up on the nuance, and take it at face value.
"Young girls whose bodies haven't even started taking shape yet could see these 'jokes' and think their bodies are wrong." (P16)

4.3.3 Emotional Burnout Driving Inattention and Disengagement.
Over the course of the study, several participants encountered posts which dealt with difficult or controversial topics, ranging from infant mortality and discrimination, to severe mental illness. The emotional weight of this material superseded many participants' concerns about factual accuracy; even if they were able to fact-check a post and state with certainty that it was misinformation, emotional gaps still lingered, and were resolved only through the passing of time. In a few cases, participants exhibited patterns of information avoidance, where they chose not to verify the accuracy of the post any further, for fear of coming across more upsetting material. In this way, they strategically leveraged uncertainty by accepting informational gaps to reduce the risk of emotional overload.
"The post just made me sadder and angrier the more I read it, so I just stopped." (P2)

Other participants chose not to engage with posts further due to their attention waning below a certain threshold. This often occurred when participants were disinterested in the topic of the post, distracted by other things (e.g., text messages from friends), or were confident in their current frame of understanding and believed that further investigation would only confirm what they already believed to be true.
"I stopped looking at [the post] because I was pretty sure it was nonsense [...] I was certain my suspicions would be confirmed." (P12).

4.3.4 Feeling Watched and Profiled by Social Media Platforms. The final sensemaking gap experienced by participants was a feeling of being profiled by social media platforms, through tailored content recommendation algorithms. We observed that most encounters with misinformation occurred while participants browsed their newsfeeds or For You pages, and interacted with a stream of algorithmically recommended content. When asked, most participants were able to identify a specific earlier browsing session as the catalyst for certain posts being recommended, and assumed that engaging with a particular topic would cause the app to recommend them more content of that type. Here, participants expressed concerns about the level of surveillance that platforms engaged in, a discomfort amplified by the lack of solid guidance on exactly what data was being collected, and how it was being used to configure their newsfeeds. Participants speculated on the various ways in which social media apps were "watching" them, from tracking their browsing history and building behavioural profiles, to accessing their microphones and listening to their conversations. While a few participants described taking active steps to regain autonomy over their information diets by tailoring their feed settings, many experienced a sense of "digital resignation" and felt there was no choice but to accept surveillance as an inevitability of using social media.
"You feel like you're being watched by the Internet. Is this really the content I'd want to search for if I was using the platform by myself? Is it the knowledge I really need or is it just knowledge given to me from the platform?" (P8)

RQ3: Individual Strategies for Navigating Misinformation
Our third research question focused on women's strategies for navigating and making sense of encounters with health misinformation. Participants often overcame informational and emotional gaps individually, by drawing from their own internal resources, or leveraging external pieces of information which supplemented their judgements of credibility. We identified three key components of participants' individual sensemaking strategies: reliance on their existing knowledge and intuition; reflection on the intentions behind a post; and leveraging their expectations of a particular platform or source.

4.4.1 Drawing on Existing Knowledge and Experience. Knowledge accrued through personal experience with a specific health issue acted as a strong basis for determining whether material was true or not. Participants were confident dismissing information which contradicted their direct experiences or was inconsistent with what they had been told by medical specialists. For instance, one participant with a formal diagnosis of dyslexia was able to immediately identify a viral video describing symptoms of dyslexia as "misleading and reductionist", as it contradicted her first-hand knowledge of the condition and its nuances. Another participant immediately dismissed a herbal cure for polycystic ovary syndrome (PCOS) as misinformation, on the basis that she had lived with the condition for years, and had been informed by doctors that no such cure currently existed. When participants had no personal experience with a health issue, they often drew on knowledge obtained through formal education, or more informal channels such as their friends, family members, and television. Participants were suspicious of content that seemed "too good to be true" or exaggerated in some way; an intuition which sometimes took precedence over other cues such as source credibility, and motivated further critical inspection, as expressed by participant P2: "I initially found [the article] trustworthy because it was from a podcast host I follow [...] but the headline was so wild that I had to check."
(P2)

4.4.2 Reflecting on Information Sources. Participants usually approached content originating from non-medical sources with caution, and a few cited a tendency to completely ignore or skip posts with low-quality authorship. Unreliable sources included gossip tabloids or individuals in participants' social circles with a habit of posting fake or exaggerated content. We observed very little partisan slant with respect to news outlets: participants tended to discuss news sources in terms of their financial intent and the likelihood of an article being "clickbait", as opposed to speculating on political motives. Participants were generally skeptical of content they believed was created with a financial or promotional aim, assuming that businesses would be more likely to lie or exaggerate information. Participants applied similar assumptions to news headlines that were designed to attract their attention rather than provide accurate information.
"There was text that said "sponsored" under the post [...] It makes me a bit more sceptical and less likely to trust it as it just comes from a business promoting something." (P13)

By contrast, participants maintained a high level of trust in official health institutions, and tended to view content as more reliable if it appeared to have the backing of a medical professional or originated from an authoritative health entity such as the National Health Service (NHS) or World Health Organisation (WHO). Citations, organisational logos, and links to scientific journals acted as immediate indicators of credibility for participants, with several reporting that references, citations, and professional endorsements were important factors in trusting information. However, citations were not always a guarantee of credibility, as one participant discovered when she encountered a compelling post containing COVID-19 vaccine misinformation: "They gave a real-life example, they used scientific terminology AND included a journal article, so I was like okay that seems believable" (P19). In this case, the participant only discovered that the post was misinformation when it was debunked by another user in the comments section.

4.4.3 Using Expectations to Identify Misinformation. Lastly, participants maintained a strong mental model of the people they followed on social media, and therefore had a sense of the content they should expect to see. Content which did not fit the topical and stylistic pattern of postings for a particular account or individual was often flagged as suspicious and subject to more critical thought than posts which were more familiar. This was particularly true when the material originated from a source participants followed closely, such as a good friend or favourite celebrity, as described by one participant: "[Famous musician] usually posts photos of him and what he's doing in his daily life, or he promotes his music [...] so this post really caught my attention initially." (P9)

Participants also used visual cues to determine if material was out of place. Adverts with poor visual and graphic fidelity were instinctively distrusted by participants, in contrast to "slicker" and well-produced infographics, which participants tended to perceive as more reliable. Visual and linguistic characteristics worked in tandem to produce feelings of suspicion, as expressed by one participant who dismissed a weight loss advert as misinformation due to its poor visual fidelity and a writing style that sounded more like a bot than a human: "It seemed very clickbaity, the caption was written in a very shady way with an emoji at the start of each sentence. It literally looked like text from a bad clickbait ad on Google that if you click, you're gonna get a virus. A lot of photos on Instagram are aesthetically appealing but this one looked like it was from 2004." (P7)

RQ3: Collective Strategies for Navigating Misinformation
We observed a strong tendency among participants to utilise the views of others when attempting to verify information and overcome emotional challenges. In two cases, this involved sharing posts with their partners or friends in person, to further discuss and debate the content. However, collective sensemaking overwhelmingly occurred digitally, at different levels of abstraction: some participants simply scanned the like-to-dislike ratios and reactions to posts, whereas others embarked on lengthy sessions of reading and reflecting on comments in detail.

4.5.1 Interpreting Engagement Metrics.
Participants occasionally used high-level engagement metrics to judge information utility and credibility. Popularity was inferred from the number of likes, account followers, or shares on a post, and often increased the perceived trustworthiness of content. Where platforms such as YouTube, Instagram and Twitter only displayed the magnitude of user interactions via likes, Facebook's reaction system conveyed sentiment, which allowed some users to form more specific impressions of content based on which emojis dominated the reactions.
"I checked the reactions and everyone was laughing so I thought I won't bother opening [the article]" (P12).

4.5.2 Synthesising Comment Sections.
Many participants looked at comments by default when using social media, and described an instinctive tendency to quickly scan the comments section for an overall impression of what others were saying. Reasons for doing so included entertainment value, curiosity, and validation of their own feelings, particularly when material was controversial. Comments were described as easy to digest and entertaining, and often contained additional information which helped participants understand content more holistically. Some participants found educational value in synthesising the contributions of others, and enjoyed exposing themselves to diverse perspectives. Useful information included scientific and anecdotal evidence backing up a particular perspective, and contextual information about the creators of misinformation, which in some cases was found to foster sympathy towards vulnerable individuals.
"The replies to the Tweet said [the creator] had lost a child and it turned her into a bit of a conspiracy theorist [...] I didn't want to make fun of someone like that." (P2).
Synthesising comment sections was particularly useful in helping participants overcome emotional gaps, as seeing that others were expressing similar thoughts made them feel more confident and validated in their opinions. This was the case for one participant who encountered body-image related misinformation on a subreddit mainly frequented by women, where a sense of camaraderie was fostered with the other commenters who had collectively debunked the post, and "knew what normal bodies should look like" (P16). However, reading comments was unhelpful in certain circumstances, such as when the views expressed by users were argumentative and inflammatory, or made claims that were inconsistent with one another. When participants could not infer an obvious factual consensus among users in the comments, they felt confused and used cues such as the number of comment likes or the perceived clarity of arguments to decide who to trust, as participant P7 shared: "I genuinely had no idea which person was right, they all had completely different opinions so one had to be wrong, so I was trying to see which one made the most compelling arguments."
(P7)

Other participants described getting distracted by argumentative comment threads and being pulled into rabbit holes of spectating comment wars. Sometimes, participants consulted the comments expecting other users to agree with them, only to find the opinions being espoused offensive or lacking in empathy. This often led to the spawning of new moral concerns, and ended with participants disengaging from social media with their questions unanswered. One participant reflected on her tendency to get "sucked into" social media comment sections due to the initial entertainment value, only to experience chains of distraction: "I just get sucked into the comment section, I see one person trash talking the other one and it almost becomes entertaining. I might not even go back to [the original post] because the nature of the app itself distracts me and gets me to consume something else right afterwards. It's just a massive doomscroll." (P16)

4.6 RQ4: Deciding Not to Share and Report
Our final research question focused on women's behavioural responses to health misinformation. We found that only a minority of participants shared the posts they encountered, and those who did so tended to share privately, via direct message or in-person with a single individual (e.g., their domestic partner). In most cases, participants judged posts to be irrelevant to anyone they knew and therefore not worth sharing, and many were reluctant to share even interesting or topical posts in case they upset or misinformed their friends: "You don't really want to share [health information] in case it isn't true and you harm someone you care about" (P11). This sentiment was particularly strong when the content touched upon sensitive topics and the cost of sharing clearly outweighed the benefits or was likely to cause social conflict. For instance, one participant decided against sharing an article about the organisational failures of a local hospital in case health workers on her friends list responded,
and the exchange spiralled into an argument: "Had I shared something like that, I would've gone on a bit of a rant, but I've got people on my friends list that are working for the NHS, and I wasn't prepared to have the backlash of people arguing with me." (P12)

Though all social media platforms provided an opportunity to report content, very few participants chose to do so during the study. Strongly influencing this decision was the fact that almost all the misinformation encountered by participants was nuanced and not wholly false, making the act of reporting a post feel inappropriate. Despite not explicitly being familiar with the community guidelines of the platforms they were using, many participants believed that reporting was reserved for extreme or illegal posts. However, a few participants saw reporting as totally futile, since the quantity of misleading content on social media was so great that reporting one post would make little difference overall. The few participants who actively reported posts found their efforts to be in vain; for instance, participant P15 recalled repeatedly encountering misinformation in her newsfeed despite multiple attempts to report and dismiss the post: "Every single time I saw [the post] I would click the 'I don't want to see posts like this' button, but it would always just appear in my newsfeed the next day." (P15)

DISCUSSION
Health misinformation is a growing socio-technical problem with implications for the wellbeing of women. We carried out a diary and interview study to understand women's perceptions around health misinformation on social media, and investigate their strategies for navigating and/or debunking misinformation encountered in an everyday context. Our study complements a growing body of work focusing on women's health by uncovering a diverse set of user perspectives and concerns around health misinformation and platform moderation, and providing a situated and nuanced account of how women utilise their own knowledge as well as that of others to reduce uncertainty and emotionally regulate once exposed to misinformation. We discuss our findings in relation to existing literature, and illustrate how our work expands existing definitions of misinformation to include the perceptions and priorities of users. We also outline important design themes which follow from our results, and offer directions for future work which innovates around promoting female health literacy. Lastly, we stimulate a critical discussion around socio-technical power imbalances which arise from platform-led, centralised moderation, and discuss how online fact-checking may better support women's needs through collaborative sensemaking.

Misinformation Exists Along a Spectrum
Through analysing 19 women's serendipitous interactions with health misinformation, we built a detailed snapshot of how women made information credibility assessments in their everyday use of social media. Participants often made use of cues and heuristics well-known to influence accuracy judgements, such as source credibility [28], comparisons with existing knowledge [33], and expectancy violation [30]. We observed a wide spectrum of engagement with posts, ranging from immediate dismissal all the way through to lengthy research sessions involving multiple sources of information. Participants' sensemaking strategies broadly shared the characteristics of being adaptive and ad-hoc, rather than part of a rigorous credibility assessment, and were constrained by affective factors such as inattention and emotional fatigue.
Rather than focusing on material that is provably false, we allowed participants to submit material that they felt met their own personal criteria for misinformation. This facilitated an analysis of how women defined health misinformation, and how they applied these definitions in their everyday lives. Far from suggesting a binary classification in which information is neatly classed as either true or untrue, our results show that participants perceived misinformation as existing along a broad spectrum of veracity, impact, and intentionality. Posts thought to be created deliberately with malicious, financial, or self-promotional aims were looked upon with greater concern and suspicion than seemingly benign material, which conveyed personal, subjective truths and experiences. Such an understanding relates closely to existing classifications of health misinformation; Wardle [93] proposed a taxonomy of distinct mis- and disinformation types, which includes satire or misleading content created with benign intent, and on the other end of the spectrum, completely fabricated content designed to deceive and cause harm. It has been established that different types of misinformation vary in their spreading potential and ability to sow distrust [73,77], but substantially less work has explored how users perceive these risks. Future research in the sphere of women's health may benefit from a similar approach of considering both the factual characteristics of misinformation, and subjective, relational factors such as perceived intent and risk level. This may facilitate a richer analysis of user threat models, and inform priorities for targeting specific 'flavours' of misinformation perceived as harmful by users.

Holistic Approaches to Combatting Misinformation
Our study provided a rich account of how women responded to health misinformation in their daily lives, and how personal and socio-technical factors shaped their experiences. For many participants, making sense of misinformation was not a linear pursuit of objective truth, but a recursive journey of reflection, information seeking, sense-making, and unmaking. The sensemaking gaps experienced by participants upon exposure to misinformation were diverse, and included questions about accuracy, moral disapproval, and genuine concern towards vulnerable users of social media. Gaps were not mutually exclusive, and participants experienced several simultaneously, or spawned new gaps as they interacted with information further. In this light, we suggest that approaches to addressing women's health misinformation be holistic, and address the breadth of informational and interpersonal needs of users when navigating social media. In particular, this should be realised across the dimensions of emotional wellbeing and intersectionality.

5.2.1 Recognising the Emotional and Personal Dimensions of Misinformation Exposure. Several participants experienced emotional burnout and frustration in relation to their encounters with misinformation. This was sometimes so acute that it prompted them to stop engaging with social media due to sheer overwhelm, or anxiety about coming across even more upsetting information. Though some work has explored how negative affective states can shape how users engage with information [26,89,91], current interventions tend to focus primarily on information-related gaps, and how users can be supported in fact-checking as a linear task with a binary outcome [33]. This approach may overlook the holistic and deeply personal aspects of exposure to misinformation, and how these experiences relate to a user's social context. Several studies have suggested a higher degree of concern and anxiety surrounding misinformation among women [2,69,91,94], and our study complements these findings by exploring how specific concerns manifest in real-time as a response to different topics of misinformation. For instance, participants' concerns about younger girls being harmed by weight loss misinformation were experienced viscerally, and related to their own past experiences with body image issues. The multiplicity of personal experiences that women bring to their interactions with health information forms an integral part of their sensemaking process, making the exploration of interventions which facilitate active reflection and emotional regulation a valuable endeavour for this user group.

5.2.2 Importance of Intersectionality in Misinformation Research. Our study touched upon the role that cultural taboos towards female bodies may play in women's information seeking habits. Some participants described themselves as originating from conservative backgrounds where discussions around menstruation and reproductive health were taboo even with medical professionals, resulting in greater reliance on the Internet and social media for information (see §4.2.1). The impact of menstrual stigma on women's health literacy has been researched extensively, revealing lowered willingness to seek professional medical advice [6,70], greater risk of internalising stigmatising health narratives due to a lack of comprehensive pre-menarchal education [64,88], and bullying from men in the community [49]. The weight of this evidence suggests an element of heightened risk for women experiencing stigmatised health issues, and those living in contexts where access to healthcare is low [57]. Female adherence to alternative health narratives ought to be viewed through an intersectional lens, which deconstructs narratives of 'shame and blame' and recognises the overlapping roles of gender and cultural background in informational disparities [86]. Exploring the needs of participants through this lens involves recognising their reproductive health beliefs as being grounded in their social and cultural contexts, and refraining from negatively labelling or dismissing these narratives as fiction [83]. Effective health literacy initiatives require ongoing adaptation to the needs and dynamics of specific communities, and HCI researchers working in this space should endeavour to understand users' perspectives with judgement-free curiosity. Solutions should align with participants' cultural contexts, and promote accessible tools for self-efficacy and connection with health professionals and the wider community [88].

Interrogating Platform Governance and Responsible AI
On the whole, participants held negative attitudes towards social media platforms' role in combating misinformation. Justifications were diverse, with some believing that platforms ought to intervene more consistently, and others perceiving current interventions to be punitive and overbearing. The divisiveness of platform interventions is corroborated by several studies which examine user attitudes towards official fact-checking [4,72]. Despite their differences, we identified a number of shared concerns among participants; namely, that algorithmic moderation was opaque, and that platforms had little genuine interest in safeguarding users against problematic content. Common to almost all participants was a keen awareness of the commercial interests of social media platforms, which many believed to be irreconcilable with their informational and wellbeing needs. We also demonstrated that many participants felt surveilled by social media platforms, and uncomfortable with the opaque and tailored curation of their information diets.
Critical questions about power in socio-technical systems arise from our results. Work in the HCI and CSCW community has long investigated the negative impacts of opaque algorithmic systems for content recommendation and moderation on user trust and emotional wellbeing [25,63]. Complementing this work, our results paint a picture of disillusionment among women, who feel "left to the wolves" in an ecosystem of harmful content, selective fact-checking, and paternalistic control over their information diets. The few mechanisms for taking action against misinformation, such as reporting posts, were under-utilised, with participants perceiving such measures to be inappropriate or futile. Even when participants utilised the tools provided by platforms (e.g., post dismissal), the outcomes were not as they expected, further reinforcing feelings of frustration. Therefore, participants' concerns about transparency in fact-checking were superimposed on a wider picture of disempowerment, and a perceived lack of control over their digital futures.
A substantial body of literature has highlighted the role that distrust and mistrust play in women's decisions to engage with alternative health narratives. Feminist critiques of the new wellness industry emphasise the themes of autonomy and self-empowerment that drive the movement [9,10,42], and recent work has highlighted how intersections of racism and misogyny can shape women's perceptions of health authorities [86]. Though participants in our study typically held information from health authorities in high regard, they fostered a great deal of scepticism towards social media platforms, and the algorithms on which they run. Therefore, in our study the theme of mistrust manifested in relation to technical, rather than medical, structures of authority. It is clear that machine learning systems create a concrete experience for users when embedded within socio-technical contexts [3], making it crucial to study these experiences as AI continues to govern users' intimate digital lives. We argue, in line with previous work [72], that the success of socio-technical responses to misinformation depends on fostering sustained approval and trust among users. In particular, transparency and accountability emerge as crucial design themes for interventions which leverage AI, such that women feel empowered in their day-to-day interactions with social media, rather than surveilled and profiled.

Supporting Collective Health Education
Our results illustrate how debunking takes place in the richly interpersonal context of social media, and how women draw from the insights of other users not only to fact-check, but to emotionally regulate. Our study complements the view established in previous qualitative work that community debunking is not an individual task with a binary outcome, but a constellation of learning, socialising, and value-building [33]. Comments sections allowed many participants to learn about others' experiences, reflect on their own positions, and make peer-supported judgements of credibility. Existing work has shown how people use social media platforms to collectively make sense of extreme events and tragedies through expressing sympathy and managing uncertainty when details are scarce [66,84], or navigating similar experiences through in-depth online sharing [48,59]. Though collective sensemaking often involves direct communication between digitally networked individuals, our study illustrated a more passive, observational style of engagement with the views of others. Participants often took the role of spectators, reading but not getting actively involved in comment exchanges. Regardless, our findings indicate that even in contexts where social interactions were more subtle and implicit, peer support and camaraderie acted as a key scaffold against a backdrop of distrust in platform interventions.
The experiences of participants suggest that while drawing from the wisdom of the crowd is often useful for women, the diversity and unfiltered nature of comment sections present sensemaking challenges. The need for community-oriented interventions in matters of women's health has been discussed previously by Tuli et al. [88], in their analysis of the Menstrupedia forum, an India-based platform for support and information about the menstrual cycle. The authors call for an ecological approach to women's health education, which facilitates respectful and constructive information exchange between a variety of actors, including peers, medical experts, and the community at large. In particular, an appropriate balance of technology-mediated and direct communication is advised, with technology supporting, rather than supplanting, human interaction [88].
Our results indicate that the presence of health authorities on social media is often indirect and impersonal, taking the form of fact-checking captions and citations, in contrast to the direct, social involvement of medical experts in online exchanges. The uncertainty experienced by participants who "did not know who to trust" when reviewing comment sections suggests that users may value the input of credible and trusted figures in a more direct capacity. Some recent attention has been paid to the role played by gynaecologists and obstetricians in the debunking of misinformation on TikTok, highlighting that medical professionals act as reputable community figures, and participate in widespread health education on this platform [46,82]. As such, a consideration for designers may be to innovate around how medical experts can be directly involved in the diffusion of accurate women's health information, both in closed communities and on the open web, remaining cognizant of other actors in the community, who may include people of all genders and backgrounds [88].

LIMITATIONS
We recognise limitations in the sample and methodologies of our study. Though our sample included women from a mixture of ethnic backgrounds, participants were mainly concentrated in the age range 25-35, university educated, and all resided in the UK. Future work could recruit a more demographically diverse sample in terms of age, economic background, and location, to explore how health misinformation is experienced by women in different socio-cultural contexts. Furthermore, some participants freely disclosed information about their disability and health status, and discussed how this aspect of their lives intersected with their experiences with misinformation. However, this information was not actively requested, and did not form a central part of our analysis. A dedicated investigation of how disabled women and those living with chronic reproductive health conditions experience health misinformation would be valuable, as our results suggest that they may be particularly vulnerable to misinformation about the conditions they live with. Finally, due to ethical constraints, we did not have direct access to the posts encountered by participants. As a result, it was difficult to verify the descriptions and source attributions provided by participants. With ethical approval, future work could allow the submission of screenshots, to provide additional context and to remove from the participant the burden of describing the post in detail. This would also allow us to more rigorously determine the falsity level of a post, and identify whether it qualifies as misinformation according to formal taxonomies, facilitating a more direct comparison between user-identified and officially fact-checked misinformation.

"
There's a lot of misinformation online.Whenever I scroll Instagram or Facebook I see a lot of people without qualifications posting advice about diets or health." (P4) Proc.ACM Hum.-Comput.Interact., Vol. 8, No. CSCW1, Article 128.Publication date: April 2023."The Headline Was So Wild That I Had To Check" 128:11 Proc.ACM Hum.-Comput.Interact., Vol. 8, No. CSCW1, Article 128.Publication date: April 2023."The Headline Was So Wild That I Had To Check" 128:21

Table 1. Demographics of participants recruited for our study. All participants self-identified as women and were residing in the UK at the time of the study.