"If This Person is Suicidal, What Do I Do?": Designing Computational Approaches to Help Online Volunteers Respond to Suicidality

Online platforms provide support for many kinds of distress, including suicidal thoughts and behaviors. However, because many platforms restrict suicidal talk, volunteers on these platforms struggle with how to help suicidal people who come for support. We interviewed 11 volunteer counselors in a large online support platform, including after they role-played conversations with varying severities of suicidality, to explore practices and challenges when identifying and responding to suicidality. We then presented Speed Dating design concepts around emotional preparation and support, real-time guidance, training, and suicide detection. Participants wanted more support and preparation for conversations with suicidal people, but were conflicted about AI-based technologies, including trade-offs between potential benefits of conversational agents for training and limitations of prediction or real-time response suggestions, due to the sensitive, context-dependent decisions that volunteers must make. Our work has important implications for nuanced considerations and design choices around developing digital mental health technologies.


INTRODUCTION
Suicidality describes an array of thoughts and behaviors surrounding a person's desire to die, from passively wishing one were dead to actively attempting to kill oneself [37,66]. The CDC estimates that in 2021, about 12 million U.S. adults seriously thought about suicide, 1.7 million attempted suicide, and nearly 50,000 people died by suicide [5]. Many people seek support for suicidality from medicalized services including hotlines, therapists, and hospitals. These resources are often seen as more effective than non-medical support from loved ones, community members, or suicidal peers [40]. Yet, for a number of reasons (e.g., harm by the medical system, lack of access, or patients' preferences), many seek support for suicidality outside the medical system. Online platforms are spaces where suicidal people seek support, often anonymously and for free. Some sites are dedicated to supporting suicidal people through non-professional peer support, like message boards [58] or Reddit's r/SuicideWatch [19]. Although most other sites, including Facebook [25], TikTok [84], 7 Cups [15], or TalkLife [81], restrict how users talk about suicidality, prior work has speculated that people still seek support for suicidality there [88].
Prior research on suicidality online has centered people who seek support for suicidal distress (e.g., [21,58]), with little work about the people who provide support, such as volunteer counselors and moderators. Suicide prevention research in HCI and related fields has focused on creating technologies, especially AI-based ones, to replace, rather than assist, people who provide suicidality support. These include chatbots to automate counseling [27,42,48,69,75,86] or mass moderation tools like suicide detection technologies [6,9,11,21,25,30,71]. However, little work has been done to evaluate how these computational approaches can help support providers [63] in real-world contexts. Our work aims to fill this gap by (1) understanding the perspectives, challenges, and needs of people who support suicidal people online, and (2) evaluating computational approaches to support online volunteers.
We interviewed 11 volunteer counselors on an anonymized online site where mental health peer support is provided, but suicidal talk is prohibited. Prior work speculates that people seek support for suicidality on sites which restrict suicidal talk [88]; a preliminary aim of our work is to verify this claim and understand how these restrictions affect online suicidality support. We conducted our study in two phases: In Phase 1: Semi-Structured Interviews, we asked participants open-ended questions about how they respond to suicidality on the site and conducted Mock Chats where participants role-played text conversations with an author who role-played seeking support for different severities of suicidality. In Phase 2: Design, we conducted Speed Dating design activities with participants to understand their reactions to computational design concepts around training, real-time guidance, emotional preparation and support, and suicide detection.

Paper & Findings Preview
Our paper describes a study we conducted in two phases. Section 3 provides an overview of related work, background, and methods relevant to both phases. Then, specific methods and results are divided between Phase 1 in Section 4 and Phase 2 in Section 5.
From Phase 1 in Section 4, we find that people seek support for various kinds of suicidality, which many volunteer counselors are unprepared for (Section 4.3.1). Volunteers could accurately distinguish different degrees of suicidality in mock chats (Section 4.3.2), but they responded very differently to the same severity of suicidality (Section 4.3.3). Many responded in direct opposition to the site's prohibition on suicidal talk, fearing that ending a chat and recommending a hotline would harm suicidal support seekers (Section 4.3.4). In part because of the site's restrictions on suicidal talk, participants took on personal responsibility and emotional burdens when deciding how to respond to suicidal support seekers (Section 4.3.5). We highlight design opportunities identified from Phase 1 in Section 4.4.
Section 5 highlights design activities we conducted in Phase 2 to address these design opportunities for online suicide intervention and support. We find that participants wanted more emotional preparation and support (Section 5.4.1), training and training technologies (Section 5.4.2), and real-time guidance from trained support staff (Section 5.4.3). However, participants worried about using professionalized support staff, since some suicidal support seekers come to the site specifically to avoid medicalized healthcare (Section 5.4.3). Participants disliked AI-based technologies for real-time guidance and suicide prediction, for fear they would be used to limit volunteers' discretion and worsen support provided on the site (Section 5.4.4).

Highlighted Discussion Points
Our results suggest that: (1) HCI and computational fields that study suicidality should focus more on improving online human care infrastructures for suicidality support rather than creating technologies to automate suicide prevention, because online peer support has the potential to provide healthcare to suicidal people who cannot or do not want to access medicalized systems (Section 6.1). (2) Technologies commonly proposed for suicide prevention, such as suicide detection models or chatbots to help deliver therapy, may be misaligned with the needs of online volunteer counselors. Although suicide detection technology may help inexperienced volunteers, our participants could assess suicidality without it and worried about predictive technologies being used to restrict their autonomy. Instead, participants wanted to use AI-based technologies for training, e.g., conversational agents that simulate suicidal support seekers to create low-stakes practice conversations (Sections 6.2.1 & 6.2.2). (3) Online sites should reconsider restrictions on suicidal talk on their platforms. These restrictions can inhibit online safe spaces for suicidal people and burden online volunteers, moderators, and bystanders when suicidal people seek support online (Section 6.2.3).

RELATED WORK

2.1 Online Suicidality Support
Online sites for mental health provide spaces for users to support one another with mental distress or mental illnesses like depression or PTSD [7,62]. We examine online spaces for suicidality support.
Prior work has focused only on suicidality support in online spaces that are dedicated to suicidality, like message boards for suicidality [58] or social media sub-communities like Reddit's r/SuicideWatch [9,20,21]. These sites allow suicidal people to get support and share information, which normalizes suicidality and combats stigmatization [57]. Support is offered differently across sites: forums like r/SuicideWatch provide semi-public group support; sites like 7 Cups or TalkLife provide one-on-one conversations between a volunteer counselor and a support seeker, or group sessions with many support seekers.
Prior work presents a mixed picture of how these sites affect suicidal people. Some warn that these spaces rely on online volunteers without professional training, who may not be able to provide adequate support to suicidal people [88]. Further, online sites dedicated to suicidality support could expose users to content about suicide that could increase suicidal ideation [23]. Some suggest that online spaces could proliferate contagion effects, incitement of suicide, or overly detailed information about suicide methods [14,49,58]. Other research suggests that online spaces can provide community for suicidal people, which helps assuage suicidal distress [20,46,49]. Online sites offer safe spaces for other marginalized groups like trans people [73]; we aim to explore whether they could serve as a safe space for suicidal people as well.
Prior literature on online suicidality support communities overlooks two important aspects of suicidality support online. First, as Pendse et al. [63] suggest, "little work has been done to understand the experiences and needs of the individuals who power or support technology-mediated mental health support," like helplines or online mental health support platforms. Pendse et al. [63] address this in part by interviewing mental health helpline volunteers in India about their experiences. They call this the human infrastructure of these services, including labor, responsibilities, and emotional burdens. Our work similarly explores human and technological infrastructures for mental health support (specifically suicidality) and similar themes of burdens, responsibilities, and labor practices. However, our research differs from Pendse et al. [63] primarily in terms of the sites studied. Pendse et al. [63] focused on helplines, which are offline, more formal, more medicalized¹ services that provide support for some suicidality but mostly other forms of mental distress. Our research site is online, informal, non-medicalized, and run by volunteers who are largely not professionally trained, and we focus specifically on suicidality.
Secondly, our study differs from prior work in that we study suicidality support in a space that is not dedicated to providing it and actively tries to prohibit it. Many, if not most, online platforms restrict how users talk about suicide. For example, TikTok prohibits content that advocates for suicide [84]; Facebook recommends that users report suicidal talk [25]; 7 Cups and TalkLife prohibit all suicidal talk [16,82]; and most sub-communities on Reddit redirect suicidal people to r/SuicideWatch, one of the few online spaces dedicated to supporting suicidal people where talking about suicide is allowed [20]. Yao et al. [88] suggest that platforms justify restricting suicidal talk due to liability concerns, lack of adequate support for suicidal people,² and risk of harm to other users and volunteers. To our knowledge, Yao et al. [88] conducted the only prior empirical work on online restrictions on suicidal talk (albeit briefly and not as the central focus of the study). They speculate that suicidal people seek support online despite policies prohibiting it. However, it remains unclear how platforms' restrictions impact people who seek support online for suicidality and the people who provide them support, the latter of which is the focus of our work. Overall, we chose our research site to gain greater perspective on online suicidality support workers and the effects of policies restricting suicidal talk online.

¹ Pendse et al. [63] characterize helplines in India as "informal healthcare services." We characterize them as more formal and more medicalized than our research site, because helplines connect directly to "a broader network of mental healthcare" [63] and because helpline volunteers receive considerably more training than online volunteers on our site.

² For example, TalkLife and 7 Cups justify restricting suicidal talk because they say volunteers are not trained to support suicidal people [16,82].

Computational Approaches to Mental Health & Suicide Prevention

2.2.1 Suicide Detection Technologies. Suicide detection technologies have largely been used for content moderation and suicide gatekeeping online. Some technologies, e.g., the Samaritans Radar Twitter plug-in [71], use deterministic matching to monitor online text for keywords or phrases. Other technologies use machine learning (ML) to predict suicidal behavior and ideation based on social media data [6,9,11,21,30]. For example, since 2017, Facebook has used ML-based tools to "identify possible suicide and self-injury content," which can trigger alerts for moderators, who can intervene by sending resources or contacting "local authorities" [25]. This adds to Facebook's existing infrastructure which allows users to flag Facebook friends as possibly suicidal, which can also alert moderators [25]. ML-based suicide prediction models have also been used to triage calls to suicide hotlines (e.g., Crisis Text Line) [31] and assist clinicians in medical settings [36].
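To make the contrast between these two families of approaches concrete, the sketch below shows how a purely deterministic keyword matcher works; ML-based detectors instead learn a classifier from labeled social media posts. This is a minimal illustration only: the phrase list and function names are hypothetical and are not drawn from Samaritans Radar, Facebook's tools, or any other deployed system.

```python
# Minimal, hypothetical sketch of deterministic keyword matching, for illustration.
# Real deployed systems are far more complex and often use trained ML models
# rather than a fixed phrase list.

RISK_PHRASES = [
    "want to die",
    "kill myself",
    "end it all",
    "better off without me",
]

def flag_post(text: str) -> bool:
    """Return True if the post contains any watched phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in RISK_PHRASES)

if __name__ == "__main__":
    print(flag_post("Some days I just want to disappear"))       # False: no listed phrase
    print(flag_post("I keep thinking about how to end it all"))  # True: matches "end it all"
```

A matcher like this also illustrates why deterministic approaches are brittle: paraphrases and obfuscated spellings are missed entirely, which is part of the motivation for the ML-based predictors described above.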
In the context of our study, prior work on suicide prediction technologies suffers from two limitations. First, it often flattens suicidality and does not conceptualize it as a complex, varied range of experiences. Almost all prior work on suicide prediction models uses binary outcomes or binned risk scales (e.g., low, medium, high). More recent work has proposed methods for labeling more descriptive risk and protective factors related to different kinds of suicidality [9,76]: While an improvement on binary classification, this still does not adequately recognize the fullness of suicidality. This is likely due to data constraints that currently make it technically difficult to train a model to distinguish between different severities of suicidality, e.g., ideation versus crisis [36]. A second limitation of suicide prediction technologies is that they put too much emphasis on preventing the act of suicide, rather than on supporting the person experiencing suicidality. Suicidal people often experience longer-lasting suicidal ideation that escalates from passive ideation to a point of crisis [37]. Yet, people who experience suicidal distress that does not entail immediate risk of suicidal self-harm have been overlooked in prior work on technologies for suicide prevention. Instead, we argue that supporting suicidal people across the spectrum of suicidality is important, not only to prevent suicide, but also to deter suicidal distress from worsening and to help people "live well with the desire to die" [40]. Against this backdrop, our work further probes the potential for suicide prediction technologies to support volunteer counselors in identifying and responding to suicidality.

2.2.2 Natural Language Processing (NLP) Technology for Mental Health Support & Suicide Prevention. Conversational agents and language models have been used to deliver therapy and emotional support to people experiencing mental distress for at least a decade [27,42,47,48,86] and have been promoted much more since the release of ChatGPT and recent Large Language Models (LLMs) [2]. For example, Fitzpatrick et al. [27] created an automated text-based conversational agent ('Woebot') to deliver cognitive-behavioral therapy (CBT). Fully automated conversational agents like these can expand care to people who may not want to talk about their mental distress with a human. For example, Lucas et al. [47] suggest that veterans may be more likely to disclose mental distress to a chatbot than to a person. However, fully automated conversational agents can also make errors which can be harmful, especially to vulnerable support seekers. For example, in an experiment with a fake patient, a chatbot created with GPT-3 told the patient they should kill themself [69].
Recent work has suggested using chatbots to assist human counselors. For example, Sharma et al. [74,75] created chatbots to support human-AI collaboration for counseling, where their chatbot suggests more empathetic responses to volunteer peer supporters on TalkLife. They suggest their chatbot increased empathy in peer-to-peer conversations, especially for peer supporters who previously expressed difficulty offering empathetic responses. Human-AI collaboration may not completely ameliorate problems with using chatbots in counseling. For example, after deploying a conversational chatbot to give suggestions to volunteer counselors on the peer support platform Koko [38], researchers suggested that support seekers responded worse to peer support once they learned it was co-created with chatbot suggestions, because they felt uncomfortable with "[s]imulated empathy" [32].
Finally, Demasi et al. [22] explore the use of chatbots to assist in suicide prevention, specifically in a suicide hotline. They argue that a "chatbot clearly cannot safely and thus ethically take on a counseling role." Instead, they create "Crisisbot," a conversational chatbot that simulates a suicide hotline visitor in order to train (human) hotline counselors on mock conversations [22]. Crisisbot is based on text from real hotline conversations to create multiple personas that mimic the diversity of stories and experiences for which support seekers call a hotline.
AI-based technologies offer promising opportunities for automating parts of suicide prevention, but they also pose risks of harming or neglecting suicidal people. For example, a chatbot used in an eating disorder hotline was recently recalled because it encouraged weight loss and disordered eating [87]. As stated above, suicide prediction models focus too heavily on intervention for severe forms of suicidality, which has led to potentially harmful forms of nonconsensual treatment, like wellness checks [4]. Though these technologies risk harming suicidal people, they are the most prominent approaches that prior work has offered for suicide prevention. Thus, a major goal of our study is to evaluate the efficacy of these technologies to help volunteer counselors support suicidal people.

Training for Suicide Prevention
Another goal of our study is to understand how online volunteers on a site that neither offers nor requires suicide prevention training respond to suicidality, and whether there are opportunities to improve how these volunteers are trained. Many evidence-based therapies exist for treating mental disorders (and mental disorders associated with suicidality), as documented in lists compiled by the American Psychological Association's Division [68], the National Institute for Health and Care Excellence [12], and the National Institute of Medicine [24]. Traditionally, the delivery of treatment for mental disorders requires well-trained, licensed counselors or therapists. Training of counselors is expensive and often has limited reach. Across many domains, realistic practice and tailored feedback are key processes for learning. These principles are often the basis for training in clinical skills, where therapy sessions with real clients or role-play simulations are supplemented with coaching by a trainer. For example, didactic training workshops for both motivational interviewing and cognitive processing therapy are more effective and lead to better therapist performance when coupled with personalized feedback and coaching [51,54]. Increasingly, digital and self-guided training is being used for counselor training across a wide range of therapeutic models [41,65].
Online support and task shifting, in which lay workers deliver care typically administered by licensed health care professionals, are increasingly being used to expand mental health care access. In particular, the U.N. recommends gatekeeper training for suicide prevention [28], where "gatekeeper" refers to lay people who see many other people day-to-day and can be trained to "identify persons at risk of suicide and refer them to treatment or supporting services as appropriate" [3]. Thus, users in many online mental health communities, including our research site, can be considered "gatekeepers." Isaac et al. [33] suggest that gatekeeper training positively affected the knowledge, skills, and attitudes of trainees regarding suicide prevention.

STUDY CONTEXT, POSITIONALITY, & METHODS

3.1 Research Site & Policy Against Suicidal Talk
We anonymize the site we researched and refer to it as "the site" or "the platform." The site is one of the largest online mental health support platforms in the world, where peer-to-peer support takes place primarily through anonymous, text-based conversations between a single support seeker and a single volunteer counselor. Volunteer counselors receive short active-listening training when they register. They are informed of the site's policy to end a chat when a support seeker mentions suicidality in any form, including passive ideation, and to send the support seeker a link to a list of hotlines to contact instead. Volunteers receive no other training on how to identify or respond to suicidality.

Positionality: Incorporating Lived Experience of Suicidality
Critical scholars argue that, in the study of suicidality, scientific and biomedical knowledge is considered legitimate, while knowledge from suicidal people is overlooked [39,40,43]. These scholars argue that we should embrace the lived experiences of suicidal people. In this vein, we incorporate our own lived experience into the design and analysis of our study. The paper's lead author experiences regular suicidal ideation, has engaged in suicidal behaviors, and has sought suicidal support online for many years (although not on the site that is the focus of this paper). We relied on the lead author's discretion to make interpretative decisions, as informed by their experience. For example, our study of how people seek support for passive suicidal ideation online was motivated in part by the lead author's personal experience seeking support online, as well as by a gap we identified in prior literature. We note throughout the paper where we especially relied on the lead author's lived experience.

Participant Recruitment & Demographics
We recruited volunteer counselors at the site to participate in our study. The site's community liaison distributed our recruitment materials on the site. The recruiting materials said that the study would be about suicidality and designing suicide support technologies, but prior experience or training with suicidality was not a requirement of participation. Participation was open to all volunteers who were 18 or older, spoke English, and lived in the United States at some point, which were requirements from our Institutional Review Board. We stopped recruiting participants once we reached data saturation in Phase 1 of the study.
We asked participants open-ended questions about their demographics, training, personal experience with suicide, and experience with people seeking support for suicidality on the site. Table 1 contains summary statistics from these questions. Of the ten participants who completed the demographic survey, eight self-identified as female and two as male; most said they were white, Indian, or Asian, and they listed multiple nationalities, mostly Indian and American. This is important, because support seekers come to the site from many countries worldwide, especially India and the U.S. Only three had education or training on general mental health, e.g., P9 had a psychology degree. Two said they had formal training around how to identify or respond to suicide (e.g., hotline training) and one said they were in the process of suicide prevention training. Two participants (P9 and P11) were relatively new volunteer counselors, with fewer than 100 chats. All other participants had been volunteer counselors for at least a year (P3 said six or seven years), said they provided support at least once a week (some every day), and had other roles at the site, like moderator, community admin, or verified volunteer counselor. Six participants said they had personal experience with suicide: two had experienced suicidality themselves, and four through a loved one. All participants said they had interacted on the site with a support seeker who they thought was suicidal (see Section 4.3.1).

Harm Mitigation & Institutional Review
We interviewed participants about suicide and other potentially difficult topics. Although participants were regularly exposed to these topics on the site, we took precautions to minimize potential emotional harm during our interviews. We gave content warnings and notified participants that they could stop the interview or skip any activities: P7 was the only participant who did so, skipping the crisis mock chat. We did not ask about participants' personal experiences with suicide in the post-survey because we worried that asking then could be insensitive or triggering. Instead, we waited for an appropriate time during the interview to ask, and gave participants the option to decline answering. This study, including all study and recruitment materials, was approved by the Institutional Review Board at Carnegie Mellon University.

Methods Overview
To understand online volunteer counselors' current practices, challenges, wants, and needs when responding to suicidal people, we interviewed 11 volunteer counselors in a two-phase study. We first conducted interviews during Phase 1 (Semi-Structured Interviews and Mock Chats) in September 2022, then invited the same participants back for a second round of interviews in January 2023 for Phase 2 (Speed Dating and Design). As described below, we used the results of Phase 1 to inform the design concepts shown to participants in Phase 2.
Phase 1: Semi-Structured Interviews. In Phase 1, we first asked participants open-ended questions about their past experiences with suicidality on the site and in their own lives, their ability to identify and respond to suicidality, and what they thought of the site's policies on suicidality. Next, participants took part in mock chats where the lead author role-played a support seeker in 3 to 4 text-based conversations representing various degrees of suicidality, including crisis, high severity, low severity, borderline suicidal ideation, and none. After each mock chat, we asked the participant how they would have responded if they were actually on the site. Finally, participants answered two open-ended design questions, e.g., "Can you think of two ways you would change the site to help volunteers respond to suicidal support seekers?" See Section 4 for results.

Phase 2: Design. Our research team analyzed the results from Phase 1 to identify design opportunities and generate design concepts for a Speed Dating activity. We invited participants back for Phase 2 interviews, wherein each participant was presented with 18 design concepts and asked to evaluate them. Participants were also asked open-ended design questions after each concept, e.g., "Can you think of any ideas to better support volunteers in chats about suicide?" See Section 5 for results.

PHASE 1: SEMI-STRUCTURED INTERVIEWS

4.1 Phase 1 Methods: Mock Chats
We conducted multiple text-based mock chats wherein the lead author role-played a support seeker expressing a specific severity of suicidality and participants were instructed to respond as if they were actually on the site. In these mock chats, we aimed to address the following research questions: (1) Can volunteers differentiate between different suicidality severities? (2) How do volunteers respond to suicide talk at different levels of severity? (3) How should the site prepare volunteers to respond to suicidality?
See Table 2 for a description of each mock chat, organized by the suicidality severity that our research team intended. Suicidality severities were defined by the Columbia Suicide Severity Rating Scale (C-SSRS) [66]. Each interviewee participated in multiple text-based mock chats, each depicting one of five severities. We made two different mock chats per severity to improve generalizability, so that interviewees responded to a severity level rather than to the specific language describing it [13,53]. For each severity of suicidality, we randomly selected one of the two mock chats for each interview. The lead author (who conducted all interviews) used their lived experience to improvise if necessary to make the chat believable, e.g., if participants asked questions that deviated from the pre-written scripts.

Table 1: We asked open-ended questions about demographics in a post-survey and asked about personal and on-site experiences with suicidality during interviews. 'NA' (not available) indicates that the participant did not disclose this information.

No suicidality: The mock support seeker talked about feeling sad, lonely, or depressed because of a reason unrelated to suicide, e.g., a breakup.

Borderline ideation: The mock support seeker described feeling depressed, lonely, hating and feeling exhausted with life, e.g., "I'm trying to survive and I'm barely doing it." These feelings do not constitute suicidal ideation according to the C-SSRS, but are close [66].

Low severity non-crisis ideation: The mock support seeker described wanting to die, e.g., "I just wanna fall asleep and not wake up" or "I feel like I wanna fall into a void." The mock support seeker made clear that they wanted to talk about having these suicidal thoughts, but were not at risk of ending their life, e.g., "Not like I actually wanna do that, I just get those thoughts, but it kinda helps to talk actually."

High severity non-crisis ideation: The mock support seeker described active thoughts about ending their life with either 1) intent but no mention of methods or plan, e.g., "I've been thinking of ending it all"; or 2) methods but no intent and no plan, e.g., "I have thoughts about overdosing, but I never make a specific plan for how I would actually do it and I would never go through with it anyways."

Crisis: The mock support seeker talked about imminently attempting suicide, with specific plans and some intent, but also talked about a self- or other-interruption during an active suicide attempt, e.g., "I've been waiting for my roommate to go to bed so I can go upstairs and end it all but he's still up" or "I have all the stuff I need. I just don't wanna end up in pain."

Table 2: Descriptions of each mock chat, per its intended suicidality severity. For extended descriptions, see Appendix B.

We used the C-SSRS [66] and the lead author's personal experiences of suicidality to write the mock chats. For example, for a high severity mock chat, we used a C-SSRS example ("I thought about taking an overdose but I never made a specific plan… and I would never go through with it"), then wrote a larger scenario where the mock support seeker would eventually say this line. We wrote a mock chat with low severity ideation based on research [61] and the lead author's personal experience indicating that daylight saving time induces worse suicidal ideation.

The crisis mock chat was written to immediately disclose that the support seeker was in the process of a suicide attempt. Every other mock chat was written so that the mock support seeker progressively disclosed more severe feelings of suicidality until peaking in severity after four to five messages, then continued for several messages to give the participant time to respond. For example, the high severity mock chat started with feelings of depression or loneliness, then talk of wanting to die, and finally talk of wanting to kill oneself. We iteratively refined the mock chats within our research team and through pilot testing, with the goal of making them as realistic, natural, and brief as possible, and so that the two mock chats representing the same severity level felt similar in terms of severity, urgency, and risk. Most participants said that some elements of the mock chats were not completely realistic (e.g., the pace of the conversation was too quick) but, in general, were similar to real conversations around suicidality that they had experienced on the site. For example, about the mock chats, participants said "I get that a lot," "that's relatable," or "it's a very common situation." We also asked participants general questions about their experiences with suicidal people on the site to verify that our mock chats depicted realistic scenarios: Participants said that people regularly come to the site seeking support for a variety of kinds of suicidality, which led us to create the multiple severities presented in our mock chats (see Section 4.3.1 for more details).

Data Analysis.
We automatically transcribed all online interview recordings, then manually reviewed them for accuracy. The first four authors analyzed participants' responses from the Mock Chats and then the Speed Dating activity separately, while also looking for complementary themes that surfaced between these two activities. To analyze data collected during the Mock Chat activity, the lead author copied participants' descriptions of and responses to each mock chat into a spreadsheet, organized by the severity of suicidality present in the mock chat. The first four authors then analyzed participants' responses together, looking for common reactions, diagnostic techniques, and challenges that participants brought up during their mock chats. Following each chat, participants described how serious they thought each mock chat was. Four authors, blind to the eliciting chat, rated participants' descriptions (presented in randomized order for each rater) on a 5-point Likert severity scale, from 1 (no suicidality) to 5 (crisis). For example, one participant's description was: "the [support seeker] is having a challenging time, but they have not shown any signs of being suicidal," which received a mean severity rating of 1.25. Ratings were internally consistent, with a Cronbach's alpha of 0.89. Figure 1 presents the mean of the four authors' ratings for each mock chat. Participants' judgments were approximately normally distributed, so we report the means here and later in Figure 2 [80]. (See Appendix C for medians.) Confidence intervals in Figures 1 and 2 were calculated via standard parametric methods [29, p. 284]. Finally, we used a grounded theory approach to analyze the open-response portions of the interviews [10]. Two authors open-coded each interview transcript independently, then compared codes and merged them. For example, one code was "Capability to respond to suicidality depends on situation." We then uploaded codes to Mural (www.mural.co), where each author organized a portion of codes into groups around potential themes. The four first authors then met to solidify themes, arriving at the subsections presented in Section 4.3.
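The two statistics reported here, Cronbach's alpha across the four raters and parametric confidence intervals for mean ratings, follow standard formulas. The sketch below is not the authors' analysis code; it is a minimal illustration with made-up ratings, assuming a matrix whose rows are participants' descriptions and whose columns are the four raters.

```python
# Minimal sketch (illustrative only, with made-up data) of the reliability and
# uncertainty statistics described above: Cronbach's alpha across raters and a
# standard parametric (t-based) confidence interval for a mean rating.
import numpy as np
from scipy import stats

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha; rows = rated descriptions, columns = raters."""
    k = ratings.shape[1]
    item_variances = ratings.var(axis=0, ddof=1).sum()  # variance of each rater's scores
    total_variance = ratings.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def mean_ci(values: np.ndarray, level: float = 0.95):
    """Mean and parametric confidence interval based on the t distribution."""
    m = values.mean()
    half_width = stats.sem(values) * stats.t.ppf((1 + level) / 2, len(values) - 1)
    return m, (m - half_width, m + half_width)

if __name__ == "__main__":
    # Hypothetical 1-5 severity ratings: 6 descriptions x 4 raters.
    ratings = np.array([
        [1, 1, 2, 1],
        [2, 3, 2, 2],
        [3, 3, 4, 3],
        [4, 4, 4, 5],
        [5, 5, 4, 5],
        [2, 2, 3, 2],
    ], dtype=float)
    print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
    for i, row in enumerate(ratings):
        m, (lo, hi) = mean_ci(row)
        print(f"description {i}: mean = {m:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```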

Results
In Section 4.3.1, we confirm prior speculation [88] that people seek support for suicidality on sites which prohibit suicidal talk, which leaves volunteers on these sites unprepared. Although participants accurately assessed suicidality in mock chats (Section 4.3.2), they responded differently to similar degrees of suicidality (Section 4.3.3). Many opposed the site's prohibition on suicidal talk (Section 4.3.4) and took on more personal responsibility and emotional burden than they wanted to (Section 4.3.5). We discuss design opportunities that arose from these findings in Section 4.4.

4.3.1 Despite site prohibitions, people sought support for suicidality, leaving volunteers unprepared. Although prior experience with suicidality on the platform was not a requirement to participate in our study, all participants said they had chatted with suicidal people. Although some participants (P4, P6, P10) said chatting with someone who mentioned suicide was rare, others (P3, P7, P8, P11) said it was frequent, e.g., P11 said, "it's pretty common… probably once a week." This is notable, since the site prohibits all suicidal talk and encourages people experiencing suicidality to seek support elsewhere. Participants also said they encountered people expressing a range of suicidality, mostly passive ideation but sometimes crisis. Most participants said they often met people who they suspected may be suicidal, but the person did not explicitly say it. For example, P11 said "some people are pretty blunt and they'll be like, 'I'm gonna hurt myself' or 'I'm having thoughts,'" but others "say certain things that kind of allude to it so you can… kind of get an idea what's going on." Participants said some support seekers did not explicitly say they were suicidal because they could not express themselves clearly, e.g., "they're so emotionally overwhelmed that they don't know how to describe the situation" (P2). Yet, participants said most support seekers hid their feelings because they knew that the site "censors a lot of what you say," including the word "suicide" (P10), and that support seekers would get kicked out of a chat or temporarily suspended from the site if they talked about suicide: Instead, they described their feelings ambiguously (e.g., 'I am feeling so overwhelmed with life right now. I can't do this anymore'), made jokes, or used lexical variation [8] to bypass automated suicide language filters (e.g., 'su1c1de' with 1's instead of i's). Furthermore, many participants said talking with suicidal people was one of the most frequent things that volunteer counselors sought help with. This was why participants said volunteers needed more training on suicidality. For example, P3 said: "I'm [in a volunteer counselor support chat room on the site] for two hours every day. Out of all of those two hours, at least like 10 to 15 questions are about, 'Oh, if this person is suicidal, what do I do?' Like why, why was this not part of your training?"

4.3.2 Despite lack of training, participants assessed suicidal severity accurately, but they felt uncomfortable doing so. Figure 1 shows how severe participants judged each mock chat to be. Planned comparisons show that participants reliably (p < .001) distinguished the severity of each pair of adjacent mock chats except for the Borderline and Low severity mock chats (p = .11). This indicates that participants could accurately distinguish between different severities of suicidality. Some participants used their personal experience or professional training with suicidality to help assess how severe each mock chat was, e.g., pointing out suicidality using more clinical language. However, most participants did not have formal training or personal experience with suicidality beyond chatting with suicidal people on the site and were still able to accurately assess suicidality severity, albeit without using clinical language. For example, P10 described a crisis mock chat as someone "throwing a bottle in the sea with a last warning that, uh, you know, she, she or he might be gone in 15 minutes."
Furthermore, when participants recognized that a support seeker said something that seemed suicidal, most asked follow-up questions to confirm, e.g., "Do you feel like life is not worth living anymore?" (P3). Participants especially asked clarifying questions for less severe or more ambiguous suicidality (see Section 4.3.1). For example, P11 said, "some people use like dark humor that make you… not sure, but you have to ask, you know?" Thus, participants were not only able to recognize clearer suicidal talk, they were also able to recognize warning signs for suicidality and follow up with direct questions to better assess suicidality, which most clinical and non-clinical suicidality support resources identify as best practice when one suspects someone may be suicidal [50,52,72].
Although participants could accurately assess suicidality in mock chats, they lacked confidence when doing so. One reason was that participants lacked feedback on how well they identified suicidality, so they did not know if they were doing it correctly. Another reason was that participants generally knew the site's policy prohibited them from talking about suicidality, but thought this offered them little guidance for actual conversations, which were more nuanced, since suicidal talk was often borderline, vague, or in the form of a joke: Participants were uncertain whether these ambiguous or less severe forms of suicidal talk were prohibited under the site's policy. For example, P7 said, "I think I'm not confident [because] there's degrees of being suicidal… To what degree of suicidalness are we [supposed to be] considering being suicidal?" Many participants said this uncertainty and lack of clear guidance stressed them out, since incorrectly identifying someone as suicidal (or not) could lead to a harmful intervention or failure to provide a useful one. For example, a few participants worried that talking too explicitly about suicidality to support seekers who were not actively suicidal could "trigger [them] into thinking that they're suicidal" (P3). Participants wanted more help identifying and assessing different kinds of suicidality, if for nothing else than to lessen the emotional burdens placed on them when identifying suicidality. Participants said this was especially the case for new volunteers who had not yet become prepared or desensitized to conversations with suicidal support seekers.

4.3.3 Participants responded differently to similar levels of suicidality. Although participants described the severity of each mock chat similarly (and accurately) and knew the site's policy was to end the conversation and recommend external resources when someone mentioned suicide, participants still differed in how they responded to the same mock chat. Many disregarded the site's prohibition on talking to people who mention suicide. Participants especially differed around three important decisions: when to end the chat, how to talk about suicidal feelings, and what kinds of resources to recommend, if any. In this section, we detail how participants responded during mock chats; but, for the sake of clarity, we leave further explanation of why participants responded as they did to Sections 4.3.4 and 4.3.5.
When to end the chat. Despite the site's policy, no participants ended mock chats with borderline or low severity suicidal ideation (although P3 and P5 said they considered doing so). Participants were more likely to end chats with more severe suicidality, e.g., in a mock chat with less severe ideation, P1 continued to talk, while in a crisis mock chat, P1 ended the chat immediately. Only about half of all participants ended the highest severity mock chats. Some participants continued conversations depicting active suicide attempts without ever suggesting to call a hotline or an ambulance.
How to offer emotional support to the support seeker. Instead of ending the chat, many participants continued to talk with the support seeker about their suicidal feelings to make them feel heard and validated, to help them better understand the root cause of the suicidal distress, to deescalate their distress, and to alleviate some of the stigma of sharing those feelings with others. For example, P9 said that often people who talk about suicide are "having a hard time, like, communicating their feelings and feeling validated in those feelings,… they just want someone to listen to them, you know?" Participants were especially open to continuing the conversation if they felt the support seeker was not at imminent risk of attempting suicide, or if the support seeker asked them to continue talking, e.g., P9 said "I'm a sucker for people telling me it helps to talk." Participants also tried other techniques to deescalate the situation and eventually "talk them out of it" (P9). For example, some tried to distract support seekers from their suicidality. P8 said they "ask [suicidal support seekers] questions about themselves and try to make this a more lighthearted conversation."

What kinds of other resources to recommend. Although three participants did not refer support seekers to external resources in any mock chat, most recommended resources in more severe mock chats, including a therapist or crisis resources. Most sent a link to resources on the site, but two participants recommended off-site resources, such as a link to the Trevor Project [67] or the 988 hotline [44]. Furthermore, seven participants said they would stay with suicidal support seekers until they connected with the recommended resources (e.g., a loved one or a hotline). For example, P7 said, "I have a hard time with boundaries, but I will stay with them until they get connected to the resources that they need. I don't want someone to feel alone."
4.3.4 Many participants opposed the site's prohibition on suicidal talk because they thought ending a chat and sending crisis resources would harm support seekers. Participants worried that sending a hotline link and ending the chat would abandon support seekers in a time of need, since "it's almost like [telling them], 'you're done'" (P11). The National Institute of Mental Health recommends that when someone "is suicidal, do not leave him or her alone" [60]. Thus, participants' fears of ending a conversation with someone who is suicidal seem to align with best practice around suicide prevention. Participants said that sending information about a hotline is often ineffective because "it's not an easy transition [from the site to] a crisis counselor" (P3): Some support seekers "might not want to click on the [hotline] link and just be taken to a random web page and have to sort of start the process all over again" (P5); others might "feel that the hotlines don't really help" (P11). Ultimately, most participants worried that they were one of the last lines of support that a suicidal person may have, so ending a chat would push them further towards suicide. To illustrate all of these points, P3 said: "You are shooing [suicidal people] away like they don't matter when this is the one time that you really need to be there for them.… [They think,] 'no one wants to talk to me when I'm feeling this way,' and then they leave and they don't… connect to any crisis hotline, and then you don't know what's happening behind the scenes. Like it's, it's bad. Like, ugh. It's weird thinking about it. Like it makes me really sad."

Furthermore, many participants worried that sending crisis resources without a support seeker's consent could not only fail to help them, but actively harm them. Many participants recounted negative reactions they received after sending a hotline link or trying to end a chat with a suicidal support seeker. For example, P6 said they once recommended a hotline after a support seeker mentioned suicidal ideation, which "made them very, very angry" and "they told [P6 to] 'call the hotline link yourself!'" Some participants refrained from sending other resources and continued offering emotional support themselves because they have encountered support seekers who come to the site specifically to avoid talking to hotlines, family members, or counselors. For example, P2 once talked to a support seeker who came to the site just to talk with someone while they were going through a suicidal crisis and explicitly said "they were not comfortable with going through helplines" because helplines could lead to unwanted intervention involving police or involuntary treatment. Similarly, P3 said many suicidal teens come to the site to avoid going to a hotline or a school counselor, which could "get the parents involved" against their will. Some participants worried about harming suicidal support seekers because of their own personal experiences with suicide. For example, P6 said their family member died by suicide after they could not find support and faced stigma when talking about their suicidal feelings to loved ones who "weren't able to empathize [and] would react very negatively, and they would say 'Why would you think that?'" As a result, P6 said that when they encounter suicidal people on the site, they "talk to them more about this in detail" and "won't really assume or react in a negative way or tell them stop."
Thus, most participants opposed the site's policy prohibiting suicidal talk, because they thought it would harm or abandon suicidal support seekers: Instead, participants continued to talk to suicidal support seekers and refrained from sending information about a crisis hotline or other external resources.

4.3.5 Participants took on individual responsibility and emotional burden when responding to suicidal support seekers. Many participants expressed feeling individually responsible for the well-being of suicidal support seekers. For example, P3 said, "I could easily say something wrong that could trigger a worse situation for them, and they will actually go ahead and, you know, [attempt suicide]… Or I could… actually bring them out of this." Participants actively wanted to help support seekers and thought that simply sending a link to a hotline is "not like an active thing to do" (P5). Instead, many participants continued talking with support seekers after they mentioned suicidality (even crisis) to continue to provide emotional support or to make sure that the support seekers actually connected with a hotline or other resources. Because they felt responsible, participants said they often felt distressed after conversations about suicidality, especially when they worried that a support seeker would harm themself after their conversation. Some participants put up boundaries to alleviate these emotional burdens, e.g., P11 ended mock chats with high severity ideation and crisis because "it's important to have those boundaries and… take care of yourself." However, a number of participants said it was often hard to maintain boundaries because of the severity of the situation and the strong emotional connections they often felt for support seekers. For example, P7 said, "somebody who I've been talking to for months and months and months, I know this person's history and their life and like become a friend to them, somebody they trust, and it's definitely hard… taking that step back and being like, the best thing for you right now is to seek somebody else's help, knowing that it's out of your hands afterwards."

Participants said they took on personal responsibility for suicidal support seekers primarily because of the site's prohibition on suicidal talk. Participants speculated that the site's hardline policy prohibiting suicidal talk is likely easier for the site, but saddles volunteer counselors with the entire responsibility for suicidal support seekers. For example, P3 said, "end of the day, if someone comes suicidal and I continue to support them, that's on me." Participants said the site's approach to suicidality does not adequately take into account the ambiguity, "blurriness" (P7), or context of the conversation, especially when the support seeker wants to talk about suicidal ideation but does not seem immediately at risk of killing themself.
Beyond the site's policy, some participants also mentioned personal reasons for feeling this responsibility, e.g., P9 said, "I know several people who have had people close to them commit suicide and I know, like, how that affects people, so I guess I kind of have a soft spot,… I'm going to try to talk to [suicidal people on the site]."

Design Opportunities
Based on these results from Phase 1, our research team identified design opportunities surfaced in the semi-structured interviews. We asked about potential challenges that volunteers face, e.g., "What makes you feel more or less prepared to respond to suicidal support seekers?" We also prompted participants to give design suggestions, e.g., "Think of two ways you would change the platform to help volunteers with conversations about suicide." Some participants gave general suggestions, e.g., P5 wanted "more guidance on how to tell if it's a [suicidal] chat or not." Others gave specific design suggestions, such as creating technology to detect suicidality or using a chatbot to train new volunteers. We compiled the suggestions, challenges, needs, and wants that participants expressed and organized them into four major design opportunity themes:

• Training. In Section 4.3.1, participants said that volunteer counselors often do not know how to respond when they encounter suicidal support seekers, and some explicitly asked for more training. Thus, one design opportunity is improving training for volunteer counselors on how to respond to suicidal support seekers.

• Real-Time Task Support. In Sections 4.3.1 and 4.3.4, participants felt uncertain identifying and responding to support seekers who talked about suicidality, and some, e.g., P3 and P9, wanted the ability to reach out to specialized volunteers for help during chats about suicidality. Volunteer counselors may need real-time guidance while chatting with potentially suicidal support seekers.

• Emotional Preparation & Support. In Sections 4.3.2 and 4.3.5, participants felt conflicted when responding to suicidal support seekers and individually responsible for their well-being. Volunteer counselors may need to be better prepared to handle the emotional burdens of talking with suicidal support seekers, as well as more emotional support after difficult conversations.

• Confidence Identifying Suicidality. In Section 4.3.2, participants said they felt unconfident and stressed when assessing whether support seekers were suicidal. Thus, another design opportunity we recognized was to help volunteer counselors feel more confident or to alleviate some burden when identifying suicidality.

PHASE 2: DESIGN

5.1 Methods
To understand online volunteer counselors' challenges, wants, and needs, we asked participants to weigh in on up to eighteen design concepts via Speed Dating, a method for getting rapid feedback on new technologies and provocative alternative futures [17]. We generated the design concepts from participants' suggestions and from our research team's ideas to address participants' needs identified in Section 4.4. Because the site is volunteer-run, many participants expressed concerns about resource constraints when discussing potential design solutions during Phase 1 of the study. So, we generated only design concepts which could be implemented on the site without significant additional labor or resources, e.g., design ideas on training volunteers built upon the site's currently available training resources. Furthermore, since one of our study aims was to evaluate the efficacy of prominent technologies proposed for suicide prevention in an online context, most of our design concepts incorporated computational or AI-based technologies, such as suicide prediction models or chatbots. We chose Speed Dating as a design method in part because it is well-suited to quickly evaluating a wide array of new technologies in a specific context [89]. We verbally explained each design concept, since visual storyboards (commonly used for Speed Dating [89]) would have been inaccessible for some of our participants with vision loss or without access to video. The order in which we presented design concepts was randomized per participant. After we presented each design concept, we asked the participant what they thought was good and bad about the idea, how they would react if it was actually implemented on the site, how they thought the concept would help or hurt with different kinds of suicidality, and other optional questions (see Appendix A for details). We asked about both good and bad aspects of each design concept since many of our design concepts were created to provoke negative reactions to identify participants' boundaries, as well as positive reactions to uncover design opportunities (this kind of provocative design elicitation is a core component of Speed Dating [89]).

How We Created the Design Concepts
We created the design concepts via a combination of participants' suggestions in Phase 1 and ideas from our research team to fulfill the design opportunities identified in Section 4.4. We incorporated suicide detection algorithms [1], conversational agents [32,55], automatic response suggestions [34], and matching algorithms [26,45] to make our design concepts implementable and to evaluate these computational technologies in our study context.

5.2.1 Training. Participants wanted more, better training (Section 4.4). P4 suggested using conversational agents "for training purposes" (Suicidality Training Chatbot), similar to [22]. We proposed using predictive models to Identify Volunteers for Training.

Table 3 (excerpt of design concept descriptions): The site uses a model to estimate the frequency of suicidal chats and informs new volunteers, so that they can emotionally prepare for such chats in general [78]. Before a chat, a model predicts whether a support seeker is suicidal and notifies the volunteer, so they can emotionally prepare before chats about suicide [78]. New volunteer counselors are barred from chatting with support seekers that a model labels suicidal, since they may be unprepared for these chats [78]. A matching algorithm evaluates volunteers and matches support seekers labeled as suicidal only with volunteers who are good at responding to suicidality. After being notified that a support seeker is labeled as suicidal, a volunteer can choose to decline the chat, limiting exposure to a potentially difficult conversation [78].

These concepts follow Steiger et al.'s [78] suggestions to "fully-disclose risks of exposure [to emotionally difficult content] up-front" and give online workers more control over their engagement with this content. Furthermore, participants suggested that new and inexperienced volunteers took on the most emotional burden when talking with suicidal support seekers (Section 4.3.2). We created Restrict New Volunteers and Match Volunteers with Suicidal People following Steiger et al.'s [78] suggestion to limit exposure to distressing content.

Suicide Detection.
Participants suggested Real-time Suicidality Prediction; e.g., P3 recommended using a technology that "tells you how, at what point to, you know, refer someone [to a suicide hotline]". Since this concept was suggested by multiple participants, addresses participants' desire to build confidence identifying suicidality (Section 4.4), and prediction models are one of the most prominent technologies proposed for suicide prevention in prior literature (see Section 2), we expanded this concept into seven specific implementations around four design variables: First, we created Choose to See Predictions and Suicide Prediction Always Visible to see what participants thought of controlling the visibility of a prediction, since some participants said that new volunteers need help identifying suicidality, but more experienced volunteers may not (Section 4.3.2). Second, we created Show Prediction to Volunteer and Show Prediction to Seeker to vary who the suicidality prediction should be shown to. Third, we created the concepts More Sensitive Prediction and Less Sensitive Prediction to explore the harms of false positives versus false negatives. Fourth, we created Predict Level of Suicidality because participants encountered a spectrum of suicidal thoughts and behaviors (see Section 4.3.1).
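As a minimal sketch of the third design variable, the snippet below shows how the More Sensitive Prediction and Less Sensitive Prediction variants could correspond to lowering or raising a classifier's decision threshold, trading false negatives against false positives. The probabilities and threshold values are made up for illustration and do not come from any real model.

```python
def label_suicidal(prob_suicidal: float, threshold: float) -> bool:
    """Turn a model's probability into a binary label at a given threshold."""
    return prob_suicidal >= threshold

# Hypothetical probabilities from a suicidality classifier for three chats.
chat_probs = {"chat_1": 0.35, "chat_2": 0.60, "chat_3": 0.85}

# "More Sensitive Prediction": a low threshold rarely misses truly suicidal
# seekers but produces more false positives.
more_sensitive = {c: label_suicidal(p, threshold=0.3) for c, p in chat_probs.items()}

# "Less Sensitive Prediction": a high threshold rarely mislabels non-suicidal
# seekers but misses more truly suicidal ones.
less_sensitive = {c: label_suicidal(p, threshold=0.8) for c, p in chat_probs.items()}

print(more_sensitive)  # {'chat_1': True, 'chat_2': True, 'chat_3': True}
print(less_sensitive)  # {'chat_1': False, 'chat_2': False, 'chat_3': True}
```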

Data Analysis.
The lead author first organized participants' reactions to each Speed Dating idea in a spreadsheet, with the participants' initial reaction to each design concept and their explanation of that reaction in separate cells. To produce numerical ratings for the Speed Dating concepts, the first four authors independently read participants' reactions to each design concept and rated it from 1 (disliked a lot) to 5 (liked a lot). These numerical ratings were not directly given by the participants, but represent the authors' translation of the participants' verbal, evaluative statements to a numeric scale. The high internal consistency among the authors (Cronbach's alpha = 0.86) strongly suggests, however, that our ratings reliably indicate which design concepts participants most and least preferred. Figure 2 presents these mean ratings. This rank-ordering of the design concepts provides a framework for discussing our primary findings in this section, which are qualitative. To surface qualitative findings, the first four authors together analyzed participants' explanations for their preferences, looking for underlying reasons why they thought a design concept would be helpful or harmful.
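The following is a small, hypothetical sketch of this kind of rating analysis: it computes Cronbach's alpha across raters (treating each rater as an "item" and each concept as a case) and the per-concept mean ratings used for rank-ordering. The rating values are fabricated for illustration; they are not our study data.

```python
import numpy as np

# Hypothetical ratings: rows = design concepts, columns = the four raters (1-5 scale).
ratings = np.array([
    [5, 4, 5, 5],
    [2, 2, 1, 2],
    [4, 4, 3, 4],
    [3, 2, 3, 3],
])

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha, treating each rater as an 'item' and each concept as a case."""
    k = scores.shape[1]                          # number of raters
    item_vars = scores.var(axis=0, ddof=1)       # variance of each rater's ratings
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed ratings per concept
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print("alpha =", round(cronbach_alpha(ratings), 2))
# Mean rating per concept, which could be used to rank-order concepts as in Figure 2.
print("concept means =", ratings.mean(axis=1))
```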

Design Results
In this section, we present findings from Phase 2, which focused on our Speed Dating design activity. See Table 3 for descriptions of each design concept presented and Figure 2 for participants' reactions to design concepts. Participants wanted emotional preparation and support, training, and guidance, but worried about AI-based technologies due to concerns about autonomy, privacy, and errors. Here, we discuss each of these design concepts in order of how participants rated them (best to worst). Participants almost unanimously wanted emotional support, but strongly preferred not to use an algorithm to warn volunteer counselors about suicidal members, due to concerns around privacy and prejudice (Section 5.4.1). Participants wanted more training on suicide, including from a chatbot (Section 5.4.2). They wanted real-time guidance from trained people, but not from a chatbot (Section 5.4.3). Finally, participants liked suicide prediction models to alleviate pressure and boost confidence, but also worried that suicide prediction models would limit volunteer counselors' discretion and that false positives would harm support seekers (Section 5.4.4).

Emotional Support and Preparation.
Participants wanted more emotional preparation for suicidal chats. Participants said the site currently offers peer support for volunteer counselors to talk with other volunteer counselors after they go through emotionally difficult conversations. Yet, the onus is on volunteers to find and connect with a peer support volunteer, which can be prohibitively difficult, especially when the volunteer counselor seeking support is already in distress. Thus, participants wanted a more direct path, especially an automatic one, to connect with emotional support resources. Participants also liked the idea of estimating the population base rate of suicidality on the platform because, at a macro level, seeing the frequency of suicidal chats could encourage the site to provide more resources to deal with this type of distress, and, at a micro level, giving this information to new volunteers during their initial training could encourage them to better prepare for these conversations. For example, P2 said that had they seen this information when they started, they would have copied down helpful resources "to have them at [their] fingertips" when they encountered their first suicidal support seeker, so that first encounter may not have been so difficult.
Participants wanted more emotional preparation, but worried about prejudging support seekers and violating their privacy. Participants were conflicted about the design ideas of predicting specific support seekers' suicidality and allowing volunteer counselors to choose whether to talk with a person that the model labels as suicidal. On the one hand, participants said that being randomly thrown into chats about suicide is emotionally jarring, so they liked the design concept because they wanted more choice over who they chat with. For example, P2 said, "there are a lot of [volunteer counselors] that could be triggered from having that particular conversation, so if they can choose not to have it, it could save that trouble for them." On the other hand, participants disliked these two design concepts because "if you give [a volunteer counselor] an idea that, you know, this person has been suicidal in the past, then [the chat will not] be as natural" and the volunteer counselor's "preconceived notions" (P3) may inhibit them from fully listening to the support seeker. Furthermore, some participants thought that automatically telling volunteer counselors about support seekers' suicidality violates seekers' confidentiality, since it shares their potentially sensitive health information without consent. Participants also did not like most design concepts which used an algorithm to read support seekers' historical chats and predict whether they were generally suicidal: they believed that chats were private and should not be read by anyone else, including an algorithm, because it "feels like a little bit of an invasion of privacy" (P9).

Training.
Most participants wanted more training, including a chatbot for suicidality training, to practically and emotionally prepare for chatting with suicidal people. Similar to Section 4.3, we found that almost all of our design study participants thought that significantly more training was necessary to practically and emotionally prepare volunteers for conversations around suicidality. However, participants differed around: (1) whether suicidality training should be required of all volunteers, and (2) how to implement this training. First, some participants thought suicidality trainings should be optional, because some volunteers may not be comfortable talking about suicide and because additional training may be a hurdle that would decrease the number of new volunteer counselors on the site. Yet, most thought that suicidality training should be required because suicidal chats happen frequently and few volunteer counselors know how to respond (see Section 4.3.1), and because additional training "will improve the quality of [volunteers], which is more important than the number of [volunteers] you have" (P3). Second, for specific suggestions around implementation, most participants asked for more extensive training around suicidality during volunteers' initial training; in addition, some asked for training "not only before you become a [volunteer counselor, but also] a refresher every, like, one or two months" (P8). Some suggested trainings to differentiate between different kinds of suicidality, e.g., passive ideation vs. suicidal crisis, and how to respond to each kind, so as to assuage some confusion and lack of confidence around identifying and responding to suicidal support seekers (see Section 4.3.2).
Almost all participants liked the idea of creating a suicidality training chatbot to emulate a suicidal support seeker for mock chats, giving volunteers more exposure and practice with conversations about suicidality without having to practice on real people. Participants said that almost all volunteer counselors currently get exposure and practice around talking about suicidality through real conversations with the first couple of suicidal support seekers they meet. Many participants said that "it was definitely, definitely hard the first time and the second time [they encountered a suicidal support seeker]," but "the third time wasn't as bad" (P2). Thus, participants wanted mock chats with a chatbot to give volunteers exposure to what conversations about suicidality may look like without having to practice on real suicidal support seekers, so that volunteers can practice what to say and not "freak out" (P7) or get stressed when they first encounter suicidal support seekers. Participants said the site currently offers mock chats with more experienced volunteers role-playing as support seekers, but these mock chats are only available to a select few volunteers, since they require labor from experienced volunteers, and there are currently no mock chats about suicidality. Although participants thought mock chats with real people may be higher quality than those with a chatbot, participants ultimately liked using a chatbot because "it would save a lot of manpower that goes in training [volunteer counselors]" (P2) and would allow many more volunteers to have mock chats, which many participants thought was the best way to train new volunteers. Finally, some participants doubted that mock chats with a chatbot would be as effective as mock chats with a real person, because they were unsure of how volunteers would get feedback on their performance: Participants disliked the idea of creating a second chatbot to automatically generate feedback because a "person would be a better judge of how the [volunteer counselor's] abilities are than a chatbot" (P6).
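To make the training-chatbot concept concrete, here is a minimal, scripted sketch of an agent that role-plays a support seeker at a chosen severity level so volunteers could rehearse responses without a real person. The class name, severity labels, and scripted lines are placeholders drawn loosely from the mock chat scripts in Appendix B; this is not the chatbot participants evaluated.

```python
# Placeholder opening lines keyed to the severity levels used in the mock chats.
SCRIPTS = {
    "low_severity_ideation": [
        "I just feel like I wanna fall asleep and not wake up",
        "It's not like I actually wanna do that, I just get those thoughts",
    ],
    "high_severity_ideation": [
        "Recently I've been thinking of ending it all",
        "I have thoughts about overdosing, but I never make a specific plan",
    ],
    "crisis": [
        "I'm planning on ending my life",
        "I just wanna get it over with",
    ],
}

class MockSeekerBot:
    """Very simple scripted agent that role-plays a suicidal support seeker."""

    def __init__(self, severity: str):
        self.lines = list(SCRIPTS[severity])

    def reply(self, volunteer_message: str) -> str:
        # A real training agent would condition on the volunteer's message;
        # this sketch simply emits the next scripted line.
        if self.lines:
            return self.lines.pop(0)
        return "Thanks for listening. It helps to talk."

# Example training session at crisis severity.
bot = MockSeekerBot("crisis")
print(bot.reply("Hi, I'm here to listen."))
print(bot.reply("Are you thinking about suicide right now?"))
```

A deployed version would also need a feedback mechanism, which participants preferred to come from a person rather than a second chatbot.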

Real-Time Task Support.
Participants wanted more real-time guidance, but disagreed on professional support. Participants liked the idea of having trained staff on hand because they felt it could help support seekers who are suicidal and would not connect with crisis resources outside of the site. P3 said, "we lose a lot of [support seekers] in that transition [from the site to external crisis resources, so] trained staff who can direct them in the right location so they know that they're not alone… would be a great idea." P2 said, "I think it would be able to save a lot of lives." Participants also liked the idea of support staff helping to alleviate volunteer counselors' own emotional burdens: For one, it would create a clear process for responding to suicidal talk, i.e., having the support staff guide the conversation, which would allay confusion about how to respond when someone mentions suicide (see Section 4.3.2). For another, participants said that directing a suicidal support seeker to support staff would not feel like abandoning them the way simply ending the conversation does, which would alleviate much of the individual responsibility that volunteer counselors feel and allow them to better respect their own emotional boundaries (see Section 4.3.5).
Participants were conflicted over whether they wanted professional support staff, such as licensed counselors or crisis hotline workers, or whether such staff should simply be volunteers with additional training. Some worried that allowing professional support staff onto the platform would signal a shift in the website's core approach to suicidality and would change how suicidal support seekers engage with the site. Similar to Section 4.3.3, some thought it would betray many suicidal support seekers who come to the site to avoid medicalized or professionalized resources: For example, P2 said, "maybe they weren't reaching out to emergency hotlines for a reason, but I just directly connected them to something that they were trying to avoid in the first place." Others thought that "when you start offering crisis resources, you're going to see people coming to [the site] to access those crisis resources" (P7) and that the site would be too under-resourced and unprepared to respond to this influx of suicidal support seekers. Some participants thought that it would violate the support seeker's privacy and trust if volunteer counselors handed off conversations about suicidality to professional support staff. For example, P10 said, "if this person who's been confiding personal stuff to me suddenly sees someone taking over, it will be a huge betrayal." However, others said that professional support may be acceptable as a resource strictly for volunteer counselors or with the explicit consent of the support seeker, i.e., "if there were actual trained people in that sort of role that you could go to as a [volunteer counselor] and ask for guidance… or if you could say to the [support seeker], like, 'do you want to be referred on to a professional?' and then sort of refer them on to them in that way" (P5).
Most participants disliked real-time automated response suggestions since they lacked a "human aspect." Participants disliked the idea of a chatbot recommending suggestions about how to respond to suicidal support seekers in real time. Many participants worried that volunteer counselors would rely on automated response suggestions, which would take the "human aspect" (P8) out of the site and make volunteers' responses too robotic: lacking emotion, empathy, or contextual specificity. For example, P8 said using a chatbot to recommend responses "doesn't really let you think on your own [since volunteer counselors would] basically just copy-paste what [the chatbot] recommends." Participants said that support seekers come to the site to talk with real people, i.e., "they want that human approach" (P8); so, if support seekers knew they were getting responses from a bot, they would feel angry or as if they were less deserving of care. Prior work reinforces the importance of this "human element" for increasing rapport between counselors and patients [48,83].

Suicide Detection.
Participants were conflicted about how much they wanted a suicide detection model to influence volunteers' decision-making when identifying and responding to suicidal support seekers. Most participants thought that a suicide prediction model "could help [the site] in enforcing their policy" (P2), i.e., to pressure volunteer counselors to end chats and send hotline information when support seekers mention suicidality. Participants were conflicted about using prediction models like this. Some participants liked it, because they thought that volunteers could cause more harm when continuing conversations with suicidal people (see Section 4.3). For example, P3 said, "a lot of [volunteer counselors] don't really encourage people to get better help… [a suicide detection model] is like, nice to remind them that [the site's] policy is what it is, you cannot be taking a crisis chat, you're not trained for it." Similarly, a few participants did not like the design concept of a suicide prediction technology that could distinguish between severity levels, because they thought it would embolden volunteers to continue less severe suicidal conversations. Participants also liked the general idea of suicidality prediction technology to "take pressure off" (P9) when responding to suicidality. Participants especially liked the design concept where the suicide prediction is automatically shown to the support seeker, because they said it takes the burden off volunteers to say something when they think a support seeker is suicidal. For example, P3 said support seekers often get mad at them for suggesting a therapist or a hotline; so, they'd prefer if support seekers "get mad at a robot" instead.
On the other hand, most participants disliked suicide prediction models because they worried these models would not properly incorporate nuanced context into their predictions and would restrict volunteers' discretion. For example, P9 said she disliked predictive models because, in chats with potentially suicidal talk, volunteer counselors "have to take in the context of the conversation and kind of make that decision [about how to respond] on [their] own." Participants thought suicide prediction models would be especially erroneous because suicidal chats often included nuanced speech like jokes, ambiguity, or borderline suicidal ideation (see Section 4.3.1). Some participants also feared predictive models incorrectly labeling support seekers as suicidal, which could agitate them or convince them that they were more suicidal than they actually were, echoing concerns in Section 4.3.2. This runs counter to prior work that argues for raising the base rates of suicide prediction models on the grounds that false positives are relatively harmless [35,36].
Despite being conflicted about using suicide prediction models to pressure volunteers to follow the site's policy, many participants liked using predictive models as informational tools when identifying suicidality. Participants also felt pressure when identifying suicidality (Section 4.3.2), so they liked the idea of a prediction model to help them "gauge the conversation" (P11) and feel more confident in their decisions, which they said "might be like a relief" (P2).

DISCUSSION
6.1 Theoretical Implication: Online Suicidality Support as a Potential Alternative to Medical Systems
Medical resources like hospitals, psychiatrists, therapists, and hotlines have traditionally been the primary form of care for people experiencing suicidal distress. Yet, our findings in Section 4.3.4 suggest that some suicidal people may seek support online specifically to avoid hotlines, therapists, or even family members, for fear that these resources may cause harm, be ineffective or stigmatizing, or prompt unwanted intervention (cf. [64]). We envision that online sites could potentially offer safe spaces for people experiencing suicidality beyond the medical system. However, as we find in Section 4.3, insufficient guidance or resources for online support providers and restrictions on suicidal talk also restrict the possibility of good demedicalized care for suicidal people online. Human infrastructures for mental health care offer some of the most lifesaving, yet overlooked, forms of care for suicidal people [63]. Thus, HCI and related computational fields should work toward these ends, through two research directions following from our work: First, our work begins to understand what online suicidality support looks like from the perspective of support providers. Future work should continue in this direction, especially exploring sites beyond those dedicated to suicidality (e.g., Reddit's r/SuicideWatch [19]). Second, prior work has focused on creating technologies like suicide prediction models or chatbots to automate suicide prevention work (see Section 2). Instead, our work begins to evaluate and design technologies which support, rather than supplant, online human care infrastructures for suicidality. Below, we offer some initial design suggestions to help online suicidality support providers. However, there is much work to be done to understand and improve how people seek and give support for suicidality online, especially for people outside the U.S. and marginalized people who face high rates of suicidality, like trans, queer, and Indigenous people [79,85].
6.2 Design Suggestions
6.2.1 Use conversational agents for training, not for real-time suggestions. Recent work has used large language models, such as GPT, to give live suggestions to peer supporters [32,75].
Our participants had positive initial reactions to real-time suggestions, because they thought volunteer counselors needed more guidance about how to respond to suicidality. However, upon further consideration, participants ultimately disliked this technology for the following reasons: First, they worried that support seekers, who came to the site to talk to a real person, would feel invalidated or as if nobody cared about them if they knew they were just talking to a chatbot (Section 5.4.3). Second, participants worried that AI-based technologies would produce errors, which could be life-threatening in high-stakes contexts such as suicide prevention, cf. [87] (Section 5.4.4). Third, participants worried about privacy violations when creating technologies with chat data from people experiencing mental distress or suicidality (Section 5.4.1). Finally, participants worried automated response suggestions would encourage more robotic and apathetic responses (Section 5.4.3). This aligns with recent work, which suggests that people do not like receiving therapy from a chatbot or an automated digital technology that "feels weird [and] empty" [32] and may actually increase risk of suicide [77]. Instead, prior work has proposed using chatbots to simulate conversations with suicidal support seekers to train suicide hotline volunteers [22]. Our participants liked the idea of using similar technology on the platform, because the site did not train volunteers about suicidality beyond informing them about the policy to end the chat when a support seeker mentions suicidality. As a result of this lack of training, participants relied on their personal experiences or folk knowledge built up through trial and error in conversations with real suicidal people on the site (Section 4.3.4). This led volunteers to feel overwhelmed, guilty, and underprepared when encountering someone experiencing suicidality, especially for the first time (Section 5.4.2). It also led participants to respond very differently to similar kinds of suicidality (Section 4.3.3). Some responded per best practice, e.g., by asking direct questions to assess suicidality (Section 4.3.2). However, others responded in ways that put suicidal support seekers at risk, e.g., by failing to send crisis resources during mock chats depicting active suicidal crisis and instead trying to "distract" the suicidal person or "talk them out of it" on their own (Section 4.3.3). A training chatbot would help address many of these problems by allowing the site to provide experiential training for volunteers, improving emotional preparation and task-oriented suicide response skills without experimenting on real suicidal people. Prior work has shown that even minimal (1-hour) suicide gatekeeper training with role-playing can help non-clinicians, like family and friends, support suicidal people in their lives [56]. Similar training could be implemented with a chatbot designed to emulate a suicidal support seeker. Some participants liked mandating additional training, because it would increase the overall quality of suicide support on the site, even if more rigorous training raised barriers for recruiting new volunteers.
6.2.2 Suicide detection technology could alleviate stress, but may not significantly improve volunteers' abilities to identify suicidality and could be used to impose harmful policies. Prior work has proposed using suicide detection technology to help suicide gatekeeping on social media [25,71] and to assist clinicians and helpline workers when responding to
suicidal support seekers [31,36]. Our work is novel in that we explore suicide detection technology in an online mental health support site. Our participants liked suicide detection technology because it would help them feel more confident, alleviating some of the pressure of identifying and responding to suicidal people (Section 5.4.4). However, results in Section 4.3.2 suggest that participants could accurately distinguish between degrees of suicidality without a predictive model. If our participants are representative of most volunteers on the site, then suicide prediction models may not significantly improve their ability to assess suicidality. However, we speculate that our participants may be more experienced than average, and less experienced volunteers could benefit from predictive technologies. We discuss potential limitations around representativeness in Section 7.
Participants were also concerned with potential negative uses of suicide detection technology. Most participants thought that the site would use suicide detection algorithms to impose its policy on volunteers, forcing them to end chats and send crisis resources to suicidal support seekers. A few participants liked this, because they worried that untrained volunteers could harm suicidal support seekers if they tried to support them. This concern is important: For example, some participants did not suggest crisis resources to support seekers in mock chats depicting suicidal crisis, which is likely harmful. However, most participants disliked using suicide detection technology as a moderation tool because they thought that the site's policy harms support seekers by leaving them abandoned. They thought volunteers should have more discretion to offer support.
In sum, our findings suggest that suicide detection technology may not be aligned with the needs of online volunteer counselors and could introduce additional harms when used as a moderation tool. We suggest that clearer policies that are guided by best practice and aligned with our suggestions around safe spaces below, along with improved training and real-time guidance, could substitute for, or be more effective than, suicide detection technology. That being said, if sites decide to use suicide detection technologies, we suggest that they only use them to provide information to human decision-makers and not to automate or mandate predefined ways to respond to suicidality. This would help volunteer counselors feel more confident in their decisions and alleviate some pressure, while preventing possible harms of automated decisions.
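To illustrate the distinction we are drawing, the following minimal sketch contrasts an advisory-only use of a detection model, which only surfaces information to the volunteer counselor, with an automated-enforcement use that ends the chat on the volunteer's behalf. The function, modes, and threshold are illustrative assumptions, not a proposal for any specific platform.

```python
def handle_prediction(prob_suicidal: float, mode: str = "advisory", threshold: float = 0.7):
    """Illustrative handling of a suicidality prediction.

    'advisory' mode only surfaces information to the volunteer counselor;
    'enforce' mode automates the site's policy (end chat, send crisis resources),
    which participants worried removes volunteers' discretion.
    """
    flagged = prob_suicidal >= threshold
    if not flagged:
        return {"action": "none"}
    if mode == "advisory":
        return {
            "action": "notify_volunteer",
            "message": "The model thinks this chat may involve suicidality; "
                       "use your own judgment about how to respond.",
        }
    # mode == "enforce"
    return {"action": "end_chat_and_send_crisis_resources"}

print(handle_prediction(0.85, mode="advisory"))
print(handle_prediction(0.85, mode="enforce"))
```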

6.2.3 Sites should reconsider restrictions on talking about suicidality and allow more safe spaces for support. Most sites, including the one we studied, restrict how users talk about suicide (see Section 2). Some participants liked these restrictions, since they protect support seekers from being harmed by untrained volunteers and protect volunteer counselors from potentially difficult conversations. Most participants, however, disliked the policy, since it puts volunteers in a difficult position: Either volunteers could follow the site's policy and abandon support seekers when they are experiencing suicidality, or they could disregard the site's policy and take on the responsibility and emotional burden of supporting the suicidal person on their own (Section 4.3.5). The National Institute of Mental Health recommends that when someone "is suicidal, do not leave [them] alone" [50,60,72]. Yet, many online platforms, including our research site, encourage exactly that.
Despite the site's policy, many participants actively listened to support seekers with less severe suicidal ideation and stayed with people in crisis until they connected with a hotline, because they wanted to create a "safe space to be in" (P7) for people to talk about their suicidal feelings. Online safe spaces have been invaluable for marginalized people, like trans folk [73], to share information and social support. Yet, barriers like lack of training and restrictions on suicidal talk preclude these safe spaces for suicidal people. Thus, we suggest that online platforms, especially sites where people come to talk about mental health, should reconsider their restrictions on suicidal talk and allow for more safe spaces for people to talk about suicidality.
To clarify, we do not suggest that platforms allow people to talk openly about suicidality in any venue. Certain suicidal talk can be harmful, e.g., by encouraging people to self-harm or creating contagion effects that worsen others' suicidality [49,59]. Rather, safe spaces should be carefully crafted and supported to allow for discussions of suicide while limiting potential harms. We suggest a few preventative considerations, based on our findings: First, participants said that the site's policies were too vague and not applicable to real chats on the site, which caused volunteers more stress (Section 4.3.5). Instead, online sites should clarify what kinds of suicidal talk are allowed or not, including specific examples of what severities of suicidality can or cannot be discussed and whether potentially dangerous suicidal talk is allowed. For example, r/SuicideWatch prohibits encouraging suicide and discussing methods, but allows all other suicidal talk [70]. Second, volunteers should be adequately trained; see Section 6.2.1 above for further discussion on how to improve training. Third, sites should have trained staff specifically for suicidality support, as discussed in Section 5.4.3. Some participants suggested including professional support staff on the platform. However, most disagreed, since this could marginalize support seekers who came to the site to avoid medicalized resources. In addition, given the sporadic occurrence of suicidal talk, it might not be practical to always have external staff available when they are needed. We suggest training specific volunteers in-house, rather than bringing on professionalized staff from elsewhere. Finally, online volunteers take on emotional burdens when supporting suicidal people, echoing prior work on helpline volunteers [63]. So, sites should improve emotional preparation and support for volunteers. Our research site offered emotional support resources already; however, participants offered design suggestions to make these more accessible and widely advertised (Section 5.4.1). Furthermore, participants thought that sufficient experiential training and exposure to suicidality in simulated environments could help volunteers prepare emotionally for chats with real suicidal people, e.g., via a suicidality training chatbot (Section 6.2.1).

LIMITATIONS
First, our results may be limited by selection bias and small sample size. Although our study was open to all volunteers on the site, our recruitment materials stipulated that the study would be about suicidality. It is likely that people who answered the call had more interest and experience around suicidality than average volunteers on the site. Although our results in Section 4.3.2 suggest that volunteers could accurately distinguish degrees of suicidality, larger-scale, more representative studies may be necessary to make more definitive claims. Second, instead of using open-ended co-design methods that would allow participants to generate design ideas on their own, our research team took a more hands-on role in generating design concepts. These research methods may deprive participants of some agency, and our results may not fully reflect our participants' firsthand perceptions and preferences. Third, participants did not quantitatively rate the design concepts in Section 5; we authors used participants' descriptions to assign ratings. These ratings may not reflect participants' exact preferences about each concept; they are intended to roughly compare participants' preferences and guide the qualitative findings in Section 5.4. Finally, we intentionally incorporated the lived experience of the lead author throughout our study (see Section 3.2). Suicidality is a complex phenomenon influenced by each person's identities and the stresses they face. The author's identity as a white, queer, U.S. citizen influenced how we conceptualized and depicted suicidality in this work. Future work should include the lived experience of other suicidal people, including Black, Indigenous, and non-U.S.-based people.

CONCLUSION
In this work, we explored how online volunteers provide support to people experiencing suicidal distress. Through semi-structured interviews and mock chats with 11 volunteer counselors, we found that online volunteers navigated restrictive policies and limited support when providing care to suicidal people, taking on responsibilities and emotional burdens in the process. Through design activities, we found that prominent technologies proposed for suicide prevention in prior work may be misaligned with online support providers' needs. We suggest ways to redesign these technologies to be better aligned, and advocate for future work in HCI and computational fields to improve technological infrastructures for people who provide suicidality support online.
• Choose to Talk with a Suicidal Person: After being informed that a support seeker is suicidal, the volunteer counselor can choose to converse with that person or decline the chat. This idea and the next two were based on Steiger et al.'s suggestion to limit exposure to emotionally difficult content, especially for new workers [78].

Figure 1: Participants' Evaluations of the Severity of Mock Chats. The x-axis represents the intended severity of each mock chat, and the y-axis represents the mean of participants' evaluations of the severity of each mock chat as rated by the authors. Means with different superscripts differ from each other (p < .001).

Figure 2: Participants' Rank-ordering of Speed Dating Ideas. Authors' mean numeric ratings of participants' verbal reactions to each Speed Dating idea they were shown, ranked from most to least preferred, where 5 = like a lot, 4 = like somewhat, 3 = like and dislike equally, 2 = dislike somewhat, 1 = dislike a lot. See Table 3 for descriptions of design concepts.

B MOCK CHAT FULL SCRIPTS

B.1 Low severity non-crisis ideation

Mock Chat 1a (Low severity non-crisis ideation): Hi Ever since daylight savings time I keep waking up in the middle of the night I stopped going out cuz I'm too tired But then I just feel lonely I just feel like I wanna fall asleep and not wake up Sometimes I listen to music I journal Most of the time I just scroll on Instagram tho tbh When I dont sleep well for long enough my brain gets all foggy And I get these thoughts like I wanna d1e u know? Not like I actually wanna do that, I just get those thoughts But it kinda helps to talk actually

Mock Chat 1b (Low severity non-crisis ideation): Hi I've been feeling bad lately I feel like I wanna fall into a void I told my boyfriend I think about dying sometimes and when I was younger I used to cut But I don't do that anymore He was like surprised and kind of upset about it In the moment, I tried to comfort him But now I feel like I'm the one who has these feelings, so why should I comfort him Does that make sense? It's not like I actually wanna do that, I just get those thoughts I just told him because it helps to talk sometimes

B.2 High severity non-crisis ideation

Mock Chat 2a (High severity non-crisis ideation): Hi I'm Pierce I'm having trouble with my friends Yesterday I told my friend sometimes I feel like I wanna d1e and they got worried But then the next day, it was like nothing ever happened So It feels like they weren't listening It's just hard to talk about I have thoughts about overdosing, u know? but I never make a specific plan for how I would actually do it And I would never go through with it anyways I don't know I hate feeling like this Sometimes I just hate my life

Mock Chat 2b (High severity non-crisis ideation): Hi I don't like my life I'm trying to make it but like barely I recently started listening to Green Day I love them I wish I was born in that era instead Like I wish I wasn't alive now I think I would've been genuinely happier then Recently I've been thinking of ending it all But when I listen to music I feel a lot better I don't want to talk to a therapist I've had really bad experiences with them Idk what do you think I should do?

B.3 Crisis (Severe content warning.)

Mock Chat 3a (Crisis): hi I'm Pierce I dont really know why im here Im in a bad spot right now and I wanna talk to someone but I also don't I dont know why im feeling like this I have a test tomorrow but life is too much I've been waiting for my roommate to go to bed so i can go upstairs and end it all But he's still up I wanna get it over with But I also feel bad about how my family and my roommates will feel after I just don't wanna feel like this anymore I dont know what to do

Mock Chat 3b (Crisis): hi I'm Pierce I'm planning on ending my life But I'm really worried about screwing it up I have all the stuff I need I just don't wanna end up paralyzed or in pain I'm in a lot of pain right now Emotionally and physically And I just don't wanna feel like this anymore I dont really see whats wrong with ending it if Im gonna feel like this forever

Table 1: Participants' Self-disclosed Demographics, Training, and Personal Experience with Suicide.

Table 3: Design concepts shown during the Speed Dating activity of Phase 2, grouped by theme: training, real-time task support, emotional preparation and support, or suicide detection. For extended descriptions of each design concept, see Appendix A.

• Restrict New Volunteers: New volunteer counselors are barred from chatting with support seekers that a model predicts are suicidal.
• Match Volunteers with Suicidal People: Volunteer counselors are automatically evaluated on how good they are at responding to suicidal support seekers. A matching algorithm automatically pairs good volunteer counselors with support seekers that a model predicts are suicidal.

A.1.4 Suicide Detection.
• Real-time Suicidality Prediction: A machine learning model uses chat data to give real-time predictions about whether a chat contains suicidal talk. The following design variations describe specific ways to implement this general concept and how predictions are presented to participants:
- Choose to See Predictions: Volunteer counselors can choose to see the suicide prediction model's prediction, but it is hidden otherwise.
- Suicide Prediction Always Visible: Volunteer counselors cannot choose to see the suicide prediction model's prediction; it is always visible.
- Show Prediction to Volunteer: When the suicide prediction model labels a support seeker as suicidal, the volunteer counselor is notified and encouraged to refer the support seeker to crisis resources and end the chat.
- Show Prediction to Seeker: When the suicide prediction model labels a support seeker as suicidal, the support seeker is notified and encouraged to go to crisis resources and end the chat.
- More Sensitive Prediction: The decision threshold for the suicide prediction model is turned down so that it rarely misses support seekers who are truly suicidal, but often labels support seekers as suicidal when they are not.
- Less Sensitive Prediction: The decision threshold for the suicide prediction model is turned up so that it rarely labels support seekers as suicidal when they are not, but often misses support seekers who are truly suicidal.
- Predict Level of Suicidality: A machine learning model uses chat data to give real-time predictions about the severity of suicidality present in a conversation, e.g., non-crisis suicidal ideation, crisis, etc.