Negotiating Sociotechnical Boundaries: Moderation Work to Counter Racist Attacks in Online Communities

Online communities are susceptible to racist attacks, even when community policies explicitly prohibit racism. Drawing on the concept of symbolic boundary, we explored how community members sustained their communities against the perpetuation of racist logics and practices on Reddit. We drew on trace ethnography to analyze conversations about crime in two city subreddits (i.e., r/baltimore and r/chicago). The findings illustrate that community boundaries were rendered fragile by race baiting posts, covert racism, and racist brigading. At the same time, our research highlights that moderation efforts maintained and established institutional, cultural, and geographical boundaries to combat racist attacks. We discuss boundary as a design technique for building safe spaces for community members. Content warning: This work contains racist quotes that can upset or harm some readers.


INTRODUCTION
Racism, broadly construed, is defined as prejudice against others based on racial or ethnic categories [60]. In the contemporary United States (US), racism is understood as a combination of "arrangements, mechanisms, and practices" that sustain White privilege across various societal levels, such as housing, healthcare, and education [7]. Additionally, these racist logics are embedded within Internet-enabled and digital platforms, significantly mediating people's routine everyday experiences. From facial recognition platforms that do not see dark skin to the technologies used by the justice system that are biased against People of Color, we continue to see technology perpetuate racism and reinforce racist logics [4]. More specific to the context of our work, sociotechnical systems like online communities normalize and perpetuate racism and racist structures as "acceptable web-based knowledge" [29,33].
Our work focuses on a specific kind of sociotechnical system: online communities. Online communities were meant to serve as spaces where people could commune with others around their multiple and intersecting identities, interests, and more [15,32,34]. They have even been found to provide people with social support in navigating life changes [27,62] and in coping with traumatic events [3,57]. Yet, these same spaces have also served as sources of harm in how they perpetuate and sustain racist logics [29,44]. Findings demonstrate that the boundaries of online communities, as spaces collectively constructed by in-group members, are vulnerable to attacks and disruptions from out-group members [15]. Here, the boundary of an online community is a social concept that community members collectively construct through interaction; community members use the symbolic boundary as an identity to differentiate "we" from "others" [71]. For example, an online community's boundary could be the governance rules or policies defined by its members, which characterize what should or should not be done in the community. Racist attacks can violate community policies and come to dominate the community's culture. We wanted to understand how community members negotiate these boundaries against racist attacks.
In online communities, moderation is often charged with maintaining the boundary of these spaces. Content moderators, in collaboration with members of their communities, work to maintain the symbolic community boundaries of the spaces they are responsible for or members of. For example, moderation work deals with racist content generated by users, such as posts or links to external resources, as such content can disrupt the identity of the spaces moderators govern. In the HCI community, several lines of inquiry focus on how communities collectively build their identities through discourse framing or technology-supported strategies, such as community identity re-claiming (e.g., [15,18]), misinformation gatekeeping (e.g., [30,35,63]), narrative framing in social movements (e.g., [64,74]), and moderation work for problematic content (e.g., [5,11,54,56]). Building on this prior knowledge, our work aims to understand how moderation work on Reddit negotiates community boundaries against racist attacks.
We collected conversations about crime in two city subreddits on Reddit: r/baltimore and r/chicago. We chose crime as the context because people engage in conversations about crime on a daily basis. It is important to note that the research initially explored how online community spaces contribute to people's fear of crime; racialized narratives emerged as a salient topic during the data analysis process. Through a trace ethnography study, we found that the boundaries of the subreddits were fragile due to race baiting posts, covert racism, and racist brigading. To combat racist attacks, moderators and community members employed different moderation strategies to negotiate the subreddits' institutional, cultural, and geographical boundaries, such as locking race baiting posts, re-enforcing culture-based norms, and building geo-fences to constrain brigading. Ultimately, our discussion centers on the concept of boundary as a technique to counter inter-community invasions. This research contributes to the field by conceptualizing three distinct boundary negotiation approaches in moderation work: institutional, cultural, and geographical.

RELATED WORK
To address how moderation work negotiates community boundaries against attacks, we situate this work in the context of sociotechnical systems and discuss how racism perpetuates in online communities, creating unsafe spaces for People of Color. We draw on the concept of symbolic boundary, defined as a shared identity among community members that differentiates themselves from others. This concept provides a lens for understanding how moderation work differentiates racist logics from the community culture and combats racist invasions.

Understanding Platformed Racism Through the Lens of Symbolic Boundary
Matamoros-Fernández demonstrates that racist narratives and behaviors dualistically shape our offline and online worlds through "platformed racism" [45]. That is, sociotechnical systems enable the construction and propagation of racist ideologies that impact people's lived experiences [45]. This work reflects the spirit of sociomateriality scholarship in highlighting how online experiences have real, material impact [50]. In an attempt to understand platformed racism, scholars have started to explore how platforms' sociotechnical mechanisms, such as culture, policies, business models, and design, enable and amplify racist narratives [13,29,44,46]. For example, Hokka found that the design of YouTube's fan groups created intimate sub-communities for racist creators and their "like-minded" followers; YouTube's business model, driven by capital, created a "neoliberal" culture that normalized racism as free speech [29]. Moreover, a study of an online racist incident in Australia showed that the governance of social media platforms (i.e., Twitter, Facebook, and YouTube) normalized racist humor and abuse [45]. Altogether, racial bias can be embedded in sociotechnical systems, reifying established power structures and furthering the oppression of marginalized and underrepresented minority groups [28]. Through various sociotechnical mechanisms, racists can establish legitimacy and dominate the logics of online spaces. They can disrupt the boundaries of online communities and make these spaces unsafe for others [43], such as through brigading [17].
The boundaries of communities are established through symbolic rhetoric. Lamont and Molnár defined symbolic boundaries as the distinctions drawn by individuals or groups to organize and differentiate phenomena, such as "objects, people, practices, and even time and space" [38]. Symbolic boundaries for online communities reflect the meanings "people attribute to their participation in digital formations and how these meanings result in a shared sense of 'we-ness'" [71]. Thus, the boundary of an online community reflects a shared identity that community members collectively and dynamically construct through interaction and use to differentiate themselves from others. Lamont and Molnár argued that symbolic boundaries help maintain "cultural, institutional and social" differences between one group and another [38]. For example, on one sub-forum, punk music fans kept posting typical threads and socializing with others to construct the boundary of their community, empower their identity, and represent how they see themselves as physical and social beings [75]. The creation of symbolic boundaries can cause conflicts and violations between groups, highlighting distinctions between them [67].
In the HCI community, a body of studies close to boundary work has explored how group members collectively construct shared identity through sense-making and norm enforcement [15,17,18]. According to social identity theory, social groups function through individuals defining their self-concept based on the characteristics of a shared social identity [55]. Thus, individuals try to maintain the status of the social groups they belong to, in order to maintain their self-concept [43]. In this sense, social identity and symbolic boundary share the meaning of defining "in-group" members and differentiating them from "out-group" members. Nevertheless, different from social identity, symbolic boundaries substantiate distinctions between different groups or communities. As Williams argued, "using symbolic boundaries in this way allows us to focus on how meanings are created, activated and diffused among community members, rather than on some vague, ephemeral sense of a shared something (be it a straightedge or gamer identity, or whatever)" [71]. Symbolic boundary also highlights conflicts between groups, which helps us understand how "out-group" members break through a group's boundary during racist attacks. Thus, we draw on the concept of symbolic boundary to explain how people perpetuate or combat racism in online communities.

Moderation As Community Boundary Negotiation to Combat Racism
In online communities, moderation work plays a role in defining boundaries by building community norms and eliminating anti-norm content and behaviors [25]. More importantly, while moderation is often a formalized activity, the actual work of moderating content and maintaining community boundaries can be a collaborative activity by a social group, in this case, the members of a particular community. Leavitt et al. studied how Reddit users integrated authentic information and carefully employed upvotes to increase the visibility of crucial information during crisis events [41,42].
Geiger described a collective action to develop blockbots to counter harassers [23], where users could subscribe to different blockbots. Scholars have also made efforts to develop and test moderation tools to better serve vulnerable users. For example, the work of Blackwell et al. highlighted the mechanisms of online harassment, arguing that most tools for mitigating online harassment did not take into account the experiences of vulnerable groups [5,6]. To address this, Blackwell et al. developed HeartMob, a tool for tagging and classifying the language of harassment in online contexts [6]. Jhaver et al. found that blocklists were perceived as unfair by some users because the criteria for identifying spamming harassers varied [31], so the blocklists could not meet every user's needs. Brewer et al. designed a moderation intervention tool on Twitch, the GLHF pledge, which could assign specific rules to individual users [9].
In addition to moderation work, prior literature has shown that community members collectively constructed safety related to identity [16,18,44], misinformation [30,63], and social rights [64,65,74] through discourse framing or sociotechnical solutions. During the 2013 Boston Marathon bombings, the spread of misinformation became a significant problem as people tried to make sense of real-time and incomplete information on social media [30]. To deal with misinformation, public users reported that they reflected upon their past information-sharing experiences and applied different strategies, such as retracting or deleting [30]. During the Black Lives Matter movement, Stewart et al. investigated the frame articulations of anti-BLM and pro-BLM groups and examined how hashtags served a gatekeeping function, namely, how they played a role in access to the movement [65]. Taken together, these works outlined collective strategies that pushed back against dominant narratives and made online spaces safer.
Our work builds on prior work by examining how people are working to navigate and mitigate those harms, in this case, by conceptualizing the ways that members work to maintain their boundaries against racist attacks.

A BRIEF INTRODUCTION OF REDDIT
The work focuses on how moderation work negotiates symbolic boundaries against racist attacks. We chose Reddit to examine this. Today, Reddit is one of the largest and most frequently visited online community platforms. Users (Redditors) can create discourse communities (subreddits) on myriad topics. Within subreddits, Redditors can engage in various conversations through text, hyperlinks, and images. Content produced can be upvoted or downvoted by Redditors, which often dictates what becomes visible through different post order views and filtering mechanisms afforded by the platform, e.g., what is popular and what is new.
According to self-reported surveys, the gender and ethnicity distributions on Reddit skew male and White [59]. In 2017, about two-thirds of US Reddit users were male, and in 2016, about 70% were White non-Hispanic. Several studies have examined the White masculine culture on Reddit. For example, Adrienne Massanari described the environment and culture of Reddit as "toxic technocultures" that supported anti-feminist and misogynistic activism [44].
Reddit creates policies and rules to regulate user behaviors. Fiesler et al. described Reddit's rules as three-tiered: Reddit policies, Reddiquette, and subreddit rules [22]. Reddit policies contain the user agreement, the privacy policy, the content policy, moderator guidelines, etc. They dictate that the rules of subreddits should not violate site-level policy. More specifically, the content policy lists eight rules that address forbidden behaviors on Reddit, and it describes several ways to enforce these rules (e.g., asking users to delete offending content, suspending or removing accounts, removing content, adding restrictions to or banning Reddit communities) [52]. Reddiquette is "an informal expression of the values of many Redditors," and it is written by Redditors rather than site operators [53]. Reddiquette contains two types of guidelines: what to encourage users to do and what to encourage users not to do. The third level involves the specific rules of subreddits, which are created and modified by subreddit moderators. These subreddit rules should fit the site policies and regulate user behaviors within subreddits.
There is a range of mechanisms to assist with moderation work on Reddit. First, Reddit has volunteer moderators who regulate its subreddits [47]. Second, Reddit uses automated moderators to help human moderators, and moderators can adopt multiple automatic moderation tools [10]. The most popular is AutoMod, which scans for words and phrases input by human moderators. Third, users themselves can moderate content by voting on posts and comments [39]. These voting mechanisms grant users the power to moderate content [32]. Via voting, users, posts, and comments receive scores, which are called "karma scores" on Reddit.
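The core matching behavior of a keyword-scanning tool like AutoMod can be sketched in a few lines of Python. This is a minimal illustration, not AutoMod's actual implementation (real AutoMod rules are written as YAML configuration with many more matching options), and the phrase list here is hypothetical:

```python
import re

# Hypothetical moderator-supplied word/phrase list, standing in for the
# words and phrases human moderators would configure.
BANNED_PHRASES = ["race bait", "brigade here"]

def flag_comment(text: str) -> bool:
    """Return True if the comment contains any banned word or phrase,
    approximating AutoMod's case-insensitive keyword scanning."""
    lowered = text.lower()
    return any(
        re.search(r"\b" + re.escape(phrase) + r"\b", lowered)
        for phrase in BANNED_PHRASES
    )
```

A flagged comment would then be queued for removal or human review, depending on the action a moderator configures for that rule.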
A karma score is the number of positive votes (upvotes) a post or comment receives minus the number of negative votes (downvotes) it receives. A high karma score can increase the visibility of a post or comment, as Reddit's default sorting algorithms rank higher-scoring content more prominently. Therefore, content with higher karma scores receives more visibility. Furthermore, karma scores are a measure of the dominant perspectives and views that shape these spaces.
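The karma arithmetic and its effect on ranking can be illustrated with a small sketch. This is a simplification for exposition only: Reddit's actual "best" and "hot" sorts use confidence- and time-weighted formulas rather than the raw score used here.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    upvotes: int
    downvotes: int

    @property
    def karma(self) -> int:
        # Karma score as described above: upvotes minus downvotes.
        return self.upvotes - self.downvotes

def rank_by_karma(comments: list) -> list:
    # Higher-karma content first, approximating how higher-scoring
    # posts and comments gain visibility.
    return sorted(comments, key=lambda c: c.karma, reverse=True)
```

Under this scheme, a comment with 100 upvotes and 1 downvote (karma 99) would appear above one with 10 upvotes and 2 downvotes (karma 8), illustrating how voting shapes which perspectives dominate a thread.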
Users employ a screen name on Reddit that functions as a "semi-anonymous" or "pseudonymous" identity. Reddit suggests that users give a name like "Throwaway" to an account they will not be using for long. On the one hand, having a pseudonymous identity enables Redditors to contribute without disclosing their personal information [1,2,40]. On the other hand, pseudonymous identities on Reddit have been found to contribute to more hate speech [19].
Interaction on the platform follows a post-and-comment thread format. That is, Redditors can create a post, and under the post, other Redditors can submit comments and replies to comments. A post is composed of a title and a description, and the body of the description can be text, an image, a video, or a link to an external website.

METHOD

Data Collection
We targeted city subreddits as the research site because users there actively engage in discourses related to various issues, one of which is crime. We initiated our inquiry by drawing from a list that ranked the most popular city subreddits on Reddit by the number of users and frequency of posts [69]. In selecting the city subreddits, we followed two criteria to obtain diverse samples: (1) the number of conversations about crime and (2) the geo-locations of the cities the subreddits represented. To check the number of conversations about crime, we conducted an exploratory search within these subreddits using general keywords often associated with crime, such as crime, criminal, assault, robbery, and theft.
After going through this selection process, we chose five city subreddits: r/baltimore, r/chicago, r/LA, r/Atlanta, and r/boston, because the exploratory search returned a substantial number of conversations about crime in these five subreddits. Moreover, the cities represented were located in different parts of the US and had different histories of crime.
Crime can refer to a broad range of illegal activities. To narrow the scope of our dataset, we reviewed previous work that studied the "fear of crime" and that mainly considered people's fear concerning violent crime (e.g., murder, assault) and property crime (e.g., burglary) [12,58,76]. Thus, we did not include other crimes like tax fraud, speeding, or wage theft. For definitions of violent and property crime, we referred to the FBI's Uniform Crime Reporting (UCR) program [20]. The UCR is a crime categorization scheme that categorizes crime across four primary violent crimes (murder and non-negligent manslaughter, rape, robbery, and aggravated assault) and four primary property crimes (burglary, larceny-theft, motor vehicle theft, and arson) in the US (see Table 1). Upon reviewing the FBI's definitions of the eight types of crime, we generated an initial keyword list. To obtain an exhaustive sample, we developed a broad range of synonyms for each initial keyword from a thesaurus and compared the synonyms' meanings with the FBI's definitions. Then, we generated a comprehensive list of keywords and collected posts and comments by searching the five selected subreddits through the Reddit API via PRAW. We removed the synonyms that returned no search results. We fetched all posts and comments that met our search criteria from January 2018 to August 2019.
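The keyword-and-date filtering step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual pipeline: the keyword list is a hypothetical fragment of the full thesaurus-derived list, and the `post` dictionary mimics fields exposed by the Reddit API (`title`, `selftext`, `created_utc`).

```python
from datetime import datetime, timezone

# Hypothetical fragment of the expanded keyword list (the paper's full
# thesaurus-derived list is not reproduced here).
KEYWORDS = ["murder", "homicide", "robbery", "burglary", "assault", "arson"]

# Study window: January 2018 through August 2019.
START = datetime(2018, 1, 1, tzinfo=timezone.utc)
END = datetime(2019, 9, 1, tzinfo=timezone.utc)

def in_sample(post: dict) -> bool:
    """Keep a post if it falls within the study window and its title or
    body mentions at least one crime keyword."""
    created = datetime.fromtimestamp(post["created_utc"], tz=timezone.utc)
    if not (START <= created < END):
        return False
    text = (post["title"] + " " + post["selftext"]).lower()
    return any(keyword in text for keyword in KEYWORDS)
```

In a real collection run, the post dictionaries would come from search calls made through an API client such as PRAW, with one query per keyword.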

Data Analysis
To address how moderation work negotiates symbolic boundaries against attacks, we conducted a trace ethnography study. Trace ethnography has been employed to understand interactions within digital documents and the documentary traces in technologically mediated systems [24,49]. This approach seeks to integrate the insights gained from participant observation with log data (e.g., Wikipedia editing records [24]). It can generate "thick descriptions" by reconstructing user patterns and practices within distributed sociotechnical systems. In this study, we employed trace ethnography as an observational approach to examine historical records on Reddit. This approach provided a means to analyze posts, comments, and other community-generated content and practices, such as the rules, the moderation work, and the discussions around these.
The study was derived from a content analysis study published in [73]. To provide more context on how the prior data analysis guided the ethnography study, we present the data analysis in three phases: (1) inductive and deductive coding of the data from r/baltimore, (2) deductive coding of the data from the other four subreddits (i.e., r/chicago, r/LA, r/atlanta, and r/boston), and (3) trace ethnography in r/baltimore and r/chicago. The first phase of the research is reported in [73]; the second and third phases were conducted for this study.
First, we randomly selected 20 posts with 1,221 comments from the subreddit r/baltimore. The first author and a research assistant conducted an inductive coding process, derived from the grounded theory method [66], on the posts and comments. This method was selected as it provides a procedure for developing categories (open coding), linking these categories (axial coding), and crafting a coherent narrative that integrates these categories (selective coding) [66]. This method was particularly beneficial in our study as it allowed for the dynamic identification of complex patterns of social interaction and narrative construction within the subreddits. It minimized researcher bias and facilitated the creation of a coherent story. This analysis resulted in the generation of eleven primary codes. The naming ontology used to define these eleven codes, along with their definitions and examples of sub-codes for each larger code, can be found in Appendix A. Through a process of axial coding, we synthesized these codes into three main themes: (1) de-stigmatizing Baltimore's image, (2) perpetuating racism and stereotypes of Black people, and (3) constructing counter-narratives to fight against online racism.
We found that both overt and covert racism were present in the second broad category, "perpetuating racism and stereotypes of Black people." To differentiate overt and covert racist narratives, we utilized the new racism theory developed by Bonilla-Silva. This theory distinguishes between pre- and post-Civil Rights racism, termed old and new racism, respectively [8]. Old racism is characterized by the overt racial oppression devised by White people to subordinate People of Color since the colonial era. In contrast, contemporary America has shifted from this blatant form of Jim Crow racism to what Bonilla-Silva describes as new racism. This modern form of racism is a complex system of "arrangements, mechanisms, and practices" that perpetuates White privilege across various societal levels, sustaining racial inequality and oppression against Black people and other minorities over generations. An example is the opposition to affirmative action, where liberalism is invoked to advocate for equal opportunity, yet it fails to address the disparities in resources available to Black people. New racism, therefore, is characterized by its subtlety, institutionalization, and appearance as "non-racial" racism [8]. We found that Redditors in r/baltimore used old racism frames to perpetuate racially coded language rife with anti-Black stereotypes in 30% of posts. Additionally, they applied more covert, new racism frames to conceal their racism and discrimination against Black people in 40% of posts. These percentages demonstrate that conversations about crime in r/baltimore were inundated with racist narratives. We incorporated the "old racism" and "new racism" codes into the codebook.
In the second phase, we randomly selected 20 posts from the other four subreddits. The first and second authors conducted deductive coding on the posts along with the associated comments using the codes developed in the first phase. The purpose was to understand whether racist ideologies perpetuated in conversations about crime across all five subreddits. We found that the subreddits r/baltimore and r/chicago contained substantially more racist content than the other three subreddits, facilitating the generation of thick descriptions related to racist attacks and boundary negotiation practices.
In the third phase, we focused on the subreddits r/baltimore and r/chicago and conducted trace ethnography. Our analysis data included two parts. The first part examined the 40 posts along with their 3,293 comments. The post information can be seen in Appendix D. Specifically, we tracked the moderators and key Redditors who frequently participated in the conversations and analyzed their behavior patterns, for example, whether some Redditors consistently participated in racist (e.g., brigading) or anti-racist conversations, and what actions moderators took to regulate the conversations (e.g., locking posts). We also checked situations where moderators removed comments, for example, whether they removed racist comments and whether they explained the reason. Besides the conversations generated by users, we also checked Reddit's features. For example, we observed that the Reddit algorithm automatically sorted or folded comments. We analyzed which types of comments were pushed to the bottom or hidden, focusing on the topics they covered and the votes they received. We compared the topics of comments that received high upvotes and downvotes, for example, on whether they were related to racism or anti-racism. In the second part of our analysis, beyond the sampled posts and comments, we reviewed content related to racism and crime throughout the subreddits. This included examining the rules of the subreddits and the information provided on their Wiki pages. We kept diaries and notes on these observations. The first author primarily conducted the trace ethnography. All researchers met weekly to discuss the analysis results.
Through the analysis, we identified the dynamics of community interactions and moderator interventions within the subreddits. We carefully examined the connections between the initial codes, detailed in Appendix A, and those identified through trace ethnography. The examination focused on how moderators and community members engaged with each other in the context of racism. Two significant themes emerged: "fragile boundary," highlighting the tension between fostering open dialogue and protecting the community from racist attacks, and "boundary negotiation," highlighting the intentional efforts of moderators and community members to define acceptable behaviors and content. These themes illustrated the development of "boundary" as a critical element in the governance of online communities. The "fragile boundary" theme contained codes such as "racialized posting agenda," "covert racism comments," "brigading from racist subreddits," and "racialized voting"; the "boundary negotiation" theme contained codes such as "lock down," "moderation: discretion," and "cultural norms." The researchers worked together and categorized them into three boundary categories: "institutional," "cultural," and "geographical." The results are structured by these three categories.

Researcher Positionality Statement
The first researcher is an Asian woman and not an American citizen. The second researcher is a first-generation Latina woman who grew up in Boston. The third researcher is an Iraqi-American, cisgender, heterosexual man from a minority group within Iraq. While this diversity brought varied perspectives and insights, we recognized the potential for biases inherent to our different backgrounds. To mitigate these, we engaged in collaborative data analysis and continuous reflexivity. The first and second researchers served as the primary coders of the data. They developed deep conversations about the data by situating narratives related to race in the five cities' cultural and historical contexts. They provided their knowledge of microaggressions, racial stereotypes, hate crimes, and other violent or non-violent transgressions regarding racial or ethnic backgrounds in communities such as Boston and Chicago. The third researcher served a dual mentorship and collaboration role, helping shape the research ideas through the analysis and paper writing. However, we acknowledge that understanding the balance between open, democratic online platforms and necessary moderation strategies, particularly in discussions on racism, is complex. This intricate balance presents a challenge in effectively managing online communities while respecting diverse viewpoints and expressions.

Limitations
This work had several limitations. One notable constraint was the sample size; our study focused on an in-depth analysis of 40 posts and their associated 3,293 comments from only two city subreddits. While this provided rich insights into the dynamics within these specific communities, it might limit the generalizability of our findings to other online communities or cities with different socio-cultural contexts. Additionally, as non-moderators, we lacked access to backend data and could not observe posts and comments removed by moderators or deleted by users before our data collection. This could have led to missing key perspectives or instances of moderation. To address these gaps, future research could involve a virtual ethnographic study conducted from a moderator's standpoint, including interviews with moderators to understand their decision-making processes. Moreover, the moderation strategies observed may be influenced by the individual experiences and cultural backgrounds of the moderators. Given these factors, our findings should be interpreted with an understanding of their context-specific nature and the limitations inherent in the study's scope and methodology.

RACIALIZED DISCOURSE IN THE BALTIMORE AND CHICAGO SUBREDDITS
Baltimore, Maryland, and Chicago, Illinois, both prominent American cities, have frequently and historically been portrayed as crime hot spots in media, a depiction deeply intertwined with racial stereotypes [61].
In Baltimore, media coverage frequently focuses on crime, overshadowing the city's resilience and complex urban challenges like poverty and segregation [21,70]. Similarly, Chicago, with its diverse Black, Hispanic, and White communities, is often highlighted for gun violence and gang activity in mainstream media narratives [48]. In a similar vein, this portrayal tends to obscure the city's history of segregation and the resulting disparities in wealth and opportunity [48]. Such media representations simplify the complex socio-economic and racial dynamics at play, reducing residents' diverse individual and collective experiences to mere stereotypes. These racialized dynamics are reflected in their respective online communities, where the discourse perpetuates and even exemplifies racial stereotypes and racist narratives. We selected the subreddits r/baltimore and r/chicago as primary sites for investigating this phenomenon, with each having hundreds of thousands of subscribers. Importantly, they serve as digital microcosms of Baltimore and Chicago, reflecting the diverse interests and concerns of their community members. Notably, discussions within these subreddits align significantly with issues of race and racism, a pattern not observed to the same extent in other city-focused subreddits (e.g., r/boston, r/atlanta). For instance, one post in r/baltimore questioned why individuals with racist views choose to live in a predominantly Black city. This thread evolved into a rich dialogue, where community members shared a range of perspectives. Some linked crime to racial issues, while others countered these views with personal stories, revealing their own feelings about safety and community in Baltimore. This example highlights the depth and complexity of discussions around race, crime, and societal issues in the subreddit. During the study period, despite the absence of significant incidents sparking widespread discussions in the two subreddits, conversations about crime covered a spectrum from violent to property crimes.
The prevalence of these racialized discussions has led to more clearly defined rules and actions against racism and crime-related posts in the two subreddits compared to others. For example, r/chicago had implemented moderation strategies to balance crime discussions, including banning low-effort crime posts and creating a wiki page to address concerns about the subreddit being dominated by crime-related content. Similarly, r/baltimore enforced rules targeting racism, sexism, and other forms of discrimination, especially in discussions related to crime and the city's residents. These rules aimed to manage sensitive topics like crime (see details in Appendix B and Appendix C). The approaches of both subreddits provided a rich context for our trace ethnography, offering potentially unique insights into how online communities address racism.

Figure 1: Mapping racist attacks and constructed boundaries: The relationship between types of racist attacks and the sociotechnical boundaries constructed to combat racist attacks in r/baltimore and r/chicago.
• Race baiting: posts that highlight racial issues to provoke debate, escalating tensions and circulating stereotypes and racist commentary
• Covert racism: subtle expressions of racism embedded in comments and voting patterns that reinforce stereotypes without explicit racial language
• Racist brigading: coordinated attacks from groups targeting subreddit discussions to disrupt and dominate with overt and covert racist messages
• Institutional boundary: moderation actions such as post locking and enforcing rules against edited titles to maintain neutrality and prevent race baiting
• Cultural boundary: community-driven standards developed from members' experiences and city history to identify and counteract covert racism
• Geographical boundary: strategies to establish virtual geo-fences within subreddits to prevent participation from non-community members and reduce brigading

FINDINGS: MODERATION AS BOUNDARY NEGOTIATION TO COUNTER RACIST THREATS
Through trace ethnography, we found that racist attacks, enabled by (1) race baiting posts, (2) covert racism and racialized voting, and (3) racist brigading, contributed to the fragile boundaries of r/baltimore and r/chicago. In response to these threats, we observed that moderators and some community members employed various moderation strategies to negotiate the institutional, cultural, and geographical boundaries of their communities. The interplay between the racist attacks and the corresponding boundaries is illustrated in Figure 1 and further explained in the sections that follow. For anonymity, we have paraphrased quotes from non-moderator users in our analysis.

6.1 Maintaining Institutional Boundaries to Limit Race Baiting
One feature of Reddit is the opportunity for users to repost external information to the platform. In this way, Redditors often migrate content generated in other digital spaces (e.g., news websites) to the platform. While reposting content from other platforms can be perceived as a neutral activity, certain Redditors posted content that intentionally drew attention to racial issues and had the effect of inciting or perpetuating racialized discourse. We refer to these as race baiting posts. In response, moderators and certain Redditors were found to create or modify subreddit rules as institutional boundaries against racist attacks.
6.1.1 Fragility Threat: Race Baiting Posts. Race baiting posts, as observed in our study, were those crafted or shared to highlight racial issues in ways that incite or exacerbate racial tensions, leading to the propagation of stereotypes, prejudice, or racist commentary within the community. In our analysis, we found that over 25% of posts in the sampled datasets baited racist comments. For instance, our analysis revealed an over-representation of Black criminals or suspects in violent crime reports in r/baltimore, with ten out of twenty sampled posts mentioning Black criminals or suspects in a negative tone, while there was no explicit discussion of White criminals or suspects. Six of these posts led to rampant racist comments. Similarly, in r/chicago, six out of twenty sampled posts resulted in rampant covertly racist comments, with four posts identified by moderators as race baiting. These posts challenged the institutional boundaries of the subreddits, which explicitly prohibit racist behaviors.
The data indicated instances where original posters revised words or added information in posts, which led to racialized discussions. As one comment indicated about a post, some information was not needed for a news report; however, the original poster manipulated the post, purposely adding information or editing the wording to racialize the news: Most news articles don't have to contain information such as race. They [the posters] picked up incident details or words to create particular views and then triggered racism. (23 points; a comment's score is its upvotes minus its downvotes)

For example, in r/chicago, a news article originally titled "More than 1,500 Chicago cops to hit the streets to prevent Fourth of July weekend violence" was reposted with the title altered to include the word traditional. The new title read "...to prevent traditional surge of shootings on Fourth of July weekend." This addition of traditional subtly shifted the focus of the discussion. While the term first reinforced the notion of persistent gun violence in Chicago, it also, second, and more subtly, laid the groundwork for racial stereotyping. This is evidenced in conversations where some Redditors used ambiguous phrases like certain people, indirectly linking gun violence to Black communities in Chicago, despite the absence of explicit racial references in the news article. The impact of such nuanced language was further highlighted by a community response, where one Redditor noted, "Interestingly, that comment (and various chains) were deleted by mods... I'm sure whichever mod removed it thought it was racist, although it doesn't mention or imply race anywhere." This example illustrates how nuanced language choices can transform a general discussion about crime into one that perpetuates racial narratives.
Moreover, we found that people purposefully migrated popular race baiting posts from other subreddits into these city subreddits. For example, posts about Carrying a Concealed Weapon (CCW) policies and violent crime were reposted in the two subreddits after they had already become popular in other subreddits (e.g., r/illinois) and had triggered racist conversations against People of Color. Through the feature of crossposting, we observed how the institutional boundaries of these subreddits were repeatedly challenged.

6.1.2 Moderation Strategy: Locking and Unbiased Posting Rules. In response to race baiting posts, we observed that moderators and some Redditors created policies or rules, such as locking race baiting posts and calling for unbiased posting behavior, to establish clear institutional boundaries.
In response to the fracturing of community boundaries by race baiting, four of the five subreddits established rules to restrict "editorialized news titles." These subreddits had clear rules that disallowed original posters from editing the titles of news articles and other material migrated to their communities from external sources. The rules are listed in Appendix B.
We also observed that, as a mechanism for limiting the disproportionate number of crime-related race baiting posts, moderators in r/chicago articulated their moderation policy regarding crime on the aforementioned wiki page, aiming to restrain race baiting posts about crime and produce more meaningful discussions for the community: For a long time, we have allowed posts about shootings, carjackings, assaults, etc., on r/chicago. However, as of late, we have seen that these types of posts tend not to generate meaningful discussion. Instead, they tend to rehash the same talking points and arguments in every thread and do not add anything new to the conversation. At the same time, we have heard from you, our community members, that our homepage feels overrun with these crime posts full of unproductive conversation, to the detriment of the tone of our subreddit.
Later, the moderators posted another announcement articulating which crime-related posts are still allowed at moderator discretion.
In instances of rampant race baiting, it was noted that moderators in r/chicago locked certain posts. Once a post was locked, comments could no longer be posted to that conversational thread. Four of the 20 sampled posts were locked by moderators. The moderator u/KrispyKayak in r/chicago particularly took charge of managing race baiting posts, as illustrated by Figure 2. In our data, the moderators in r/baltimore did not take action to lock posts.
We noted that moderators and some Redditors engaged in conversations emphasizing the unbiased posting rule. In r/baltimore, a Redditor posted a short video that showed two Black teenagers stealing an electric scooter in public. The moderator, u/z3mcs, commented in the discussion, pointing out how the video itself was biased and how such videos could lead to racist comments: There is a difference between talking about a crime that has been committed. It is another thing to have the attitude of "There goes those crazy coloreds again," and "They are always doing something wrong, and I wish they would all just go away so I can just live my life in peace." And that poster clearly has chosen not to understand, even though it has been pointed out many many times to them. (-1 point)

Similarly, in r/chicago, Redditors acknowledged unbiased posting behaviors. One post cited a news article reporting that a homeowner shot a 14-year-old teenager during an attempted burglary involving five other teenagers. The five other teenagers were charged with murder due to his death. One Redditor complimented the news for being framed neutrally, not including "unnecessary information such as race or the brand of the gun": I appreciate that this post makes no attempt to politicize this incident. There was no indication of race or the type of weapon used by the homeowner. Simply the facts. News should be reported in this manner. (506 points)

6.2 Defining Cultural Boundaries to Counter Covert Racism
Previous studies have indicated the presence of white supremacist culture on Reddit [73]. Many conversations and practices on Reddit are mediated by covert racist ideology, which is implicit and makes moderation work challenging. In response to covert racism, our analysis found that moderators and some Redditors defined cultural boundaries by drawing on contextual and historical knowledge about the subreddit and condemned covert racism as a violation of community values and principles.
6.2.1 Fragility Threat: Covert Racism and Racialized Voting. Our study identified manifestations of covert racism in subtle and institutionalized forms within the two subreddits. When discussing crime-related reports, some comments contained racial stereotypes about the cities or neighborhoods and the people living there. Additionally, some comments cited crime statistics or described personal impressions, which added credibility to these racial stereotypes. Due to their implicit character, it was challenging for moderators and other Redditors to address the covert racism embedded in these narratives. It was observed that moderators frequently adopted a light moderation approach, often resulting in covert racism going unmoderated. Some moderators stated that they preferred mild moderation, allowing communities themselves to collectively decide through voting whether a comment was acceptable.
The mild moderation strategy was intended to provide more "transparency" to the public: My stance is that I tend to get rid of low hanging offenders. The places where there is a gray area, and they're highly downvoted, I'll tend to leave it. (It gives a bit more transparency, and the community has already responded). Although it is better to remove it earlier than to let it sit and collect more downvotes. (The individual poster may turn around and try to retaliate due to the unpopular opinion). (3 points)

Moderating posts and comments exhibiting covert racism required cultural knowledge about the city or the neighborhood. The moderator of r/baltimore, u/z3mcs, emphasized in their comments the importance of having a comprehensive understanding of the city's history. They were concerned that newcomers might form biased opinions about the city through exposure to racial stereotypes: ... [B]ut I do hate that you have a lot of people moving to Bmore that don't actually know the history, and they get a slanted view on things if they keep visiting the sub daily. (4 points)

To make matters worse, we observed that comments containing covert racism often received more upvotes than those opposing them. The perpetuation of covert racism and racialized voting contributed to the fragility of these subreddits' boundaries.

6.2.2 Moderation Strategy: Culture-Based Norms. In r/baltimore and r/chicago, our study observed moderators and some Redditors addressing racialized stereotypes and covert racism by establishing culture-based norms. These individuals drew on their personal experiences and historical knowledge of the cities to identify and articulate instances of covert racism in context. These norms served as cultural boundaries for the subreddits, protecting them against covert racist threats.
We refer to this moderation work as culture-based norms since it was not explicitly outlined in the subreddits' rules. While Reddit and the five subreddits had rules addressing racism, our analysis noted that these rules often lacked specific definitions or articulations of covert racism (see Appendix C). For example, Reddit's Content Policy provides only two examples of racist posts: one being a "[p]ost describing a racial minority as sub-human and inferior to the racial majority," and the other a "[m]eme declaring that it is sickening that People of Color have the right to vote." Neither of these examples helps the reader understand how covert racism operates.
It was observed that some moderators and Redditors actively engaged in conversations and reinforced these norms, thereby strengthening the anti-racism culture within their respective subreddits. In r/chicago, a post titled "Got called a racist for protecting my property" serves as an example of how culture-based norms were navigated in discussions of covert racism. The original poster narrated a personal incident in which they were accused of racism. The comments section featured various Redditors sharing their own experiences of being labeled as racists. Our analysis indicates that some of these narratives, albeit subtly, contained elements of racist ideology.
For example, one Redditor shared that they lived in a condo building that included 10% Section 8 housing. Section 8 housing, named for Section 8 of the Housing Act of 1937, "authorizes the payment of rental housing assistance to private landlords on behalf of low-income households in the United States" (https://en.wikipedia.org/wiki/Section_8_(housing)). The Redditor reported that most residents of the condo building were African American and that the African American residents' guests often asked the Redditor to open the apartment door for them. When the writer refused to do so, the writer was called "racist" by the visitors. The Redditor also compared the reactions of White and African American visitors to highlight perceived differences in treatment: You wouldn't believe how frequently I've been accused of being racist for not being comfortable with opening/holding the door for some visitors and asking them to instead buzz in with a resident they know. Most White guests reply, "Yes, no problem," whereas African American visitors accuse me of being racist. (87 points)

Another Redditor echoed the complaint, commenting that living in a building with Section 8 housing did not decrease the rent for unsubsidized residents.
In response to covert racist comments, it was noted that some Redditors actively participated in discussions, challenging the racialized framing within the personal stories. For instance, one Redditor shared their personal experience of living in Section 8 housing as evidence and pointed out that these complaints reflected racist attitudes toward African Americans. However, this comment received 14 downvotes: I have been a resident of Section 8 housing for more than ten years. I do not see why you associate all these negative traits with an entire group of individuals. I also do not like the way you treat the poor with such a terrible attitude. (-14 points)

We also observed that one Redditor posted the same comment under each "being called racist" story. The comment contained eight external links demonstrating how White people have privilege over People of Color in employment, the criminal justice system, housing, dating, and other daily contexts. The Redditor aimed to inform people that these "being called racist" stories may involve White supremacy and covert racism. However, these comments received more downvotes than upvotes.

6.3 Establishing Geographical Boundaries to Defend Against Brigading
Brigading, a term that originated on Reddit, refers to a coordinated attack by a group of users from one subreddit against members of an antagonistic subreddit [51]. These coordinated attacks can take various forms, such as downvoting specific posts, comments, or users. Brigading can manipulate the karma system to elevate or diminish a topic's prominence. We observed instances where brigaders with racist intent disrupted community boundaries and influenced the subreddit culture. The subreddits were initially designed to facilitate a broad spectrum of local conversations; racist brigading disrupted the intended discourse. In response, moderators and some Redditors were seen employing virtual geo-fences to delineate geographical boundaries against such brigading.
6.3.1 Fragility Threat: Racist Brigading. In cases where racist narratives became pervasive, it was noted that moderators and Redditors suspected brigading from external subreddits. We observed that during brigading incidents, there was a noticeable disorder in the voting systems: racist comments, both overt and covert, received high numbers of upvotes, while well-intentioned and anti-racist comments were heavily downvoted.
In addition to manipulating voting systems, brigaders also posted racist content, which had the potential to completely overwhelm community voices and alter the culture of the subreddits. As one Redditor in r/chicago explained, a brigading group from pro-gun subreddits flooded their subreddit with numerous pro-gun comments. This brigading activity misrepresented the views typically expressed on r/chicago: When will non-Chicagoans who support guns stop imposing their political views on us? They are well aware that most users on this subreddit and residents of Chicago are liberal. DO NOT IMPOSE YOUR IDEALS ON US. Stay away from us. (-39 points)

We note that racist brigaders might create race baiting posts and covert racist comments, as mentioned earlier. What differs is that racist brigading involves large-scale attacks from outside groups. More importantly, brigading often shifts the boundaries within which community conversations take place. This happens because brigading frequently moves a post to the front page of Reddit (also known as r/all) due to a high volume of comments and votes. As a result, more Redditors from outside the subreddits may join the discussion, extending the conversation beyond the subreddit's original scope. As one Redditor noted: These posts appear on the first page once they reach a certain threshold for upvotes. That draws a wide variety of people. (25 points)

Based on our observations, several posts in r/chicago reached the front page at some point. When this occurred, a significant number of Redditors poured into r/chicago and began following the brigade's voting and commenting patterns, further amplifying the brigading effect.
These examples highlight how racist brigading could disrupt the internal boundaries of a subreddit. The actions of brigaders can be likened to invaders crossing the boundaries of a subreddit, impacting its values and culture.
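The dynamic described above, in which outside users pour into a thread once it gains visibility, suggests one simple heuristic a community might use to surface possible brigading: measure what share of a thread's commenters have no prior history in the subreddit. The sketch below is our own illustration of that idea, not a tool used by the moderators in this study; the data fields and the 0.7 threshold are hypothetical.

```python
# Illustrative sketch (not a tool observed in the study): flag a thread as
# possibly brigaded when an unusually large share of its commenters have no
# prior activity in the subreddit. Field names and thresholds are hypothetical.

def brigading_score(comments, known_members):
    """Fraction of a thread's commenters with no prior subreddit activity."""
    authors = {c["author"] for c in comments}
    if not authors:
        return 0.0
    outsiders = {a for a in authors if a not in known_members}
    return len(outsiders) / len(authors)

def flag_thread(comments, known_members, threshold=0.7):
    """Flag a thread for moderator review if most commenters are outsiders."""
    return brigading_score(comments, known_members) >= threshold

# Toy example: 3 of 4 commenters have never posted in the subreddit before.
members = {"alice", "bob"}
thread = [{"author": "alice"}, {"author": "x1"},
          {"author": "x2"}, {"author": "x3"}]
print(flag_thread(thread, members))  # True
```

A real deployment would need care with false positives: a post reaching r/all legitimately also attracts outsiders, so a score like this can only prioritize threads for human review, not judge intent.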

6.3.2 Moderation Strategy: Virtual Geo-Fencing. Our study observed that moderators in the two subreddits recognized the potential of brigading to introduce racist content and disrupt community dynamics. In r/baltimore, our analysis found that specific rules were implemented to address brigading, connecting it with racism and other forms of hate speech. One of r/baltimore's rules stated that when moderators found brigading, they would remove problematic content, ban brigaders, and lock the post, preventing further voting or comments. Additionally, moderators would ban Redditors suspected of taking part in brigading (see Appendix C).
r/chicago's rules, by contrast, do not explicitly address brigading. However, one moderator's discussion of the topic was observed during our analysis. The moderator believed that using rules to curb brigading was challenging, as rules can only regulate "those people [who] are here to participate in good faith and are distorting conversations which may be unique to Chicago." The primary issue here is semi-anonymity: one Redditor can own multiple accounts, so if one account is banned, the Redditor can quickly switch to another one or create a new account for brigading.
In response to the issue of semi-anonymity in brigading, a practice observed among moderators and some Redditors involved establishing virtual boundaries within the subreddit, which we term geo-fencing. A geo-fence is "a virtual perimeter for a real-world geographic area." In-group members worked together to establish and reinforce these virtual geo-fences within subreddits, identifying invaders and preventing them from participating in the community. We use this term to describe the geographical boundaries of local subreddits, collectively defined by their members and subject to constant change.
One strategy observed was an attempt by moderators and some Redditors to characterize subreddit members and determine who could authentically represent their community. One moderator pointed out that, unlike other subreddits created for a specific topic or theme, the only common interest among Redditors in r/chicago is geography, specifically the city of Chicago. People discussed using geo-location as a criterion to define who could participate. The idea was that people living in Chicago possess a better understanding of the city and can participate in discussions based on their unique city experiences. However, some Redditors argued that geographic location alone might not be an appropriate basis for defining the boundary of the subreddit. One Redditor used themselves as an example; they were not from the city of Chicago but were interested in the subreddit. The argument was that relying solely on geo-location criteria could discourage non-local Redditors from participating: I am not from or a resident of Chicago. However, I subscribed to this sub because I adore the city of Chicago. To subscribe to the sub, you do not have to be a resident of Chicago. If you believe you must, that is small-minded thinking. (14 points)

A second strategy, noted in r/chicago, was a "Minimum Karma/Account Age Policy" implemented to mitigate the impact of new accounts potentially involved in brigading. Documented on the subreddit's wiki page, the policy establishes a minimum karma requirement for accounts to create posts or comments. The policy intentionally does not specify the exact minimum karma number, to prevent potential attackers from artificially inflating their karma to meet the requirement. However, it is essential to note that this rule may inadvertently hinder new users from participating in the subreddit.
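A gate of this kind is straightforward to express in code. The sketch below is a minimal reconstruction of how a minimum karma and account age check might work; since r/chicago intentionally does not publish its thresholds, the `MIN_KARMA` and `MIN_ACCOUNT_AGE` values here are placeholders, not the subreddit's actual configuration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical thresholds: the real policy deliberately keeps them secret so
# attackers cannot farm karma to exactly the required level.
MIN_KARMA = 50
MIN_ACCOUNT_AGE = timedelta(days=30)

@dataclass
class Account:
    karma: int
    created: datetime

def may_post(account: Account, now: datetime) -> bool:
    """Return True only if the account clears both the karma and age gates."""
    old_enough = (now - account.created) >= MIN_ACCOUNT_AGE
    return account.karma >= MIN_KARMA and old_enough

now = datetime(2023, 7, 1)
newcomer = Account(karma=5, created=datetime(2023, 6, 25))
regular = Account(karma=1200, created=datetime(2019, 3, 2))
print(may_post(newcomer, now), may_post(regular, now))  # False True
```

The trade-off the moderators faced is visible even in this toy version: the same gate that blocks a freshly created brigading account also blocks a genuine newcomer until their account ages and accrues karma.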
Our study observed that moderators and Redditors utilized tools and techniques to analyze accounts suspected of racist activity by reviewing profile information and using tagging tools. When a Redditor was suspected of being racist, moderators and Redditors could manually examine the account's profile. Moderators relied heavily on the historical profile information provided by Reddit for moderation purposes. In addition to reviewing historical profiles, Redditors and moderators also experimented with self-developed tools for tagging problematic users, including those identified as racists. For example, one Redditor mentioned using the Reddit Enhancement Suite (RES) tool to tag Redditors posting problematic content. RES is a community-driven, unofficial plug-in tool for Reddit. Users can employ it to tag specific accounts with different colors and add notes. These color tags are displayed next to the account's screen name, serving as a notification and reminder that the account is associated with problematic content.
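Functionally, a RES-style user tag is a persistent mapping from a username to a color and a note. The snippet below sketches that idea in miniature; it is our illustration of the concept, not RES's actual implementation or storage format, and the example username and note are invented.

```python
import json

class UserTagStore:
    """Minimal sketch of RES-style user tagging: username -> (color, note)."""

    def __init__(self):
        self.tags = {}

    def tag(self, username, color, note=""):
        # Tagging again simply overwrites the previous tag, as a reminder
        # system (rather than a moderation action) would.
        self.tags[username] = {"color": color, "note": note}

    def lookup(self, username):
        # Returns None for untagged users, so callers can render nothing.
        return self.tags.get(username)

    def dump(self):
        # RES persists tags client-side; here we just serialize to JSON.
        return json.dumps(self.tags, sort_keys=True)

store = UserTagStore()
store.tag("throwaway123", "red", "posted covert racist comments, 2021-07")
print(store.lookup("throwaway123")["color"])  # red
```

Because tags like these live on each user's client rather than in the subreddit, they illustrate the individual, labor-intensive nature of this boundary work: every moderator or Redditor maintains their own private map of suspected accounts.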

DISCUSSION: BOUNDARY AS A TECHNIQUE TO COUNTER INTER-COMMUNITY INVASIONS
In this study, city subreddit boundaries faced challenges from racist ideologies through race baiting posts, prevailing covert racism, and racist brigading. To address racism, moderators and some Redditors developed various strategies to negotiate the symbolic meaning of boundaries for their communities. The findings highlight how boundaries can serve as a technique to address conflicts between different communities. We start by discussing the characteristics of different boundaries in moderation work and then delve into a broader discussion of boundaries in online communities.

Reflecting Institutional, Cultural, and Geographical Boundaries in Moderation Work
In this research, moderation work was framed as boundary negotiation to address racist threats. We classified various moderation strategies and mechanisms into institutional, cultural, and geographical boundaries. We reflect on the characteristics of the three boundaries and provide practical applications. The three boundaries differed in their ease of enforcement. Institutional boundaries were primarily expressed as rules and were visible to community members; they were easy to implement and enforce. Geographical boundaries were sometimes outlined on the wiki page and were less transparent to all community members; moreover, they relied on technical tools for implementation. Cultural boundaries typically took the form of community norms and were the most challenging to enforce. Implementing cultural boundaries involved drawing upon historical, cultural, and contextual knowledge and understanding the needs of community members.
Because cultural boundaries carry more historical and contextual knowledge, they have a greater degree of complexity than the other two boundaries. As Williams and Copes contended, "[c]ultural boundaries online are often porous, with little or no restrictions placed on who may register or interact within a community" [72]. Institutional boundaries bore the least complexity and were vague in their meanings. For example, the subreddit rules prohibited racism in both subreddits, but neither explicitly articulated covert racism. Identity construction is a highly nuanced process for members of different groups [15,68,73]. Adding complexity to boundaries is essential to differentiate racist behaviors and establish anti-racism values.
The boundary of an online community is dynamically constructed and constantly mediated by the power of in-group and out-group members. Our study demonstrated that these boundaries were fragile because non-members could repeatedly brigade a subreddit. Moderators negotiated the implicit and contextual meaning of boundaries, such as situating covert racist narratives within the context of urban culture or the culture of the city subreddit, and explored the use of geo-fences to exclude racist users. They also employed new moderation tools and initiated discussions to deal with new attacks. These examples demonstrated some degree of adaptability in cultural and geographical boundaries. Institutional boundaries had lower adaptability than the other two; for instance, the boundary negotiation process was slow to be reflected in the subreddits' rules.
In summary, institutional boundaries involve the implementation and enforcement of explicit rules and guidelines, cultural boundaries are shaped through the development and reinforcement of community norms and values, and geographical boundaries pertain to the physical or virtual space of the community and its management. To further bridge the gap between our research findings and practical application, especially for subreddit moderators, we have summarized key takeaways and suggested actions in Table 2. This table aims to provide a concise reference for moderators to understand and apply these concepts in their community management efforts.

Boundary Work in Broader Contexts: Insights from r/baltimore and r/chicago

Online community boundaries are a symbolic notion that defines the identity of in-group and out-group members, maintains the social status of the group, and defends against external invasions. Our research provides an in-depth examination of online community boundaries within the unique contexts of r/baltimore and r/chicago. While these insights are rooted in the specific dynamics of these two subreddits, they offer a lens through which to understand the nuanced ways online communities negotiate identity, status, and external threats.
In exploring these subreddits, we observed how moderators established new boundaries or redefined existing ones in response to racist attacks. Similar strategies are employed elsewhere on Reddit, where moderators have developed comprehensive posting guidelines and wikis to offer clear interpretations of the rules. These moderators consulted external resources to accurately define racism and adjust their guidelines accordingly. Additionally, both moderators and some Redditors utilized unofficial plug-in tools to identify users engaging in racist behavior. However, the resources available to moderators and Redditors for combating racism were found to be limited. Discussions around establishing geographical boundaries for city subreddits mainly centered on the concept of geo-fencing.
Online community boundaries are often manifested through various design features, such as subscription and membership mechanisms. Subscription mechanisms determine community membership and thus set the community's boundaries. Prior work has also employed social network analysis tools to measure boundaries and conflicts between online communities [14,26,36,37]. In these studies, community boundaries were defined and measured to detect inter-community interactions. For instance, Datta et al. constructed inter-community conflict graphs using aggregation and normalization techniques [14]. Kumar et al. analyzed inter-community interactions and conflicts on Reddit; they discovered a "colonization" effect, whereby the original members of targeted subreddits reduced their participation while attackers became more active [37]. Such studies offer different approaches and tools for exploring online community boundaries.
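The graph-construction step in such studies can be illustrated with a toy example: count cross-community links and normalize each count by the source community's overall activity, so that large communities do not dominate the conflict graph simply by posting more. This is a simplified sketch in the spirit of those approaches, not the actual pipeline of Datta et al. [14]; the sample data below are invented.

```python
from collections import Counter, defaultdict

# Toy records of cross-community links: (source_subreddit, target_subreddit).
links = [
    ("r/a", "r/b"), ("r/a", "r/b"), ("r/a", "r/c"),
    ("r/b", "r/a"),
]
posts_per_community = {"r/a": 30, "r/b": 10, "r/c": 5}

# Aggregate raw link counts per directed community pair.
raw = Counter(links)

# Normalize by the source community's total posts so that a busy community
# does not look conflict-prone merely because of its volume.
normalized = defaultdict(dict)
for (src, dst), n in raw.items():
    normalized[src][dst] = n / posts_per_community[src]

print(normalized["r/a"]["r/b"])  # 2 links out of 30 posts
```

On a real dataset, the edge weights of such a graph can then be filtered or thresholded to identify unusually intense directed attention between communities, which is where conflict and brigading candidates would surface.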
Our exploration of r/baltimore and r/chicago aims to inspire further research into the dynamics of symbolic boundaries and community governance.This is particularly pertinent in the context of empowering marginalized groups to create and maintain safe spaces within digital environments.Future research could expand on this study by exploring boundary negotiation mechanisms in a broader range of online communities.

CONCLUSION
This study sought to understand the sociotechnical mechanisms that constrain racism in r/baltimore and r/chicago. Drawing on the concept of symbolic boundaries, we saw how the boundaries of communities became fragile under racist attacks and how moderation work negotiated the meaning of boundaries to defend against racists. In this study, moderators and some Redditors in r/baltimore and r/chicago developed moderation strategies for maintaining and establishing institutional, cultural, and geographical boundaries. The results suggest that future work could better understand the meaning of boundaries and design mechanisms that strengthen boundaries to empower communities and help members maintain their space.

Figure 2: Screenshot of a post in r/Chicago titled "Five teenagers charged with murder," with 354 comments; locked by moderator u/KrispyKayak for race baiting, including pinned announcement.

Table 1: Types of crime and keywords used to collect conversations about crime on Reddit

Table 2: Summary of boundary types, characteristics, and practical applications for Reddit moderators

Coding categories of crime-related comments (counts in parentheses):

• Stereotypes of marginalized groups (39): stereotypes about people of color, panhandlers, LGBTQ, and other marginalized groups, e.g., profiling Black suspects
• Severe punishment of Black criminals (65): supporting severe punishment and opposing temperate sentencing for Black criminals, e.g., heavy sentencing to prevent repeat offenses, tempering the justice system
• Distrust of police (70): questioning police misconduct or showing negative emotion toward police, e.g., argue law enforcement culture leads to less change, list police misconduct behaviors
• Crime trends or facts in Baltimore (17): providing crime statistics, crime incident facts, or other factual information about crime in Baltimore, e.g., add details to the video evidence, mass shooting cases in Baltimore
• Destigmatizing Baltimore (49): providing reasoning or facts to destigmatize the crime stereotypes of the city (but not stereotypes of people of color), e.g., benefits of Baltimore outweigh negatives, people outside of Baltimore have wrong impressions
• Drawing attention to racism (50): pointing out racist tone or content in posts or comments, e.g., suggest the justice system be demographics/race blinded, prove a comment is racist
• Root societal causes of crime (20): explaining the root, societal causes of crime such as systemic racism and poverty, e.g., why wealth inequality led to violent crime, long-term solution: rehabilitate
• Explaining the criminal justice system (68): providing knowledge about the US criminal justice system such as laws, sentencing, and bail, e.g., argue 2nd-degree assault is a misdemeanor, explain the penalty difference between juveniles and adults
• Misinformation in social media (37): discussing, correcting, or criticizing misinformation about crime disseminated on Reddit, Facebook, and other social media platforms, e.g., argue gang initiation news is fake, be critical about information on social media like Facebook
• Information or emotional support (27): answering questions about crime or providing emotional support for victims or vulnerable people, e.g., oppose victim blaming, provide precautionary tips
• Others (117)

Community rules on editorialized titles:

r/baltimore Rule 4: News Articles and Editorials
With particular regard to News Articles and Editorials, threads with editorialized titles may be removed. Keep your views on the topic in the comments. Do not lead a thread title with "BREAKING NEWS" or anything similar. In terms of stylistic considerations alone, this looks especially awkward for a thread that is hours old and is still prefaced with "BREAKING NEWS".

r/chicago Rule 5: No Editorialized or Sensationalized News Titles
News article posts must use the actual headline or lede (first sentence) from the source. Sensationalized or editorialized retitling is not allowed, especially in an attempt to bait or shame. Excessive abuse of this rule may result in a ban.

r/chicago Rule 10: Posts about crimes that are low-effort, or do not have a wider impact on the city, will be removed
In an effort to foster a more engaged and positive community, the following crime-related posts are not allowed: "Crime Recap" posts (e.g., articles with titles such as "10 People Shot Across Chicago Last Weekend"); posts about a violent or petty crime targeting private individual(s) without greater impact on the Chicago area; crime-related posts that are vague or generalized; posts that use crime news to rile up users.

r/LosAngeles Rule 6: Do not editorialize your titles.
Do not editorialize your post title when linking a news article/tweet/post/etc. Posts should closely match the article title, unless the article itself has an editorialized title. Still follow rules regarding capitalization.

r/boston Rule 7: No URL shorteners or misleading titles.
No sites that require accounts. Posts should have the URL of the clearest permalink known; use of URLs from publications with officially-generated links is permitted. Misleading, editorialized, or sensationalized post titles are discouraged when a suggested or objective title will do.

Community rules on racism and harassment:

r/baltimore Rule 1: Follow Wheaton's Law
Keep it civil. Racism, sexism, homophobia, transphobia, dehumanization, discrimination, insulting, trolling, slap-fighting, bullying, etc. are not allowed. Constructive criticism is okay. Personal attacks are not. Posts that are found to have been subjected to brigading or which generate an inordinate number of reports may be subject to locking. Additionally, users who frequently demonstrate a pattern of engaging in uncivil conduct are subject to warnings, timeouts, or bans.

r/chicago Rule 4: No Racism, Bigotry or Baiting
There is zero tolerance for racial slurs, baiting, and bigotry. Users who violate this rule will be permanently banned from the sub. Constructive conversations about race are allowed, though it's understandable that these conversations can get heated. When a comment falls into a gray area, moderators will do their best to make consistent judgment calls.

r/LosAngeles Posting Guidelines: Harassment and Hate
Racist, homophobic, and generally hateful remarks will not be tolerated and will result in a ban.

r/Atlanta Rule 1: Racism/Homophobia/Name Calling/Attacks
No racism, homophobia, or general attacks on others. No name calling, doxing, or harassment. We're mostly all adults, act like it!

r/boston Rule 4: Don't harass other users.
This includes doxxing, trolling, witch hunting, brigading, shitstirring, uncivil behavior, insults, and/or user impersonation. If you think it could break the rules, it probably does. If someone is harassing you in a thread, tell the mods and provide a link. If someone is harassing you in private, tell the Reddit admins. Egregious documented harassing of other users (especially in a racist, sexist, and/or homophobic manner) may lead to being banned.
D SAMPLED POST INFORMATION IN R/BALTIMORE AND R/CHICAGO