
Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels

Published: 18 October 2021

Abstract

When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated websites. Previous work suggests that within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of their user base and activity on the new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment.
