DOI: 10.1145/3491102.3502040
Research Article · Open Access · CHI Conference Proceedings

Birds of a feather don’t fact-check each other: Partisanship and the evaluation of news in Twitter’s Birdwatch crowdsourced fact-checking program

Published: 29 April 2022

Abstract

There is a great deal of interest in the role that partisanship, and cross-party animosity in particular, plays in interactions on social media. Most prior research, however, must infer users’ judgments of others’ posts from engagement data. Here, we leverage data from Birdwatch, Twitter’s crowdsourced fact-checking pilot program, to directly measure judgments of whether other users’ tweets are misleading, and whether other users’ free-text evaluations of third-party tweets are helpful. For both sets of judgments, we find that contextual features – in particular, the partisanship of the users – are far more predictive of judgments than the content of the tweets and evaluations themselves. Specifically, users are more likely to write negative evaluations of tweets from counter-partisans, and more likely to rate evaluations from counter-partisans as unhelpful. Our findings provide clear evidence that Birdwatch users preferentially challenge content from those with whom they disagree politically. While this does not necessarily indicate that Birdwatch is ineffective at identifying misleading content, it demonstrates the important role that partisanship can play in content evaluation. Platform designers must consider the ramifications of partisanship when implementing crowdsourcing programs.
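The abstract’s core comparison – whether an evaluation is driven by shared versus opposed partisanship rather than by content – can be illustrated with a minimal sketch. All data below (party labels, ratings, and the resulting rates) are synthetic and invented for illustration only; they are not drawn from the Birdwatch dataset or the paper’s analysis.

```python
# Toy illustration (synthetic data): compare how often raters mark a
# fact-checking note "not helpful" when the note's author is a
# counter-partisan versus a co-partisan.

ratings = [
    # (rater_party, author_party, rated_helpful)
    ("D", "D", True), ("D", "D", True), ("D", "R", False), ("D", "R", False),
    ("R", "R", True), ("R", "R", True), ("R", "D", False), ("R", "D", True),
]

def unhelpful_rate(pairs, same_party):
    """Share of 'not helpful' ratings among co- or counter-partisan pairs."""
    subset = [helpful for rater, author, helpful in pairs
              if (rater == author) == same_party]
    return sum(1 for helpful in subset if not helpful) / len(subset)

co = unhelpful_rate(ratings, same_party=True)
counter = unhelpful_rate(ratings, same_party=False)
print(co, counter)  # 0.0 0.75 on this toy data
```

In this invented example, counter-partisan notes are rated unhelpful far more often than co-partisan ones, mirroring the direction (though not the magnitude) of the effect the paper reports.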

Supplemental Material

  • MP4 File: Talk Video (with transcript)
  • ZIP File: Supplemental Materials




Published In

CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
April 2022, 10,459 pages
ISBN: 9781450391573
DOI: 10.1145/3491102

This work is licensed under a Creative Commons Attribution 4.0 International License.


Publisher

Association for Computing Machinery, New York, NY, United States


Badges

  • Honorable Mention

Author Tags

  1. crowdsourcing
  2. fact-checking
  3. misinformation

Qualifiers

  • Research-article
  • Research
  • Refereed limited


Conference

CHI '22: CHI Conference on Human Factors in Computing Systems
April 29 - May 5, 2022, New Orleans, LA, USA

Acceptance Rates

Overall acceptance rate: 6,199 of 26,314 submissions (24%)


