1 — Prevalence and characteristics of misinformation
📇 Journal of the Royal Society Interface | Nov 2024 | Kailun Zhu, Songtao Peng, Jiaqi Nie, Zhongyuan Ruan, Shanqing Yu, and Qi Xuan
A group of researchers at the Zhejiang University of Technology claim that Reddit threads about false claims tend to have more back-and-forth and be more negative in tone than those about true claims. (They based their analysis on a previously published dataset of Reddit posts tied to fact checks by PolitiFact, Snopes and Emergent.info.)

📇 Science Advances | Oct 2024 | Kevin T. Greene, Nilima Pisharody, Lucas Augusto Meyer, Mayana Pereira, Rahul Dodhia, Juan Lavista Ferres, and Jacob N. Shapiro
Researchers at Princeton and Microsoft studied two large samples of Bing results to understand how, and to what extent, the Microsoft search engine returned unreliable news sites.
(The study was published in Science Advances, a highly reputable peer-reviewed journal, but it is worth noting the conflict of interest of the Microsoft co-authors, who assert the company did not have pre-publication approval.)
Across the two samples, the researchers collected a total of almost 14 billion search result pages (SERPs) that included at least one of the 8,000 domains whose reliability has been rated by NewsGuard. The researchers argue their largest sample, dating back to June-August 2022, provides “a representative sampling of heavily searched queries.”
Overall, the study finds that unreliable sites were returned in about 1% of the SERPs, far less frequently than reliable sites (27% to 41% depending on the sample).

More important still, the likelihood of being exposed to an unreliable site was far higher (20x in sample 1, more in sample 2) for navigational queries, i.e. those that included the website’s name. This distinction matters because it helps tease out the role a search engine plays in discovery versus retrieval of low-quality information. Think of it as the difference between getting to infowars dot com from the query [infowars] versus the query [sandy hook].
📇 Nature | October 2024 | Mohsen Mosleh, Qi Yang, Tauhid Zaman, Gordon Pennycook & David G. Rand
This study argues that politically asymmetrical suspensions of social media users may be explainable by an asymmetrical sharing of misinformation by those accounts, rather than by platform bias.
The researchers found that Twitter “accounts that had shared #Trump2020 during the election were 4.4 times more likely to have been subsequently suspended than those that shared #VoteBidenHarris2020.”
This could have been for a range of reasons, including bot activity or incitement to violence. Still, the pro-Trump accounts were also far more likely to share links to low-quality news sites that may have been flagged for misinformation. Crucially, this discrepancy held even when the news sites were rated by a balanced sample of laypeople rather than by referring to existing lists compiled by fact-checkers and other media monitors.
The researchers also found that this disparity largely held on Facebook, in survey experiments, and across 16 different countries.
📇 Public Opinion Quarterly | July-August 2024 | The Electoral Misinformation Nexus: How News Consumption, Platform Use, and Trust in News Influence Belief in Electoral Misinformation and A Matter of Misunderstanding? Explaining (Mis)Perceptions of Electoral Integrity across 25 Different Nations | Camila Mont’Alverne et al. and Rens Vliegenthart et al.
In this special issue on election misinformation of Public Opinion Quarterly, I was particularly interested in two papers analyzing how consumption of and trust in news media affects belief in misinformation, which the good folks at RQ1 helpfully summarized. Here’s how they present the main takeaways from the two papers:
📇 Science Advances | May 2024 | Jennifer Allen, Duncan J. Watts, and David G. Rand
Researchers at MIT and Penn assessed the impact of COVID-19 vaccine-related headlines on Americans’ propensity to take the shot. Then, they built a dataset of 13,206 vaccine-related public Facebook URLs that were shared more than 100 times between January and March 2021. Finally, they used crowd workers and a machine-learning model to attempt to predict the impact of the 13K URLs on vaccination intent.
That’s a lot to digest, but the graph below does a great job at delivering most of the results. On the left side you can see that the median URL flagged as false by Facebook’s fact-checking partners was predicted to decrease the intention to vaccinate by 1.4 percentage points. That’s significantly worse than the 0.3-point decrease from the median unflagged URL.
But there’s a catch. Unflagged articles with headlines suggesting vaccines were harmful had a similarly negative impact on predicted willingness to jab — and were seen a lot more. Whereas flagged misinformation received 8.7 million views, the overall sample of 13K vaccine-related URLs got 2.7 billion views.

There are two takeaways for me here:
For one, it looks like (flagged) misinformation was a relatively small part of COVID-19 vaccine content in the US. Whether this should be interpreted as validation for Facebook’s fact-checking program or an indication that a big chunk of misinformation evaded fact-checker scrutiny would make for a valuable follow-up study.
The second message is that headlines matter. Because vaccine-skeptical headlines reached so many more people than flagged misinfo, they are more likely to have depressed vaccination rates. Here’s a notable bit from the study:
a single vaccine-skeptical article published by the Chicago Tribune titled “A healthy doctor died two weeks after getting a COVID vaccine; CDC is investigating why” was seen by >50 million people on Facebook (>20% of Facebook’s US user base) and received more than six times the number of views than all flagged misinformation combined.
I remember this article. Even at the time, there were questions about its framing of an individual case in a way that alluded to causality. A coroner’s investigation was unable to confirm or deny a connection to the vaccine. The article may well have had a non-trivial effect on the propensity to vaccinate of US Facebook users.
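The aggregate-impact argument can be checked with back-of-the-envelope arithmetic combining the view counts and the median per-URL effects quoted above. Treating those medians as uniform per-view effects is my simplification, not the authors’ method:

```python
# Figures quoted in the summary above; effects are in percentage points.
flagged_views = 8.7e6      # all flagged misinformation combined
flagged_effect = 1.4       # median predicted drop in intent, flagged URLs

tribune_views = 50e6       # the single Chicago Tribune article
unflagged_effect = 0.3     # median predicted drop, unflagged URLs

flagged_impact = flagged_views * flagged_effect      # ~12.2M point-views
tribune_impact = tribune_views * unflagged_effect    # ~15M point-views

# Under these crude assumptions, one widely seen vaccine-skeptical
# article outweighs all flagged misinformation combined.
assert tribune_impact > flagged_impact
```

Crude as it is, the sketch shows why reach can dominate per-item persuasiveness.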
📇 PLOS One | May 2024 | Matthew R. DeVerna, Rachith Aiyappa, Diogo Pacheco, John Bryden, and Filippo Menczer
The OSoMe crew at Indiana University is behind this paper seeking to define and identify misinformation superspreaders on Twitter. The researchers first isolated almost half a million accounts that shared content from sources on the Iffy+ list. Then, they identified the most influential based on their number of retweets and a repurposed h-index, finding that these are far better predictors of influence than an account’s bot score. They conclude that “just 10 superspreaders (0.003% of accounts) were responsible for originating over 34% of the low-credibility content” between March and October 2020.
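For readers unfamiliar with the metric, the repurposed h-index works like the citation version: an account has index h if h of its tweets received at least h retweets each. A minimal sketch (the function name and sample counts are mine, not from the paper):

```python
def h_index(retweet_counts):
    """Largest h such that h tweets received at least h retweets each."""
    counts = sorted(retweet_counts, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # the rank-th most-retweeted tweet still clears the bar
        else:
            break
    return h

# Hypothetical retweet counts for one account's tweets
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Unlike raw retweet totals, this penalizes accounts whose influence rests on a single viral post.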

📇 arXiv | May 2024 | Nicholas Dufour, Arkanath Pathak, Pouya Samangouei, Nikki Hariri, Shashi Deshetti, Andrew Dudfield, Christopher Guess, Pablo Hernández Escayola, Bobby Tran, Mevan Babakar, Christoph Bregler
Several good humans I used to work with released this preprint taxonomizing media-based misinformation. The primarily Google-based authors trained 83 raters to annotate 135,862 English language fact checks carrying ClaimReview markup. (They are releasing their database under the suitably laborious backronym of AMMeBa.)
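ClaimReview is schema.org’s machine-readable markup for fact checks, typically embedded in a page as JSON-LD, which is what makes fact checks collectable at this scale. A minimal illustration (the claim, URL, and organization below are invented):

```python
# A minimal ClaimReview item as it might appear in a page's JSON-LD.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-check/viral-video",
    "claimReviewed": "A viral video shows event X",
    "datePublished": "2023-05-01",
    "author": {"@type": "Organization", "name": "Example Fact Checkers"},
    "reviewRating": {"@type": "Rating", "alternateName": "False"},
}

def claim_reviews(jsonld_items):
    """Keep only the JSON-LD items that carry ClaimReview markup."""
    return [item for item in jsonld_items if item.get("@type") == "ClaimReview"]

matches = claim_reviews([claim_review, {"@type": "NewsArticle"}])
print(len(matches))  # 1
```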

The study finds that almost 80% of fact-checked claims are now in some way related to a media item, typically video. This high proportion can’t be ascribed only to Facebook’s money drawing the fact-checking industry away from textual claims given that the trend precedes the program’s launch in 2017.

Unsurprisingly, AI-generated disinformation has shot up since the advent of ChatGPT and its ilk.
📇 Nature Human Behaviour | March 2020 | Andrew M. Guess, Brendan Nyhan & Jason Reifler
This study tracked the online behavior of a roughly representative sample of 2,525 Americans from 7 October to 14 November 2016. It found that 44% of them visited an “untrustworthy news website” at least once, even though these sites represented only 6% of the sample’s overall news diet. Consumption of lower-quality information was driven by the most conservative 20% of the population and by use of Facebook. Finally, the researchers found that fewer than one in two Americans who were exposed to an untrustworthy website also visited a fact-checking website in the same period (see right-hand column on the chart below).
