Analyzing the temporal dynamics of linguistic features contained in misinformation
Erik J. Schlicht
This analysis of PolitiFact's fact checks over time, posted on arXiv, caught my eye. The author collected the ratings assigned by the Pulitzer prize-winning fact-checker and found a significant increase in the rate of false ratings assigned starting in 2020. (Like other fact-checkers, PolitiFact also assigns "True" ratings when they are warranted.) The rise in misinformation labels appears to have started in 2016-2017, perhaps driven by PolitiFact's partnership with Meta, which financially incentivized targeting fake claims over true ones. But the real surge coincided with the COVID-19 pandemic.

The analysis also found that the average sentiment of the claims that PolitiFact covers has become markedly more emotional and more negative since 2016.

The Diffusion and Reach of (Mis)Information on Facebook During the U.S. 2020 Election
Sandra González-Bailón, David Lazer, Pablo Barberá, et al.
Made possible by access to Meta data negotiated by Talia Stroud and Josh Tucker, this study tries to characterize the spread of misinformation flagged by fact-checkers on Facebook and Instagram during the 2020 US elections. It concludes that while information as a whole primarily spread in a broadcast manner through Pages, misinformation flipped the script and “relie[d] much more on viral spread, powered by a tiny minority of users who tend to be older and more conservative.”
The study also found a steep decrease in “misinformation trees” on election day (see chart on the right below). Counts then climbed back up shortly after the election, through January 6. The researchers suggest, but cannot definitively conclude, that the dip is due to Meta’s “break glass” measures introduced to reduce the viral reach of content on its platforms.

2 — Effects of fact-checking interventions
📇 PsyArXiv | April 2025 | Thomas Renault, Mohsen Mosleh, and David Rand
Someone better not show Mark Zuckerberg or Elon Musk this working paper by Thomas Renault, Mohsen Mosleh, and David Rand. The billionaire owners of Meta and X based their support for community-driven correction labels on the alleged neutrality of the crowd compared to that of pesky professional fact-checkers.
Set aside that this support extends only as long as they agree with the findings of the community. And set aside, too, that a greater focus on right-leaning falsehoods may simply reflect their greater prevalence online.
It now turns out that Community Notes also disproportionately target Republicans.
Renault et al. extracted the 281,382 English-language notes written between January 2023 and June 2024. They then classified the users that these notes were correcting as Democratic or Republican based on the accounts they follow, resolving any uncertainty by getting an LLM to rate 500 of their tweets. (It's possible that this ended up including some users outside the U.S. who follow American politicians.)
The results are lopsided: 60% of proposed notes are on 'Republican' tweets; 40% on 'Democratic' ones. The difference gets starker when looking at notes that are rated helpful, 70% of which appear on Republican posts. This matters because only helpful notes are appended to the offending tweet and shown to all X users.

So not only do Republican tweets get targeted more by notes; those notes are also far more likely to be considered helpful (10.4% vs 6.8% for those proposed on Democratic tweets). Overall, the researchers find that a note on a Republican user's tweet has a 64% higher chance of being deemed helpful, holding constant the user's verified status, follower count, tweeting volume, and the topic of the tweet.
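To make the headline percentages concrete, here is a minimal sketch of the arithmetic; the raw note counts below are invented so that the derived rates match the preprint's reported figures, and are not the paper's actual dataset.

```python
# Hypothetical note counts, invented to reproduce the reported rates
# (a 60/40 split of proposed notes; 10.4% vs 6.8% rated helpful).
notes = {
    "Republican": {"proposed": 169_000, "helpful": 17_576},
    "Democratic": {"proposed": 112_000, "helpful": 7_616},
}

total_proposed = sum(v["proposed"] for v in notes.values())
for party, v in notes.items():
    share = v["proposed"] / total_proposed          # share of all proposed notes
    helpful_rate = v["helpful"] / v["proposed"]     # share of that party's notes rated helpful
    print(f"{party}: {share:.0%} of proposed notes, {helpful_rate:.1%} rated helpful")
```

Note that the raw ratio of helpful rates (10.4/6.8, roughly 1.5x) is lower than the paper's +64% figure, which comes from a model holding user-level covariates constant.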
Given Community Notes' bridging algorithm, this research is strongly suggestive that right-leaning American social media users post more content that third parties find misleading. Other recent research has found that extremely conservative users are more susceptible to misinformation and less likely to recognize it.
There were a couple of other valuable tidbits in this preprint. For one, a plurality of Community Notes focus on politics (35.1%), significantly ahead of science and health (12.7% and 7.8% respectively) and economics (5.3%).
Additionally, at least until June 2024, there were more Democratic posts on X than Republican ones, even though the share of the latter greatly increased following Musk's takeover of the platform. This evolving user base and the updates to the Community Notes bridging algorithm mean a follow-up analysis will be invaluable (Renault told me he and his co-authors are working on it).

📇 arXiv | Feb 2025 | Ronald E. Robertson, Evan M. Williams, Kathleen M. Carley, and David Thiel
This paper “collected 1.4M unique search queries shared on social media to surface Google’s warning banners, examine when and why those banners were applied, and train deep learning models to identify data voids beyond Google’s classifications.” The study found that the three types of warning banner appeared on about 1% of all the queries studied, with the low-relevance message by far the most frequent of the three.
The researchers also stumbled on a bit of a scoop during data collection: one of the warnings, applied to searches that primarily yield low-quality results, had been quietly discontinued by Google, even though the average domain quality in the searches studied did not improve.

📇 arXiv | March 2025 | Kirill Solovev, Nicolas Pröllochs
This preprint found that Community Notes linking to fact-checkers are deemed more helpful than notes linking to any of the other document types studied.

📇 HKS Misinformation Review | February 2025 | Claire Betzer et al.
Very helpful finding for folks who work to fight misinformation on platforms by a big group of scholars:
state-affiliated media tags on Twitter were less effective at reducing the perceived accuracy of false claims from state media outlets than previous research suggests. On the other hand, our results reinforce the finding that fact-checks are effective at combating misinformation.
The study imitated Twitter's labeling format completely, adding a small "state-affiliated media" label under the account handle and/or a "false information: Checked by independent fact-checkers" warning below the post content.

As you can see below, the state media label had essentially no effect on the perceived accuracy of the underlying tweet, while the fact check label reduced it by ~10%. This is not terribly surprising in principle: state-affiliation might even provide a sheen of respectability and resources amid a miasma of creators you've never heard of before. But X and other platforms explicitly used these labels as a halfway solution to avoid fact-checking individual posts while warning users that the content may not be all that trustworthy. In that sense, at least, this study suggests the solution doesn't work.

📇 PsyArXiv | February 2025 | Thomas H. Costello, Gordon Pennycook, and David Rand
You may remember the paper that found that a three-round conversation with an AI chatbot reduced conspiracy beliefs among US study participants. The paper's authors are out with a preprint exploring why the intervention might have been so effective, which the lead author summarized in one word: "facts".
In this study, the researchers tried a range of communication styles, including framing the conversations as a debate or an attempt to change the conspiracy believer's mind. While there was some variation in the efficacy of each intervention, only one of them was significantly worse – the conversations that didn't provide any evidence to contradict the conspiracy belief (see chart below). I'm not surprised the no-evidence interventions didn't work, given the examples are barely even rebuttals. But the point here is that nothing other than omitting the facts appears to prevent the fact-checking exchanges from having a noticeable impact, at least in this experimental setup.

📇 arXiv | October 2024 | Mitchell Linegar, Betsy Sinclair, Sander van der Linden, and R. Michael Alvarez
In a preprint, Linegar et al. test the efficacy of an AI-generated “prebunking” article on five common US election falsehoods. On average, prebunks reduced belief in these myths by 0.5 points on a 10-point scale, while also improving confidence in election integrity. Overall, Democrats and Republicans did have differing belief levels in the misinformation — but preemptive corrections worked regardless of party affiliation.

📇 Scientific Reports | September 2024 | Hendrik Bruns, François J. Dessart, Michał Krawczyk, Stephan Lewandowsky, Myrto Pantazi, Gordon Pennycook, Philipp Schmid & Laura Smillie
This paper looked at corrections in the context of misinformation about COVID-19 or climate change in Germany, Greece, Ireland, and Poland. The paper tested two different variables: the timing of the correction (before or after the user was served an article making false claims about climate change) and the correction’s source (either absent, or attributed to the European Commission).

The authors conclude that for most conditions tested and in most locales, corrections worked (see aggregate results below). Overall, debunking had a slightly bigger effect than prebunking. The effect was pretty large, too, reducing strong agreement with the main false claims by almost half. Greater trust in the EU was also associated with an increased acceptance of the correction attributed to the Commission, though the authors caution that study design may have affected this finding.

📇 Nature Human Behavior | September 2024 | Cameron Martel and David G. Rand
Important takeaway in this Nature Human Behavior study for those designing anti-misinformation interventions in online spaces.

Fact check warning labels reduce Americans’ propensity to believe or share false news despite differential trust in fact-checkers across political persuasions. As you can see in this chart, trust in fact-checkers is higher among Democratic voters:
But even when trust in fact-checkers is lowest (left on the x-axis below), fact checks still reduced the perceived accuracy of the labeled false news.

The chart looks the same for sharing intentions. Even though higher trust does lead on average to a higher reduction in sharing labeled false news, the effect is consistent across the board.

📇 Science | September 2024 | Thomas H. Costello, Gordon Pennycook, and David G. Rand
A three-round conversation with ChatGPT reduced belief in a conspiracy theory of choice by an average of 20%. That’s…pretty good! The effect was observed across a wide range of conspiracy theories and persisted even two months after the intervention.

You can get a good sense of the study design in the graphic below. I am inclined to agree with the authors’ assessment that the reason the intervention was so successful is that the LLM could tailor its responses to the unique reasons each participant had to believe in the conspiracy theory. You can also play around with their AI fact-checker at debunkbot.com.

📇 arXiv | September 2024 | Yuwei Chuai, Moritz Pilarski, Thomas Renault, David Restrepo-Amariles, Aurore Troussel-Clément, Gabriele Lenzini, and Nicolas Pröllochs
This preprint by researchers in Luxembourg, France and Germany finds that Community Notes on X reduced the spread of the tweets they were attached to by up to 62 percent and doubled those tweets' chance of being deleted. The study also found that the labels typically came too late to affect the overall virality of the post. (This is a bit of a chicken-and-egg problem: a viral fake is more likely to be seen by people who can debunk it.)

📇 Misinformation Review | September 2024 | John C. Blanchar, Catherine J. Norris
This peer-reviewed paper found that the “disputed” labels that (then) Twitter was appending to false claims of election fraud increased belief in the false claim by Trump supporters. It’s worth noting that this was a survey, rather than an analysis of platform data, and that no information beyond the label was provided.

📇 Journal of Online Trust & Safety | September 2023 | Andy Zhao, Mor Naaman
This is a great deep dive into the efficacy of the Taiwanese crowdsourced fact-checking website CoFacts. The study concludes that by and large … it works? While CoFacts covers slightly different topics from the professionally-staffed websites MyGoPen and Taiwan FactCheck Center, it does so at much greater scale and speed. Moreover, disagreements on the ratings of identical claims were rare.

📇 Perspectives on Psychological Science | August 2023 | Cameron Martel, Jennifer Allen, Gordon Pennycook, and David G. Rand
This review of several studies on the efficacy of crowdsourced fact-checking concludes that “current evidence supports the promise of leveraging the wisdom of crowds to identify misinformation via aggregate layperson evaluations.”
📇 CHI ‘22 | April 2022 | Jennifer Allen, Cameron Martel, David G Rand
This study concludes that users of Birdwatch (now Community Notes) tend to rate counter partisans more negatively than those whose party affiliation they share. While the researchers note that “the preferential flagging of counter-partisan tweets we observe does not necessarily impair Birdwatch’s ability to identify misleading content,” it is nonetheless striking when compared to “other theoretically relevant features, like the number of sources cited in the note.”

📇 PNAS | July 2021 | Ethan Porter and Thomas J. Wood
This study is invaluable in looking at the effect of misinformation and related corrections across different locales. The researchers tested 22 fact checks on at least 1,000 respondents in each of Argentina, Nigeria, South Africa and the United Kingdom. They found that “every fact-check produced more accurate beliefs,” even when the topic of misinformation was politically charged. The study helpfully tested two identical fact checks across the four countries to correct for any confounding factors, and found this remained true (see chart below).

📇 Political Communication | October 2019 | Nathan Walter, Jonathan Cohen, R. Lance Holbert, and Yasmin Morag
This meta-analysis of findings on the impact of fact-checking contains my go-to citation about what we know about this field:
Simply put, the beliefs of the average individual become more accurate and factually consistent, even after a single exposure to a fact-checking message. To be sure, effects are heterogeneous and various contingencies can be applied, but when compared to equivalent control conditions, exposure to fact-checking carries positive influence.
However, the results also raise substantial concerns. In line with the motivated reasoning literature (Kunda, 1990; Nir, 2011), the effects of fact-checking on beliefs are quite weak and gradually become negligible the more the study design resembles a real-world scenario of exposure to fact-checking.
📇 Journal of Public Economics | July 2019 | Oscar Barrera Rodríguez, Sergei Guriev, Emeric Henry and Ekaterina Zhuravskaya
[excerpted from an article originally published on Poynter]
This study found that providing factual information on immigration improved French voters’ understanding but didn’t reduce their likelihood to vote for the fact-checked politician. This finding is in line with an earlier study conducted on American voters.
The researchers surveyed 2,480 French individuals online in four regions where the far-right Front National party (FN) had done best in last year’s regional elections. The sample was otherwise representative of the French population in terms of age, gender and population.
Respondents were put into one of four groups. The first group received false claims on immigration made by Marine Le Pen, the FN’s presidential candidate. The second group obtained statistics on the same issues. The other two groups were given both or neither, respectively.
Across all groups, the researchers tested respondents’ understanding of the facts, their support for Le Pen on immigration and their voting intentions.
The variation of factual understanding among these four treatments is immediately clear. In one of the three claims tested, Le Pen used photos from the migration influx into Germany and Hungary to claim that 99 percent of refugees were men. UN stats indicate that the actual share of adult males among migrants coming into Europe from the Mediterranean was 58 percent.
In the graph below, respondents are divided into deciles, and the correct answer marked with a red vertical line. The “informed” group correctly determined the share of men more than 60 percent of the time. Individuals given no information or the false claims by Le Pen were far more likely to suggest a higher percentage.
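The decile comparison described above can be sketched in a few lines. The responses below are randomly simulated stand-ins for the survey data, and the group means are invented purely for illustration, not taken from the study.

```python
# Bin each group's estimates of the share of adult men among migrants
# arriving via the Mediterranean (true value: 58%, per the UN statistics
# cited above) into 10-point deciles and compare the distributions.
# All responses are simulated; the real study used survey answers.
import random

random.seed(42)
TRUE_SHARE = 58  # percent

groups = {
    "informed": [min(99, max(0, random.gauss(60, 8))) for _ in range(1000)],
    "le_pen_claim_only": [min(99, max(0, random.gauss(85, 10))) for _ in range(1000)],
}

def decile_counts(answers):
    """Count answers in each 10-point bin: [0,10), [10,20), ..., [90,100)."""
    counts = [0] * 10
    for a in answers:
        counts[min(int(a // 10), 9)] += 1
    return counts

for name, answers in groups.items():
    counts = decile_counts(answers)
    true_bin = TRUE_SHARE // 10  # the decile containing the true value
    share_near_truth = counts[true_bin] / len(answers)
    print(f"{name}: {share_near_truth:.0%} landed in the decile containing 58%")
```

With simulated group means like these, the "informed" group clusters near the true decile while the misinformation-only group lands far above it, mirroring the pattern the researchers describe.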

Overall, knowledge of the facts was negatively affected when respondents only read Le Pen’s claims but improved when they were offered the facts alone or both the facts and Le Pen’s claims.
📇 PNAS | April 2019 | Joshua Becker, Ethan Porter, and Damon Centola
In this study, groups of 35 Democratic and Republican voters recruited on Amazon Mechanical Turk were asked to respond over three rounds to four factual questions known to elicit partisan responses (e.g. “What was the unemployment rate in the last month of Barack Obama’s presidential administration?”). In some conditions, they were given the average responses of four other participants, not knowing that these were their co-partisans. Even though significant partisan differences remained, participants in the ‘crowd’ condition showed improved accuracy across the board (see chart). It is important to note that compensation was tied to the accuracy of a final answer, so respondents had a financial incentive to be correct.
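The "crowd" condition can be modeled as a simple averaging process. The revision weight, number of rounds, and starting estimates below are all invented for illustration; they are not taken from the study, which showed each participant the mean of four peers rather than the whole group.

```python
# Toy model of the crowd condition described above: over three rounds,
# each participant sees the group's mean answer and revises partway
# toward it. Weight and starting estimates are invented placeholders.
def revise(answers, weight=0.5, rounds=3):
    """Pull every answer a fixed fraction of the way toward the mean."""
    for _ in range(rounds):
        mean = sum(answers) / len(answers)
        answers = [a + weight * (mean - a) for a in answers]
    return answers

# Hypothetical estimates (%) of the unemployment rate in Obama's last
# month in office; the true figure was 4.8%.
estimates = [3.0, 4.0, 6.5, 8.0, 9.5]
revised = revise(estimates)

# The group mean is preserved while individual answers converge on it,
# so individual accuracy improves whenever the crowd's mean beats a
# lone guess.
print([round(a, 2) for a in revised])
```

This also illustrates why partisan gaps can persist even as accuracy improves: each partisan group converges on its own mean, not on the truth.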

📇 Political Behavior | January 2019 | Brendan Nyhan, Ethan Porter, Jason Reifler & Thomas J. Wood
This study looked at the response of US-based respondents to two fact checks about crime and unemployment during the 2016 election. It concludes that “people express more factually accurate beliefs after exposure to fact-checks.” As you can see in the chart below, belief that crime had increased in America (which was not true at the time of Trump’s fact-checked statement) decreased for both Clinton and Trump voters after they were exposed to a fact check. These effects even held up after respondents were also exposed to denials by the Trump campaign and the candidate himself.

The role of familiarity in correcting inaccurate information
📇 Journal of Experimental Psychology: Learning, Memory, and Cognition | May 2017 | Briony Swire, Ullrich K H Ecker, and Stephan Lewandowsky
[excerpted from an article originally published on Poynter]
This study looked at the influence of different variables on the effectiveness of a correction in light of the familiarity effect. The researchers concluded that more detailed explanations help people remember corrections longer and that individuals over 65 are comparatively worse at holding on to corrective information (i.e. likelier to misremember a myth as a fact).
Researchers used a pilot group to select claims — both true and false — that were “common and at least midrange believable.” They then presented participants with either a brief affirmation/retraction or a more detailed one. The result? Belief in facts shot up while belief in myths tumbled (see chart below).
The effect of the correction on belief in a myth did wear off a little over time, however. “Belief change was more sustained after a fact affirmation compared with myth retraction,” the researchers write, noting that “this asymmetry could be partially explained by familiarity.” It is also worth noting that among participants over 65, the regression was significantly greater.

The beneficial effect of a correction was recorded not just when participants were asked explicitly to rate their belief in a claim, but also when these were asked an “inference question.” (For the myth that you can tell if someone is lying through physical tells like eye movement, the inference question was “What percentage of lies can FBI detectives catch just by looking at physical tells?”).
The psychologists believe there are some practical lessons that can be gleaned from their findings.
First, detailed fact checks are more effective. In the study’s case, detailed explanations were a mere three or four sentences long. This is a relatively low bar for fact-checkers to clear. But it does suggest that headlines or tweets alone may not be as effective in correcting a misperception.
Second, fact-checkers may have to repeat their corrections frequently in order to counteract the regression towards thinking a myth is a fact that we experience over time. The study acknowledges this recommendation is “somewhat ironic,” but making a correction more familiar to readers might be worth the drawback of reminding them of the underlying myth.
📇 Political Behavior | January 2018 | Thomas Wood & Ethan Porter
[excerpted from an article originally published on Poynter]
This paper fails to replicate — and ultimately contradicts — one of the most cited findings on the relationship between facts and partisan beliefs, namely the “backfire effect.”
The study showed 8,100 subjects corrections to claims made by political figures on 36 different topics. Only on one of the 36 issues (the misperception that WMD were found in Iraq) did they detect a backfire effect. Even then, a simpler phrasing of the same correction led to no backfire. The paper’s co-authors conclude that “by and large, citizens heed factual information, even when such information challenges their partisan and ideological commitments.”

📇 Royal Society Open Science | March 2017 | Briony Swire, Adam J. Berinsky, Stephan Lewandowsky, and Ullrich K. H. Ecker
[excerpted from an article originally published on Poynter]
This study focused on statements — both factual and inaccurate — made by Donald Trump during the Republican primary campaign. The basic conclusion of the study is that fact-checking changes people’s minds but not their votes.
The authors presented 2,023 participants on Amazon’s Mechanical Turk with four inaccurate statements and four factual statements (see full list here). Misinformation items included Trump’s claim that unemployment was as high as 42 percent and that vaccines cause autism.
These claims were presented two ways: unattributed or clearly indicating that they were uttered by Trump. Participants were asked to rate whether they believed each of them on a scale of zero to 10.
Each falsehood was then corrected (or confirmed) with reference to a nonpartisan source like the Bureau of Labor Statistics. Participants were then asked to rate their belief in that claim again, either immediately or a week later.
The results are clear: Regardless of partisan preference, belief in Trump falsehoods fell after these were corrected (see dotted lines below). The belief score fell significantly for Trump-supporting Republicans, Republicans favoring other candidates and Democrats.

There’s more good news for fact-lovers in the research. Generally speaking, misinformation items were from the start less believed than factual claims. Moreover, the research found no evidence of a “backfire effect,” i.e. an increase in inaccurate beliefs post-correction.
The rub, of course, is that Trump supporters were just as likely to vote for their candidate after being exposed to his inaccuracies. The paper found that voting preferences didn’t vary among Republicans who didn’t support Trump, either. Only Democrats said they were even less likely to vote Trump.