
Inside a pro-Conservative influence operation on Community Notes

This group of X accounts tried to keep Tory tweets fact check-free

Alexios Mantzarlis

Apr 14, 2026


A group of X users worked together during the 2024 UK election to remove corrective Community Notes from Conservative Party accounts, according to a new Indicator analysis.

The coordinated effort appears to have resulted in a higher-than-usual reduction in visible notes on 84 tweets by then-Prime Minister Rishi Sunak and other accounts tied to the Conservative Party and its candidates. It’s unclear who was behind the operation, which may be the first documented case of systematic manipulation of Community Notes for political purposes.

Touted by X and adopted by Meta and TikTok, Community Notes is a crowdsourced program that allows users to attach context to potentially misleading tweets. The feature has a defense mechanism against partisan coordination in the form of a “bridging” algorithm, which requires that people who have previously disagreed with each other rate a note helpful before it is displayed on a tweet.
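The bridging idea can be sketched with a toy rule. This is a deliberate simplification, not X's actual ranking algorithm (which uses matrix factorization over the full rating history to place raters on a learned viewpoint axis); the function name, threshold, and viewpoint encoding here are all illustrative assumptions.

```python
# Toy model of a "bridging" rule: a note only becomes visible if
# raters who usually disagree both rate it helpful. NOT X's actual
# algorithm, which learns rater viewpoints via matrix factorization.

def note_is_visible(ratings, min_per_side=2):
    """ratings: list of (rater_viewpoint, is_helpful) tuples, where
    rater_viewpoint is -1 or +1 on a single hypothetical axis."""
    helpful_left = sum(1 for v, h in ratings if v < 0 and h)
    helpful_right = sum(1 for v, h in ratings if v > 0 and h)
    # Require helpful ratings from BOTH sides of the axis.
    return helpful_left >= min_per_side and helpful_right >= min_per_side

# A note rated helpful by only one side stays hidden:
one_sided = [(-1, True), (-1, True), (-1, True), (1, False)]
print(note_is_visible(one_sided))   # False

# A note that "bridges" the divide is shown:
bridging = [(-1, True), (-1, True), (1, True), (1, True)]
print(note_is_visible(bridging))    # True
```

The flip side of this consensus requirement is the vulnerability the article describes: a small group that reliably rates targeted notes unhelpful can keep them from ever clearing the bar.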

The pro-Conservative network seems to have determined it could manipulate the consensus requirement in order to hide notes, thereby removing fact checks.

This operation wasn't always incorrect in its arguments, nor was it successful in every effort to remove notes. But the campaign shows that there is a political appetite to suppress Community Notes.

The group was composed of five accounts, two of which were active almost exclusively during the 2024 election. The five accounts regularly posted “No Note Needed” notes on tweets they wanted to protect from fact checks, sometimes using the exact same text. (This type of note signals that a tweet doesn’t need a correction because it is not misleading.) The group also downvoted 97% of corrective notes on the same tweets.

The high proportion of notes targeted by the pro-Tory network that ultimately lost their helpful rating suggests that even a small group can be relatively effective in influencing the Community Notes system.

This aligns with recent findings by researchers at IU Bloomington who used synthetic data to model the effect of bad ratings on visible notes. Their preprint concludes that “a small minority (5–20%) of bad raters can strategically suppress targeted helpful notes, effectively censoring reliable information.”
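The dynamic the preprint describes can be illustrated with a crude simulation. The visibility rule, rater counts, and probabilities below are all invented for illustration and are much simpler than the researchers' model: assume a note stays visible while its helpful-rating share exceeds a threshold, then add a small coordinated group that always rates targeted notes unhelpful.

```python
import random

# Toy simulation (not the IU Bloomington model): a note is visible
# if its helpful-rating share meets a threshold. A small coordinated
# group adds only unhelpful ratings to targeted notes; we count how
# many previously visible notes drop below the threshold.

def visible(helpful, total, threshold=0.7):
    return total > 0 and helpful / total >= threshold

random.seed(0)
trials = 1000
suppressed = 0
for _ in range(trials):
    honest = 20  # honest raters per note
    # Each honest rater finds the note helpful with probability 0.8.
    helpful = sum(random.random() < 0.8 for _ in range(honest))
    bad = 3      # ~13% of raters are coordinated downvoters
    before = visible(helpful, honest)
    after = visible(helpful, honest + bad)  # bad raters add 0 helpful
    if before and not after:
        suppressed += 1

print(f"{suppressed / trials:.0%} of visible notes suppressed")
```

Even this crude setup shows how a minority of raters can tip marginally helpful notes below the visibility cutoff, which is consistent with the 5–20% range the preprint reports.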

X’s algorithm is “bad at inferring people’s bias,” said Alexander Stewart, an associate professor of informatics at IU Bloomington and one of the authors of the study.

Steve Nowottny, editor of the British fact-checking website Full Fact, told Indicator that "there have always been fears that crowdsourced systems could be gamed by those seeking to manipulate them. And the fact that Community Notes depend on consensus means there is a risk more polarising political claims may be less likely to be annotated, or that useful notes may be hidden in the face of partisan opposition, whether coordinated or not. That makes them a fragile safeguard at best."

Indicator’s findings suggest that politically motivated actors might be able to benefit from this shortcoming. Requests for comment to the Conservative and Labour Party press offices, as well as to X, went unanswered by publication time.
