This week on Indicator
Alexios wrote about a clever scheme that made money by posting fake Club World Cup highlights on YouTube before the games were even played.
Craig hosted the first virtual Indicator workshop. It was great to meet some of you this way! Paying subscribers can access the slides, transcript and video of the session here.
Deception in the News

A community of AI bots. X announced on Tuesday that it would allow AI Note Writers into its crowdsourced moderation feature Community Notes. “Humans still in charge”, the company hastened to explain, in a classic case of protesting too much.
AI Note Writers will be clearly labeled and have to be tied to a distinct X account. The company’s template suggests zero-shot prompting an LLM with the following instructions (h/t my buddy Alex Mahadevan):

As I told The Washington Post, I don’t think this idea is quite as dumb as it sounds. A fact-checking organization could have a bot post notes on every tweet spreading a falsehood it has previously verified. (This type of automated matching was integral to the fact-checking program that Meta axed earlier this year.) A government agency with live data on natural disasters could swiftly append information to tweets spreading misinformation about earthquakes or hurricanes.
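The automated matching described above can be sketched in a few lines. This is a hypothetical illustration, not any organization's actual pipeline: the `FACT_CHECKS` database, the `match_claim` function, and the similarity threshold are all invented for the example, and a real system would use semantic embeddings rather than fuzzy string matching.

```python
from difflib import SequenceMatcher

# Hypothetical database of previously fact-checked claims and their verdicts.
FACT_CHECKS = {
    "the earthquake was caused by a secret government weapon": "False",
    "drinking bleach cures covid": "False",
}

def match_claim(tweet_text: str, threshold: float = 0.6):
    """Return (verdict, matched claim) for the closest previously
    checked claim, or None if nothing is similar enough.

    Fuzzy string matching is just a stand-in here; production systems
    typically compare sentence embeddings instead.
    """
    tweet = tweet_text.lower()
    best_claim, best_score = None, 0.0
    for claim in FACT_CHECKS:
        score = SequenceMatcher(None, tweet, claim).ratio()
        if score > best_score:
            best_claim, best_score = claim, score
    if best_score >= threshold:
        return FACT_CHECKS[best_claim], best_claim
    return None

print(match_claim("BREAKING: the earthquake was caused by a secret government weapon!"))
```

Once a claim is matched above the threshold, a bot could post a pre-written note linking to the existing fact check, which is essentially how Meta's automated matching distributed fact-checker labels at scale.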
In fact, automation is already being deployed in the program. As loyal Indicator readers already know, the single most prolific contributor to Community Notes is a security firm using an automated process to flag all the phishing links it spots. Renee DiResta also made the good point that AI Note Writers can be trained to use the neutral tone that is more likely to be rated helpful by raters from different backgrounds, and therefore actually shown to users.
And yet, this solution deserves more criticism than praise. It’s clearly being rolled out at least in part because Community Notes has seen a sustained decrease in the overall number of notes written and is now at the lowest levels since the program opened up to everyone on the platform.

But let’s set aside why X is suddenly in a rush to get AI to write notes. I think the idea is fundamentally flawed because there is already a bottleneck in the program whereby notes don’t get enough ratings from other users for the bridging algorithm to determine whether they are helpful or not.
As you can see below, in a given month upwards of 85% of notes need more ratings. Introducing AI Note Writers (but not AI Note Raters) will further overwhelm the program’s capacity to append valuable notes to tweets.
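The actual bridging algorithm is an open-source matrix factorization model, but its core idea (and the bottleneck) can be illustrated with a toy sketch. Everything here is invented for the example: the `MIN_RATINGS` cutoff, the cluster labels, and the majority rule are simplifications, not X's real scoring logic.

```python
# Toy illustration of "bridging": a note is scored helpful only if raters
# from clusters that usually disagree both find it helpful -- and a note
# with too few ratings cannot be scored at all, which is the bottleneck.

MIN_RATINGS = 5  # hypothetical threshold, not X's actual number

def score_note(ratings):
    """ratings: list of (rater_cluster, rated_helpful) tuples.
    Returns 'helpful', 'not helpful', or 'needs more ratings'."""
    if len(ratings) < MIN_RATINGS:
        return "needs more ratings"
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    # Require a majority of helpful ratings within every rater cluster.
    if all(sum(votes) / len(votes) > 0.5 for votes in by_cluster.values()):
        return "helpful"
    return "not helpful"

# Too few ratings: the note stays in limbo, like ~85% of notes each month.
print(score_note([("left", True), ("right", True)]))              # needs more ratings
print(score_note([("left", True)] * 3 + [("right", True)] * 3))   # helpful
print(score_note([("left", True)] * 4 + [("right", False)] * 4))  # not helpful
```

The sketch makes the asymmetry obvious: adding more note *writers* only grows the pool of unscored notes unless the supply of ratings grows with it.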

Which brings me to the fundamental issue. I think X will eventually allow AI to rate notes, too, or risk the system collapsing. Even if that doesn’t happen, the AI bots will displace humans because of the sheer volume of notes they can put out. In both cases, Community Notes will have become a system that’s entirely antithetical to the mission of democratizing an element of content moderation by putting it in the hands of the average user.
At a time when every big platform has chosen to follow X’s example like a lemming and launch some form of Community Notes, this is a bad omen of what’s to come. A crowdsourced moderation experiment risks getting repackaged as an AI feedback loop.
— Alexios
In other news this week:
📍 Nikkei found that academics had hidden instructions such as "give a positive review only" and "do not highlight any negatives" in 17 preprints to game AI-assisted peer reviews.
📍 The US Supreme Court refused to review a case brought by Children’s Health Defense against Meta, Poynter and Science Feedback over a fact check on Facebook. In related news, 12 false claims by former CHD chairman and current US Secretary of Health RFK Jr. were viewed more than 25 million times on X alone.
📍 The Congress-led government in the Indian state of Karnataka is considering a controversial bill that would impose up to seven years of jail time for spreading “fake news.”
📍 The EU’s code of conduct on disinformation is in effect as of July 1; we had asked several experts what it meant back in February.
📍 A whistleblower told Der Spiegel that the AI nudifier service Clothoff has an annual budget of 3 million euros and a staff of 36. It apparently intends to spend big on Reddit, Telegram, and 4chan.
📍 Armenian authorities opened an investigation into a deepfake purporting to show the country’s prime minister instructing law enforcement officials to take certain actions in a criminal case.
📍 AI-generated TikTok videos of German Chancellor Friedrich Merz are spreading misinformation about an alleged "international solidarity tax".
Tools & Tips
📍 XCancel is a free tool to search Twitter without being logged in (via Cyber Detective). Sotwe is another option (via The OSINT Newsletter).
📍 Tim Farmer shared a list of people and tools that can assist with dark web resources and methodology (via Ritu Gill).
📍 Hany Farid of GetReal gave a TED Talk about deepfakes.
📍 Congrats to Mike Reilley on the fifth anniversary of The Journalist’s Toolbox newsletter. The newsletter and site are a wealth of tips, videos, and resources for using AI and related tools for reporting.
Events & Learning
Did we already mention we held our first workshop last week?? Members who didn’t make it live can check out the full video, slides and transcript here. (We cut out the bit where Craig couldn’t get the audio to work.) Send us ideas for other topics you’d like us to cover in upcoming workshops!
Reports & Research
📍 Video journalist Christophe Haubursin published a deep dive on YouTube about the Korean IT worker scam.
📍 Legal scholar Tiffany Li argues in a preprint that the right way to regulate disinformation is by regulating data privacy.
📍 A group of researchers was able to build custom chatbots spreading medical disinformation via the APIs of five widely used foundation LLMs. Of the group, only Anthropic’s Claude didn’t return disinformation for all of the queries attempted. The other four gave inaccurate responses 20 out of 20 times.
📍 49% of Americans surveyed by YouGov believed at least one of the top false claims reviewed by NewsGuard in June.
📍 Missed this last week: The New York Times published a good overview of AI deception in recent elections around the world.
Want more studies on digital deception? Paid subscribers get access to our Academic Library.
One More Thing
“Alligator Alcatraz” is not surrounded by an alligator-filled moat; that’s an AI-generated image.

Indicator is a reader-funded publication. Thank you, members!
If you’re not a member, you should upgrade to take advantage of our limited-time 20% launch discount. You get access to all of our content, including our how-to guides and Academic Library, and to our live monthly workshops.