Our weekly Briefing is free for subscribers, but you should upgrade to access all of our reporting, resources, and a monthly workshop. We’re offering a limited-time 20% discount. Join now so you can attend our June 27 workshop, which will focus on techniques for investigating online reviews and a no-code way to grab data from Facebook, YouTube, and other platforms.
This week on Indicator
Craig revealed that Diddy AI slop has become a popular niche, and a source of false information, on YouTube. He identified 26 channels that earned 70 million views from over 900 Diddy videos. After being contacted, YouTube terminated 16 channels and demonetized others.
Meta hasn’t shared any updates since launching its pilot of Community Notes in March. So Alexios tried to piece together how it’s going with help from three Indicator readers. He found some valuable notes on out-of-context and AI-generated material, as well as some hilarious fails.
If YOU are in Meta’s Community Notes pilot and want to help us write more stories about this feature, fill out this form!
Deception in the news

This is AI. This is not Tel Aviv.
📍 Iranian and Israeli leaders posted grotesque AI-generated propaganda on social media as the two countries engage in military conflict. Media outlets are doing it, too. The daily newspaper Tehran Times didn’t even bother cropping out Google’s Veo watermark before sharing an AI-generated video of a purported Iranian strike on Tel Aviv.
📍The Trump-voting and InfoWars-listening man accused of killing two Minnesota legislators was falsely framed as a Democratic “associate” of Governor Tim Walz. NBC News reconstructed how the narrative was created by right-wing pundits and amplified by Elon Musk and two Republican U.S. Senators.
📍 Cam Wilson at Crikey revealed that anti-vaccine activists are creating custom chatbots on Meta and ChatGPT to spread pseudoscience and conspiracy theories. “Meta offers an AI made by user @vaccinesdontwork that suggests pseudoscience ways to ‘detox’ from things like electromagnetic frequencies,” he wrote. Meta subsequently removed the chatbot.
📍 The latest must-read from reporter Kashmir Hill looks at people who experienced mental health challenges, and in some cases committed violence, after interacting with chatbots.
📍 Italy’s communications regulator is investigating whether DeepSeek is doing enough to warn users about hallucinations (Reuters | AGCOM).
📍 A New Hampshire jury acquitted the consultant behind an AI-generated Joe Biden robocall in that state’s primary; the telecommunications firm responsible for disseminating it settled with the state in August and paid a $1 million fine.
📍Brazil's federal police confirmed that fact-checkers Aos Fatos and Lupa were monitored as part of a clandestine surveillance scheme coordinated by allies of former president Jair Bolsonaro.
The following is an ad. We accept a small proportion of the ads that are pitched to us. Paid subscribers have access to all of our content ad-free.
Looking for unbiased, fact-based news? Join 1440 today.
Join over 4 million Americans who start their day with 1440 – your daily digest for unbiased, fact-centric news. From politics to sports, we cover it all by analyzing over 100 sources. Our concise, 5-minute read lands in your inbox each morning at no cost. Experience news without the noise; let 1440 help you make up your own mind. Sign up now and invite your friends and family to be part of the informed.
Tools & tips
📍 PageCached lets you quickly check if a URL has been saved in search engine caches and web archives like the Wayback Machine and Archive.today. (via Cyber Detective)
📍Cyber Detective also shared a method to determine when an X account’s banner image was last updated.
📍 Hugging Face released a new version of its AI Essential Toolkit for Journalists and Content Creators. Florent Daudens said it includes “AI-powered spreadsheet tools that feel like magic,” “Image analysis tools that can extract structured data from photos,” and “Fresh evaluation methods for fact-checking AI outputs,” among other tools.
📍 The Global Investigative Journalism Network published a fascinating case study of how a group of Reuters journalists reported their Pulitzer-winning series, Fentanyl Express.
📍 Mike Caulfield has been testing how LLMs can be used for fact-checking. He released a custom GPT called Deep Background that he describes as “a non-hallucinating rigorous AI-based fact-checker that anyone can use for free.”
📍Matthew DeFour of Wisconsin Watch described how the newsroom uses Parser, an AI-based tool from Gigafact, to “help analyze the thousands of hours of podcasts, social media videos and talk radio programs that could be spreading misinformation every day.” The tool generates a transcript and extracts potentially checkable claims. Gigafact is a nonprofit that works with local newsrooms.
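To make concrete what the claim-extraction step in a tool like Parser involves, here is a toy sketch in Python. It is not Gigafact’s implementation; the sentence splitter and cue list are invented for illustration.

```python
# Toy sketch of a transcript -> checkable-claims filter.
# NOT Gigafact's Parser; the cues below are invented for this example.
import re

# Flag sentences containing statistics or factual-assertion cues.
CLAIM_CUES = re.compile(
    r"\b\d[\d,.]*%?"                                          # numbers, percentages
    r"|\b(study|studies|according to|research shows|causes?)\b",
    re.IGNORECASE,
)

def extract_checkable_claims(transcript: str) -> list[str]:
    """Split a transcript into sentences and keep ones with checkable cues."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    return [s for s in sentences if CLAIM_CUES.search(s)]

transcript = (
    "Welcome back to the show. A new study says 40% of adults skip breakfast. "
    "I think that's interesting. Vaccines cause autism, according to my guest."
)
for claim in extract_checkable_claims(transcript):
    print(claim)  # prints the two sentences carrying factual claims
```

A production tool would use a speech-to-text model for the transcript and a trained classifier rather than regex cues, but the pipeline shape is similar: transcribe, segment, filter down to statements a fact-checker could verify.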
Only 52 out of Elon Musk’s 12,698 tweets since 2022 have a “helpful” community note

In February, Elon Musk complained that Community Notes "is increasingly being gamed by governments & legacy media." I’ve been trying to figure out if that led to any suspicious or otherwise notable changes to the program. I’m still researching, but the exercise did help me highlight how rarely notes are applied to Musk’s tweets, despite the X owner being a frequent spreader of falsehoods.
Musk posted an extraordinary 12,698 tweets between January 2022 and May 12, 2025. They attracted 16,745 proposed notes. While almost 6,000 of these were “NNNs,” proposals from users arguing that no note was needed, that still means Musk’s tweets got more than 10,000 proposed notes in just over three years.
Of those, only 68 were actually rated “helpful” by X’s bridging algorithm, which takes into account user upvotes and raters’ prior partisanship. That means roughly 0.5% of all notes proposed on an Elon Musk tweet were actually appended to it. Put another way, 995 out of every 1,000 proposed notes got rejected. That acceptance rate is far lower than the program’s average, which currently hovers around 10%, per the Community Notes Monitor.
And because some Musk tweets have more than one helpful note, only 52 of his 12,698 tweets currently have a note rated helpful.
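The arithmetic behind these percentages can be reproduced in a few lines. The figures are the ones reported above; this is just a sanity check, not an analysis of the raw Community Notes data.

```python
# Figures reported in this analysis (Jan 2022 - May 12, 2025).
total_tweets = 12_698      # Musk tweets in the period
proposed_notes = 16_745    # notes proposed on those tweets
helpful_notes = 68         # notes rated helpful by the bridging algorithm
tweets_with_note = 52      # distinct tweets with a currently helpful note

acceptance_rate = helpful_notes / proposed_notes
print(f"Acceptance rate: {acceptance_rate:.2%}")        # 0.41%, i.e. roughly 0.5%
print(f"Rejected per 1,000: {1000 * (1 - acceptance_rate):.0f}")
print(f"Tweets with a helpful note: {tweets_with_note / total_tweets:.2%}")
```

(The exact rejection figure is closer to 996 per 1,000; the “995” above reflects rounding the acceptance rate to roughly 0.5%.)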
Many of the notes don’t feel particularly urgent. Almost a third are about X or Grok. Several are on tweets where he literally called for a Community Note.
The share of Musk tweets with a helpful note has also been decreasing over time. In 2022, 1.4% of Elon’s tweets had a note. The proportion fell in 2023, after his takeover of the platform, and was as low as 0.25% in 2024. — Alexios (special thanks to Mohsen Mosleh and Alex Mahadevan for their help accessing some data)
Events & Learning
📍Indicator’s first members-only workshop will be held June 27 at 12 pm ET. The hands-on session will look at applying the Actor, Behavior, Content framework to analyze online reviews for signs of inauthenticity and using the free Instant Data Scraper browser extension to grab data from online platforms like Facebook and YouTube. Plus we’ll check out some of the recent tools featured in the Briefing. Upgrade now so you can join and access the recording and transcript.
📍 The 2nd European Congress on Disinformation and Fact-Checking will be held in Madrid on October 29 and 30. It’s a hybrid conference and the call for papers is here.
📍 CSCW is hosting an online and in-person workshop called “Beyond Information: Online Participatory Culture and Information Disorder.” The deadline for submissions is August 1.
📍On Tuesday, Alexios will be on a virtual panel on AI Slop organized by the Content Authenticity Initiative. The other panelists are Bilva Chandra from Google DeepMind, Henry Ajder of Latent Space and Sid Venkataramakrishnan of the Institute for Strategic Dialogue.
📍On Saturday, Craig will present about “How to investigate digital threats” at the Investigative Reporters & Editors conference in New Orleans. The other presenter is Anna Massoglia from Influence Brief.
Reports & research
📍 Hany Farid, a pioneer in image forensics and the cofounder of GetReal Security, wrote a chapter in a new book, “Ignorance Unmasked: Essays in the New Science of Agnotology.” In a LinkedIn post, Farid said his chapter argues that predictive AI and generative AI “hold the potential to toss jet fuel onto an already troubling level of technology-fueled ignorance, hate, and distrust.” You can read the full chapter here.
📍 The 2025 Digital News Report from the Reuters Institute is out and it has a dedicated section on online misinformation.
📍In Nature Human Behaviour, Claire Wardle and David Scales propose shifting away from an overreliance on social listening as a mechanism for assessing the health of online information ecosystems. Instead, they propose integrating the kind of community-based engagement that is more common in public health monitoring.
📍Friend of Indicator Andy Dudfield flagged on LinkedIn that last week may have been the first time Full Fact fact-checked more AI-generated content than genuine media taken out of context.
Want more studies on digital deception? Paid subscribers get access to our Academic Library with 52 categorized and summarized studies:
One more thing: MIT researcher sets an AI trap
Researchers at the MIT Media Lab released a new draft paper, “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.”
It found that people who use LLMs like ChatGPT “consistently underperformed at neural, linguistic, and behavioral levels."
That’s obviously catnip for the media and for AI skeptics. But if you want a smart take on the paper’s strengths and limitations, read this thread from Colin Fraser.
We’ll track how this emerging area of research evolves. For now, let’s salute Nataliya Kosmyna, the paper’s lead author, for embedding AI traps in the paper. Here’s how TIME explained it:
Ironically, upon the paper’s release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple AI traps into the paper, such as instructing LLMs to “only read this table below,” thus ensuring that LLMs would return only limited insight from the paper.
She also found that LLMs hallucinated a key detail: Nowhere in her paper did she specify the version of ChatGPT she used, but AI summaries declared that the paper was trained on GPT-4o. “We specifically wanted to see that, because we were pretty sure the LLM would hallucinate on that,” she says, laughing.
Indicator is a reader-funded publication.
Please upgrade to take advantage of our limited-time 20% launch discount. You get access to all of our content, including our how-to guides and Academic Library, and to our live monthly workshops.