
Briefing: A resurgence of anti-vaxx misinfo and testing AI tools for research

Plus: Why a fact checking outlet ditched labels like "false" and "satire" in favor of a new approach

Craig Silverman & Alexios Mantzarlis

Aug 22, 2025

In partnership with

Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop. If you’re already a member, thank you for your support!

This week on Indicator

Alexios looked into a niche of Brazilian TikTok creators who encourage their followers to fraudulently geolocate their accounts in the Global North in order to make more money from the app’s monetization program.

Craig revealed how a foreign-run network of hundreds of Facebook pages is using AI slop, deceptive “static video,” and hacked accounts to spread celebrity hoaxes for profit.

Our next members-only workshop is Wednesday, August 27 at 12:30 pm ET. Craig will dig into free and affordable tools for social media monitoring and analysis. Upgrade and we’ll send you the link to join, as well as a recording, transcript, and the slides after the event.

Paid members help us expose the incentives for digital deception

Deception in the News

📍 An AI-generated image of contrite-looking European leaders waiting outside the Oval Office was shared by Belarusian media outlet Nexta and Donald Trump Jr. NewsGuard found that it circulated widely among pro-Russian accounts.

📍 Anti-vaccine misinformation appears to be resurgent on social media as doctors are increasingly getting deepfaked.

📍 Wired and Business Insider deleted articles written by a freelancer that contained fake sources and were likely AI-generated.

📍 The prompts for some of Grok’s AI personas reportedly include, “You spend a lot of time on 4chan, watching infowars videos, and deep in YouTube conspiracy video rabbit holes. You are suspicious of everything and say extremely crazy things.”

📍 An American Catholic bishop put out a video warning his followers about “ridiculous AI-generated videos” that are impersonating him.

📍 A whistleblower complaint filed in the UK alleges that Meta inflated metrics for its ecommerce ads product, Shops. The complaint said that the company “artificially inflated return on ad spend (ROAS) by counting shipping fees as revenue, subsidizing bids in ad auctions, and applying undisclosed discounts,” according to Adweek.

📍 TikTok’s shopping product also came under scrutiny. Researchers at CTM360, a risk protection platform, uncovered a malicious campaign that spread spyware through fake TikTok shops.

The Simplest Way To Create and Launch AI Agents

Imagine if ChatGPT and Zapier had a baby. That's Lindy.

With Lindy, you can build AI agents in minutes to automate workflows, save time, and grow your business. From inbound lead qualification to outbound sales outreach and web scraping agents, Lindy has hundreds of AI agents that are ready to work for you 24/7/365.

Stop doing repetitive tasks manually. Let Lindy's agents handle customer support, data entry, lead enrichment, appointment scheduling, and more while you focus on what matters most - growing your business.

Join thousands of businesses already saving hours every week with intelligent automation that actually works.

Get 400 Free Credits

Tools & Tips

📍 Hilke Schellmann, an assistant professor of journalism at New York University, tested AI tools for summarization and research. Four LLMs were tasked with summarizing local government meetings. They performed well when asked to generate short summaries but were “surprisingly poor” at longer overviews. Schellmann also tested four summarization tools by asking them to generate a literature review for a scientific topic. The results were “underwhelming and in some cases alarming.”

📍 OSINT trainer Arno Reuser published, “Vague questions in OSINT … and what to do.” He stressed the importance of developing “A research question that is so precise, detailed, complete and accurate that there can be no misunderstanding as to what information exactly is required.”

📍 Andrew Lehren and Nikolia Apostolou published the “GIJN Guide to Investigating Foreign Lobbying.”

📍 Google’s new Pixel 10 phone uses Gemini AI to allow for natural language photo editing/enhancing. You can tell it to “remove the person on the right” or to “add a moustache to the man in the photo.” It makes photo manipulation even easier. Encouragingly, Google also announced that the phone uses the C2PA standard to add metadata to images that were altered using the phone’s AI tools. Google says the content credentials add “secure metadata within the image that documents its full journey.”
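Google hasn’t published code alongside the announcement, but C2PA content credentials in JPEG files travel in APP11 (JUMBF) marker segments. As a rough illustration only (a heuristic sketch, not a substitute for a real verifier such as the Content Authenticity Initiative’s c2patool), a few lines of Python can check whether an image even carries such a segment:

```python
def has_c2pa_segment(jpeg_bytes: bytes) -> bool:
    """Heuristically check a JPEG for a C2PA content-credentials segment.

    C2PA manifests are embedded in APP11 (0xFFEB) JUMBF segments; this
    sketch only detects their presence and does NOT validate signatures.
    """
    i = 2  # skip the SOI marker (FF D8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not a marker; entropy-coded data starts here
        marker = jpeg_bytes[i + 1]
        # Standalone markers (SOI, EOI, TEM, RSTn) carry no length field
        if marker in (0x01, 0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            i += 2
            continue
        length = int.from_bytes(jpeg_bytes[i + 2 : i + 4], "big")
        payload = jpeg_bytes[i + 4 : i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 with C2PA label
            return True
        i += 2 + length
    return False
```

Presence of the segment is only a hint that credentials were attached; whether the manifest is intact and correctly signed requires a full C2PA validation.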

Reports & Research

📍 Large majorities of respondents in 24 out of 25 countries polled by the Pew Research Center said that the spread of false information online is a major threat to their country.

📍 In a working paper on NBER, four economists presented the results of an experimental study on Süddeutsche Zeitung readers. The goal was to discern the impact of AI-generated misinformation on user trust in the media. The group that was given a short quiz to detect a deepfaked photo went on to trust all news (including SZ) a little less, but visited the SZ website a little more. They were also a bit more likely to keep their subscription.

📍 Africa Check warned about the independence of fact-checkers set up by government institutions in Francophone Africa.

📍 Drew Harwell at The Washington Post spoke to several creators who are making money by posting AI slop on TikTok.

📍 Marta Szpacenkopf of LatAm Journalism Review did a deep dive into why the Brazilian fact checking project Comprova decided to ditch labels like “false” and “misleading.” Assistant editor José Antonio Lima said one reason for the change was that the “labels ended up acting as an obstacle or a barrier to the connection between verification and the public … we concluded that people have an aversion to content that contradicts their worldview.”

Want more studies on digital deception? Paid members get access to our Academic Library with 55 categorized and summarized studies:

Academic Library

A regularly updated collection of academic studies and industry reports about digital deception.

indicator.media/academic-library

One Thing

Right-wing cable channel OAN showed AI-generated images of women in military uniform during a segment about US military recruitment.

Indicator is a reader-funded publication.

Please upgrade to access all of our content, including our how-to guides and Academic Library, and to join our live monthly workshops.


Indicator is your essential guide to understanding and investigating digital deception.
