Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.
This week on Indicator
Alexios checked in on the AI contributors in X’s Community Notes program. Turns out, the eight of them are prolific and more likely to get their notes rated as helpful by other users and X’s algorithm.
Craig exposed a large-scale, undisclosed ad campaign on TikTok and Instagram that used female student creators to promote Jenni, an AI study app. The company behind Jenni is also running accounts that post staged video confrontations between students and professors/TAs.
Good news on the AI slop front. No, really!
Two platform updates this week gave me a rare feeling of hope about AI slop.
On Wednesday, TikTok announced it would pilot new controls to let users filter how much AI-generated content they see on their For You feeds. The platform already allows users to dial up and down certain topics like current affairs, dance, or fitness. It’s now promising to test letting people decide how much synthetic video they want to see.
The move follows Pinterest’s lead; Bluesky users can also filter out AI content by subscribing to a third-party labeler. I would bet that more platforms will follow. (Our guide to AI labels breaks down how different platforms handle AI content and is continuously updated.)

Obviously, we can't fix the problem of AI slop through relatively hidden toggles that only the most savvy users will know how to find. That said, increasing user agency is an essential element of how platforms can take responsibility for the deluge of synthetic content on their services.
While some AI slop is policy-violative, a lot of it doesn’t pose an acute harm; instead it degrades information quality through the scale of its inanity. Allowing users to opt out should improve individual feeds. And if enough users vote with their settings, it could help solve the problem at the source by reducing the audience (and therefore the incentives) for slop creators.
For AI filters to work, however, we need highly reliable AI detection and labeling mechanisms. Our comprehensive audit of AI labeling last month found that platforms are far behind on their promises on this front.
Which is where the second bit of good news comes in.
Google announced on Thursday that you can now use Gemini to detect if an image has been watermarked with SynthID, its proprietary watermarking tech. SynthID is fairly resistant to cropping, resizing, and other common changes that occur when images are shared and posted online. When I tested 10 slightly edited screenshots of images that we had generated with Gemini as part of our audit, the chatbot consistently confirmed they were generated with Google AI.
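If you want to try a similar check programmatically, here’s a minimal sketch using Google’s google-genai Python SDK. It assumes the API-served Gemini models will answer the same SynthID question as the consumer chatbot (Google’s announcement concerned the Gemini app), and the model name, file name, and prompt are illustrative rather than an official detection endpoint.

```python
# Minimal sketch: asking Gemini whether an image carries a SynthID watermark.
# Assumes the API-served model performs the same SynthID check as the Gemini app;
# the model name, file name, and prompt below are illustrative.
from google import genai
from google.genai import types

client = genai.Client()  # reads your Gemini API key from the environment

with open("suspect_image.png", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
        "Was this image created or edited with Google AI? "
        "Check whether it carries a SynthID watermark and explain your answer.",
    ],
)
print(response.text)
```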

SynthID is not a silver bullet. My awesome student Om Kamath (hire him!) generated an image of New York City’s Roosevelt Island with Gemini, then cropped it significantly and added random distortions to the photo’s pixels. This sent Gemini off-course, as you can see in the bot’s response below. And of course, SynthID can only be used to detect images generated with Google tools.
Still, it’s a welcome step towards making better AI detection tools available to everyone.

I’m generally pretty bearish about generative AI and the health of our information ecosystem. I remain so. But I’ll celebrate developments that increase user agency and platform transparency, especially if they eventually become an expectation for all companies. — Alexios
Deception in the News
📍 Grok amplified fabricated details about the Bataclan terrorist attacks in Paris, whose tenth anniversary was this month. A survivor had to correct Elon Musk’s chatbot.
📍 New documents released this month show that Jeffrey Epstein and his associates were actively trying to “clean up” the Google search results and Wikipedia page about the financier and convicted sex offender.
📍 The Guardian has discovered a group of publishers “using cloned websites, AI-generated staff and virtual offices” to lure first-time authors into self-publishing ventures that turn out to be scams. It’s similar to the Pakistan-based book publishing scam network that Craig previously dug into.
📍 A “satirical” AI-generated YouTube video about a family in Germany receiving 6,000 euros a month in welfare benefits was reposted on TikTok as real. It was likely generated with OpenAI’s Sora and got 1.7 million views on the platform — as well as plenty of outraged and racist comments.
📍 Also courtesy of Sora: at least seven viral videos that falsely depicted scenes from a crucial front in the Russia-Ukraine war.
📍 British regulator Ofcom has fined Itai Tech Ltd., a company behind several of the largest AI nudifier websites, for failing to provide age verification measures. It feels a little like getting Al Capone for tax evasion, but it is what it is.
📍 In The Local, Nicholas Hune-Brown goes deep on a fraudulent freelancer who got bylines in several publications, including The Guardian and Dwell. “Every media era gets the fabulists it deserves,” he writes.
Tools & Tips
Sanjana H on LinkedIn ran some interesting OSINT tests on Gemini 3, Google’s latest model. He found that it performed well at describing visual elements in an image, analyzing a video and pinpointing the moments (down to the second) that Ryan Gosling appears in a film trailer, and identifying a Sri Lankan politician in a video, among other tasks.
The takeaway is that Gemini can be useful in assisting with visual investigative work. He wrote:
In the human rights domain, this capacity to instantaneously cross-reference visual identities and contextual actions could drastically accelerate the verification of citizen journalism from conflict zones, turning an overwhelming deluge of data into actionable evidence, and more comprehensive archives.
📍 Henk van Ess took Google’s SynthID for a test run to see how well it detects images generated with Gemini. (Remember that SynthID is only used for images generated by Google’s models.) Henk tested three methods to check whether an image is synthetic and identified differences in the amount of detail they return.
📍 Derek Bowler of Eurovision News wrote “Advanced Wayback Machine and archival OSINT,” which takes you through a methodology that includes publicly accessible archives like the Wayback Machine, along with creating a WACZ/WARC file and hash.
📍 Agence France-Presse published a new, free online course, “Verifying AI-generated content.”
📍 The Follow Money Fight Slavery Summit had a panel, “Intel in Action Using OSINT and the Darknet to Disrupt Human Trafficking,” and posted the video on YouTube. Watch below to learn “how investigators and analysts leverage open-source intelligence (OSINT), blockchain tracing, and darknet monitoring to identify trafficking networks, follow the money, and support law enforcement operations worldwide.”
Events & Learning
📍 OSINT Industries is hosting a free webinar on Dec. 10, “OSINT Investigations: How to Follow Criminal Digital Footprints.” Register here.
Reports & Research

📍 In a peer-reviewed paper in PNAS, Mohsen Mosleh and colleagues found that lower-quality domains, as defined by Lin et al., tended to receive higher engagement across all seven platforms reviewed.
📍 The Bureau of Investigative Journalism and the Institute for Strategic Dialogue found that a network operated by a Sri Lankan “Facebook ads king” is behind 128 Facebook pages and groups that target UK audiences with false and hateful anti-immigrant AI slop.
📍 Code for Africa analyzed the online discourse around Tanzania’s recent elections and found that “pro-government and opposition-linked digital actors used coordinated tactics, hijacked narratives, and weaponised online discourse before, during, and after the polls.”
📍 The research team at Open Measures disputed claims that American far-right figure Nick Fuentes has greatly expanded his online reach following Charlie Kirk’s assassination. Their findings, they argue, “raise doubts about whether Fuentes is a uniquely ascendant political figure; in the context of our data, we understand Fuentes as one of many in a growing chorus of online voices who are using their influence to promote hate speech and target vulnerable minority populations in the US.”
Want more studies on digital deception? Paid subscribers get access to our Academic Library with 55 categorized and summarized studies:
One Thing
A US Centers for Disease Control and Prevention webpage that correctly stated that there’s no evidence that vaccines cause autism was changed this week.
It now says: “The claim ‘vaccines do not cause autism’ is not an evidence-based claim because studies have not ruled out the possibility that infant vaccines cause autism.”

A statement from the American Medical Association said:
Vaccination is essential to protect individuals and communities from preventable diseases, making it a fundamental element of public health. The AMA is deeply concerned that perpetuating misleading claims on vaccines will lead to further confusion, distrust, and ultimately, dangerous consequences for individuals and public health.
Indicator is a reader-funded publication.
Please upgrade to access all of our content, including our how-to guides and Academic Library, and to join our live monthly workshops.



