Briefing: UK gov mulls an AI label mandate

Plus: Meta is still making money from AI nudifiers

Craig Silverman & Alexios Mantzarlis

Mar 20, 2026

Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.

This week on Indicator

Craig published the video, slides, transcript and AI-generated notes from his recent members-only workshop. He demonstrated online search tips and tools from our recent guide, “The state of online search: How to find what you're looking for in the age of AI.”

Alexios published the results of Indicator’s audit of AI labels on social media platforms. Meta in particular lagged behind: Instagram labeled only 15 of the 105 AI images uploaded, and the company also failed to add C2PA metadata to content created with its own AI generator.

Also! Indicator was named a finalist in two categories by the Digital Media Awards, which are run by the World Association of News Publishers. We’re up for Best in Countering Disinformation and Best Emerging News Providers. Nice to see several publications we admire on those lists, too.

Paid members help us break news and dig into tools

Meta is still making money from AI nudifiers

Meta’s Adversarial Threat Report published on Mar 11 included some interesting data about the company’s struggle to contain AI nudifiers. The report said that between November and January, Meta removed over 344,000 ads for nudifier apps across Facebook and Instagram.

But the problem is far from contained. Meta has been trying to get this situation under control for over a year, yet I quickly found dozens of active ads for AI nudifiers on Facebook and Instagram during a review on Thursday.

In the eight days since Meta published its report, the company ran more than 900 new ads for 10 different nudifiers. Of those, 176 were still active at the time of writing. It took me about 30 minutes and one burner Instagram account to find highly explicit ads for apps and websites that promise they can “make anyone star in your own fantasy scenes” with “no limits.” (I shared my leads with Meta on Thursday night; the company is reviewing them.)

Meta’s transparency on takedowns is welcome. But the fact that known bad actors keep slipping through (many of the domains I found are tied to operations I have written about repeatedly) suggests the company is not investing enough in the trust and safety teams charged with fighting this nasty trend.

When it comes to fighting AI nudifiers, Meta has a lot more work to do before its commitment can be taken seriously. — Alexios

Deception in the News

📍 The UK is considering requiring labels on AI content as part of a broader copyright law reform (h/t Henry Ajder). And Senator Mark Warner of Virginia sent a letter to a bunch of tech CEOs that called on them to “attach robust and consensus-based content credentials, and other relevant provenance or authenticity signals (including metadata and prominent visible watermarks), to any [synthetic] media created using your products.”

📍 Israeli Prime Minister Benjamin Netanyahu posted a proof-of-life video flashing his five fingers to quash online rumors that a previous video was AI-generated. (Not sure it’s working.)

📍 Sony Music requested removal of more than 135,000 AI-generated songs that impersonated musicians including Beyoncé and Harry Styles.

📍 A literal false flag operation appears to have taken place in Hungary, where a Ukrainian flag was reportedly unfurled by provocateurs during an opposition march and then used in social media posts to discredit the event as anti-patriotic (h/t Tommaso Canetta). The Financial Times had earlier reported on a Kremlin-backed plan to bolster President Viktor Orban’s party ahead of next month’s election.

📍 Iranian state-aligned media doctored a graphic by the Special Broadcasting Service to exaggerate the effects of oil disruption on the Australian economy.

📍 US President Donald Trump claimed that the “fake news media” is republishing AI-generated Iranian disinformation. He didn’t name names but his usual targets have been debunking the very videos he is talking about.

📍 Representatives for three girls in Tennessee filed a class action lawsuit against xAI for allegedly designing Grok in a way that led to their photos being nudified.

📍 You now have to be a paid X user to ask Grok questions. This development could dramatically affect our Grok-is-this-True dashboard, given that the majority of people who queried the bot for verification assistance were not paying for X Premium. For now, however, many free users are still tagging Grok (presumably because they don’t know about the change) and misleading tweets are still reflected in our tracker. We’ve slightly updated the methodology and will keep adjusting it. — Alexios

📍 Polymarket gamblers targeted a Times of Israel reporter with death threats and disinformation because they were upset that his article about Iranian strikes on Israel prevented them from cashing in.

Tools & Tips

The news that American satellite image providers like Planet Labs had delayed the release of high resolution images from the Middle East caused a lot of consternation in the OSINT community. The Economist warned that such actions are “turning the clock back from an era of unprecedented transparency.”

But Manisha Ganguly, an investigations correspondent for The Guardian who has a literal PhD in OSINT, correctly pointed out that satellite companies have previously withheld images from conflict zones. And commercial satellite imagery isn’t really open source, anyway:

“High-resolution imagery of specific regions is increasingly accessible only to large legacy media organisations with budgets to afford this service at a level that’s actually useful. Most free satellite imagery providers at the start of the Golden Age of OSI (a decade ago) no longer exist in a meaningful way to advance reporting on current events. Most OSINT tools - the useful ones - are controlled by data brokers, require subscriptions eg. PimEyes and a financial investment that freelancers or ‘citizen journalists’ in training can no longer afford.”

As OSINT becomes industrialized and productized, the original definition of “open source” sometimes gets lost. — Craig

📍 Henri Beek highlighted some recent updates to Google Programmable Search Engines (aka Custom Search Engines).

📍 WebVetted offers a suite of tools for analyzing Instagram, LinkedIn, X, Threads, Facebook and TikTok profiles. (via CyberSudo)

📍 Nico Dekens wrote, “AI may not replace analysts. It may break them first.”

📍 Paul Wright and Neal Ysart wrote, “The $144m Mistake – Why Verification Must Always Come First.”

📍 Pavel Bannikov wrote, “SynthID and the pitfalls of AI content selection.”

Events & Learning

100% of Indicator’s staff will be speaking at the International Journalism Festival in Perugia next month!

Craig will be leading an interactive workshop on the best free or cheap OSINT tools; Alexios is on a panel about investigating platforms with three awesome speakers.

We’ll also be hosting an aperitivo; if you’re going to be there, make sure to RSVP using the email you use to subscribe to Indicator.

Reports & Research

📍 A clever preprint says that Community Notes suffers from a “lazy raters” problem, whereby helpful ratings are more likely to appear on claims that are easy to fact-check. Such claims are therefore more likely to end up with a visible note, leaving the more challenging misinformation uncorrected. “Community Notes and similar systems may be well-suited for flagging obvious falsehoods but ill-equipped to address the forms of plausible misinformation that pose the greatest threat to informed public discourse,” the authors write.

📍 Mahsa Alimardani and shirin anlen wrote a smart analysis on Tech Policy Press about the weaponization of AI content detection as a way to discredit genuine images coming out of Iran (Craig is also quoted).

📍 For the Bulletin of the Atomic Scientists, Lisa Fazio wrote about the challenge of fighting health disinformation when it is coming from the top, and when there are few or no reputational consequences for lying.

📍 Karolin Schwartz found 35 Instagram videos using AI rabbis to spread antisemitic slopaganda. The largest account behind them has 1.3 million followers.

📍 A consortium led by Science Feedback published an estimate of the prevalence of disinformation about five sensitive topics on six major platforms (Facebook, Instagram, LinkedIn, TikTok, X, YouTube) in four EU countries. Reviewers assessed a reach-weighted random sample of 500 posts, and the report concludes that TikTok is the worst performer, with a disinformation prevalence of roughly 25% (a quick sketch of how a reach-weighted estimate works follows this list).

📍 Code for Africa built on the findings of a recent OpenAI investigation to dig into how a fake journalist using the name Manuel Godsin was able “to infect the African information ecosystem with at least 38 distinct pieces of manipulated information published 73 times across at least 27 separate websites in eight African countries.”

📍 A network of accounts that posted AI-generated videos of female Iranian fighter pilots got 25 million views on TikTok, according to the consulting company Alethea. The propaganda then spread to other platforms.
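A brief aside on method, since “reach-weighted” carries a lot of the meaning in the Science Feedback item above: posts are sampled with probability proportional to their reach, so the share of flagged posts in the sample approximates the share of impressions exposed to disinformation rather than the share of posts. Below is a minimal illustrative sketch of that kind of estimator using entirely made-up data; it is not the consortium’s code or its exact procedure.

```python
import random

# Hypothetical post data for illustration only (random reach figures and labels);
# this is NOT the consortium's dataset, code, or exact methodology.
posts = [
    {"id": i,
     "reach": random.randint(100, 100_000),   # impressions per post
     "is_disinfo": random.random() < 0.2}     # made-up ground-truth label
    for i in range(10_000)
]

def reach_weighted_sample(posts, k=500):
    """Draw k posts with probability proportional to their reach (with replacement)."""
    weights = [p["reach"] for p in posts]
    return random.choices(posts, weights=weights, k=k)

sample = reach_weighted_sample(posts, k=500)

# Because sampling is already weighted by reach, the plain share of flagged posts
# in the sample estimates the share of *impressions* carrying disinformation,
# rather than the share of posts.
prevalence = sum(p["is_disinfo"] for p in sample) / len(sample)
print(f"Estimated reach-weighted prevalence: {prevalence:.1%}")
```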

Want more studies on digital deception? Paid subscribers get access to our Academic Library with 75 categorized and summarized entries:

indicator.media/academic-library

One More Thing

An X creator who posted a video of “The Rizzler” bombing Iran that was seen almost 10 million times claims to have been paid almost $30,000 through the platform’s creator revenue sharing program. (In another synthetic video, Nicki Minaj is seen piloting a boat into the Strait of Hormuz before getting nuked.) How’s that for an incentive system!

Indicator is a reader-funded publication.

Please upgrade to access all of our content, including our how-to guides and Academic Library, and to join our live monthly workshops.
