
AI labeling is still very much a work in progress

Indicator’s latest audit of three generators and five social media platforms revealed multiple gaps

Alexios Mantzarlis

Mar 18, 2026


AI content is everywhere, and it’s ever harder to spot.

Tech companies have promised to help limit the damage to online truth-seeking by tagging AI content with machine-readable signals that can be used to label the material if it appears on social media feeds. Billions of such labels have already been applied.

This tagging-and-labeling infrastructure has multiple failure points, however, according to a new Indicator audit.

Over the past few weeks, I created more than 200 AI-generated images and videos using Google, Meta, and OpenAI tools. I then posted them on Instagram, LinkedIn, Pinterest, TikTok, and YouTube; each of these platforms has promised to label synthetic content.

The results were inconsistent and often underwhelming. Even the best in class – LinkedIn and Pinterest – only labeled 67% of the AI content I posted. YouTube managed roughly 50% and TikTok about one third. Instagram did worst of all, labeling just 15 of the 105 synthetic images.

Whether a synthetic image or video got labeled came down to a combination of how the content was created, what device was used to upload it, and which platform it was posted on. While these failure modes were not random, the process ended up feeling like a slot machine.

Henry Ajder, a deepfake researcher and AI advisor to tech companies and government institutions, says these results show that while platforms “may be making the right kinds of noises, the unified front we need to see for the entire media lifecycle is just not there yet.”

Indicator’s findings show that despite some industry coordination on AI labels, “we're still very much in need of a sectoral approach,” said Bruna Martins dos Santos of the human rights organization WITNESS.

Some of the platform failures can be attributed to the models used to generate the files. For example, Meta does not appear to be adding Content Credentials metadata to images and videos generated with meta.ai. That's despite Meta being a steering committee member of the Coalition for Content Provenance and Authenticity (C2PA), the industry group advocating for this standard.
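For readers unfamiliar with how these signals travel with a file: Content Credentials are C2PA manifests embedded in the media itself (in JPEGs, inside a JUMBF box labeled "c2pa"). A platform can therefore detect a tagged file before it ever reaches a feed. The sketch below is a rough heuristic only, assuming the standard "jumb"/"c2pa" byte markers from the C2PA spec; real verification should use a full parser such as the open-source c2patool, which also validates the manifest's cryptographic signatures.

```python
def has_c2pa_marker(data: bytes) -> bool:
    """Heuristic check for an embedded C2PA manifest store.

    Looks for a JUMBF box type ("jumb") followed by the "c2pa"
    manifest-store label. This only detects the *presence* of a
    marker; it does not parse or validate the manifest, so it can
    be fooled and will miss provenance carried by other means
    (watermarks, sidecar files, platform-side databases).
    """
    idx = data.find(b"jumb")
    return idx != -1 and b"c2pa" in data[idx:]
```

A file stripped of this metadata during editing or re-encoding would pass through such a check unnoticed, which is one reason labeling pipelines that rely solely on embedded metadata are fragile.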

David Evan Harris, who helped write California’s AI Transparency Act and sits on the EU Working Group on Transparency of AI-Generated Content, told Indicator that “the results of this study are confirmation that voluntary commitments by the tech companies in the study cannot be taken seriously.” Harris, a Chancellor’s Public Scholar at UC Berkeley, says consistent labeling “inherently requires some degree of coordination, which [platforms] seem either unwilling or unable to undertake.”

Regulation on AI labels has been approved or is about to go into effect in several places around the world, including California, the European Union, India, South Korea, and Vietnam.

This was Indicator’s second audit of AI labels. There was some progress compared to the first audit, conducted in October: Google is now applying C2PA metadata more consistently, and every platform except Instagram increased the share of AI content it correctly labeled.

But as a whole, the tech ecosystem is still lagging when it comes to meeting a basic requirement of AI transparency.

And this is just one part of a dizzying combination of challenges that includes true images being discounted as synthetic, bad actors using AI tools that won’t mark their deceptive deepfakes, and ordinary users struggling to properly interpret AI labels. By comparison, automated labeling of known AI content should be relatively straightforward.

“Tech platforms have the talent to implement C2PA tomorrow; they simply need the will to prioritize it,” said Brendan Quinn, managing director of the International Press Telecommunications Council (IPTC), an industry standards body.

[Chart: Indicator's AI labeling audit]

