“People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI.” – Nick Clegg, President of Global Affairs, Meta (Feb. 2024)

“We’re committed to fostering a trusted experience so everyone can accurately identify AI-generated images and video.” – Patrick Corrigan, Vice President for Digital Safety, LinkedIn (May 2024)

“As we continue to bring AI to more products and services to help fuel creativity and productivity, we are focused on helping people better understand how a particular piece of content was created and modified over time.” – Laurie Richardson, Vice President of Trust & Safety, Google (Sep. 2024)

For the past two years, major tech companies have promised to help users identify synthetic content by adding labels to images and videos that were created with AI.

But a first-of-its-kind audit by Indicator found that five major platforms with billions of users repeatedly failed to label AI-generated content. Over the course of three weeks, Indicator published 516 posts containing AI images and videos across Instagram, LinkedIn, Pinterest, TikTok, and YouTube. Only 169 posts, or about a third of the total (33%), were correctly labeled as AI-generated.

We found that some platforms, like Google and Meta, regularly failed to label content that had been created using their own generative AI tools. TikTok labeled synthetic content that was created using its in-app tool, but didn’t add a disclaimer to any other AI videos. Pinterest was the most effective at labeling AI images, though its success rate was still just 55%.

The results raise serious concerns about Big Tech’s implementation of a signature effort aimed at helping users navigate the modern information environment.

“This investigation proves we need rigorous verification, to ensure real resources and urgency are being applied to AI labeling,” according to Claire Leibowicz, Director of AI, Trust, and Society at the Partnership on AI. “If there are genuine technical blockers that make this harder than expected, we need companies to surface those specifically so the broader community can help solve them collaboratively—not use complexity as a shield for inaction or oversell capabilities.”

“This isn't just about identifying individual deepfakes that might be deceptive or malicious—it's about the erosion of trust in visual truth itself,” said Sam Gregory, executive director of the human rights organization Witness. “Tech executives are not taking seriously the systemic impact of corroding our ability to ascertain authentic from synthesized and the necessity to work together to counter this epistemic threat.”

Indicator’s findings suggest that AI disclosure “is not the highest priority for platforms,” according to Hany Farid, a professor at the UC Berkeley School of Information and co-founder of deepfake detection firm GetReal Security.

Representatives from Meta, Pinterest, and TikTok declined to comment on the record but provided information on background. The companies did not dispute Indicator’s findings. They said that AI labeling is a work in progress that will improve over time. Google, LinkedIn, and OpenAI did not respond to multiple requests for comment.

Claire Wardle, an associate professor of communications at Cornell University, said Indicator’s analysis reveals a “cross-platform breakdown” in the implementation of AI labels. It means researchers and regulators need “to be consistent with regular audits and can’t just assume that because something has been launched, it is working,” according to Wardle. (Disclosure: Alexios also works at Cornell University.)

A central element of the labeling ecosystem is the cross-industry Coalition for Content Provenance and Authenticity (C2PA), which is coordinating the development and deployment of the C2PA standard. Andy Parsons, senior director for content authenticity at Adobe and a driving force behind C2PA, acknowledged that “display on platforms is inconsistent. While many platforms read C2PA data, it is sometimes used for internal purposes and not displayed to users. We expect adoption of display to increase in the coming months.”
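C2PA data is metadata embedded in the file itself, so it can be inspected outside any platform. As a rough illustration of what “reading C2PA data” involves, here is a minimal sketch that shells out to c2patool, the coalition’s open-source command-line utility, and reports whether an image carries a provenance manifest. It assumes c2patool is installed and on the PATH; the file name and helper function are hypothetical, and this is not part of Indicator’s audit methodology.

```python
import json
import subprocess
import sys

def read_c2pa_manifest(image_path: str):
    """Return the C2PA manifest store as a dict, or None if none is found."""
    try:
        # c2patool prints the file's manifest report as JSON by default.
        result = subprocess.run(
            ["c2patool", image_path],
            capture_output=True,
            text=True,
        )
    except FileNotFoundError:
        raise SystemExit("c2patool is not installed or not on the PATH")

    # Treat a non-zero exit (e.g., no manifest in the file) as "no provenance data".
    if result.returncode != 0:
        return None
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "example.jpg"  # hypothetical file
    manifest = read_c2pa_manifest(path)
    if manifest:
        print(json.dumps(manifest, indent=2))
    else:
        print(f"No C2PA provenance data found in {path}")
```

When a manifest is present, the JSON output records details such as the tool that generated the image and any subsequent edits. Platforms that “read C2PA data” are parsing essentially this information before deciding whether to show users a label.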

Trustworthy and precise labeling of AI content is a complex socio-technical challenge with multiple players involved. It’s not surprising that it faces some obstacles. But it is essential that these mechanisms work as intended, and that we verify the progress (or lack thereof) of promises made more than a year ago by some of the world’s most powerful companies.

The current state of AI labeling may reflect the limits of a self-regulatory approach. That will change in the summer of 2026, when regulations that include requirements related to labeling go into effect in California and the European Union.
