Accurately detecting AI-generated content is generally hard for humans

Developers of generative AI products have thus struck a grand bargain with regulators around the world. They are free(-ish) to make general-purpose AI tools, but they must invest in transparency measures that help users understand when they are engaging with synthetic content.

Under Article 50 of the EU’s AI Act, providers of AI systems have to ensure their outputs are watermarked in a machine-readable format. The now-defunct Voluntary AI Commitments sponsored by the Biden White House included a similar commitment to “robust provenance, watermarking, or both, for AI-generated audio or visual content.”

Major AI labs have therefore been developing their watermarking techniques (e.g. Google’s SynthID) and collaborating on industry-wide standards (e.g. C2PA, IPTC).

Regulators have mostly focused on the developers of the AI generators rather than the platforms where AI content circulates. (There are, of course, several companies, like Google and Meta, that are both developers of generative AI tools and platforms. OpenAI joined the club this week.)

But platforms also have to decide what to do with third-party synthetic material, i.e. AI content of undetermined origin posted on their services. In September, China started requiring platforms to review and label AI content. Labels are also presented as a countermeasure to AI slop.

The system is far from perfect. AI detection is technically challenging, especially when faced with adversarial users actively trying to evade labels. In testing, I found that even just saving an AI-generated video on my laptop before uploading it to a different platform stripped the metadata many services rely on to label synthetic content. Cropping also worked. Overall, platforms appear to be accepting relatively low recall rates on labels for third-party AI content while shifting the responsibility to disclose AI content onto users themselves.
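The fragility described above comes from platforms checking for embedded provenance metadata, which a simple re-encode can discard. As a rough illustration, here is a minimal sketch of a byte-level check for two real marker strings: the "c2pa" label used by C2PA manifests and the IPTC DigitalSourceType value for AI-generated media. The function name and the byte-scan approach are my own illustration, not any platform's actual pipeline; real C2PA manifests are structured JUMBF boxes and IPTC properties live in XMP packets, so this is only a heuristic.

```python
# Illustrative heuristic: which known provenance markers survive in a file's
# raw bytes? A metadata-stripping re-save typically removes these sequences
# entirely, which is why downstream platforms can no longer auto-label the file.

PROVENANCE_MARKERS = {
    # Label used by C2PA content credentials manifests.
    "c2pa": b"c2pa",
    # IPTC DigitalSourceType code for fully AI-generated media.
    "iptc_ai_source": b"trainedAlgorithmicMedia",
}

def surviving_markers(data: bytes) -> list[str]:
    """Return the names of known provenance markers found in the raw bytes."""
    return [name for name, marker in PROVENANCE_MARKERS.items() if marker in data]
```

For example, a file whose bytes still contain a C2PA manifest would report `["c2pa"]`, while a re-encoded copy with stripped metadata would report an empty list and fall through to whatever (weaker) content-based detection the platform runs.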

The rapid improvement in the capabilities of generative AI, the technical challenges of deepfake detection, and shifting regulatory pressures mean this is a fluid space that can be hard to keep track of. Just last week, Spotify announced it would support a new industry standard for AI disclosures in songs. Earlier this year, Pinterest gave users the option to see fewer AI pins on a specific topic; this type of user agency is unique among the platforms in our sample.

Enter our guide. We will be updating the table below and the rest of this article regularly to add more platforms and accurately reflect the latest state of affairs across the tech industry. If you want to recommend a platform for inclusion or suggest an edit, please reach out to [email protected].
