Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.

This week on Indicator

Alexios and Nasha (our awesome intern) posted hundreds of AI-generated images and videos on Instagram, LinkedIn, Pinterest, TikTok, and YouTube to test whether they were properly labeled by the platforms. For the most part, they weren’t. Our findings raise serious questions about Big Tech’s implementation of a signature effort aimed at helping users navigate the modern information environment.

Alexios also updated our guide to AI labels to reflect the results of the audit.

Craig updated three Indicator Guides to add new tools, features, and techniques. He updated the guide to connecting websites using OSINT tools and methods, the guide to tools for capturing webpages and social media content, and the guide to investigating online reviews.

The new information is highlighted in an Updates section that we added near the top of each guide. Indicator Guides are one of the benefits we offer to paid members.

Deception in the News

📍 X’s Community Notes button was moved from the main menu and is now behind the “more” button. My experience at Google was that putting anything behind an ellipsis is not a great sign (IYKYK), so I contacted Keith Coleman, the X product VP who oversees the Notes program. He told me “that menu changes regularly, and also varies by app. We did not intentionally move it around there, but ordering in those menus does shift along with various app updates.” — Alexios

📍 Let’s take a tour of AI-generated election content. We start in New York City, where mayoral candidate Andrew Cuomo deleted an ad that showed a drug dealer, a domestic abuser, and other synthetic criminals endorsing his rival, Zohran Mamdani. In Ireland, a deepfaked newscast “announced” the withdrawal of left-leaning presidential candidate Catherine Connolly ahead of today’s vote. It got 160,000 views on Facebook before being taken down. And in the Netherlands, 68% of political ads tracked by a group of academics did not disclose the use of AI.

📍 A New Jersey teenager whose image was “nudified” by a classmate is suing ClothOff, one of the largest players in the noxious ecosystem. Telegram is also named as a defendant.

📍 Tokyo police arrested a man accused of creating thousands of deepfake nudes and of selling custom nudes to at least 50 subscribers, earning about $8,000 in 11 months.

📍 Brazilian TikTok hustlers are still buying and selling accounts that target Spanish-speaking Americans with false news.

📍 YouTube has started rolling out a likeness detection program that will allow creators to request the removal of unauthorized AI clones on the platform.

📍 The New York Times documented “a yearslong shift by [President Donald] Trump to deploy fake imagery, generated by artificial intelligence, as part of his social media commentary.”

Tools & Tips

We noted last week that X is testing a new way to show additional profile information, including the country an account is based in and how many times it has changed its username.

Nikita Bier, X’s head of product, said that the new “About this account” information panel is live on his account. You can access it by clicking on a small chevron next to the date his account joined X:

It brings up this information:

Now you know that if you see that icon on a profile, you can view its “About this account” info panel. Note, however, that Bier and X haven’t yet explained what data they use to determine where an account is based. The origin of the “Connected via” info is also unclear: is it based on how the account currently accesses X, or on the app or interface that was originally used to create the account? Or something else?

Hopefully, more information will be forthcoming as the test rolls out.

In the meantime, people are already sharing doctored images of “About this account” panels. Here’s a fabricated profile panel for an account that says it’s run by a Russian-American woman:

It got 11,000 likes.

A satirical Romanian account also fooled some people with a Photoshopped profile panel for far-right politician Călin Georgescu:

Goes to show that people can exploit a new feature even before it’s launched. — Craig

📍 Tools of War is a “comprehensive resource tracking dual-use shipments from EU machine tool providers to Russian companies.” Investigative reporter Dylan Carter said on LinkedIn that the tool is “available to journalists and researchers worldwide who want to investigate the companies in their own countries supplying Russia's arms industry.” You can also read a recent investigative story that used the information.

📍 The DW Innovation team tested three open-source deepfake audio detection tools: Hiya, DeepfakeTotal, and DeepFake-O-Meter. They concluded that “we cannot fully recommend one single detector; deepfakes are based on different synthesis models; each one requires a different detection tool trained on that model.” The Deutsche Welle team correctly noted that “AI detection tools are just one part of the verification puzzle, and we should treat them as such.”

📍 Jeremy Caplan of Wonder Tools published “My Private, Free AI Setup.” It offers step-by-step instructions for how to “Download a free, private AI program to run on your computer” and how to “Use it offline without any subscription cost and avoid the risk of having sensitive info ingested …”
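
For a sense of what a setup like that involves, here’s a minimal sketch using the llama-cpp-python bindings with a model file stored on your own machine. This is our illustration, not necessarily the program Caplan’s guide recommends, and the model path below is a placeholder:

```python
# Minimal local-AI sketch (our illustration, not necessarily Caplan's exact setup).
# Requires: pip install llama-cpp-python, plus a GGUF model file you download yourself.
from llama_cpp import Llama

# Placeholder path: point this at whatever model file you've saved locally.
llm = Llama(model_path="./models/example-model.gguf", n_ctx=2048, verbose=False)

# Inference runs entirely on your machine: no API key, no subscription,
# and nothing you type is sent to a third-party server.
response = llm(
    "Summarize the risks of pasting sensitive documents into cloud AI tools.",
    max_tokens=200,
)
print(response["choices"][0]["text"])
```

The specific library matters less than the workflow: the model weights live on your disk, and responses are generated offline.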

📍 OSINT Combine published a new, free guide, “Managing AI Integration in OSINT.” You can download it, and other guides, here.

📍 The most recent edition of Stage Talks with Bellingcat was “Student Researchers Vs The Misinformation Machine,” with Utrecht University.

📍 The deepfake detection experts at GetReal Security published a video about “how AI-generated face swaps, lip-sync deepfakes, and voice clones have become nearly impossible to spot, even for trained eyes.”

Events & Learning

📍 Craig is moderating a free webinar on Oct. 28, “Tips and Tools for Uncovering Online Scams.” It’s organized by the Global Investigative Journalism Network and features four speakers who will offer “practical strategies and tools for tracking, verifying, and reporting on online scams.” Join us!

📍 And on Oct. 29 in Toronto, he’s doing a free live talk and Q&A, “The New Age of Digital Deception: What's Changed and How to Fight Back.”

Reports & Research

📍 A major study conducted by 22 public media organizations found that “four of the most commonly used AI assistants misrepresent news content 45% of the time — regardless of language or territory.” A summary says that “the study found that almost half of all answers had at least one significant issue, while 31% contained serious sourcing problems and 20% contained major factual errors.” Nicholas Diakopoulos of Northwestern University raised some concerns about the study’s methodology.

📍 The Nieman Journalism Lab published an investigation into AI-generated news sites that “spout viral slop from forgotten URLs.” Unidentified people are scooping up expired domains and loading them up with AI-generated articles.

📍 Researchers with Texas A&M University, University of Texas at Austin, and Purdue University tested what they call the “LLM Brain Rot Hypothesis.” They found that “continual exposure to junk data—defined as engaging (fragmentary and popular) or semantically low-quality (sensationalist) content—induces systematic cognitive decline in large language models.”

📍 Joan Donovan of Boston University released a new paper, “A Short History of Misinformation-at-scale and Efforts to Mitigate it.”

Want more studies on digital deception? Paid subscribers get access to our Academic Library with 55 categorized and summarized studies:

One Thing

The Republican candidate for lieutenant governor of Virginia, John Reid, used AI to generate a representation of his Democratic opponent and proceeded to “debate” her/it for roughly 40 minutes on YouTube.

“The AI bot had been trained on [Democratic candidate Ghazala] Hashmi’s previous public statements on the debate topics, according to the Reid campaign, which also stressed that Reid did not see the questions in advance,” The Washington Post reported.

“This is where we are right now, like it or not,” Virginia political strategist Bob Holsworth told the Post. “Unless there are norms and rules that emerge in terms of the use of AI, you’re likely to see it used in much more sophisticated ways in 2026. So, welcome to the future.”

Indicator is a reader-funded publication.

Please upgrade to access all of our content, including our how-to guides and Academic Library, and our live monthly workshops.