Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.

We’re currently offering 20% off all Indicator memberships for a limited time. Join us!

This week on Indicator

Craig revealed that a buzzy digital fashion startup ran dozens of ads on Instagram and Facebook that promoted its gen AI product as a tool for non-consensual sexualized image generation. In response, Meta removed roughly 200 ads and PUMA suspended its partnership with the company.

Alexios found 27 LinkedIn pages targeting aspiring development workers with spam, sham listings, and scam recruiters. The pages had a collective 5.9 million followers and operated in a coordinated manner. LinkedIn banned all the pages and Google restricted ads on the underlying websites following our reporting.

Deception in the News

📍The Australian government wants to effectively ban nudifier and deepfake porn apps. It intends to put the onus on tech platforms to stamp out such “abhorrent technologies,” according to recent comments from Minister for Communications Anika Wells.

📍Also in Australia, Google did not renew its deal to help fund the Australian Associated Press (AAP)’s fact-checking team, according to Crikey. A Google spokesperson said that the “nature of any partnership will evolve over time, and we hope to partner with AAP for years to come.”

📍In Malaysia, social media platforms have removed more than “40,000 pieces of AI-created disinformation content, including deepfake investment scams” since Jan. 1, 2022, according to government data.

📍The New York Times queried Grok and other AI chatbots more than 10,000 times and revealed how “Mr. Musk and his artificial intelligence company, xAI, have tweaked the chatbot to make its answers more conservative on many issues.”

📍Designers and writers whose jobs have been displaced by AI are sometimes being hired to unslopify AI images and content.

📍There’s a cottage industry of AI slop history channels on YouTube that create content for people to fall asleep to. “I think only my older viewers still come to my videos, but for others my channel is now hidden under a pile of AI slop,” the owner of a non-slop ASMR history channel told 404 Media.

📍Reuters published another story in its alarming series about Meta’s chatbots gone awry. This time we learned that “Meta has appropriated the names and likenesses of celebrities – including Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez – to create dozens of flirty social-media chatbots without their permission.”

A special offer for Indicator subscribers from My OSINT Training

At My OSINT Training, you’ll learn directly from three leading voices in OSINT—Micah “webbreacher” Hoffman, Griffin “hatless1der” Glynn, and Lisette “technisette” Abercrombie. Their combined decades of experience and uniquely approachable teaching style make even the toughest topics accessible for anyone.

Special offer for Indicator subscribers: For a limited time, get 15% off either of MOT’s course bundles, OSINT Immersion and All OSINT Courses, by using the special offer code indicator25a

MOT’s on-demand video library covers everything from foundational skills to advanced techniques, blending expert instruction, real-world demonstrations, and hands-on exercises. Whether you’re new to OSINT or leveling up, you’ll find everything you need to succeed!

Tools & Tips

📍 Henk van Ess published the “Reporter’s Guide to Detecting AI-Generated Content” for the Global Investigative Journalism Network. He also created the Image Whisperer tool to help you analyze suspected AI images.

📍 The SPOT geolocation tool from DW Innovation got a few updates, including an upgraded AI model and improved integration with OpenStreetMap. More here. You can read our previous post about SPOT here.

📍 Conspirador Norteño published, “A picture isn't always worth a thousand words.” They tested Grok’s ability to detect the authenticity of viral images and video. Grok “both misidentifies AI-generated images as real photographs and real photographs as AI-generated.” Read our recent piece about using consumer AI tools to identify public figures in videos.

📍 Makepeace Sitlhou published, “A Guide To Monitoring Conflict Amidst a Sea of Misinformation” for Bellingcat. It shows you how to use open source methods to analyze images of weapons and drones, investigate looted weapons, and more.

Events & Learning

📍 This year’s edition of SkopeNow’s OSINT Live will be held on Oct. 15. It’s free to attend the all-day virtual event. Speakers include Sam Gregory of WITNESS, Matteo Tomasini of District 4 Labs, and Indicator’s own Craig Silverman.

📍 Want to make videos that can better compete with viral deception? Sophia Smith Galer is giving a virtual workshop, “Reel Journalism: Using AI To Create Smarter Vertical Video” on Sept. 15. Tickets are only £5.

Reports & Research

📍 The latest NewsGuard audit of popular AI chatbots found that they’re getting worse at recognizing misinformation. “The 10 leading AI tools repeated false information on topics in the news more than one third of the time — 35 percent — in August 2025, up from 18 percent in August 2024,” according to the company.

📍 In addition to getting things wrong, chatbots are susceptible to flattery and peer pressure, according to researchers from the University of Pennsylvania. They employed strategies and tactics laid out in the bestselling book “Influence: The Psychology of Persuasion” and were able to jailbreak GPT-4o Mini. You can read their paper, “Call Me A Jerk: Persuading AI to Comply with Objectionable Requests,” here.

📍 Researchers from the Foundation for Defense of Democracies uncovered a network of five YouTube channels that operate out of Nigeria and are “publishing videos that have received millions of views denigrating Ukraine, the United States, the European Union, and Israel.”

📍 A new(ish) term has entered the chat: slopaganda. The authors of this draft paper define it as “unwanted AI-generated content that is spread in order to manipulate beliefs to achieve political ends.”

Want more studies on digital deception? Paid subscribers get access to our Academic Library with 55 categorized and summarized studies.

One Thing

A listing on Shein used an AI-generated image of Luigi Mangione to sell a floral shirt.

Indicator is a reader-funded publication.

Please upgrade to take advantage of our limited-time 20% discount and get access to all of our content, including our how-to guides, our Academic Library, and our live monthly workshops.
