This week on Indicator
Alexios and Santiago Lakatos published an exclusive, in-depth analysis of the economics and infrastructure of the AI nudifier industry.
Alexios also published a tentative taxonomy of AI slop, which proposed a two-by-two matrix for classifying this type of content.
Also: save the date for our next members-only workshop on Friday, August 1 at 12 pm ET. We’ll demo tools and techniques from the Indicator guide to connecting websites using OSINT, and more! Upgrade now to get access.
Deception in the News

📍 X offered a brief and somewhat puzzling explanation of why Grok, the AI model integrated into the social network, went on an antisemitic rant, declared itself “MechaHitler,” and told a user that its surname is Hitler. The company said the model called itself Hitler because it wasn’t programmed with a surname, “so it searches the internet leading to undesirable results.” It also acknowledged that Grok would search to see what Elon Musk said about a topic and mirror that in its responses. X said Grok did this in order “to align itself with the company.”
📍 Meanwhile, French authorities have launched a criminal investigation into whether X manipulated its algorithms.
📍 The Trump administration continues to take heat over its failure to release additional information related to the crimes and death of Jeffrey Epstein. Trump lashed out at “weaklings” who are demanding more information about what he called the “Jeffrey Epstein Hoax.” NewsGuard also reported on the spread of AI-generated images that show Trump with Epstein and underage girls.
📍 Scam compounds in Myanmar continue to expand, in spite of crackdowns and international scrutiny. Japan’s Nikkei used satellite imagery to identify common characteristics of the buildings and to document their growth.
📍 The Royal Brunei Police Force warned that scammers are using AI-generated video and audio to impersonate officers in order to lure victims.
Tools & Tips
📍 Microsoft released “AI Red Teaming 101,” a free online video course that teaches “the core concepts and techniques you need to understand generative AI security risks.”
📍 Henk van Ess published “Google vs. AI: when to use which.”
📍 Nicholas Diakopoulos launched the AI Accountability Review, which “aims to translate research into up-to-date guidance around what to do about the problem(s) of AI Accountability.”
📍 The Global Investigative Journalism Network added new chapters to its Reporter’s Guide to Investigating War Crimes. The chapters focus on starvation, the arms trade, and forced displacement.
📍 GIJN also published “How Journalists Expose Extremism Networks in Sub-Saharan Africa Using Open Source Tools.”
📍 Joe Amditis recently updated the LLM journalism tool advisor, which “helps journalists select appropriate LLM tools for specific tasks.” It suggests tools for research, data analysis and visualization, source finding, and more.
Reports & Research
📍 Resemble AI published its Q2 Deepfake Incident Report: “From April through June 2025, we documented 487 verified deepfake incidents, representing a 41% increase from Q1 2025's 345 cases and a staggering 312% increase from Q2 2024.”
📍 Microsoft, Northwestern University, and WITNESS released the AI Detection Benchmark, “a collaborative, open-source dataset of over 50,000 AI-generated and real-world manipulated media artifacts.”
📍 A Stanford University study found that therapy chatbots fuel delusions and give potentially dangerous advice.
📍 The American Sunlight Project published research about “LLM grooming,” which is “the mass-production and duplication of false narratives online with the intent of manipulating LLM outputs.”
📍 Maldita.es continued its investigation into malicious Facebook pages that masquerade as public transportation services in order to steal money and information from users. The Spanish outlet has identified more than 1,000 pages and linked them to unidentified actors in Vietnam and to digital infrastructure in Russia.
Want more studies on digital deception? Paid subscribers get access to our Academic Library with 52 categorized and summarized studies.
One Thing
A restaurant in India apparently used AI to help write descriptions for some of its dishes in the Zomato food delivery app. The result was anything but delicious. The text for Chicken Pops described them as “Small, itchy, blister-like bumps caused by the varicella-zoster virus; Common in childhood.”
Here’s what it looked like, according to a Reddit user:

Futurism reached out to Zomato for comment but didn’t receive a response. (The description had been fixed by the time we accessed the page.)
Indicator is a reader-funded publication. Thank you, members!
Please upgrade to access all of our content, including our how-to guides, Academic Library, and live monthly workshops.