
Indicator Briefing (May 23, 2025)

Your must-reads and must-tries on the digital deception beat this week

Craig Silverman & Alexios Mantzarlis

May 23, 2025

Our weekly Briefing is free to subscribers, but you should upgrade to a membership to access all of our reporting, archives, resources, and a monthly workshop. We’re offering a limited-time 20% launch discount.

This week on Indicator

Happy first week to us! In case you’re wondering what you’re looking at, Indicator is a result of the merger of two newsletters: Faked Up by Alexios Mantzarlis and Digital Investigations by Craig Silverman.

This week, Craig revealed how a Pakistan-based scam network used fake online reviews on Trustpilot and Facebook to lure customers for its book publishing scheme. He also published a new guide, “How to investigate online reviews.” Alexios wrote about “pornbait” ads on Facebook that were being used to drive traffic to sites that monetize with Google ads.

Our stories resulted in dozens of fake accounts being removed by Trustpilot, more than 3,000 deceptive ads and their related accounts being removed by Meta, and the demonetization of eight click-farming websites by Google. Paid subscribers help us make the internet a little less deceitful.

Deception in the news

Take It Down. US President Donald Trump signed the Take It Down Act into law, enacting a bill that criminalizes the distribution of deepfake nudes and requires platforms to remove them swiftly if notified. The FTC expects it will need dedicated software and staffing to enforce the law. As Alexios has written in the past, the bill doesn’t tackle the problem of underlying technology head on. And as 404 Media found this week, it is still possible to use Civitai to create deepfake nudes.

Grok's genocides. Late last week, xAI admitted that an “unauthorized modification” turned the Grok chatbot into a disseminator of the white genocide conspiracy theory. This week, the bot dabbled in Holocaust denial. Renée DiResta drew clever parallels with Google Gemini’s “diverse Nazis” fiasco in 2024. None of this gave pause to Microsoft, which is now offering Grok 3 on its Azure platform.

India/Pakistan. India and Pakistan’s disinformation war continues. BOOM Managing Editor Jency Jacob wrote of “a slow but steady rise of AI-generated deepfakes of political leaders from both countries” as TV media “continue to peddle false news without accountability.” Google’s AI Overview was dangerously inaccurate when asked about a viral claim of a nuclear incident.

AI shamans and sloperations. Aos Fatos found more than 30 video ads on Meta using inaccurate AI-generated renditions of indigenous leaders in Brazil promoting scammy natural cures. Marc Owen Jones uncovered a network of over 130 Facebook pages that he deemed an “AI Sloperation” due to its “coordinated, large-scale campaigns that deploy (often) low-quality AI-generated slop.”

Scams galore. The Wall Street Journal went deep on how Meta’s ad platform is a favorite tool of scammers. Take a trip down memory lane by reading investigations about Meta’s ad scam bonanza from 2018 and 2020. For its part, Meta recently published a guide on how to avoid investment and payment scams.

Tools & Tips

“This is the OSINT paradox: the more you can find, the more disciplined you must become.”

Brett R, “OSINT Rabbit Holes: When to Dig, When to Pivot, and When to Stop”

📍 A big tools merger: Hunchly was acquired by Maltego. Congrats to all! We’re excited to see how they integrate. Learn more at a live Q&A session on May 28. Register here.

📍 LolArchiver launched a new tool: YouTube Comment History. It lets you view all of the video comments from a specific user account. The company says it has archived 20 billion comments from 1.4 billion different accounts so far.

📍 Alltext.nyc is a search engine “that finds text in New York City's Google Street View images.” It was created by artist and technologist Yufeng Zhao. (via Christiaan Triebert)

📍 Santiago Villa shared tips for how to use the Strava Fitness app in investigations.

📍 Henk van Ess outlined how to use chatbots/LLMs to analyze images, assist with geolocation, and extract text.

📍 Profiler.me is a new OSINT platform that combines several types of searches for phone numbers, usernames, emails, Twitter/X accounts etc. Free and paid options. More info here.

📍 IntelBase is an email investigation tool. Some free features. (via Cyb Detective)

📍 Sherlockeye is another new tool for email/phone number/username enumeration. (via the OSINT Newsletter)

📍 Lei.info lets you search a database of close to 3 million legal entities by company name, SWIFT code, LEI, etc. (via Cyb Detective)

📍 Kathryn Caudrelier and Amanda Schein from OSINT Combine outlined OSINT tips for event monitoring.

Events & Learning

  • The Organized Crime and Corruption Reporting Project is accepting applications for a free one-day workshop that covers tracking assets across borders, uncovering hidden corporate ownership, analyzing datasets, and more. Open to mid-career journalists in Europe. Apply by June 9.

  • My OSINT Training is offering two days of in-person OSINT training in Boston on June 12 and 13. It costs $450 and is followed by the Layer 8 social engineering and intelligence conference on June 14.

  • OSINT Academy is offering a $99 workshop, “Introduction to Cryptocurrency Investigation,” on June 29.

Reports & Research

State fact-checking. The Internet Freedom Foundation has been using freedom of information requests to find out more about how Indian states are running “fact check units.” This follows a High Court decision to strike down such a unit at the federal level. The nonprofit writes that “the growing role of law enforcement and criminalisation of misinformation, combined with the absence of clear, standardised criteria, raises the risk of arbitrary enforcement and censorship.”

Reporting online abuse. PEN America and Meedan asked journalists, writers, and creators about their experience reporting online abuse to online platforms. They found that reporting flows on social platforms are “confusing, time-consuming, frustrating, and disappointing.” The bottom line: “If social media platforms fail to revamp reporting, as well as put more holistic protections in place, then public discourse in online spaces will remain less inclusive, less equitable, and less free.”

They want to believe, belong. “A new meta-analysis of 279 studies finds that belief in conspiracy theories is linked to three core psychological motives: wanting to make sense of the world, coping with uncertainty, and connecting with others.”

Describing AI misinformation on X. In this preprint, Chiara Drolsbach and Nicolas Pröllochs examined the prevalence and characteristics of AI-generated misinformation in two years’ worth of Community Notes on X. They found that 5% of the approximately 90,000 misleading tweets in the study period contained AI-generated content. The flagged tweets were more likely than non-AI posts to go viral, and they tended to discuss less serious topics such as entertainment.

Want more studies on digital deception? Paid subscribers get access to our Academic Library with 51 categorized and summarized studies:

Indicator Academic Library

This is a regularly updated collection of academic studies and industry reports about digital deception. It currently includes short descriptions of 51 academic studies and systematic reports.

indicator.media/academic-library

One thing we loved this week

ARTE’s digital investigative magazine, Data Sources, released an 18-minute mini-doc about social media fraudsters based in Vietnam. (You can also watch it in French on YouTube.)

Indicator is a reader-funded publication. Please upgrade to take advantage of our limited-time 20% launch discount. You get access to all of our content, including our how-to guides and Academic Library, as well as our live monthly workshops.

Indicator is your essential guide to understanding and investigating digital deception.