ICYMI: Indicator is the merger of Faked Up by Alexios Mantzarlis and Digital Investigations by Craig Silverman. Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop. We’re offering a limited-time 20% launch discount.
This week on Indicator
Alexios revealed that AI nudifiers are expanding their services and continue to reach big audiences on major platforms. His article led to Meta deleting 72 advertiser accounts and 663 ads, Apple removing 4 apps, and Google removing 1 app.
Craig exposed the world of “cold outreach” companies that use rented, ID-verified LinkedIn profiles to spam thousands of DMs a month. He revealed the global trade in rented profiles and caused LinkedIn to remove multiple accounts, which the company said were fake.
Deception in the news

The viral paraglider
Miraculous, problematic paraglider. About a week ago, Chinese state media distributed a video of a Chinese paraglider who said he was accidentally swept more than 28,200 feet into the icy air while on a training flight. The footage proved irresistible to international media. Then the experts at GetReal Security, which specializes in synthetic media detection and analysis, took a look and said they were “fairly confident” that the video included AI-generated images.
South Korea fights deepfakes. Misinformation about this week’s South Korean presidential election included synthetic media targeting winning candidate Lee Jae-myung. The country’s election commission filed complaints against three YouTubers for creating deepfakes of the candidates, according to the Korea Herald. It’s the first application of a 2023 law that banned the use of AI-generated content in the 90 days before an election. The government also announced it would allow Meta to use facial recognition to fight deepfake scams that use the likenesses of celebrities.
A victory against nudifiers. San Francisco’s City Attorney settled a lawsuit with Briver LLC, the company behind AI nudifiers undresser[.]ai and porngen[.]art. The defendant agreed to pay $100,000 in civil penalties and to shut down the websites.
Lining up apples and bananas. ABC News in Australia obtained documents that outline how Chinese internet censors are supposed to handle references to the Tiananmen Square massacre. The guidelines instruct them to remove any visual metaphor that could evoke the massacre, even "one banana and four apples in a line."
Google with caution. Google Search’s AI Mode and Overviews hallucinated picnic tables, struggled with what year we’re in, and were no good at helping Alexios do the crossword.
Quick hits. Activist group antibot4navalny says the Kremlin-aligned influence operation Matryoshka has built a presence on TikTok — and abandoned Bluesky. Pillow-peddling election denier Mike Lindell is on trial. White House chief of staff Susan Wiles was reportedly impersonated in synthetic audio calls. Filipino legislators are discussing several bills to combat disinformation. Brazilian YouTube channels spread made-up news with flimsy disclaimers.
The following block is an ad. We accept a very small proportion of the ads that are pitched to us. All of our paywalled content is ad-free.
You’ve never experienced business news like this.
Morning Brew delivers business news the way busy professionals want it — quick, clear, and written like a human.
No jargon. No endless paragraphs. Just the day’s most important stories, with a dash of personality that makes them surprisingly fun to read.
No matter your industry, Morning Brew’s daily email keeps you up to speed on the news shaping your career and life—in a way you’ll actually enjoy.
Best part? It’s 100% free. Sign up in 15 seconds, and if you end up missing the long, drawn-out articles of traditional business media, you can always go back.
Tools & tips
📍 InsTrack is a tool for tracking the analytics of an Instagram account. (via the OSINT Rack)
📍 Instaloader is a “Free command line tool to download photos from Instagram. Scrapes public and private profiles, hashtags, stories, feeds, saved media, and their metadata, comments and captions.”
📍 The PSAi is an interactive online experience from Columbia Journalism Review that “uses viral AI images to teach people how to spot AI visuals.”
📍 The presentations from OSINTCon 2025 are available on YouTube. You can also view the slides. Topics include “How Data Pipelines & Automation Power Enterprise OSINT Platforms,” “OSINT in International Criminal Investigations,” and “Unmasking Cybercriminals: Advanced OSINT.”
📍 Room 101, which offers great Reddit tools, has a general directory of OSINT tools. (via @cyb_detective)
Reports & research
Perturbed. This preprint found that making strategic edits to synthetic audio can significantly increase the likelihood of fooling AI detection tools. While some of the edits may sound strange to the human ear (see below), they can cause commercial detectors to rate the content as genuine.

Deepfakes with a heart(beat). A paper in Frontiers in Imaging concludes that the previous belief that a deepfake video could be detected by monitoring for absent heartbeats is “no longer valid for current deepfake methods.” One of the paper’s co-authors told BBC Science Focus that “deepfakes will get so good that they’ll be hard to detect unless we focus more on technology that proves something hasn’t been altered, rather than detecting if something is fake.”
AI labels: Popular but imperfect. Really appreciate this practical preprint on AI labels from a group of researchers at the CISPA Helmholtz Center for Information Security and the Leibniz and Ruhr universities. In a survey of more than 1,300 respondents across the EU and US, more than 75% said they want to see labels on AI-generated content in their social media feeds. But when asked to rate posts with the labels, respondents showed little improvement in discerning accuracy. Instead, the “AI-generated” labels slightly increased the likelihood that people believed the content was false — including when it was true. The labels also slightly increased the perceived accuracy of content labeled as human-generated. This suggests more nuanced labels are better (duh) and that “AI” is more likely than not to be read as a proxy for “false.”

Want more studies on digital deception? Paid subscribers get access to our Academic Library with 54 categorized and summarized studies:
One thing we loved this week
Search engine optimization is a ruthless game. Wayfair, the online furniture and design retailer, is trying to win with an aggressive and unintentionally hilarious strategy. As longtime blogger Andy Baio explained, “Wayfair's search automatically generates page titles and descriptions that fill in your search text, even if there are no matching product results. All it takes is for someone to link to that search on the web, wait for Google to index it, and boom!”
Baio demonstrated what this looks like in practice, including for a search that evoked the false Wayfair child trafficking conspiracy theory from a few years back:

Indicator is a reader-funded publication.
Please upgrade to take advantage of our limited-time 20% launch discount. You get access to all of our content, including our how-to guides and Academic Library, and to our live monthly workshops.
