Indicator’s mission is to expose digital deception and to teach people how to expose it themselves. This page shows how our work has resulted in real-world impact that holds bad actors and platforms accountable: banned accounts, removed content and ads, lawsuits, new policies, and more.
We measure our success in a variety of ways. To us, exposing a bad actor is just as important as helping a trust and safety professional recognize and mitigate a new threat, or teaching a journalist or researcher a new technique or tool. Each helps defend the information space and fight digital deception, but some outcomes are easier to track and quantify than others.
Here’s a look at some of Indicator’s impact since launching in May 2025. If you want to read and support our work — and learn how to do it yourself — please become a paying member.
Platform takedowns
TikTok deleted 48 videos (and restricted recommendations for two more) that were pushing wild conspiracy theories and AI tributes about Charlie Kirk after we found they had gotten 32 million views in three days.
Meta removed hundreds of ads for AI nudifiers and more than 20 related pages and accounts following our reporting. Apple also deleted two related apps.
Our reporting on spammy job listing websites targeting development workers led LinkedIn to delete 27 pages with 5.9 million followers and Google to restrict ads served on 9 websites.
Our reporting on a digital fashion startup advertising its tool for nonconsensual sexualized image generation resulted in Meta taking down its ad accounts and ~200 associated ads, and led PUMA to end its commercial relationship with the company.
Meta banned a 124,000-member Facebook group that served as a marketplace for social media accounts, and TikTok banned two creators encouraging fraudulent geo-shifting of accounts, following our reporting on “TikTok Dark.”
TikTok banned 94 accounts with a combined 1.8 million followers that used AI avatars of well-known journalists to spread clickbaity misinformation in Spanish.
Meta removed 114 ads and several related accounts promoting three different AI nudifier services. Google also deleted a Play app that was behind 17 of those ads.
Google stopped providing single sign-on services to 23 AI nudifier websites we wrote about as part of our special investigation on the ecosystem.
YouTube terminated 33 videos and the related accounts tied to a group of creators we found were posting fake highlights of Club World Cup matches before they were even played.
TikTok deleted 96 accounts and 3 videos tied to an organized network using videos of allegedly moribund celebrities to source leads for an advance fee scam on WhatsApp. Meta also blocked at least 6 WhatsApp business accounts.
YouTube terminated 16 channels and demonetized several more after we showed they used generative AI to spread false claims about the Sean “Diddy” Combs trial.
Amazon deleted 198 error-filled, AI-generated children’s books that we found on the platform.
LinkedIn removed multiple fake accounts after we reported on account rental schemes.
Meta removed more than 1,000 pornbait ads that we found deceived users and broke its rules.
Trustpilot and Meta removed fake online reviews that we showed were linked to a notorious global scam network.
Legal and regulatory consequences
The US National Association of Attorneys General referred to our report on the economics of AI nudifiers in a letter to online payment platforms urging them to take “strong action” against those tools. The same report was referenced in an enforcement action by the Australian eSafety Commissioner against a company involved in these services.
Meta sued the company behind a noxious network of AI nudifiers and introduced new classifiers to detect this type of content. The moves came after we published multiple stories that caused Meta to remove more than 10,000 ads from the same company.
US Senator Dick Durbin and US Representative Debbie Dingell quoted reporting by Faked Up (the predecessor newsletter to Indicator) in letters to tech CEOs urging them to act with greater conviction to limit the reach of AI nudifiers.
Media pick-up and coverage
NBC News picked up our findings on AI avatars of Spanish-speaking news anchors flooding TikTok with fake news.
The Washington Post referred to our analysis of Meta’s Community Notes feature in its own testing of the product.
Our findings on the economy of AI nudifiers served as the basis for this Wired article.
The Guardian republished two of our stories on fake Club World Cup highlights and AI slop about Diddy.
Nieman Lab, Poynter, and Semafor covered our launch.