Indicator is designed for people on the front lines of digital investigations: journalists, analysts, researchers, and trust and safety professionals—alongside everyday citizens who want to make sense of what they see online.
Indicator is for people who care:
About our information environment
About open source reporting and investigations
About studying and exposing digital deception and manipulation
About evidence and accountability
We welcome tips, questions or concerns:
Email [email protected] for story ideas and recommendations
Email [email protected] for any issues with access to the site, payments, etc.

I am a recovering fact-checker and tech worker. I spend 80% of my time at Cornell Tech as the director of the Security, Trust, and Safety Initiative and the remaining time on Indicator.
Before starting Indicator, I sent the newsletter Faked Up. Prior to that, I spent five years at Google Trust & Safety, where I built the content adversarial red teaming program for Gemini and led a Search content policy team.
I was the founding director of the International Fact-Checking Network (IFCN), the global coalition of fact-checking projects. As director of the IFCN, I helped draft the fact-checkers' code of principles and shepherded a seminal partnership between fact-checkers and Facebook (RIP).
I care about online information quality as a fundamental prerequisite for healthy societies.

I’ve spent more than 15 years researching and reporting on how our information environment is being manipulated. I’m obsessed with learning and teaching new digital investigative and OSINT techniques, and with spreading the skills to as many people as possible.
Prior to launching Indicator, I was a national reporter with ProPublica focused on investigating digital platforms and online manipulation. I previously served as media editor of BuzzFeed News, where I developed approaches to exposing digital disinformation and media manipulation.
I’m the editor of the European Journalism Centre’s Verification Handbook series, which offers free, world-class guidance on how to verify online content and investigate disinformation and media manipulation. My reporting has been honored with a George Polk Award, the Carey McWilliams Award from the American Political Science Association, two SABEW awards, and the McGillivray Award from the Canadian Association of Journalists.

Indicator’s mission is to expose digital deception and to teach people how to do it for themselves. Our work has had real-world impact that holds bad actors and platforms accountable: banned accounts, removed content and ads, lawsuits, new policies, and more.
We measure our success in a variety of ways. To us, exposing a bad actor is just as important as helping a trust and safety professional recognize and mitigate a new threat, or teaching a journalist or researcher a new technique or tool. Each helps defend the information space and fight digital deception, but some outcomes are easier to track and quantify than others.
Here’s a look at some of Indicator’s impact since launching in May 2025.
If you want to read and support our work, and learn how to do it yourself, subscribe.
101
TikTok removed 101 accounts impersonating mainstream media organizations that had 5.5 million followers and videos with more than 215 million views.
2,000
Meta removed more than 2,000 ads and their associated ad accounts that promoted nudifier websites and AI girlfriend apps. Apple also removed three AI girlfriend apps.
32M
TikTok deleted 48 videos (and restricted recommendations for two more) that were pushing wild conspiracy theories and AI tributes about Charlie Kirk after we found they had gotten 32 million views in three days.
100s
Meta removed hundreds of ads for AI nudifiers and more than twenty related pages and accounts following our reporting. Apple also deleted two related apps.
5.9M
Our reporting on spammy job listing websites targeting development workers led LinkedIn to delete 27 pages with 5.9 million followers and Google to restrict ads served on 9 websites.
~200
Our reporting on a digital fashion startup advertising its tool for nonconsensual sexualized image generation resulted in Meta taking down its ad accounts and ~200 associated ads, as well as PUMA ending its commercial relationship with the company.
124,000
Meta banned a 124,000-member Facebook group that served as a marketplace for social media accounts, and TikTok banned two creators encouraging fraudulent geo-shifting of accounts, following our reporting on “TikTok Dark.”
1.8M
TikTok banned 94 accounts with a combined 1.8 million followers using AI avatars of well-known journalists to spread clickbaity misinformation in Spanish.
114
Meta removed 114 ads and several related accounts promoting three different AI nudifier services. Google also deleted a Play app that was behind 17 of those ads.
23
Google stopped providing single-sign on services to 23 AI nudifier websites we wrote about as part of our special investigation on the ecosystem.
33
YouTube terminated 33 videos and the related accounts tied to a group of creators we found were posting fake highlights of Club World Cup matches before they were even played.
96
TikTok deleted 96 accounts and 3 videos tied to an organized network using videos of allegedly moribund celebrities to source leads for an advance fee scam on WhatsApp. Meta also blocked at least 6 WhatsApp business accounts.
16
YouTube terminated 16 channels and demonetized several more after we showed they used generative AI to spread false claims about the Sean “Diddy” Combs trial.
198
Amazon deleted 198 error-filled, AI-generated children’s books that we found on the platform.
2+
LinkedIn removed multiple fake accounts after we reported on account rental schemes.
1000+
Meta removed more than 1,000 pornbait ads that we found had deceived users and broke its rules.
Reviews
Trustpilot and Meta removed fake online reviews that we showed were linked to a notorious global scam network.
More coming soon!!
The US National Association of Attorneys General referred to our report on the economics of AI nudifiers in a letter to online payment platforms urging them to take “strong action” against those tools. The same report was referenced in an enforcement action by the Australian eSafety Commissioner against a company involved in these services.
Meta sued the company behind a noxious network of AI nudifiers and introduced new classifiers to detect this type of content. The moves came after we published multiple stories that caused Meta to remove more than 10,000 ads from the same company.
US Senator Dick Durbin and US Representative Debbie Dingell quoted reporting by Faked Up (the predecessor newsletter to Indicator) in letters to tech CEOs urging them to act with greater conviction to limit the reach of AI nudifiers.
CNBC quoted our findings on the AI nudifier economy in a special report on three victim-survivors of this form of image-based sexual abuse. This work was also quoted by The Economist.
NBC News picked up our findings on AI avatars of Spanish-speaking news anchors flooding TikTok with fake news.
Our findings on the economy of AI nudifiers served as the basis for a Wired article.
The Guardian republished two of our stories on fake Club World Cup highlights and AI slop about Diddy.
Nieman Lab, Poynter, and Semafor covered our launch.