Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.
This week on Indicator
We launched OSINT Navigator! It’s a free app (with extra perks for members) that allows you to ask a question like “How do I find the owner of a website?” and receive recommended tools and guidance to help with your investigation. Free users get 10 queries a day; Indicator members get 50, plus access to the API and MCP. Give it a test right now and read more here.
Alexios wrote what is likely the first deep dive into a coordinated influence operation targeting Community Notes on X. During the 2024 UK general election, five accounts often submitted identical notes and agreed on 99.8% of their ratings, with the goal of removing fact checks from tweets posted by Conservative Party accounts.
This week we also launched, in partnership with Wired, a map of reported cases of AI nudifiers being used in schools around the world. The map displays the real-world harm of these tools of abuse.
And a little brag: Indicator was honored in two categories at the Digital Media Awards Americas 2026. We were named Best in Countering Disinformation and Best Emerging News Providers.

Google ran ads on queries about deleting intimate images
When someone searches Google for help removing intimate images of themselves, they could be shown ads for paid services that don’t meet their needs.
Search queries like “Delete nudes of me on Tik Tok” and “Delete nudes of me on Google” triggered ads in Google search for at least five different services – Incogni, Guaranteed Removals, DeleteMe, Reputation Defender, and Fix Your Name.
In total, I was shown 12 different ads across seven search queries that were run from the United States; performing the same searches from Italy returned ads for different companies.
Most of the ads promoted generic content removal products that don’t provide a tailored service for nonconsensual intimate imagery (NCII). The companies are seemingly bidding to show their ads next to queries related to online content removal.
Most importantly, all of them are paid services. This means that victims seeking help removing nonconsensual intimate content are shown a sales pitch before they see the free removal page for a given platform.
I stumbled on the ads as part of my work auditing the reporting flows of 11 major platforms through the eyes of a victim. To see how easy or hard the reporting tools were to find, I recorded results for search queries like “delete nudes of me from [platform]” and "delete deepfake nudes of me from [platform]."
Before I even reached the platforms, however, I noticed the ads displayed in Google Search results. They raised critical questions: How many desperate victims click on the ads? Do the services deliver on their promise?
Indicator contacted the five companies whose ads were displayed, but did not receive a response.
The ads appeared exclusively on the searches for real nudes, with none showing up in search results for deepfaked material. Legislation in several countries requires platforms to take down nonconsensual nudes regardless of whether they are genuine, but Google appears to treat the two inconsistently.
Google did not respond to multiple requests for comment from Indicator about the ads.

Notably, the ads may be a Google-specific phenomenon: the same searches on DuckDuckGo and Bing did not display sponsored results. Ironically, however, Bing surfaced results for AI nudifier sites when queried for how to delete deepfake nudes.

Search placement matters. While ads reportedly capture 1-2% of total clicks, applying standard e-commerce metrics to a survivor in crisis is tricky. In a moment of heightened emotions, it is possible that a survivor is more inclined to click a sponsored link that promises to fix their problem.
"In that moment when somebody discovers that they have intimate images online, it's almost like the world is just swirling around them, there's so much anxiety and fear,” said Ilse Knecht, a policy advocate at the Joyful Heart Foundation. “Thinking clearly and linearly while you're in trauma is really difficult, and the minute you see those photos, you want them down."
Even for users who scroll past sponsored content, the ads behave like a barrier, pushing legitimate, free resources like StopNCII.org or the platform's own reporting mechanisms further down the page and obstructing the path to safety.
Search engines like Google should adopt the approach they have taken for other vulnerable user queries, such as those related to self-harm. In the United States, such searches return a large, unmissable banner for the 988 Lifeline, with no ads. NCII should be treated with the same safety logic: free tools like StopNCII.org and the relevant platform’s removal forms should be just as easy to find.
When it comes to nonconsensual nudes, finding the “Report” button shouldn't require a credit card. – Rachel Keels
Deception in the News
📍 X cut payouts to accounts that post “stolen reposts and clickbait” by 60%, with another 20% reduction in the works. It’s unclear how such accounts will be detected, beyond a reference to penalizing the overuse of “BREAKING.” Head of product Nikita Bier posted that “X will never infringe on speech or reach—but we will not compensate for manipulation of the program or our users.” This sure looks like more trust and safety measures dressed up as creator policy to me… — Alexios
📍 The US Department of Justice settled a lawsuit that claimed the State Department had “actively silenced and censored disfavored speech” by outlets like The Daily Wire through its Global Engagement Center (GEC). The plaintiffs claimed that the GEC’s support of anti-disinformation initiatives had led to the suppression of their content on platforms such as X. The State Department agreed not to “use, finance or promote technology that would be used to suppress, censor, demonetize or fact-check free speech of Americans or domestic media outlets” through 2036.
📍 Russia’s internet regulator Roskomnadzor has reportedly banned Bluesky.
📍 NBC News found Grok is still generating sexualized images of real people on X. Meanwhile, Britain’s technology secretary proposed that senior executives at technology companies be made personally liable if they fail to act on deepfake nudes when required to by Ofcom, the media regulator.
📍 Manhattan District Attorney Alvin Bragg sent a letter to Meta CEO Mark Zuckerberg asking him to do more to remove fake accounts that “falsely pose as pro bono legal services organizations, such as Catholic Charities,” in order to defraud people. “If you sincerely wish to protect the safety of your users from fraud, we urge you to take necessary, proactive steps,” Bragg wrote.
📍 The inclusion of links to the prediction market Polymarket in Google News was an error that has been patched, according to a company spokesperson.
📍 AI influencers showed up en masse to Coachella, reports The Verge. Related: on The Atlantic’s Galaxy Brain podcast, Charlie Warzel and New York Times reporter Tiffany Hsu discussed whether the scale of synthetic creators means authenticity on social media is officially defunct.
📍 “Disinformation spares no one” is the wry headline on an overview of false news targeting Brazilian presidential candidate Flávio Bolsonaro. Flávio is the son of former Brazilian president Jair Bolsonaro, who was sentenced to prison for leading a conspiracy to keep himself in power. A report from Brazil’s Federal Supreme Court found that during his administration, “the state’s intelligence structure allegedly was used to spread disinformation and illegally monitor politicians, judges, ordinary citizens and journalists,” according to the LatAm Journalism Review. Five people are facing related charges.
Tools & Tips

Last week I highlighted ShareTrace, a new open-source tool from Soxoj that can extract useful information from share URLs that are generated by social platforms and apps.
The context is that when you click the “copy link” button on an online post, the resulting share link often contains metadata. ShareTrace shows you the metadata, which may include the username and other details of the account that generated the link.
Soon after, Henk van Ess used the code to create a handy web-based app. No need to install it via Python. His site also offers useful background about when and how the tool can be useful. — Craig
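ShareTrace’s exact logic is platform-specific, but the core idea, reading the tracking parameters that a platform appends when you tap “copy link,” can be sketched in a few lines of Python. The URL and parameter names below are hypothetical, purely for illustration:

```python
from urllib.parse import urlparse, parse_qs

def extract_share_params(share_url: str) -> dict:
    """Return the query parameters embedded in a share link.

    Share links often carry identifiers (e.g. a sharer ID) that the
    platform adds when the "copy link" button generates the URL.
    """
    parsed = urlparse(share_url)
    # parse_qs maps each parameter name to a list of values;
    # keep the first value for each for readability
    return {key: values[0] for key, values in parse_qs(parsed.query).items()}

# Hypothetical share link with made-up parameter names
url = "https://example.com/video/12345?share_id=abc123&u=9876"
print(extract_share_params(url))  # {'share_id': 'abc123', 'u': '9876'}
```

In practice, a tool like ShareTrace also has to know which parameters on which platforms actually encode account details, which is the hard, investigative part.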
📍 Ciggies.app is an extremely niche resource that could be useful if you ever need to research or identify Chinese cigarette packaging. It’s a virtual museum and database of roughly 200 Chinese cigarette packages, including international brands.
📍 Another fun one: Below the Fold is a database that “draws on every article published in The New York Times from January 2000 to the present.” It was built by journalist Ted Alcorn and has a bunch of nice filtering and data visualization options.
📍 We recently published a deep dive into new and upcoming changes to beneficial ownership in the EU, UK, and overseas territories, while also highlighting free resources that investigators can use to examine corporate ownership and, in some cases, related assets. Some good news to report: Gibraltar’s ultimate beneficial ownership register is now free to search by company name or registration number. The news was highlighted by Stephen Abbott Pugh, a former journalist who now consults on open ownership data. He noted that “The register has been public since 2020 but until recently people needed to pay £2.50 per search.”
📍 The Global Investigative Journalism Network released new content as part of its recently launched Academy, which offers a wealth of training resources. One video features Craig offering advice on how to investigate an image.
Events & Learning

📍 We are having a jolly good time at the International Journalism Festival in Perugia! Folks who couldn’t make our panel on investigating platforms and workshop on cheap/free OSINT tools can check out the recordings on the IJF site.
📍 The EU Disinfo Lab is giving a free webinar on April 23, “Decoding Russian intelligence: What medals and insignia reveal.” A week later it will host “Look what you made me do: How FIMI actors weaponise pop culture.” Info and registration are here.
Reports & Research
📍 GitHub stars are a signal of credibility in the developer community. They’re also closely watched by investors looking to spot the next hot idea to fund. But as with every other measurement of social proof or online popularity, there’s a thriving marketplace where anyone can purchase stars for their repo. An investigation by Awesome Agents dug into the stars-for-sale ecosystem and found that they “sell for $0.03 to $0.85 each on at least a dozen websites, Fiverr gigs, and Telegram channels.” The report also identified GitHub repositories with suspiciously high numbers of stars. Also worth reading: a paper from a team led by researchers at Carnegie Mellon University that “identified six million (suspected) fake stars across 26,254 repositories.”
📍 Musician and writer Eliza McLamb wrote a great post on her Substack about Chaotic Good Projects, “a digital marketing agency that promises to create virality by, among other things, manufacturing hundreds of fake fan accounts for musicians.” Wired followed up with a look at the agency and one of its clients.
📍 Aos Fatos found viral videos on TikTok that promoted the false claim that tea made with avocado pits can help you quit smoking or drinking. The accounts pushed users to purchase ebooks that contained health misinformation.
📍 The BBC spoke to one of the creators of the viral LEGO-themed Iranian propaganda videos, who confirmed that the theocratic government is “a customer.”
📍 The cybersecurity firm HUMAN found a network of malicious domains that hijacked push notifications through Google Discover in order to subscribe people to websites that pushed clickbait, “scareware,” and scams.
📍 The Tech Transparency Project found 38 apps across the Apple and Google app stores that could nudify or undress women. The apps were downloaded 483 million times and made more than $122 million in lifetime revenue, according to data from analytics firm AppMagic.
Want more studies on digital deception? Paid subscribers get access to our Academic Library with 75 categorized and summarized studies:
One More Thing
NASA’s Artemis II mission successfully traveled to the far side of the Moon and back, thrilling space nerds — and infuriating conspiracy theorists who believe humans have never traveled to the Moon.
What do you do when high resolution, real-time images and video (and heaps of other evidence) undermine your pet theory? Turn to AI slop, of course.
Futurism reports that people have created and/or cited AI-generated videos in order to argue that the Artemis II mission was in fact filmed with a green screen.
FRANCE 24 did a real debunk of the fake debunk:
Indicator is a reader-funded publication.
Please upgrade to access to all of our content, including our how-to guides and Academic Library, and to our live monthly workshops.