Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.
This week on Indicator
We published the Indicator Guide to building your own reverse image search engine by Ishaan Jhaveri, a Pulitzer Prize-winning data journalist. He walked through two tools (one free, one with a free trial) that enable you to index a private dataset of images (or videos) and run reverse image search queries against it.
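For the curious, here's a minimal sketch of the core idea behind many reverse image search systems: compute a compact perceptual fingerprint for every image in your dataset (here a simple "average hash" over a grayscale thumbnail), then match queries by Hamming distance. This is an illustrative toy, not how either tool in the guide actually works; real systems use far more robust features and decoding libraries like Pillow.

```python
# Toy reverse-image-search index using an "average hash" (aHash).
# Assumes images are already decoded and resized to 8x8 grayscale
# grids (lists of 64 ints, 0-255); real code would use Pillow/OpenCV.

def average_hash(pixels):
    """64-bit fingerprint: each bit is 1 if that pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    h = 0
    for p in pixels:
        h = (h << 1) | (1 if p > mean else 0)
    return h

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def search(index, query_pixels, max_distance=10):
    """Return names of indexed images whose hash is near the query's."""
    q = average_hash(query_pixels)
    return [name for name, h in index.items() if hamming(h, q) <= max_distance]

# Build an index from a private dataset, then query it.
bright = [200] * 32 + [40] * 32           # half bright, half dark
dark = [30] * 64                          # uniformly dark
index = {"photo_a.jpg": average_hash(bright),
         "photo_b.jpg": average_hash(dark)}

# A slightly noisy copy of photo_a should still match it.
noisy = [p + 5 for p in bright]
print(search(index, noisy))               # ['photo_a.jpg']
```

Because the hash thresholds each pixel against the image's own mean, small uniform brightness shifts leave the fingerprint unchanged, which is what makes near-duplicate matching possible.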
Alexios uncovered a coordinated network of AI thirst traps on Instagram that featured the logo of gambling website 1win on their “clothes.” If money changed hands, this would violate US advertising laws. Meta took down the accounts following our reporting.
At noon ET today, Craig delivers our March members-only workshop. He’ll share some of the new tools he discovered while writing The Indicator Guide to investigating ecommerce sites. This is useful for anyone who looks into products, scams, and dropshippers, or who needs to investigate websites. If you’re a member and want the Meet link, just reply to this email.
If you’re not a member, you can upgrade and we’ll send you the meeting link, plus the recording, slides, and transcript.
X’s head of product wants to rein in slop and deception. Will Elon Musk get in the way?
Earlier this week, X head of product Nikita Bier announced that the platform would adjust creator payouts to prioritize impressions received from a user’s local market. That means someone in India would make less money for posts about US politics and more money for posts about Indian affairs.
The change appeared designed to disincentivize people based overseas from posting about American politics and other topics commonly aimed at foreign audiences.
The move was an attack on a tried-and-true social media monetization model — one that has lined the pockets of everyone from the young Macedonians I wrote about roughly a decade ago to AI slop entrepreneurs and overseas commentators like Ian Miles Cheong. Jason Koebler of 404 Media said it well: “America’s Polarization Has Become the World's Side Hustle.”
Bier understands that there's a business model behind US-focused culture war and hyperpartisan content:

On Tuesday he said that the platform “will be giving more weight to impressions from your home region—to encourage content that resonates with people in your country, in neighboring countries and people who speak your language.”
Bier added, “While we appreciate everyone’s opinion on American politics, we hope this will disincentivize gaming the attention of US or Japanese accounts and instead, drive diverse conversations on the platform.”
There was pushback from some users, and there are reasonable concerns about how this could actually work. Then, hours after Bier’s announcement, X owner Elon Musk tweeted that X will “pause moving forward with this until further consideration.” (A Community Note citing Musk's tweet was added to Bier's announcement, which is an objectively hilarious turn of events.)
X’s monetization program has always operated with a level of opacity. But Bier’s announcement followed by Musk’s pullback is emblematic of a growing contrast between Bier’s and Musk’s priorities, especially in light of the latter’s political activity.
As one user tweeted:

Bier seems genuinely interested in finding ways to reduce manipulation and low-quality content. Musk is often an amplifier of unreliable information, extreme hyperpartisan rhetoric, and AI slop.
Bier launched an “About this account” feature that provides transparency about a user’s location (read our guide to it here). He recently rolled out the ability for people to restrict comments to their followers, which was pitched as a way to stop the “sloppers.” He also announced that X would demonetize accounts that fail to label AI-generated content about conflicts. And he’s talked about building X to “protect the platform against the AI Slopacalypse.”
In contrast, Musk downplayed Grok’s mass-undressing spree and is a driving force behind Grokipedia, which Indicator revealed is pushing political viewpoints and citing extremist websites in articles about important/contentious issues. Musk regularly reposts at-times porny AI slop to show off Grok’s image and video generation capabilities.

There are legitimate drawbacks and concerns about rewarding users who post localized content. Pausing to get it right is fine, assuming it actually launches. But the more consequential question is whether Musk’s embrace of increasingly extreme politics, AI slop, and sexualized imagery will undermine or derail Bier’s product roadmap. —Craig
Deception in the News: Meta Oversight Board weighs in on Community Notes

The Meta Oversight Board called on the company to consider several factors when deciding where to expand its Community Notes program. The board said Meta should weigh whether countries are facing conflicts and are divided across multiple axes and said that “human rights risks may warrant withholding community notes from a particular market.” (The crowdsourced fact-checking product, launched as part of Mark Zuckerberg’s effort to curry favor with Donald Trump, is currently only available in the United States.)
The Board also called for “substantial transparency, reporting and researcher access to data,” which was the key ask in a comment I submitted with Renée DiResta as part of a public consultation.
Despite promising to build Community Notes “just like X,” Meta hasn’t shared any meaningful data about the project.
For all of X's many problems, its Community Notes data is fully public and downloadable. This allowed, for example, Indicator to build a tracker that monitors the prevalence of AI-based Note contributors. If Meta wants help in making evidence-based decisions on Community Notes, it’s got to share the evidence first.—Alexios
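As an illustration of what that openness enables, here’s a minimal sketch of working with X’s downloadable notes data, which ships as tab-separated files. The column names below (`noteId`, `noteAuthorParticipantId`) are assumptions based on the published dataset; check the header of the file you download before relying on them.

```python
# Sketch: count notes per contributor from X's public Community Notes
# export, a tab-separated file. Column names are assumptions based on
# the published dataset; verify them against the actual file header.
import csv
import io
from collections import Counter

def notes_per_author(tsv_text):
    """Map each noteAuthorParticipantId to its number of notes."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return Counter(row["noteAuthorParticipantId"] for row in reader)

# Tiny made-up sample standing in for the real multi-gigabyte export.
sample = (
    "noteId\tnoteAuthorParticipantId\tsummary\n"
    "1\tAAA\tNeeds context\n"
    "2\tAAA\tMisleading claim\n"
    "3\tBBB\tSatire\n"
)
counts = notes_per_author(sample)
print(counts.most_common(1))   # [('AAA', 2)]
```

A few lines like these are enough to start asking questions about contributor behavior, which is exactly the kind of scrutiny Meta’s closed version forecloses.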
📍 The European Parliament voted to postpone the deadline for AI companies to introduce watermarks on their content by three months, to November 2026. MEPs also agreed to ban tools that generate nude images of recognizable individuals without their consent. I expect AI nudifiers will implement checks that require users to acknowledge they obtained the consent of the person whose photo they are about to “undress,” in order to abide by the regulation. Whether that will be considered “effective safety measures” will probably be a matter of litigation, as it is in the United States.
📍 Despite losing a Supreme Court case in 2024, a group of plaintiffs who alleged they were censored by social media platforms for sharing COVID-19 misinformation is now claiming victory following a related lawsuit that was settled this week. The settlement binds the CDC, CISA, and the surgeon general not to threaten social media platforms with punishment for failing to moderate content. As Renée DiResta put it, this is a participation trophy — not a victory.
📍 OpenAI announced it will shut down Sora. Turns out a feed heavily filled with deepfake slop that is also extremely costly is not a winning proposition! Good riddance to a horrid tool that made it possible to mock the dead in appalling new ways.
📍 Wikipedia editors further tightened the rules on AI use, effectively banning users from adding any synthetic text to the online encyclopedia. This comes after a Wikipedian was found to have deployed an AI agent called TomWikiAssist, which has since been blocked. Bill Adair, who has a book on Wikipedia in the works, spoke to the AI agent’s creator. I can’t get over the fact that the first edit the agent made was to the Wikipedia page for the Turing test.—Alexios
📍 Plenty of legal news this week on the deepfake nudes front. Germany is considering criminalizing this type of content after actress Collien Fernandes sued her husband for allegedly sharing real and synthetic intimate images without her consent. Baltimore is suing X for breaking consumer protection laws by placing insufficient guardrails on Grok’s ability to undress images of real people. And two former students of a Pennsylvania high school who created deepfake nudes of 60 girls in their community were each sentenced to 60 hours of community service.
📍 Andy Parsons, global head of content authenticity at Adobe, posted that “reports I've heard suggest xAI is now testing C2PA provenance. If that's true, I am delighted.” Let’s see if we’ll have to add them to our next AI labeling audit.
📍 Reddit’s CEO said the platform is considering adding mechanisms such as Face ID or Touch ID to prove that there’s a human behind each account.
📍 NewsGuard found that audio deepfakes of former US president Bill Clinton denouncing the Iran conflict got more than 10 million views on YouTube.
Tools & Tips: TweetDeck goes further behind the paywall

Remember TweetDeck, the awesome tool that let you monitor lists and run complex search queries for free on Twitter? If you’re like me, you stopped using it when Elon Musk put it behind the X paywall after buying the company.
Now it’s even less accessible. On March 26, the company moved TweetDeck (now called X Pro) into the Premium+ tier, which costs $40 a month in the US. (Premium is $8.)
X advanced search works pretty well most of the time, but the capabilities of TweetDeck/X Pro remain unique. And expensive.—Craig
📍 The OSINT Consultants released a web-based version of Deaddrop, a Telegram search tool. You can run queries against a dataset of more than 175 million messages across over 2 million channels. New users get a free 30-day trial.
📍 Another Telegram tool is Telegago, which was recently highlighted by The OSINT Newsletter. It’s a Programmable Search Engine that uses Google to search for Telegram content. (What’s a Programmable Search Engine? Read more in our recent guide to searching in the age of AI.)
📍 The OSINT Newsletter also launched an OSINT Tools Library, a “curated, investigator-first directory of tools used in real cases.”
📍 Claudia Tietze shared Netryx, an open-source tool that describes itself as “a local-first geolocation tool that identifies the exact GPS coordinates of any street-level photograph.” It runs on your machine. Tietze said that it “gives you enough other data to confirm a location’s accuracy quickly, or to pivot your investigation.”
📍 CanIRun.ai allows you to enter your computer’s specs and find out which AI models can run locally on your machine. (via Cyber Detective)
📍 The Global Investigative Journalism Network launched the Tech Focus Project, which features “stories on real-world techniques covering tech and AI, and masterclass videos with experts in accountability reporting on technology.” It just published the fourth chapter, “Investigating Disinformation in the Age of AI.” (Craig is quoted.)
📍 Precious Vincent published “Email OSINT: The Digital Foothold.” It’s a nice walkthrough of approaches for researching an email address.
📍 Nico Dekens wrote, “From Open Web to Real-World Target: The OSINT Trail Behind Proxy Violence.”
📍 Toddington International published, “OSINT Practice Platforms from Simulation to Real-World Impact.”
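The CanIRun.ai item above hinges on simple arithmetic that is worth knowing even without the tool: a model’s weight memory is roughly parameters × bytes per parameter, plus headroom for activations and the KV cache. Here’s a rough sketch; the 20% overhead factor is my assumption, not CanIRun.ai’s actual formula.

```python
# Rough sketch of the memory math behind "can this model run locally?"
# checkers: weight memory ~= parameters x bytes per parameter, plus an
# overhead factor for activations and KV cache. The 20% overhead is an
# assumption for illustration, not any tool's actual formula.

def fits_in_memory(params_billions, bits_per_param, available_gb, overhead=1.2):
    weights_gb = params_billions * (bits_per_param / 8)  # 1B params ~= 1 GB at 8-bit
    return weights_gb * overhead <= available_gb

# A 7B model quantized to 4 bits needs ~3.5 GB of weights (~4.2 GB with
# overhead), so it fits on an 8 GB machine; at 16-bit it needs ~14 GB
# of weights and does not.
print(fits_in_memory(7, 4, 8))    # True
print(fits_in_memory(7, 16, 8))   # False
```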
Reports & Research

📍 A survey published in the Journal of Online Trust & Safety found Americans broadly support the labeling of social media posts that “contain misinformation with information about and from verified sources.” While there is variance between Republican and Democratic respondents, labeling gets relatively high scores on both fairness and effectiveness from both sides of the US political spectrum (see chart above).
📍 OpenMeasures published an overview of how Telegram, 4chan, and Bluesky are used in attempts to game the prediction platform Polymarket. (Last week, The New York Times reported that the company has published hundreds of false and misleading posts on its social media accounts.)
📍 With Iran’s propaganda efforts ramping up, PolitiFact published a useful overview of the theocratic regime’s influence operations.
📍 EUvsDisinfo launched FIMI Explorer, an interactive dashboard that enables you to examine and visualize foreign information manipulation activities. “It allows users to easily navigate key networks used in FIMI attacks, demonstrating connections between threat actors involved in information manipulation activities and their role,” according to the site. Read more here.
📍 Conspirador Norteño dug into an account that sells fake TikTok followers.
Want more studies on digital deception? Paid subscribers get access to our Academic Library with 75 categorized and summarized studies:
One More Thing

404 Media found that a company named WebinarTV has been recording and publicly posting (without permission) tens of thousands of Zoom meetings that were not set to private. The webinars also get discussed in a companion episode of the “Phil & Amy Show,” an AI-generated podcast.
WebinarTV presents itself as “a search engine for the best webinars, making it easy to discover the most current wisdom from the world’s most knowledgeable people.”
Indicator is a reader-funded publication.
Please upgrade to access all of our content, including our how-to guides and Academic Library, and to join our live monthly workshops.