Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.
This week on Indicator
Craig outlined 19 free tools you can use to investigate a URL. Learn tools and techniques for digging into links, redirects, parameters, and more; a quick illustrative code sketch of the most basic checks appears after this list.
Alexios wrote about viral “isolation challenge” videos on TikTok that enticed people with the (false) promise of a big payday if they could spend a few weeks in a luxurious-but-secluded location. The posts were lures for advance payment scams; TikTok removed hundreds of them following our outreach.
A nice thing: Indicator is nominated for General Excellence in Digital Publishing at the Digital Publishing Awards.
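On that URL-investigation theme: Craig's piece covers free web tools, but the simplest checks can also be scripted. As a rough, illustrative sketch (not drawn from the article), here's how you might follow a URL's redirect chain and list its query parameters in Python with the third-party requests library:

```python
import requests
from urllib.parse import urlparse, parse_qs

def inspect_url(url: str, timeout: int = 10) -> None:
    """Follow a URL's redirects, then print its final destination and query parameters."""
    response = requests.get(url, allow_redirects=True, timeout=timeout)

    # response.history holds each intermediate redirect, in order.
    for hop in response.history:
        print(f"{hop.status_code} {hop.url} -> {hop.headers.get('Location')}")
    print(f"Final URL: {response.url} (status {response.status_code})")

    # Break the query string into individual parameters (e.g. tracking tags).
    for name, values in parse_qs(urlparse(response.url).query).items():
        print(f"  {name} = {', '.join(values)}")

if __name__ == "__main__":
    # Hypothetical example URL; swap in the link you want to examine.
    inspect_url("https://example.com/?utm_source=newsletter")
```

Dedicated tools go much further (shorteners, cloaking, historical snapshots), but this shows the mechanics behind the two simplest checks.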

Send Indicator your trust and safety challenges
In June, I will be leaving my main job running the Security, Trust, and Safety initiative at Cornell Tech.
I’m going to miss a lot of things about it, one of which is the opportunity to engage with T&S professionals who are earnestly grappling with the legitimately hard trade-offs of moderating speech at platform scale.
To be clear, I don’t think all platform failures are complex sociotechnological challenges. Banning single-purpose AI nudifiers is a slam dunk, for example. But drawing the line on deepfakes of dead people is trickier. Community Notes can be both good and bad.
With that in mind, I’m opening up my inbox to trust and safety professionals who want an informed second opinion on a knotty problem they’re trying to solve. Note that my background is in content policy and red teaming, and my subject matter expertise is primarily in information integrity; I will draw on outside experts for topics beyond my own.
Here’s how it will work:
You can email me at [email protected] (or [email protected] if you prefer a non-Google inbox) about a T&S topic that’s vexing you.
Good questions might include things like: “How do you define an authoritative health institution to highlight for sensitive topics without relying on government affiliation as a proxy?” or “Should a cartoon likeness of a person doing something offensive count as harmful impersonation?”
Feel free to anonymize as much as needed, or to phrase it as a hypothetical. I’m also happy to share my Signal after you reach out. I won’t use any identifying information about you or your place of work unless you explicitly allow me to.
I will use your problem to look broadly at its manifestation across multiple services and interview experts about possible solutions.
Think of this as a little like the trust and safety equivalent of The Ethicist column in The New York Times, down to a shared focus on romance scams.
Any takers?
— Alexios
Deception in the News

Credit: Spotify
📍 Spotify announced it will add a badge to artists it has verified as human based on a range of signals, including “concert dates, merch, and linked social accounts.” The company wrote in the announcement that “profiles that appear to primarily represent AI-generated or AI-persona artists are not eligible for verification,” but prefaced this sentence with an “at launch” that suggests the policy may change over time…
📍 The 19th reports that Minnesota’s Senate unanimously approved a ban on AI nudifiers. The bill just needs Governor Tim Walz’s signature to become law.
📍 Taylor Swift filed trademark applications for two voice clips and an image of herself in a move that experts say is a precautionary measure against AI impersonation.
📍 The shooting at the White House Correspondents’ Dinner promptly spawned conspiracy theories claiming it was staged. Extra credit to the people who think a time-traveling AI warned that this was going to happen.
📍 South Africa’s Minister of Communications and Digital Technologies withdrew a proposed AI policy after it was found to contain likely AI-generated fabrications. He said that the errors “compromised the integrity and credibility of the draft policy.”
📍 Americans reported $2.1 billion in losses from social media scams in 2025, according to the latest data from the Federal Trade Commission.
📍 Three Arizona women have sued the men behind AI ModelForge, a network of AI porn influencers. The plaintiffs claim their likenesses were used to generate the synthetic models, and that AI ModelForge actively trained other men to trawl the internet for photos of real women to use in their own nonconsensual pornographic schemes. Some of this material is apparently still live on Instagram and Fanvue.
Sponsored
The Free Tech Newsletter That Readers NEVER Skip
Your uncle forwards you sketchy tech articles. Your coworker won't stop talking about AI taking everyone's jobs. And you're stuck Googling the same five questions every week.
The Current is a daily tech newsletter written by Kim Komando that helps you stay up to date on AI, tech, and trends in about 5 minutes a day.
Each morning she breaks down what’s happening in tech so you can quickly understand what matters without digging through a bunch of different questionable sources.
In each issue you’ll find things like:
Important AI updates
Useful tech tips
How to avoid the latest scams
It’s a simple read designed to help you eliminate the hours you probably spend Googling the same 5 tech questions.
Tools & Tips

We added over 2,000 new tools to OSINT Navigator.
When it launched a couple of weeks ago, Navigator suggested digital tools for your investigation based on a database of just over 7,500 entries drawn from 9 online OSINT toolkits. Now it includes over 9,700 tools from 17 lists.
The additional 8 sources are publicly maintained lists hosted on GitHub and elsewhere. You can view all 17 sources by clicking here:

Now you can go to Navigator, type in a search like “How do I investigate a crypto wallet,” and receive an improved list of potentially useful tools. Navigator offers 10 free searches a day, and shows which toolkit(s) suggested each tool. Tom Vaillant, the developer of Navigator, has also open sourced the data.
This is a great time to try Navigator if you haven’t already!
If you find it useful, you can upgrade to an Indicator membership and get 50 Navigator searches per day, MCP and API access, and access to all of our content and workshops. — Craig
📍 Aleksandra Bielska offered tips for how to access hidden results tabs in Google Search.
📍 Aidan Raney shared 13 OSINT tools you can use to investigate Reddit.
📍 Dmitry “Sox0j” Danilov made updates to Maigret, his open-source username enumeration tool. He said he rewrote the code and added “AI analytics and identity resolution, so you can get fast conclusions about a target across 3000 sites.” A bare-bones sketch of how username enumeration works in general appears after this list.
📍 Mario Santella shared a link to ShareAll, which aggregates a wide range of search engines, chatbots, and other tools.
📍 Nico Dekens wrote, “Vibe Coding Is Becoming an OSINT Risk.”
📍 Andrew Deck of the Nieman Journalism Lab wrote, “Geospatial AI is reinventing the rainforest beat.”
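For readers unfamiliar with username enumeration, the technique behind Maigret (mentioned above), here is a deliberately simplified, hypothetical sketch of the core idea: probe profile-URL patterns on a handful of sites and see which respond as if the account exists. This is not Maigret's implementation; real tools use per-site detection logic across thousands of services, and the site list and heuristic below are illustrative only.

```python
import requests

# Hypothetical, minimal site list; real enumeration tools cover thousands of
# services with per-site detection rules rather than a simple status check.
SITES = {
    "GitHub": "https://github.com/{}",
    "Reddit": "https://www.reddit.com/user/{}",
    "GitLab": "https://gitlab.com/{}",
}

def enumerate_username(username: str) -> None:
    for site, pattern in SITES.items():
        url = pattern.format(username)
        try:
            # Crude heuristic: many sites return 404 for missing profiles.
            status = requests.get(url, timeout=10).status_code
        except requests.RequestException as exc:
            print(f"{site}: request failed ({exc})")
            continue
        print(f"{site}: {'likely exists' if status == 200 else 'not found'} ({url})")

if __name__ == "__main__":
    enumerate_username("example_user")
```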
Events & Learning
📍 OSINT for Ukraine and The Hague Humanity Hub are offering a paid workshop on May 19, “Mapping Incidents for Accountability – A Practical Introduction.” It costs €35.
Reports & Research

“Warming up” LLMs negatively affects their accuracy, per this study in Nature
📍 Three researchers at the Oxford Internet Institute trained several LLMs to be warmer and friendlier in their responses. They found this also made the models far likelier to agree with inaccurate opinions. That could be a factor for AI companion bots, which are typically designed to provide support rather than surface information.
📍 The DFR Lab wrote about a pro-Kremlin effort to spread “unfounded EU economic loss figures in Estonia’s Russian-language information space ahead of October 2025 municipal elections.”
📍 HKU journalism professor Masato Kajimoto shared a slide deck that maps out the tensions of legitimacy frameworks deployed by fact-checkers in different media ecosystems. In a comment, public health researcher Tina Purnat wrote that “if verification is always going to be read as agenda-driven, the more useful question is how to work within that reality -- not keep arguing against it.” As the person who helped bake “a commitment to nonpartisanship” into the fact-checkers’ code of principles, I find this haunting but crucial food for thought. — Alexios
📍 Reset Tech documented over 350,000 Meta ads (and more on Google) that often impersonated celebrities, doctors, and pharmaceutical companies to sell “unregulated and potentially dangerous nutraceutical products” in the EU.
📍 What To Fix found that Meta, TikTok, YouTube, and X are monetizing accounts that belong to EU-sanctioned individuals and entities.
Want more studies on digital deception? Paid subscribers get access to our Academic Library with 75 categorized and summarized studies:
One More Thing
The latest edition of AI-features-we-absolutely-didn’t-ask-for. (via Katie Notopoulos.)

Indicator is a reader-funded publication.
Please upgrade for access to all of our content, including our how-to guides and Academic Library, and to our live monthly workshops.





