Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.
This week on Indicator
Craig gave a detailed walkthrough of Earth Index, the tool that makes the planet searchable via AI. It democratizes geospatial machine learning, handing high-end technical power to anyone with a browser.
We also released the fourth episode of our podcast, Show & Tell! This month, Barbara Marcolini, a visual investigator with Amnesty International's Evidence Lab, spoke about how she verified videos of Israel’s destruction of civilian infrastructure in Southern Lebanon. Watch the show on YouTube or listen to it on Apple Podcasts or Spotify.
Deception in the News
📍 A US federal judge heard oral arguments in Coalition for Independent Technology Research v Marco Rubio. The lawsuit was filed against the US government over the State Department’s decision last year to place visa restrictions on fact-checkers and content moderation experts who it said “have led organized efforts to coerce American platforms to censor, demonetize, and suppress American viewpoints they oppose.”
📍 The New York Times emailed freelancers with a reminder not to use AI in submissions to the newspaper, Futurism reported. The message came two weeks after the paper had to issue an Editor’s Note for an article that had attributed an AI-hallucinated quote to Pierre Poilievre, the leader of the Canadian Conservative Party.
📍 The Press Gazette reported that four freelance journalists who cover crypto failed to provide any evidence that they are real people. The publication found that the group “have written over 1,000 articles for more than 30 different news outlets,” including Forbes, Huffington Post, and TheStreet.
📍 Related: The Florida Trib dug into a local news site and found that its reporters “are creations of artificial intelligence – complete with fake headshots and made-up biographies.” The site is part of a network run by The Discoverability Company, an online reputation management service.
📍 A secondary school in the UK was reportedly blackmailed by an anonymous group that scraped photos of students from the school website and used them to create deepfake nudes. The alleged blackmailers threatened to publish the sexualized images.
📍 The chair of the US Federal Trade Commission reminded 12 tech companies of their upcoming obligation to comply with the Take It Down Act. The law requires platforms to remove nonconsensual synthetic intimate imagery within 48 hours of receiving a valid request.
📍 Santa Clara County sued Meta, alleging that it “knowingly facilitates and profits from billions of scam advertisements on its popular Facebook and Instagram platforms that defraud seniors and families and squeeze legitimate small businesses out of fair access to consumers.”
📍 Actors in several micro dramas were shocked to discover that their series were being promoted with AI-modified sexualized clips that they never filmed, according to Business Insider.
📍 The South China Morning Post found at least 40 “Chinese-language videos suspected to be AI deepfakes with different narrators berating Singapore for its treatment of China and sidling up to the United States.”
Sponsored
Free email without sacrificing your privacy
Gmail is free, but you pay with your data. Proton Mail is different.
We don’t scan your messages. We don’t sell your behavior. We don’t follow you across the internet.
Proton Mail gives you full-featured, private email without surveillance or creepy profiling. It’s email that respects your time, your attention, and your boundaries.
Email doesn’t have to cost your privacy.
Tools & Tips

Benjamin Strick published the latest edition of his Field Notes newsletter. This one looks at “phone data mercenaries, stolen wheat, Hormuz traffic, and how to shut down adtech surveillance.”
It includes the following paragraph about map apps that have followed the lead of American commercial satellite imagery providers and obscured imagery in the Middle East:
OSINTtechnical noted that some map providers appear to blur sensitive Kuwaiti sites after strikes during the Iran war, with Ali Al Salem Air Base cited as an example. A closer look at Kuwait shows numerous small Apple Maps blurs around water towers and power stations that aren’t blurred on Google. Blurring removes information, but it also creates it.
“Blurring removes information, but it also creates it” is a great line — and a reminder that the absence or redaction of information can be a powerful signal.
Strick is also quoted in a recent Columbia Journalism Review story about how newsrooms have adapted to the lack of high-resolution satellite imagery from Planet and others. Here’s a quick summary of some of the alternate imagery sources:
Absent Planet, researchers turned to imagery from a range of European, South American, and Asian satellites, available for purchase from reseller websites such as Australia’s Soar, South Korea’s SI Imaging Services, and Berlin-based Up42. Others sought high-resolution imagery from France’s Airbus, which charges in the low hundreds of dollars for a single image.
It began with Planet and Vantor (formerly Maxar) restricting access, and has seemingly spread to map providers. But persistent investigators will always find a way… — Craig
📍 Instagram is testing a new, voluntary “AI creator” label. The company said that “creators who often create with AI” can add it, and that it will appear “on your profile and alongside your content.” We’ve reported on the onslaught of undisclosed synthetic Instagram influencers and their myriad grifts. Let’s hope the label becomes mandatory soon.
📍 The latest edition of Technisette’s OSINT newsletter includes a bunch of helpful tips related to Facebook, Google Search, and more. I particularly liked her tip about how to find the date that an account became a Telegram channel admin. (Just note that many channels don’t publicly list their admins.)

📍 Joe Gray wrote, “The INT in OSINT: Why the Community Stops at Collection.”
📍 Toddington International published, “AI Tools for OSINT: A Practical Starting Point for Investigators” and “Reading the Room Online: SOCMINT, Language, and Digital Culture.”
Events & Learning
📍 The Pulitzer Center is hosting a free, four-part webinar series, “Investigating the Ocean.” The first session is May 19. More info and links to register are here.
📍 I-Intelligence is hosting free webinars about Russian open-source intelligence collection techniques. Info is here and you can email [email protected] to register.
Reports & Research

AI transparency labels on Instagram, LinkedIn, Pinterest, TikTok and YouTube
📍 In a research brief for the Danish Center for AI in Society, six researchers argue that labeling synthetic content on online platforms is “normatively important for informing users about content provenance.” (As Indicator has found, platforms are not living up to their promises to label AI-generated content.) The authors also noted that “research suggests that labelling alone is unlikely to mitigate manipulation, restore trust, or empower citizens.”
📍 Swedish developer Daniel Stenberg put Mythos, the too-powerful-to-release-publicly AI model from Anthropic, to the test on cURL, the popular open source tool he developed. His conclusion was more sedate than some of the chatter about the model’s capacity to find security flaws. Stenberg writes that AI code analyzers “do not reinvent the field in that way, but they do dig up more issues than any other tools did before.”
📍 A new report from five Canadian research groups found that, “foreign adversaries are exploiting the Alberta separatist debate to erode social cohesion, deepen domestic divisions, undermine trust in democratic institutions, and amplify perceptions of political instability that damage investor confidence in Canada.”
📍 ABC News found dozens of online retailers using AI content to pose as “down-on-their-luck craftsmen or small business owners in need of support.” The synthetic shopkeepers often elicit sympathy by falsely claiming they’re being forced to shut down or have been targeted with online abuse.
📍 It’s still relatively easy to circumvent AI safety guardrails, with one set of researchers managing to do so using poetry.
Want more studies on digital deception? Paid subscribers get access to our Academic Library with 75 categorized and summarized studies:
One More Thing
We truly live in incredible times:

Read the full story at 404 Media.
Indicator is a reader-funded publication.
Please upgrade to access all of our content, including our how-to guides and Academic Library, and to join our live monthly workshops.
