Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.

This week on Indicator

📍 Craig published the Indicator Guide to hunting for documents and files in open buckets, servers, and directories. It details Google dorks and tools that can help you find interesting and potentially confidential documents sitting on publicly accessible servers or websites.
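If you haven't used dorks before, queries in this vein look roughly like the patterns below. These are generic, illustrative examples of standard Google search operators, not queries taken from the guide itself:

```
intitle:"index of" filetype:pdf confidential
site:s3.amazonaws.com filetype:xlsx "internal"
```

The first pattern surfaces open directory listings that contain PDFs; the second restricts results to a cloud storage domain and a specific file type.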

📍 Alexios shared the results of the first ever site-wide comparison of the text and citations of Grokipedia and Wikipedia, which he conducted with Cornell Tech researcher Hal Triedman. Elon Musk’s AI encyclopedia is both highly derivative of the crowdsourced version and far less discerning when it comes to source quality. You can take a look at the underlying data on GitHub and Hugging Face.

📍 We also published the video, transcript, and slides for our most recent live members workshop. It showed how to use digital ad libraries to follow the money and research companies, scams, politics, and advocacy campaigns.

Last but not least, we want to flag a mistake in last week’s Briefing: arXiv hasn’t stopped accepting all CS preprints, just surveys and position papers in that field. We regret the error and are grateful to a reader for flagging it.

Deception in the News

📍 Donald Trump’s Truth Social account posted the (ridiculously) false claim that Barack Obama has been collecting government royalties for the use of his name in "Obamacare." The claim originated in an article published by Christopher Blair, the longtime Facebook hoax merchant who, as we previously reported, is raking in engagement and money thanks to Meta’s Content Monetization program. Another hoax from Blair’s network, about a trucking company leaving NYC following Zohran Mamdani’s election, earned millions of views on X.

📍 Tech advocacy group Public Citizen called on OpenAI to withdraw the Sora 2 model from public platforms, calling it “a direct and existential threat to our shared information ecosystem.” In related news, Aos Fatos found that 26 Sora-generated videos with false information were viewed 41 million times on TikTok.

📍 British doctors warned about “a flurry of men being pushed to up-tick their testosterone” as a result of misleading social media posts that promote testosterone replacement therapy to people who don’t need it.

📍 Google sued a Chinese cybercrime group for allegedly selling “phishing for dummies” kits under the Lighthouse brand.

📍 A solar panel installation company in Minnesota sued Google over the tech giant’s Search AI Overviews. The solar company alleges that it lost business when the AI feature falsely told people that the company had “settled a lawsuit with the state attorney general over deceptive sales practices.” There was no such lawsuit or settlement.

📍 At the COP30 climate conference, ten countries signed on to the Declaration on Information Integrity on Climate Change. Among other things, the declaration calls on platforms to “assess whether and how platform architecture contributes to the undermining of climate information ecosystem integrity, providing researchers with access to data to ensure transparency and build an evidence base.”

📍 The European Commission is proposing the creation of a European Centre for Democratic Resilience in order to combat foreign information manipulation. It also called for an “independent European Network of Fact-Checkers … to boost fact-checking capacity in all EU official languages.” For the record, I don’t love this level of institutional involvement in fact-checking. — Alexios

Tools & Tips

Geolocation can be time-consuming and difficult, especially when an image lacks indicators such as street signs or unique landmarks.

In some cases, AI may be able to offer a helpful suggestion or a somewhat educated guess.

I recently learned of two tools that use AI to try to determine the location of an image:

  • Where Is This Photo? is an AI-powered tool that allows you to upload a pic and receive an estimate of where it was taken. It says that it “uses advanced AI image geolocation and reverse photo location search to identify GPS coordinates from pictures.” There are free and paid versions. (via Cyber Detective)

  • GeoVLM is another option for AI-assisted image geolocation. (via The OSINT Newsletter)

GeoVLM offered these tips for users:

  • Outdoor images work best

  • Street views are ideal

  • Clear photos with visible landmarks

  • Unique architectural or natural features help

Keep in mind that Bellingcat recently tested the geolocation abilities of several LLMs and found them to be largely ineffective. It’s unclear how useful the above-mentioned tools are. But specialized models tend to perform better than general LLMs.

The most effective AI geolocation tool I’ve tested is GeoSpy. If you want to get a sense of how it works, why specialized models may perform better than LLMs, and how GeoSpy is apparently able to identify locations based on photos taken inside a house or other location, we’ve unlocked our Q&A with GeoSpy CEO Dan Heinen so anyone can read it over the next 48 hours.

We hope you enjoy it and will consider upgrading to access all of our content, including our detailed OSINT guides and monthly members-only workshops. — Craig

📍 Alicja Pawlowska’s OSINT newsletter published the latest installment of its OSINT for languages series with i-intelligence. This one is focused on Arabic.

📍 The FBI has subpoenaed a domain registrar to demand that it reveal who runs archive.today, the much-used web archiving site. As noted by 404 Media, people often use the site to share paywalled content. But it’s also popular with OSINT investigators. The legal maneuver highlights how we often rely on “tools built by anonymous individuals, with opaque funding, to preserve the digital history on which justice itself can depend,” wrote an OSINT analyst with the International Development Law Organization who goes by Oleksii N on LinkedIn. “It’s time for governments, academia, and investigative communities to invest in open, auditable, and jurisdiction-neutral archiving systems - built not just to fight paywalls, but to preserve truth.”

📍 Johanna Wild of Bellingcat wrote, “Lessons from Building an Online Toolkit to Aid Open-Source Investigations.” Congrats to her and to Bellingcat on the one-year anniversary of an incredible resource!

📍 Benjamin Strick published the second edition of his monthly OSINT newsletter. It covers everything from “simple panorama how-to's and Starlink signals to drone footage, fake architects, and the digital fingerprints that expose hidden networks.”

📍 Micah Hoffman and Griffin Glynn of My OSINT Training recently went on David Bombal’s YouTube channel to talk about how to build your own OSINT bookmarklet. They gave a nice shout-out to Indicator’s recent piece, “OSINT tips for investigating WordPress sites.”

You can also read more about MOT’s awesome bookmarklets in the recent Indicator guide, “How bookmarklets can reveal hidden online profile info and speed up OSINT investigations.”
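If you’ve never built a bookmarklet, the core idea is simple: a bookmark whose URL is a snippet of javascript that runs on whatever page you’re viewing. Here’s a minimal sketch, not one of MOT’s actual bookmarklets, that builds a WHOIS lookup URL for the current page’s hostname (who.is is used as a stand-in; swap in any lookup service you prefer):

```javascript
// Builds a WHOIS lookup URL for a given hostname.
function buildLookupUrl(hostname) {
  // encodeURIComponent guards against odd characters in the hostname.
  return "https://who.is/whois/" + encodeURIComponent(hostname);
}

// Packed into a javascript: URL, the same logic becomes a draggable bookmarklet:
// javascript:(function(){window.open("https://who.is/whois/"+encodeURIComponent(location.hostname))})();
```

Drag the `javascript:` line into your bookmarks bar, and clicking it on any page opens a WHOIS lookup for that page’s domain.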

Reports & Research

📍 According to the Tow Center’s CJ Robinson, just eight AI contributors to X’s Community Notes were behind 5 to 10% of all “helpful” notes on any given day over the past month. As I wrote in October, I think human participation in the program will likely continue to fall as people’s notes are crowded out by synthetic ones, and as fewer human-authored notes are shown to users. It’s a betrayal of the program’s stated purpose of putting moderation in the hands of “regular” users. — Alexios

📍 People are more likely to believe a false headline if it’s accompanied by a realistic AI-generated image that supports the claim, according to a pre-registered and peer-reviewed study published in the Harvard Misinformation Review.

📍 Older-generation large language models struggled when asked about the veracity of a false claim that their interlocutor believed to be true, according to a study in Nature Machine Intelligence. Models released after GPT-4o did a lot better.

Want more studies on digital deception? Paid subscribers get access to our Academic Library with 55 categorized and summarized studies.

One Thing

Pakistan’s Dawn newspaper had to apologize after it published an article that was generated using an LLM. The story, which was about auto sales in Pakistan, included a final paragraph that read:

“If you want, I can also create an even snappier “front-page style” version with punchy one-line stats and a bold, info-graphic-ready layout — perfect for maximum reader impact. Do you want me to do that next?”

Image via Reddit

The paper posted an apology on Facebook:

Indicator is a reader-funded publication.

Please upgrade to access all of our content, including our how-to guides and Academic Library, and to our live monthly workshops.