Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.

This week on Indicator

Craig found more than two dozen accounts that used AI-generated doctors and health professionals to push often dubious advice on Instagram, Facebook, TikTok, and YouTube. They collectively amassed 8.5 million followers.

"@grok is this true" was the single most frequent reply tagging X's AI chatbot in the six months following its launch, according to new data. Alexios explored what that means for Community Notes and fact-checking on Elon Musk’s platform.

Deception in the News

📍 Facta reports that a clip showing Italian Prime Minister Giorgia Meloni challenging critics of her pro-Trump stance was edited to falsely portray her as standing up to the US president. The misleading clip made it seem like Meloni was genuinely talking about closing US bases and storming McDonald’s. Facta said the doctored video originated from an Indian pro-Putin content farm on X. (Even in the age of deepfakes, simply removing context is an effective way to disinform — Alexios)

📍 A few stats that illustrate the staggering scale of the nonconsensual deepfake nudes phenomenon: Wired found that “more than 1.4 million accounts were signed up to 39 deepfake creation bots and channels on Telegram,” while The Guardian collected 150 Telegram groups that shared “a feed of images – of celebrities, social media influencers and ordinary women – made nude or made to perform sexual acts by AI.” And the Tech Transparency Project found almost 100 nudifier apps across the Apple and Google app stores.

📍 Meanwhile, the EU Commission is investigating xAI for not doing enough to prevent Grok from generating millions of nonconsensual sexualized images. Thirty-seven attorneys general for US states and territories are also investigating.

📍 The Singaporean government is requiring Meta to implement “enhanced facial recognition measures” to prevent impersonation scams — or face a fine. Meanwhile, Belgium’s King Philippe is being impersonated by scammers in video calls.

📍 Mainstream media outlets in Italy and Germany fell for AI-generated videos that claimed to show a (legitimately historic) snowstorm in the Russian region of Kamchatka.

📍 ChatGPT has been spotted quoting Grokipedia. (I guess that’s better than Grokipedia quoting Grok 1,000+ times?)

📍 Aos Fatos reports that cutesy AI-generated videos of anthropomorphized fruits and other foodstuff are spreading health misinformation on TikTok.

📍 Instagram’s universe of AI thirst trap accounts — which Alexios wrote about in 2024 — is getting weirder and weirder.

Tools & Tips

📍 X user @Harrris0n, who works at CovertLabs, wrote an eye-opening article, “How Waze quietly built the world's largest crowdsourced surveillance system.” He revealed that hitting the “report” button in Waze shares your “exact location, the precise time, and your username.” He showed how to collect that information and use it to identify people. Waze subsequently removed usernames from publicly available reports.

📍 The Nieman Journalism Lab reported that large publishers like the New York Times are limiting (or entirely blocking) the Internet Archive’s web crawlers from accessing their sites. The publishers say it’s to prevent AI companies from using the service as a back door to index their content. You can still save individual pages to the Archive, but fewer pages will be automatically added from places like The Guardian and The New York Times.

📍 Rolli IQ, a useful tool for monitoring and analyzing social media, added support for four additional platforms: Facebook, Instagram, Threads, and Weibo. They join Bluesky, LinkedIn, X, Reddit, and YouTube. Rolli is a paid platform that offers free credits to journalists. Craig previously dug into Rolli and two other monitoring platforms.

📍 The OSINT Vault is a nicely organized collection of tools from Nicole Hurey. She recently redesigned it.

📍 Ensun is an AI-powered company search tool that can accept semantic queries like, “companies that sell OSINT tools.” (via boudjenane soufiane)

📍 Josh Lepawsky wrote “A recipe for finding and mapping data centres,” a nice tutorial that uses QGIS, OpenStreetMap, and Google Earth Pro.

Events & Learning

📍 Partner event: OsintifyCON is a virtual conference dedicated to Actionable Open Source Intelligence and designed for investigators, analysts, journalists, and intelligence professionals. The conference is on Feb. 5 and the training is on Feb. 6. Go here to get a conference ticket and here to register for the full day of training.

📍 Factchequeado and the Public Tech Media Lab are offering a free OSINT course for journalists. The English session is on Feb. 4 and the Spanish workshop is on Feb. 11. More info here.

📍 Lupa is hosting an event for the release of the 1st Panorama of Disinformation in Brazil, a report that analyzes disinformation trends in the country. It’s on Feb. 5 and you can register for free here.

📍 Ethos is hosting a free webinar on Feb. 25, “OSINT Advantage: Uncovering Fraud with Open-Source Intelligence.” Register here.

📍 Plessas Academy has a free online course, “Finding and Identifying Internet Subcultures.” Sign up here.

Reports & Research

📍 A high-profile group of researchers and practitioners published an article in Science arguing that AI agents risk making influence operations larger and harder to detect, turning the rigid botnets of the past into adaptable “AI swarms.”

📍 Maldita spoke to a TikTok creator who’s pushing AI-generated polarizing material about protests around the world. The interview was part of an investigation into a group of 550 such accounts. The creator said their goal is to hit 10,000 followers so they can monetize through TikTok’s Creator Rewards Program.

📍 Conspirador Norteño found that “services that sell fake social media engagement are advertising on Bluesky at an increasing rate.”

📍 When The Markup posted a job for an engineer, it was hit with AI slop resumes, impersonators, and fake candidates. Andrew Losowsky described the red flags and scams.

One More Thing

An important and timely reminder that “AI-enhancement” photo tools don’t uncover evidence — they invent data. In the wake of the killing of Alex Pretti, people tried to use such tools to “enhance” images of his deadly encounter with ICE, according to BBC Verify.

“AI-enhancement tools don’t have some privileged knowledge of the reality that lies beneath a low-quality image. Instead, they approximate what an enhanced version of an image could look like,” generative-AI expert (and Indicator member!) Henry Ajder told the outlet.

Indicator is a reader-funded publication.

Please upgrade for access to all of our content, including our how-to guides and Academic Library, and to our live monthly workshops.