Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.

This week on Indicator

We published our review of 2025, the year tech embraced fakeness. VC-backed bot farms, endless AI slop, industrial level scams, abusive AI nudifiers, Meta paying people for hoaxes... This year deception was legitimized, monetized and shoved down the public’s throat. Read it for free.

Craig published the top new and updated digital investigative/OSINT tools of 2025. It’s a look at more than 45 tools that you can use to gather and analyze information.

Deception in the News

📍 Actor Liam Neeson narrated an antivax documentary that calls COVID-19 shots “dangerous experiments,” according to Important Context. Neeson had in the past publicly thanked the scientists behind the mRNA vaccine. His publicist said that “Liam never has been, and is not, anti-vaccination.”

📍 X terminated the EU Commission’s ad account for “[taking] advantage of an exploit in our Ad Composer — to post a link that deceives users into thinking it’s a video and to artificially increase its reach.” This was supposed to play as a form of gotcha after the EU had fined the company €120 million over, among other things, deceptive design of its blue checkmark. The effect was weakened by X’s head of product claiming that “X believes everyone should have an equal voice on our platform,” even as it explicitly prioritizes content from accounts that pay for a blue checkmark. Meanwhile, Russian propaganda took Elon’s side. — Alexios

📍 Reddit is piloting verification for a small number of profiles using grey checkmarks, which would replace the “official” watermark given to some businesses.

📍 Rail officials in Northwestern England canceled service for 90 minutes over a bridge that appeared to have been damaged in a viral AI-generated image. The BBC reported that “32 services including passenger and freight trains were delayed” (h/t Henry Ajder).

📍 An AI-generated YouTube video purporting to show Brazilian special forces cheerfully touring the destroyed headquarters of the Comando Vermelho criminal group was seen 3.5 million times.

📍 Australian rock band King Gizzard & the Lizard Wizard left Spotify earlier this year to protest the fact that the company’s CEO had invested in an AI weapons company. The band was soon replaced on the platform by an impostor account called “King Lizard Wizard” that posted AI-generated songs with the same titles as the real deal. “Seriously wtf we are truly doomed,” the band’s frontman told an Australian music magazine.

📍 Instagram is creating clickbaity and inaccurate AI-generated headlines for posts that are visible on Search. The original posters aren’t pleased.

You can (easily) launch a newsletter too

This newsletter you couldn’t wait to open? It runs on beehiiv — the absolute best platform for email newsletters.

Our editor makes your content look like Picasso in the inbox. Your website? Beautiful and ready to capture subscribers on day one.

And when it’s time to monetize, you don’t need to duct-tape a dozen tools together. Paid subscriptions, referrals, and a (super easy-to-use) global ad network — it’s all built in.

beehiiv isn’t just the best choice. It’s the only choice that makes sense.

Tools & Tips

Norwegian media researcher Ståle Grut’s prediction for Nieman Lab is that “a visual verification tax comes due.” He writes:

In 2026, newsrooms face a new reckoning: Either you’ve built out visual investigative competence or you’ll pay the price in lengthy corrections, lost credibility, and missed opportunities.

…the verification tax is coming due: You either invest in visual verification capabilities now or pay the price in credibility later. The verification tax represents the cost of operating in a media landscape where visual material is both everywhere and increasingly untrustworthy. For journalism, paying it entails either building the competence to assess eyewitness footage, satellite imagery, and AI-generated content, or ceding authority to those who can.

Grut correctly notes that some newsrooms have invested in skilled visual verification teams, but they’re the exception. Newsrooms generally lack the skills needed to meet the challenge of our easily manipulated digital information environment. It’s disappointing to me that this remains the reality more than a decade after we published the first Verification Handbook.

Specialized visual investigations teams will never be the norm in most newsrooms. They can’t afford it, and it doesn’t make sense for smaller publications. The good news, as Grut notes, is that “The core techniques of visual investigations aren’t magic.” Anyone can learn fundamental visual verification techniques — and that’s what should be widespread within newsrooms.

Along with the Handbook(s), there are lots of other free resources to build visual investigative skills, many of which have been highlighted in this newsletter. It’s a shame and, yes, a risk that newsrooms and journalism schools haven’t been more aggressive in helping reporters gain such skills. I would love to see us make progress on this in 2026. —Craig

📍 THINKPOL is a set of tools to analyze Reddit users and content. (It used to be called R00M 101.) You get 50 free credits when you sign up. As an example, the profile analyzer says it can extract “AI insights about any Reddit user—demographics, interests, personality traits, and behavioral patterns.” (via @0xtechrock)

📍 D4rk_Intel maintains a GitHub repo filled with “scripts, resources, and methodologies to aid in gathering and analyzing publicly available information.” They recently updated it to include tools and resources for image analysis, data visualization, geolocation, social media analysis, and more.

📍 Investigative reporter Ben Heubl shared the slides from a presentation he recently gave about “how to use RAG AI models to work with large data leaks.” The slides are primarily in German (with English here and there) but can be easily translated.

📍 Cosmograph is a free tool to “visualize and analyze large network graphs and machine learning embeddings on your own device.” The 2.0 version was just released, and you can read about the improvements here.

📍 Agence France-Presse published a quick YouTube video that shows you how to use the InVID verification plugin to search for user-generated content on X.

Events & Learning

📍 OSINT trainer Kirby Plessas is giving a free webinar on Dec. 12 at 3pm ET, “How AI is Rewriting OSINT—And How You Can Leverage It Today.” Register here.

Reports & Research

Researcher Benjamin Shultz, who collaborated with Indicator on this story in September, wrote about a novel approach he used to measure the extent to which disinformation research has been chilled in the US and elsewhere.

He examined the frequency of social media posts about the topic from academic journals, public and private US colleges, and non-US universities. Shultz argued that “the degree of public-facing research communications from academic institutions about disinformation research can function as a proxy for measuring the health of the field.”

He found a decline in public communications about disinformation across the board, providing “quantitative evidence of a chilling effect on academic communication about social media and disinformation in the wake of the 2024 U.S. election.”

Shultz observed that US colleges exhibited the most dramatic drop-off following Donald Trump’s election victory:

Social media activity covering research in these areas fell sharply among U.S. universities immediately after the election, consistent with an environment of heightened scrutiny and fear of political retaliation from an administration openly hostile to this research field. Academic journals and non-U.S. universities displayed slower and smaller—but still-measurable—declines, suggesting that the chilling effect extended beyond the institutions most directly dependent on U.S. federal funding. Most importantly: the data shows that this effect was not systematic across disciplines, with some hard science communications remaining consistent and unaffected by the election.

—Craig

📍 The Guardian published a multi-part series about nonconsensual deepfake pornography, including stories about Mr DeepFakes, a new UK law that criminalizes intimate image abuse, and how nudify apps are being used to abuse girls in school.

📍 New in Science: the Cambridge Online Trust and Safety Index, a (cool if somewhat ambitiously named) tracker of the cost of SMS verification across countries and platforms. It’s a helpful proxy for the cost of generating inauthentic accounts and therefore of digital deception. Kai Kupferschmidt has written a useful analysis.

📍 Maldita found 40 TikTok accounts with over 1.5 million followers that were sharing AI-generated child sexual abuse material to lure predators onto Telegram and other services, where they were sold actual CSAM. Check out the bewildering six-part investigation here.

📍 TikTok is also hosting thousands of deepfakes of UK prime minister Keir Starmer, according to NewsGuard. The accounts pushing this content appear to have a similar economically driven motivation to ones we’ve previously covered on Indicator.

📍 Mike Caulfield’s recent post about using AI for fact-checking is worth a careful read. Mike found that Grokipedia was correct about a Nobel-related claim that PolitiFact had deemed inaccurate. While he stresses that “you absolutely do not have to hand it to Grokipedia,” he also said that “there is no future of verification and contextualization that doesn’t involve both better understanding of LLMs and more efficacious use of them.” I’m as bearish as they come with respect to AI and fact-checking, but I promise to pay heed to Mike’s entreaty in 2026. — Alexios

📍 The Digital Democracy Institute of the Americas found “4,000 unique messages that appeared to promote fraudulent immigration services circulating in Latino public WhatsApp groups.”

📍 Conspirador Norteño looked at the market for buying Bluesky followers.

One More Thing

A popular AI-powered toy tested by NBC News and Public Interest Research Group was found to be promoting Chinese propaganda regarding Taiwan and Xi Jinping. You have to watch the video.

Indicator is a reader-funded publication.

Please upgrade to access all of our content, including our how-to guides and Academic Library, and our live monthly workshops.