Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop. If you’re already a member, thank you for your support!
This week on Indicator
Craig did a deep dive into three free or affordable social media monitoring and analysis tools. Then he dug into them even more during our latest members-only workshop. Upgrade to get access to the video, slides, and transcript.
Alexios tested seven general purpose AI tools to see how well they performed a common task on the misinformation beat: identifying the speaker in a sketchy video. One tool stood out.
Deception in the News

📍 The Blackbird Spyplane fashion Substack revealed (in a pretty good piece of digital detective work!) that J.Crew had posted AI-generated images to Instagram. The fashion brand acknowledged that the images were created using AI by later adding a “Digital art by” credit to the person who created them. This follows a recent AI-related dustup at Vogue.
📍 Wired reported that a liberal dark money group in the United States has been paying as many as 90 influencers to post content favorable to the Democratic Party, without disclosing the payments.
📍 Updates to TikTok’s Community Guidelines don’t appear to have significantly affected its misinformation policy. Unlike Meta, the company has maintained its (smaller scale) partnership with fact-checkers.
📍 The US Department of Defense is shopping around for agentic AI systems to bolster its influence operations.
📍 India’s Parliamentary Standing Committee on Home Affairs recommended new legislation against deepfakes.
📍 Disinformation peddlers continue to leverage the credibility of traditional media brands. In Brazil, videos posted on TikTok and YouTube falsely claimed The New York Times had reported that former president Jair Bolsonaro’s son Eduardo was being investigated in the United States. In Germany, Tagesschau’s website was spoofed to promote a dubious crypto scheme.
📍 YouTube got caught using AI to enhance videos without telling users or getting permission from creators. The platform also struck a deal to add right-wing cable network OAN to YouTube TV. Oliver Darcy at Status wrote that YouTube is “breathing life into a network that has trafficked in lies and conspiracies for years, and granting legitimacy to the propaganda outlet YouTube once rightly judged too toxic to monetize, let alone carry.”
📍 Philstar.com uncovered a network of 71 inauthentic X accounts that spread “identical or similarly worded messages” that labeled Vietnam as a threat in the South China Sea. Some of the accounts also spread “pro-Duterte and anti-Marcos talking points aimed at Philippine audiences.”
Tools & Tips

📍 Eitan Livne added new features to his Email Permutator tool, which generates potential email addresses using variables such as name, date of birth, and username.
📍 Nico Dekens wrote, “Stop Calling It OSINT.” It’s a rant against shortcuts, lazy thinking, and “astrology with street view.”
📍 Bellingcat’s Foeke Postma tested GPT-5’s geolocation abilities and found it’s worse than other models.
📍 Aisvarya Chandrasekar and Klaudia Jaźwińska wrote, "Why AI Models Are Bad at Verifying Photos."
📍 Magic File Tool “allows you to extract text, metadata, selectors and images from supported files.” (via Cyber Detective)
📍 SightSwarm is “a marketplace connecting organizations with elite OSINT talent.” (via Alicja's OSINT Newsletter)
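The email-permutation approach behind tools like Email Permutator can be sketched in a few lines. This is a minimal illustration of the general technique, not the tool's actual code; the function name and pattern list are hypothetical.

```python
# Illustrative sketch of email permutation: combine name parts with a
# domain into common address patterns. Not Email Permutator's real logic.
def permute_emails(first, last, domain):
    f, l = first.lower(), last.lower()
    locals_parts = [
        f"{f}.{l}", f"{f}{l}", f"{f[0]}{l}", f"{f}_{l}",
        f"{l}.{f}", f"{f}", f"{l}", f"{f[0]}.{l}",
    ]
    return [f"{p}@{domain}" for p in locals_parts]

# Example: candidate addresses for a hypothetical "Ada Lovelace"
print(permute_emails("Ada", "Lovelace", "example.com"))
```

Real tools extend this with extra variables (birth year, usernames, nicknames) and sometimes verify which candidates actually exist.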
Reports & Research
📍 Can games help defend against misinformation? A new study in Nature’s Scientific Reports tested a game called Bad Vaxx that attempts to inoculate people against vaccine misinformation. Matthew Facciani’s Substack has a good summary of the findings, including that "just 15 minutes of gameplay made people better at spotting manipulative vaccine content, more confident in their judgments, and more discerning about what they’d share online."
📍 Zeve Sanderson and Scott Babwah Brennen of NYU wrote that, after roughly 10 years of effort, the mobilization of academic, civil society, government, and media institutions to combat misinformation has led to “little, if any, discernible progress.” They argue that it is essential to “expand the scope of how we defend democracy in the digital age while preserving the institutional momentum of the past decade.”
📍 Anthropic’s Threat Intelligence Report outlines several ways in which its tools were misused by cybercriminals and scammers, including for hacking and extortion. A case study in the report focused on a Telegram bot that used Claude and other AI models to support romance scam operations.
Want more studies on digital deception? Paid members get access to our Academic Library with 55 categorized and summarized studies:
Events & Learning
📍 The EU Disinfo Lab is hosting a free webinar on Sept. 4, “Synthetic Propaganda: Generative AI and the future of political communication.”
📍 The Pulitzer Center is hosting a free webinar on Sept. 9, “How To OSINT the Ocean.”
📍 The Center for Disaster Risk Management is offering a free, twice-weekly course “to empower development professionals with essential spatial analysis and remote sensing skills.” You have to register by August 31.
One more thing
Will Smith is on tour and he recently posted a short video that showed some of his adoring fans. But people quickly noticed that things didn’t seem right. Some fans looked AI generated, as did a few of the signs they were holding up, as reported by Futurism.
The experts at GetReal Security dug in and concluded that, “The crowds in the video are not -- as being claimed online -- fake, but the footage is likely to have been edited or modified by some AI-powered tools.”
Watch their analysis in a report from NBC News:
Indicator is a reader-funded publication.
Please upgrade to access all of our content, including our how-to guides and Academic Library, and to attend our live monthly workshops.