Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.
This week on Indicator
Craig published a detailed look at more than a dozen free tools you can use to investigate YouTube videos, channels, and comments.
Alexios reviewed a study about the impact of Facebook’s fact-checking program, which continues around the world one year after Mark Zuckerberg terminated it in the United States.
We also published the video, slides, transcript, and AI-generated notes from our December members-only workshop. Craig gave a hands-on demo of the new and updated OSINT/digital investigative tools of the year, which we highlighted in this previous post.
This piece was updated to reflect X’s partial restriction on Grok image generation.
Grok’s mass undressing event isn’t an aberration. It’s the new normal
We are well into the second straight week of X users deploying Grok to flood Elon Musk’s platform with nonconsensual sexualized images.
Researcher Genevieve Oh found that Grok produced about 6,700 sexually suggestive images on X in a 24-hour period. In what it described as a “conservative” estimate, AI detection company Copyleaks identified “roughly one nonconsensual sexualized image per minute in the observed image stream” on December 31. Another researcher cataloged almost 500 cases.
Many of the images aren’t “just” nonconsensual. They are abusive, macabre, and offensive. Grok put a bikini on the corpse of Renée Good, the woman killed by an ICE agent in Minneapolis on January 7. It accepted a prompt to add a swastika bikini to the photo of a Holocaust survivor. It engaged with requests to represent women in scenarios that evoked physical assault.
What’s visible on X is just one part of it. Grok’s standalone app has also been used to generate sexual imagery of children aged 11 to 13, according to the Internet Watch Foundation. As Emanuel Maiberg wrote on 404 Media, Telegram users exchange tips on how to generate “convincing and lurid” sexual deepfakes and share their creations. One abuser grumbled that “too many people using grok under girls post are gonna destroy grok fakes. Should be done in private groups.”
The man responsible for all this, Elon Musk, tweeted once about the fiasco, writing that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” He did not take responsibility for Grok’s failures. A statement from X’s Safety team focused on child sexual abuse material, even though synthetic nonconsensual intimate imagery is also illegal in many countries. Only on Friday, January 9, did X take a minor step towards stemming the abuse by restricting some of the image generation features to paying users.
Musk has spent considerably more time boasting about Grok’s recent growth, which presumably has been driven in part by the surge in abusive prompts. He published or reshared at least 30 celebratory posts about Grok’s user base and product features between January 7 and January 8. He does not appear to have expressed remorse for the widespread abuse unleashed on X users. This is the same man who, in 2024, called biased responses from Google’s Gemini chatbot “extremely alarming” and posted about them nonstop.
Governments around the world are paying attention. Australia’s eSafety commissioner Julie Inman-Grant said that “we will use this range of regulatory tools at our disposal to investigate and take appropriate action.” British, French, German, and Indian government officials all threatened consequences. The European Commission called Grok’s outputs “illegal” and requested that X retain relevant documents through the end of 2026. None of these regulators were impressed by the limited step of restricting image generation to paying subscribers.
All of this is likely of little concern to Musk for as long as he maintains the support of the American government. Disseminating nonconsensual deepfake intimate imagery is a criminal offense in the United States under the 2025 Take It Down Act. The law’s co-sponsor Ted Cruz wrote that “many of the recent AI-generated posts are unacceptable and a clear violation of my legislation,” adding that “guardrails should be put in place.” Yet Cruz also claimed to be “encouraged that X has announced that they're taking these violations seriously.”
Take It Down requires platforms to remove nonconsensual deepfake imagery within 48 hours of being notified, but this provision only goes into effect in May. An audit showed that X consistently removed nonconsensual content when takedowns were requested under copyright law, so the new provision will likely help reduce the presence of nonconsensual nudes on the platform. But it’s not clear to me that bikini pictures would be eligible for removal. Plus, this system places the burden on victims to report content, rather than requiring generators to prevent it from being created in the first place.
Safety guardrails don’t appear to be top of mind for Musk. CNN reported that the latest bout of Grok-enabled abuse followed a meeting where the xAI CEO expressed frustration about restrictions that had been placed on the chatbot’s image and video generation capabilities. Ashley St Clair, the mother of one of Musk’s many children, was also targeted by Grok’s undressing spree. She claims to have lost her monetization privileges for speaking up about it.
Grok’s bikini requests on X will probably peter out as the trend dissipates and the removal requests catch up. Already a sizable part of the visible feed appears to be driven by adult content creators [NSFW] – including AI-generated ones [NSFW!] – trying to ride the surge in attention by requesting Grok to nudify their own pictures.
But it’s not just Grok. Wired found that Google and OpenAI’s chatbots have also been used to generate nonconsensual sexualized images. Dedicated AI nudifiers make millions and reach millions. Mass generation of nonconsensual sexualized harassment is a fixture of the current online ecosystem, and it will take a lot of work to change that. – Alexios
Deception in the News
📍 In a recent post, Instagram boss Adam Mosseri made some bold promises about building credibility signals into the platform, verifying authentic content, and labeling AI-generated content. (He’s got a lot of work to do on the last one.) Mosseri also seemed unworried about AI slop because “there’s a lot of amazing AI content.” At least it was clearer than Microsoft CEO Satya Nadella’s recent word salad:
we need to get beyond the arguments of slop vs sophistication and develop a new equilibrium in terms of our “theory of the mind” that accounts for humans being equipped with these new cognitive amplifier tools as we relate to each other.
📍 Poland asked the EU to investigate TikTok over AI-generated content that it claimed was Russian disinformation.
📍 Religious leaders across the United States are being impersonated in AI videos and images that are often used to scam their congregations.
📍 Deepfakes related to the capture of Venezuelan president Nicolás Maduro clogged social media feeds last week. The creator of one particularly viral synthetic image told AFP that he used Google Gemini’s Nano Banana Pro to generate the picture. A viral video of a fake British MP asking why Israeli Prime Minister Benyamin Netanyahu can’t also be captured and brought to justice was instead created with OpenAI’s Sora.
📍 A combination of fake images, clickbaity blogs, and sloppy fact-checking led thousands of New Yorkers to wait for a non-existent New Year’s Eve fireworks show at the Brooklyn Bridge. (Something similar had happened in 2024 with an AI-hallucinated Halloween parade in Dublin.)
📍 Casey Newton debunked the viral food delivery app “whistleblower” post on Reddit. Notably, SynthID helped him sniff out the hoax by identifying a fake company badge.
Tools & Tips
Meta recently launched a new “Low impression count” label in its Ad Library. If you see the label, it typically means an ad isn’t performing well and/or doesn’t have a lot of spend behind it. Here’s what the label looks like when applied to an ad in the Meta Ad Library:

Meta applies the label to ads that have received fewer than 100 impressions. It’s useful if you’re looking at a lot of ads and want to identify the creative that’s likely getting the most (or fewest) impressions.
For example, here’s a row of 8 active ads (among a list of more than a thousand for this particular brand). I can easily scroll and ignore those that have low impression count labels:

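If you’re pulling ads programmatically rather than browsing, you can apply the same threshold yourself. Here’s a minimal sketch against Meta’s public ads_archive endpoint; the access token, page ID, and the under-100 check are my own placeholders and heuristics, and note that the API only populates the impressions field for certain ad categories, and as a range rather than an exact count:

```python
import requests

# Sketch: list active ads for a page via Meta's Ad Library API and flag
# ones that likely sit behind the "Low impression count" label.
# ACCESS_TOKEN and PAGE_ID are placeholders; the impressions field is
# only populated for some ad categories and is reported as a range.
ACCESS_TOKEN = "YOUR_AD_LIBRARY_TOKEN"
PAGE_ID = "123456789"

resp = requests.get(
    "https://graph.facebook.com/v19.0/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "search_page_ids": PAGE_ID,
        "ad_reached_countries": '["US"]',
        "ad_active_status": "ACTIVE",
        "fields": "id,page_name,impressions,spend",
        "limit": 100,
    },
    timeout=30,
)
resp.raise_for_status()

for ad in resp.json().get("data", []):
    # Impressions come back as a range, e.g. {"lower_bound": "0", "upper_bound": "999"}.
    # The API's buckets are coarser than the UI's 100-impression label,
    # so this check is only a rough proxy.
    bounds = ad.get("impressions") or {}
    if int(bounds.get("lower_bound", "0")) < 100:
        print("Possible low-impression ad:", ad["id"])
```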
I added this information to “The Indicator Guide to investigating digital ad libraries.” We regularly update our growing collection of OSINT guides with the latest tools and features. —Craig
📍 TGSpyder is a new open-source Telegram investigation tool from Darksight Analytics. It can scrape members and messages from public and private groups, and extract t.me invite links from a chat, among other features; a sketch of this kind of collection appears after this list. More info here from creator Valdemar Balle.
📍 DorkSearchPro is a free tool that generates advanced Google queries to look for publicly available documents, files, and directories connected to a domain; the second sketch after this list shows what such queries look like. (via Logan Woodward)
📍 Nico Dekens wrote, “The Next Five Years of OSINT: Trends, Innovations and Challenges Transforming Investigative Landscapes.”
📍 Rae Baker wrote, “Knowing When to Stop: The OSINT Skill No One Teaches.”
📍 Nihad A. Hassan wrote, “19 tips on AI for OSINT research.”
📍 Hana Lee Goldin wrote, “How to Spot AI Hallucinations Like a Reference Librarian.”
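To make the TGSpyder item above concrete, here is a minimal sketch of the kind of public-group collection such a tool automates, written with the Telethon library rather than TGSpyder itself. The api_id/api_hash values (obtained from my.telegram.org) and the group username are placeholders, and member lists are only visible where Telegram permits it:

```python
from telethon.sync import TelegramClient

# Illustrative only: the kind of member/message collection a tool like
# TGSpyder automates, using Telethon. API_ID and API_HASH come from
# my.telegram.org; "examplegroup" is a placeholder public group.
API_ID = 12345          # placeholder
API_HASH = "0123abcd"   # placeholder

with TelegramClient("osint_session", API_ID, API_HASH) as client:
    group = client.get_entity("examplegroup")

    # Enumerate visible group members.
    for user in client.iter_participants(group, limit=50):
        print(user.id, user.username)

    # Pull recent messages and surface any t.me invite links they contain.
    for msg in client.iter_messages(group, limit=200):
        if msg.text and "t.me/" in msg.text:
            print(msg.id, msg.text[:80])
```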
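And for the DorkSearchPro item, this is a rough illustration of what a dork generator produces under the hood; the operators are standard Google search syntax, and the domain is a placeholder:

```python
# Illustrative sketch of the kind of advanced Google queries ("dorks")
# a generator like DorkSearchPro builds for a target domain.
DOMAIN = "example.com"  # placeholder target

TEMPLATES = [
    "site:{d} filetype:pdf",                   # public documents
    "site:{d} filetype:xlsx OR filetype:csv",  # spreadsheets and exports
    'site:{d} intitle:"index of"',             # open directory listings
    "site:{d} inurl:login",                    # exposed login pages
    '-site:{d} "{d}"',                         # mentions of the domain elsewhere
]

for template in TEMPLATES:
    print(template.format(d=DOMAIN))
```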
Events & Learning
📍 OsintifyCON is a global virtual conference and training event focused on actionable open source intelligence. Craig is giving a workshop on “Social Media Intelligence Techniques with Free Tools: Practical Methods for Modern Investigators.” The conference is on Feb. 5 and the training is on Feb. 6. Indicator has partnered with OsintifyCON to offer our readers discounted tickets. Go here to get 30% off an early bird conference ticket, and here to get 30% off the early bird price for a full day of training.
📍 Eve C, a trainer with I-Intelligence, is giving two free sessions of a webinar titled “Using AI to Enhance Chinese OSINT” on Feb. 19 and 20. To register, email [email protected]. More info here.
Reports & Research
📍 An investigation by Maldita found that TikTok and Instagram allowed 37 advertisers that had previously broken platform rules to run at least 18,000 fraudulent ads between Black Friday and Christmas.
📍 Marc Owen Jones identified a network of “19,000 UAE-aligned bots” on X that have promoted positive narratives about the Rapid Support Forces, a Sudanese paramilitary force that has been accused of committing massacres and genocide.
📍 An analysis by Hazel Gandhi for Rest of World found that Meta has run gambling ads in at least 13 countries that prohibit such promotions.
Want more studies on digital deception? Paid subscribers get access to our Academic Library with 55 categorized and summarized studies.
One More Thing
A recent graphic from the US National Weather Service left people in Camas Prairie, Idaho, puzzled. The wind forecast contained typical information about expected gusts and impacts. But it also included details for “Orangeotild” and “Whata Bod,” two places that don’t exist.

The NWS, which experienced significant layoffs last year thanks to Elon Musk’s Department of Government Efficiency, later admitted that the graphic was created with AI, according to the Washington Post.
“The blunder — not the first of its kind to be posted by the NWS in the past year — comes as the agency experiments with a wide range of AI uses, from advanced forecasting to graphic design,” the paper reported.
Futurism also amusingly noted that the hallucinated town of “Whata Bod” sounds “more like an old timey dirty joke than an actual place where people live.”
Indicator is a reader-funded publication.
Please upgrade for access to all of our content, including our how-to guides and Academic Library, and to our live monthly workshops.