
Correction: We updated the item about arXiv’s policy change to reflect that the repository hasn’t stopped accepting all CS preprints, just surveys and position papers. We regret the error.

Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.

This week on Indicator

Alexios interviewed Shane Vogt, a lawyer representing the New Jersey teen who is suing AI nudifier ClothOff.

Craig wrote about two former Meta ads leaders who have launched a new nonprofit aimed at bringing more transparency to digital advertising to fight deceptive ads.

It’s official: Meta earns billions from scam ads

For more than five years, I’ve been mildly obsessed with a number. I didn’t know its exact value, but I believed it to be huge. Definitely in the hundreds of millions. Almost certainly in the billions.

What’s the number? It’s the amount Meta earns per year from scam ads placed on its platforms. Over the years, I’ve uncovered close to $100 million in scam and deceptive ad revenue at the company, including:

  • A San Diego-based marketing agency that ran more than $50 million worth of Facebook scam ads.

  • Eight Meta advertising operations that spent more than $25 million on deceptive, scammy ads, often using deepfakes of Donald Trump and other public figures.

  • An internal Meta study that found that nearly 30% of ads placed by China-based advertisers violated at least one Facebook policy. As I reported at the time, the violations included “selling products that were never delivered, financial scams, shoddy health products, and categories such as weapons, tobacco, and sexual sales.”

If a single, small company could run $50 million worth of scam ads, what was Meta’s annual rake?

Thanks to Jeff Horwitz at Reuters, we have a number: $16 billion. His story, which is based on internal documents, reported that

Meta internally projected late last year that it would earn about 10% of its overall annual revenue – or $16 billion – from running advertising for scams and banned goods, internal company documents show …

On average, one December 2024 document notes, the company shows its platforms’ users an estimated 15 billion “higher risk” scam advertisements – those that show clear signs of being fraudulent – every day. Meta earns about $7 billion in annualized revenue from this category of scam ads each year, another late 2024 document states.

It’s jaw-dropping stuff. In response, a Meta spokesperson said the 10% figure was “rough and overly-inclusive” and that the documents “present a selective view that distorts Meta’s approach to fraud and scams.” He said the company ultimately concluded the actual figure was much lower, but did not share it.

The spokesperson shared the same stats that Meta sent me for my story this week: the company had reduced the number of user reports of scam ads globally by over 50% and removed more than 134 million pieces of scam ad content so far this year. (Casey Newton at Platformer pointed out that “Google, the world's largest ad seller, removed 415 million scam ads in 2024, according to its most recent ad safety report.”)

My story reported that Rob Leathern, the former Meta manager who led its effort to stop scam ads, believes that Meta and other platforms aren’t doing enough to fight deceptive ads and deepfakes. He commissioned a survey showing that US consumers feel Meta is doing a poor job of preventing such ads.

The documents obtained by Reuters show that Leathern and US consumers are right.

But let’s also assume that Meta is correct in saying that the roughly $16 billion figure is higher than its true annual scam ad revenue. It still means that Meta has admitted that it earns billions a year from scam ads — and keeps the money.

The urgent question is: what is the company going to do about it?

The simple answer has to be more. More people working on the problem, more resources for them, more willingness to lose revenue in order to attack the problem, more cross-industry partnerships and data sharing, more collaboration with anti-scam organizations, more lawsuits and other efforts to punish scammers, more product innovation…

Leathern, the former Meta ads integrity leader, said in a recent Substack post that companies like Meta should notify users when an ad they clicked on is subsequently removed for being a scam. Many scams unfold over days or weeks, so a notification from Meta might stop someone from getting completely taken in. It’s one idea, and it’s not perfect. But we need more of them.

Just as important, companies like Meta should not be able to keep the money they earn from scam ads. I’ve been banging this drum for at least five years:

The money should go to help victims and/or fund anti-scam organizations. It should have to be disgorged. Scam ads should not be profitable for platforms. That’s the only way to fully align the financial incentives so that platforms are motivated to fight scam ads with the level of ambition and innovation that they put into other areas of their business.

Now that we have the number, it needs to get as close to zero as possible.

Craig

Free, private email that puts your privacy first

A private inbox doesn’t have to come with a price tag—or a catch. Proton Mail’s free plan gives you the privacy and security you expect, without selling your data or showing you ads.

Built by scientists and privacy advocates, Proton Mail uses end-to-end encryption to keep your conversations secure. No scanning. No targeting. No creepy promotions.

With Proton, you’re not the product — you’re in control.

Start for free. Upgrade anytime. Stay private always.

Deception in the News

📍 An AI-generated image of Hurricane Melissa claimed to show birds trapped in the eye of the storm. But as one meteorologist told Yale Climate Connections, the birds would have had to be roughly the size of a football field to appear at that scale.

📍 Racist AI slop featuring Black women “losing it” about the end of food stamp benefits in the US went viral. Some of the videos were reported as fact by Fox News, which went on to update rather than retract its article. (Here’s the before and after.)

📍 Google removed the Gemma model from AI Studio after it appeared to fabricate a rape accusation against Tennessee Senator Marsha Blackburn. Earlier in October, conservative activist Robby Starbuck sued Google for a similar hallucination. He also filed a similar suit against Meta, which settled and named him an advisor.

📍 Speaking of Google goof-ups, an Australian government department warned that Search’s “People Also Ask” section had misrepresented local laws on vehicle headlights.

📍 The academic paper repository arXiv announced that it will only accept peer-reviewed submissions of survey articles and position papers in the Computer Science category. Kat Boboris, arXiv’s Community Engagement Manager, wrote that while this is technically not a policy change — preprints in this topic area had always been accepted at a moderator’s discretion — it became necessary because “arXiv has been flooded with papers” that “are little more than annotated bibliographies,” especially after the advent of LLMs.

📍 Full Fact, Fundación Maldita.es, and the European Fact-Checking Standards Network are building an AI-powered misinformation-monitoring and prebunking tool.

📍 One of Alexios’ students spotted a Community Note on their actual Instagram feed, and it’s … quite something.

Tools & Tips

Nico Dekens (aka Dutch OSINT Guy) wrote “Black Swans in OSINT: Why We Keep Missing the Impossible.” It’s a warning against falling victim to assumptions and looking for the same patterns. Dekens offered a few interesting exercises to help get you out of a mental rut. Here’s one he called “Structured Imagination”:

I teach analysts a simple exercise:

Take something you believe about your target.

Now flip it.

If you think they’re disorganised, assume they’re not.

If you think they’re done, assume they’re regrouping.

Then ask: What evidence would exist if that were true?

Now go find it.

📍 doxcord is a Python tool that can scan a Discord server “for social media links containing tracking parameters. The tool can identify Instagram, TikTok, and Facebook links that include tracking identifiers, and organize them by user and server.” (via The OSINT Newsletter) A rough sketch of the detection idea appears after this list.

📍 Skopenow uploaded the videos from its annual OSINTLive online conference. You just need to fill out a quick form to access them.

📍 Here’s the video of a recent free workshop, “Tips and Tools for Uncovering Online Scams,” from the Global Investigative Journalism Network. Three reporters shared examples of tools and methods they used to dig into digital scams. Craig was the moderator.
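
For a sense of how the doxcord approach can work, here’s a minimal Python sketch. It is not doxcord’s actual code, and the parameter names below are common examples of share-tracking identifiers rather than an exhaustive list: Instagram share links often carry igsh or igshid, TikTok share links carry _t and _r, and Facebook appends fbclid.

```python
import re
from urllib.parse import urlparse, parse_qs

# Illustrative sketch (not doxcord's implementation): flag social media
# links whose query strings carry share-tracking parameters that can tie
# a shared link back to the account that generated it.
TRACKING_PARAMS = {
    "instagram.com": {"igsh", "igshid"},  # Instagram share identifiers
    "tiktok.com": {"_t", "_r"},           # TikTok share-link identifiers
    "facebook.com": {"fbclid"},           # Facebook click identifier
}

URL_PATTERN = re.compile(r"https?://\S+")

def flag_tracking_links(message: str) -> list[dict]:
    """Return links in `message` that carry known tracking parameters."""
    flagged = []
    for url in URL_PATTERN.findall(message):
        parsed = urlparse(url)
        host = parsed.netloc.lower().removeprefix("www.")
        params = parse_qs(parsed.query)
        for domain, trackers in TRACKING_PARAMS.items():
            if host.endswith(domain):
                hits = trackers & params.keys()
                if hits:
                    flagged.append({
                        "url": url,
                        "domain": domain,
                        "tracking_params": sorted(hits),
                    })
    return flagged

# Example: a shared Instagram link with an igsh identifier gets flagged.
print(flag_tracking_links(
    "check this out https://www.instagram.com/p/abc123/?igsh=EXAMPLE"))
```

A full tool would also need to pull message history via the Discord API and group results by author, but the core detection step is just URL parsing plus a lookup table like this one.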

Events & Learning

📍 On Nov. 20, the EU Disinfo Lab is hosting a free webinar, “Command and Control: How ANO Dialog surveils the Russian info space for the Kremlin.”

Reports & Research

📍 In a simulated experiment, researchers at IU Bloomington found that a sufficiently large minority of bad raters can effectively short-circuit X’s Community Notes algorithm by suppressing valuable notes. The threshold might be as low as 5% of all raters if they coordinate and if the overall pool is polarized. Even without causing a complete breakdown, the researchers argue that the Community Notes system as currently designed results in “many genuinely helpful notes [going] unpublished.” (A toy simulation illustrating the dynamic appears after this list.)

📍 A report by Fundación Maldita.es and AI Forensics said that videos promoting misinformation about the 2024 floods in Valencia generated more than 13 million views on YouTube, and more than 8 million views on TikTok.

📍 There’s an academic dispute simmering over the benefits of “inoculation” to combat misinformation. Tom Stafford breaks it down and celebrates the fact that it’s the kind of back-and-forth that makes for a healthy research ecosystem.
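
To make the suppression dynamic concrete, here’s a toy Python simulation. It is my construction, not the researchers’ model or X’s actual bridging algorithm; the per-camp approval threshold, rater counts, and honest-rater behavior are all assumptions chosen for illustration.

```python
import random

# Toy model (illustrative assumptions, not X's actual scoring algorithm):
# a note publishes only if at least 65% of raters in BOTH camps of a
# polarized pool rate it helpful. Honest raters mark a genuinely helpful
# note helpful 70% of the time; a coordinated bloc always votes "not helpful".
random.seed(42)

CAMP_SIZE = 500          # raters per camp (two camps, 1,000 raters total)
N_NOTES = 500            # genuinely helpful notes to simulate
THRESHOLD = 0.65         # per-camp approval fraction needed to publish
HONEST_HELPFUL = 0.70    # chance an honest rater approves a helpful note

def published_fraction(bad_in_camp_a: int) -> float:
    """Fraction of helpful notes published, given a coordinated bloc in camp A."""
    published = 0
    for _ in range(N_NOTES):
        # Camp A: honest raters, minus the bloc, which never approves.
        approvals_a = sum(random.random() < HONEST_HELPFUL
                          for _ in range(CAMP_SIZE - bad_in_camp_a))
        # Camp B: entirely honest.
        approvals_b = sum(random.random() < HONEST_HELPFUL
                          for _ in range(CAMP_SIZE))
        if (approvals_a / CAMP_SIZE >= THRESHOLD and
                approvals_b / CAMP_SIZE >= THRESHOLD):
            published += 1
    return published / N_NOTES

print("no coordinated raters:    ", published_fraction(0))
# 50 bad raters = 5% of the total pool, all clustered in one camp.
print("5% coordinated, one camp: ", published_fraction(50))
```

Under these assumptions, nearly all helpful notes publish without the bloc, while a 5% bloc concentrated on one side suppresses most of them: requiring cross-camp agreement hands a small coordinated group an outsized veto.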

Want more studies on digital deception? Paid subscribers get access to our Academic Library with 55 categorized and summarized studies.

One Thing

Indicator is a reader-funded publication.

Please upgrade to access all of our content, including our how-to guides and Academic Library, and our live monthly workshops.