Our weekly Briefing is free, but you should upgrade to access all of our reporting, resources, and a monthly workshop.

We’re offering 20% off all Indicator memberships for 8 more days. Join us!

This week on Indicator

Craig examined why GitHub is restricting or banning accounts that share links to OSINT and dark web resources. At least half a dozen repos or entire accounts were recently banned or blocked from public access.

Alexios and Ben Shultz from The American Sunlight Project found that at least 4,125 ads for AI nudifiers have appeared on Meta since the company announced new countermeasures in June. The company’s weak implementation of EU disclosure obligations also allowed the actors behind the ads to conceal their identities.

Indicator’s work to expose AI nudifiers is informing decision makers. Our August report on the economics of the apps was recently cited by the US National Association of Attorneys General and the Australian eSafety Commissioner. The organizations urged platforms to do more to fight this vector of online abuse.

Meta throws in the towel on hoaxes

Close to a month ago I sent Meta a list of nearly 150 Facebook pages that were spreading deceptive AI slop and celebrity hoaxes. I reported on how the foreign-run pages had racked up engagement thanks to false claims about people like Stephen Colbert, Caitlin Clark, and Vanessa Bryant, the widow of Kobe Bryant.

When I reported on the loosely connected network of pages earlier this year, Meta removed 81 pages.

This time, Meta only removed one page. It also restricted multiple page admins who had violated the company’s policy that requires “authentic identity representation,” according to spokesperson Daniel Roberts.

Here’s the bottom line: After spending close to a decade trying to stamp out viral hoaxes, Meta is letting them flourish in the US.

Want to falsely suggest that Vanessa Bryant is pregnant with an NBA player’s baby and use AI-generated images to trick people? No problem.

How about spreading the lie that the infamous Coldplay concert couple are back together and on a $95 million super yacht? Go for it.

Or you can falsely claim that Mel Gibson is about to release the names of Hollywood celebrities on the “Diddy-Epstein Lists.”

If you post such content, tens of millions of views await you on Facebook. And you won’t break any of Meta’s rules or risk being downranked in the US.

Roberts pointed me to the company’s misinformation policy, which in April was edited to remove references to hoaxes and viral misinformation.

To its credit, Meta’s misinformation policy page allows you to see the edits it made in April.

It’s a remarkable reversal given that a senior Facebook executive said in late 2016 that, “We’ve focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain.” It’s why the company launched its third-party fact checking program that year, and why it implemented new systems to tamp down viral falsehoods.

It bears repeating: Meta’s fact checking program was created (and still exists outside the US) to prevent viral deception. Political speech was never — and still isn’t — supposed to be the main focus.

But now the US fact checkers are gone and Meta has adopted a highly permissive approach to what I used to call fake news: 100% false content that is created to deceive and to make money.

Even more alarming, Meta is paying people to produce viral hoaxes, thanks to its Content Monetization program.

Case in point: Christopher “Busta Troll” Blair, one of Facebook’s longest-running hoaxsters in the US. He operates a network of hoax pages that he says use satire to expose the gullibility of American conservatives. (I first wrote about him back in 2017.)

Here’s a recent post from his biggest page, America’s Last Line Of Defense:

Blair said in a Sept. 6 Facebook post that his pages are now part of Meta’s Content Monetization program, which pays page owners based on engagement. Since being accepted into the program, ALLOD has almost doubled its followers and is raking in engagement and, presumably, cash payouts from Meta.

“Facebook stopped fact-checking, monetized my pages for all content, started showing it to even more geriatric Trumpsters, and now ALLOD is one of the highest-ranking pages on the platform,” he wrote.

Maarten Schenk, the cofounder of fact checking website Lead Stories, which used to be a Meta partner in the US, shared Blair’s Facebook post on Bluesky.

“The brakes are completely off at Meta...” he wrote.

This is the scenario I laid out in my story from earlier this year, when I reported that Meta’s rollback of moderation systems, cancellation of US fact checking partnerships, and expansion of its content monetization program could cause a “resurgence of incendiary false stories on Facebook, some of them funded by Meta.”

Blair doesn’t want to talk about it. He recently posted on Facebook to say that his earlier post was set to private and shouldn’t have been shared publicly. He said he won’t do interviews with the media, adding:

I don't have a comment on why the world is fucked. I don't feel the need to justify what I do or why. I'm not interested in yet another hyperbolic hit piece declaring that I make $225K a year to lie to average Americans.

Roberts, the Meta spokesperson, declined to comment on Blair’s pages but pointed me to the company’s Partner Monetization Policies. The rules require partners to “Share authentic content” and note that partners may lose their monetization status if they post content that has been labeled as false by a fact checker, or content that is “clickbait or sensationalism.” But with no fact checkers in the US, it’s open season for viral hoaxes.

Welcome back to 2016, everyone! — Craig

Deception in the News

📍 BOOM reports that “a Meta AI chatbot impersonating deceased Bollywood actor Sushant Singh Rajput spread conspiracy theories about his death, urging users to ‘seek justice’ for the star who died by suicide in 2020.”

📍 The US State Department informed its European partners that it was terminating agreements aimed at combating foreign information manipulation and interference.

📍 Joe Rogan fell for an obviously AI-generated video of Minnesota Gov. Tim Walz. His producer corrected him on air, but Rogan was unbothered. “You know why I fell for it?” he said. “Because I believe he’s capable of doing something that dumb.” In its way, it’s a perfect lesson in confirmation bias (h/t Henry A).

📍 5 percent of Japanese respondents to a Red Cross survey said that, in the aftermath of a disaster, they acted on online information they later realized was false.

📍 Meta sent cease and desist notices to 46 companies advertising for AI nudifier services on its platforms.

📍 An internal review at Business Insider found that it had published as many as 38 likely AI-generated articles that were authored by apparently fake personas.

Tools & Tips

📍 Bellingcat released Council Meeting Transcript Search, a tool that “enables researchers to search auto-generated transcripts of Council meetings for different authorities across the UK and Ireland.” You can also listen to a recent talk from Galen Reich, who made the tool.

📍 YouTube Video Finder allows you to search through public archives to try to locate archived videos and metadata. (via Cyber Detective)

📍 Sarah W explained how you can use iknowwhatyoudownload.com “to research what IPs have downloaded specific filenames or topics via advanced search operators.”

📍 Mike Caulfield wrote, “Is the LLM response wrong, or have you just failed to iterate it?” Read it in combination with a recent post on OSINT Combine from Jacob H, “Why AI Agreeableness Poses Risks to OSINT Work.”

📍 Chris Miles, with Brandon Silverman and Anna Lenhart, wrote, “From Dashboards to Data Acquisition: Charting the Social Media Monitoring Market.” Indicator Members can also read my recent guide to “3 free or affordable social media monitoring tools for your OSINT toolkit.”

Events & Learning

📍 The EU Disinfo Lab is hosting a free webinar on Sept. 18 titled, “Operation Overload - Smarter, Bolder, Powered by AI.” Register here. (via Alicja Pawlowska’s excellent free newsletter.)

Reports & Research

📍 The anonymous collective antibot4navalny claims to have identified a network of at least 1,360 bot accounts on X that promote Donald Trump, Elon Musk, and Vivek Ramaswamy.

📍 In a preprint, researchers from OpenAI and Georgia Tech write that AI chatbots hallucinate because they’re incentivized to provide an answer regardless of how confident they are about it. Introducing an “I don’t know” option in benchmark tests and penalizing it less than an inaccurate answer may reduce the problem, the researchers argue (see the sketch after this list).

📍 Josh Dzieza’s long read on Wikipedia, which he calls “the factual foundation of the web,” strikes an ultimately optimistic note about the site’s resilience.

📍 83% of Americans surveyed by Boston University support AI labels on social media and protections for users who get impersonated by a digital replica. A bill to this effect just passed the California State Senate.

📍 The Australian Strategic Policy Institute published a deep dive into the economics and operations of scam compounds, “Scamland Myanmar: how conflict and crime syndicates built a global fraud industry.” This week also saw the US government issue sanctions against scam compound operators in Myanmar and Cambodia.
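
To make the incentive argument from that hallucination preprint concrete, here’s a minimal sketch of the scoring math. It is not code from the paper; the abstain score of 0 and the wrong-answer penalty of -1 are illustrative assumptions. Under classic 0/1 grading a wrong answer costs nothing, so a score-maximizing model should always guess; once wrong answers are penalized more than an “I don’t know,” guessing only pays off when the model is sufficiently confident.

```python
# Illustrative sketch only: compares a model's expected benchmark score
# when guessing vs. abstaining under two scoring rules. Penalty values
# are assumptions, not taken from the OpenAI/Georgia Tech preprint.

def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score of answering when the model is correct with
    probability p_correct: a right answer scores 1, a wrong answer
    scores wrong_penalty."""
    return p_correct * 1.0 + (1.0 - p_correct) * wrong_penalty

ABSTAIN_SCORE = 0.0  # assumed score for answering "I don't know"

for p in (0.2, 0.5, 0.8):
    # Classic 0/1 grading: a wrong answer costs nothing, so guessing
    # always matches or beats abstaining -- the model should guess.
    binary = expected_score(p, wrong_penalty=0.0)
    # Penalized grading: a wrong answer costs -1, so guessing only
    # pays off when the model is more than 50% confident.
    penalized = expected_score(p, wrong_penalty=-1.0)
    best = "guess" if penalized > ABSTAIN_SCORE else "abstain"
    print(f"confidence={p:.1f}  0/1 rule: guess={binary:.2f}  "
          f"penalized rule: guess={penalized:.2f} -> {best}")
```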

Want more studies on digital deception? Paid subscribers get access to our Academic Library with 55 categorized and summarized studies:

One More Thing

It kind of speaks for itself:

Indicator is a reader-funded publication.

Please upgrade to take advantage of our limited-time 20% discount and get access to all of our content, including our how-to guides, our Academic Library, and our live monthly workshops.
