
🤥 Faked Up #14: Instagram's army of AI thirst traps represents a new blended unreality, Wikipedia is a battlefield in Brazil, and the Harris campaign co-opts media credibility
This newsletter is an ~8-minute read and includes 57 links. I love to hear from readers: please leave comments or reach out via email.
THIS WEEK IN FAKES
Yet another “AI candidate” got creamed. X’s image-generator is predictably bad. Australia says a whole lot of crypto ads on Facebook were scams. Misinfo-tracking tool CrowdTangle is dead, but the EU is asking questions. The FBI is sharing leads on foreign disinfo operations again. SF is suing AI nudifiers (go get ‘em!). Google deleted the deepfake porn app I flagged in last week’s newsletter (Apple has not).
TOP STORIES
AI THIRST TRAPS OF INSTAGRAM
I follow some personal trainers on Instagram, so its algorithm has decided that I am interested in the overall category of shirtless men. Ten days ago, the impeccably chiseled Noah Svensgård showed up in my Explore feed.
The account’s aesthetic screams AI. So do other details I spotted across his posts, including gibberish text, eyeglasses with inconsistent arms and a smushed-up watch.

In the year of the first AI beauty pageant and people marrying their chatbots, it is not surprising that I stumbled on one likely deepfaked hot dude.
It is more noteworthy, however, that AI avatars are flooding Instagram without the platform doing much to disclose their artificiality. And it hints at a future where body expectations on the world's premier image-based social network are set by accounts that are not just unrealistic but entirely unreal.
Over the past week, I collected more than 900 Instagram accounts of stereotypically hot people that I am either certain or highly confident are AI-generated.1 I could have easily collected 900 more.

The accounts are predominantly female, typically scantily clothed and often improbably endowed.
The accounts in my dataset have a cumulative 13 million followers2 and have posted more than 200,000 pictures.
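If you're curious how tallies like these come together, here is a minimal sketch of the aggregation involved. It assumes a hypothetical CSV (accounts.csv) with columns handle, followers, posts and disclosure; those column names are mine for illustration, not from a published dataset.

```python
# Minimal sketch: aggregate stats over a hand-collected account list.
# Assumes a hypothetical CSV (accounts.csv) with columns:
#   handle, followers, posts, disclosure  (all names are illustrative)
import pandas as pd

df = pd.read_csv("accounts.csv")

print(f"Accounts collected: {len(df)}")
print(f"Cumulative followers: {df['followers'].sum():,}")
print(f"Total pictures posted: {df['posts'].sum():,}")

# Share of accounts that self-identify as AI in their bio
disclosed = (df["disclosure"] == "bio").mean()
print(f"Self-identify as AI in bio: {disclosed:.0%}")
```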
These accounts exist primarily to make money, building audiences that are then redirected to Fanvue, Patreon, Throne and bespoke monetizing websites. On those platforms, AI nudes are exchanged for real money (typically a $5-$15 monthly subscription).
As with every AI grift, a cottage industry of marketers is offering to teach creators how to get in on the hustle. One of them is professor.ep, the creator of Emily Pellegrini, a semi-famous AI model with 262K followers. He claims the account netted him more than $1M in 6 months. While I can’t verify this figure, Fanvue appears to have confirmed Pellegrini’s account earned more than $100K on its platform alone.
In promotional material, professor.ep encourages people to get into “AI Pimping” and notes that the “increasingly blurred” line between humans and AI drives engagement.

He’s posted this totally normal video to drum up business.
professor.ep isn’t alone. Other tools at the service of the accounts I’ve found include genfluence.ai and glambase.app (to create the models) and digitaldivas.ai (which offers a $49.99 course in “AI Influencer Instagram Mastery”).

To be clear, I don’t think there’s anything wrong with people paying for AI erotica. This newsletter does not kink-shame.3
I do worry, however, about the effect of a growing cohort of fake models on a social network built for real people.
In reviewing accounts, not once did I come across the “AI info” label that Meta uses to signal some synthetic content. [Update: This is partly because I was logged into Instagram on Firefox rather than using the app. Following publication, I did spot the label on a minority of the posts within the iOS app. I apologize for the inaccuracy.4]
In the absence of consistent flagging from Instagram, about half of the accounts do the responsible thing and self-identify as AI-generated in their bio (or, very rarely, with their own watermark).
But the remainder either wink at the fact that they are unreal with terms like “virtual soul” or a 🤖 emoji, hide the disclosure at the end of a long bio where it sits behind the ‘more’ button, or avoid the subject altogether.
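To make that taxonomy concrete, here is a toy heuristic for bucketing bios into those three groups. The keyword lists are illustrative and nowhere near exhaustive; this is a sketch, not the method behind my dataset, and any real review still needs human eyes.

```python
# Toy heuristic for bucketing account bios by AI-disclosure style.
# Keyword lists are illustrative, not exhaustive.
EXPLICIT = ("ai-generated", "ai generated", "ai model", "not a real person")
WINK = ("virtual soul", "virtual model", "🤖")

def disclosure_bucket(bio: str) -> str:
    text = bio.lower()
    if any(term in text for term in EXPLICIT):
        return "explicit"
    if any(term in text for term in WINK):
        return "wink"
    return "none"

print(disclosure_bucket("Your favorite virtual soul 🤖"))        # -> wink
print(disclosure_bucket("AI-generated model | DM for collabs"))  # -> explicit
print(disclosure_bucket("Fitness • travel • coffee"))            # -> none
```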

There is also a sizable cohort of accounts claiming to be “real girls” enhanced by AI. That may well be a real phenomenon, but I did not study those accounts closely enough to say.
It’s not clear to what extent followers are in on the synthetic nature of their thirst traps, or whether they care. Most posts I reviewed had collected the standard cascade of hearts and flattering replies.

But the tenor of some of the comments suggests that some folks don't realize what's going on.

My guess is that the majority of people following don’t know they are interacting with a computer-generated creature.
On the one hand: Who cares? What harm do some imaginary thirst traps really cause?
I don’t think the harm is in the thirst traps — I think it’s in their coexistence with real accounts.
Instagram and others have already normalized retouching photos of naturally impressive physiques. Now those unrealistic body standards are on track to become completely unreal. One user's comment on a (transparently disclosed) male AI model is worth quoting at length:
[these pictures] make people with normal, actual human bodies feel simultaneously horny and incredibly shit about this body and their own. Probably [the model] will help encourage teenage boys to spend their lives in a gym and start gruelling roid regimes before they are even 20 in order to look as much as possible like this whereas in reality only a tiny fraction of humanity has the genes to have a real, natural body like this. But yeah - hot.

Beyond propagandizing certain types of bodies, I think Instagram is facilitating a new blended unreality.
The AI thirst traps of Instagram are gradually training us to either ignore or fail to care about the difference between photos of real humans and fake ones. What is happening in one corner of the social network will spread (partly by design).
Detection-based labeling won't get us out of this. At least as of this moment, Instagram's “AI info” label is functionally meaningless.
I think there is an urgent need to discuss, and test at scale, content credentials or personhood credentials.
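To make “content credentials” concrete: the C2PA standard embeds a signed provenance manifest inside the image file itself, travelling with the photo rather than depending on a platform's detector. The sketch below is only a naive presence check that scans for the “c2pa” marker of the manifest's JUMBF container; it proves nothing about authenticity, and actual verification of the signature chain needs a proper tool such as the open-source c2patool.

```python
# Naive presence check for a C2PA content-credential manifest in an image.
# C2PA manifests live in JUMBF boxes whose labels contain "c2pa", so a raw
# byte scan hints that a manifest *might* be embedded. This is NOT
# verification: validating hashes and the signature chain requires a
# C2PA-aware verifier (e.g., the open-source c2patool).
from pathlib import Path

def maybe_has_content_credentials(image_path: str) -> bool:
    data = Path(image_path).read_bytes()
    return b"c2pa" in data  # crude marker scan, not validation

print(maybe_has_content_credentials("photo.jpg"))
```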
The alternative is social platforms overwhelmed by AI-generated slop.
Do you have any ideas on what else to do with this dataset? Please reach out!