The AI companion app Linky AI has been downloaded more than 10 million times on the Google Play Store. According to Sensor Tower, its developers raked in $700,000 through in-app purchases across the Apple and Google app stores in the past month.
Linky AI also hosts crude chatbot impersonations of at least a dozen well-known individuals and markets itself aggressively on Meta as a way to build digital girlfriends that look like “a friend, colleague, or ex.”
And it’s not alone. Linky AI is one of almost 500 “AI companion” apps available on app stores. Researchers at the University of Washington and Georgetown University are warning in a new working paper that many of these apps do not provide sufficient protections against nonconsensual use of a real person’s likeness.
Simply put, it’s still too easy for someone to turn a “virtual crush” into a replica of a real human, regardless of that person’s wishes. The results “paint an alarming picture,” writes Grace Brigham, a PhD student at UW and one of the paper’s authors.
Brigham told Indicator that the apps seem to rely primarily on reactive reporting from affected individuals rather than on proactive safeguards.
More broadly, it’s not clear how users can be truly and reliably protected from unwanted use of their synthetic likeness. Deepfake impersonation has been operating on an “I know it when I see it” model that doesn’t capture the full range of replicas being deployed.
The researchers closely reviewed a stratified sample of 30 apps and found that 14 of them allow users to upload an image of a real person to either directly represent their AI companion or inform its visual representation. Three apps let users upload a voice from a recording.
When you consider that 19 apps also allow image generation and nine enable explicit image generation, the end result is the commodification of nonconsensual sexualized impersonation.
Some of these apps don’t just turn a blind eye to this problem. They specifically market themselves for the nonconsensual use case, much like “AI nudifiers” do. Brigham and her colleagues found almost 150 ads on Meta that promoted the fact that you could create AI avatars that look like “someone you know” or “someone you shouldn’t.”
Additional research from Indicator found 70 more such ads promising an AI companion that looked like “a gym crush, ex, or colleague.” In total, the paid posts reached 2.5 million Meta users in the EU and UK alone. (The company does not share data for other countries because it is not legally required to.)

We shared these ads with Meta, which removed all of them by Tuesday night. It also appears to have disabled several related advertiser accounts.
The apps buried their edgier ads within an ocean of marketing on Meta. Indicator found that Linky AI alone ran as many as 22,000 ads on the company’s platforms over the past year. With support from Algorithm Watch, we estimated that the advertiser accounts promoting Linky AI, Eva AI, Flamify, and Dialogue AI reached at least 9 million users across the EU.