ACADEMIC LIBRARY

Synthetic non-consensual intimate imagery

This is a regularly updated collection of academic studies and industry reports about digital deception. It currently includes short descriptions of 55 academic studies and industry reports.

This library is organized in five clusters:

1. Prevalence and characteristics of misinformation
2. Effects of fact-checking interventions
3. Prevalence, effects, formats, and labeling of AI-generated deceptive content
4. Synthetic non-consensual intimate imagery
5.

Deepfake Nudes & Young People

📝 Thorn

Mar 2025

6% of Americans aged 13-17 surveyed by Thorn claim to have been targeted by deepfake nudes. Conservatively, that would net out to about 1 million affected teens. Even one tenth of that number would be a huge figure, and it's only one country in the world.
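The “about 1 million” figure is easy to sanity-check. A minimal back-of-envelope sketch, assuming roughly 21 million Americans aged 13-17 (a Census-order estimate of mine, not a number from the Thorn report):

```python
# Back-of-envelope check of the "about 1 million affected teens" figure.
# Assumption (mine, not Thorn's): ~21 million Americans aged 13-17.
us_teens_13_17 = 21_000_000
share_targeted = 0.06  # 6% report being targeted by deepfake nudes

affected = round(us_teens_13_17 * share_targeted)
print(f"{affected:,}")  # prints 1,260,000
```

Even the one-tenth discount the author entertains would still leave well over 100,000 affected teens.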

Characterizing the MrDeepFakes Sexual Deepfake Marketplace

📇 arXiv

Jan 2025

Catherine Han, Anne Li, Deepak Kumar and Zakir Durumeric

This useful preprint gives a bit of a sense of what the largest platform for deepfake porn, MrDeepFakes, looked like in November 2023. At that time, the site hosted 43,000 videos that had been viewed a collective 1.5 billion times.

90% of these videos identified who was depicted. The researchers were thus able to identify 3,803 unique individuals, who follow a long-tail distribution: “the ten most targeted celebrities are depicted in 400–900 videos each, but over 40% of targets are depicted in only one video.”

Of the celebrities who account for 95% of the videos, one third were American and one tenth were Korean. The primary occupations were actors, musicians and models.


Fully 271 of the top 1,942 named individuals in this subsample were not celebrities, contravening MrDeepFakes’ nominal policy restricting deepfake videos to famous individuals. As the researchers write:

For 29 targets, we found no online presence, and for another 242 targets, we could not find profiles on any of the listed social media platforms that satisfied the minimum following criteria. In one example, we find that 38 Guatemalan newscasters with little to no social media following appear in over 300 videos.

All of these videos were posted by two users, who both describe their focus on Latin American individuals in their profiles.

In Deep Trouble: Surfacing Tech-Powered Sexual Harassment in K-12 Schools

📝 Center for Democracy & Technology

Sept 2024

Elizabeth Laird, Maddy Dwyer, Kristin Woelfel

The Center for Democracy & Technology surveyed American public school students in grades 6-12 (roughly speaking, ages 11 to 18). 15% of these students said that they know of a deepfake depicting individuals associated with their school being shared in the past school year.

With 15M students in U.S. public high schools, this suggests the number of deepfake nudes in school settings around the country may be as high as 225,000. Even if some cases were double counted because multiple pupils from the same school took the survey, and even if half of the students answered falsely, we’d still be looking at tens of thousands of cases. And this would assume that every case only affected one victim (unlikely) and that there were no cases in private schools (impossible).
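The jump from 15% of 15 million students to 225,000 cases is implicit. One way to reproduce the arithmetic, under an overlap assumption that is mine and not CDT’s, is to suppose each shared deepfake is known to about ten students:

```python
# Rough reconstruction of the 225,000 estimate.
# The overlap factor below is my illustrative assumption, not CDT's.
students = 15_000_000  # U.S. public high school students
share_aware = 0.15     # 15% say they know of a deepfake shared at their school

students_aware = round(students * share_aware)  # 2,250,000 students

# If, say, ten students on average know of each shared deepfake,
# deduplicating yields the headline figure.
awareness_overlap = 10
incidents = students_aware // awareness_overlap
print(f"{incidents:,}")  # prints 225,000
```

Halving that for over-reporting, as the author does, still leaves six figures’ worth of awareness reports and tens of thousands of distinct cases.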

You should read the whole report, but two other things stood out to me. First, I was surprised to see most students report that non-consensual intimate imagery (authentic and deepfaked) is shared primarily via social media, not messaging apps.

Second, ~60% of teachers and students report that their school has not communicated their procedures for addressing deepfake nudes.

The new face of digital abuse: Children’s experiences of nude deepfakes

📝 Internet Matters

Oct 2024


A survey of British teenagers by the industry-funded nonprofit Internet Matters found that at least 13% of them had direct experience with deepfake nudes: they had created one, knew someone who had, or had encountered this type of content online.

The most interesting finding of the report to me was that 55% of the teens thought being targeted by a deepfake nude was worse than by a real nude. One girl, aged 16, had this to say in an open-ended response: “If a nude image was sent of me currently that I consented to filming even though it's sad/unfortunate I would know that (it) was my choice that led to that image being shared. However, with a deepfake I didn't choose for that image to be created and its not realistic to me.”

Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes

📇 arXiv

Sept 2024

Qiwei Li, Shihui Zhang, Andrew Timothy Kasper, Joshua Ashkinaze, Asia A. Eaton, Sarita Schoenebeck, Eric Gilbert

In a preprint, researchers at the University of Michigan and Florida International University tested X’s responsiveness to takedown requests for AI-generated non-consensual intimate imagery.

The study created 5 deepfake nudes of AI-generated personas and posted them from 10 different X accounts. The researchers then flagged the images through X’s in-platform reporting mechanisms: half were reported as copyright violations and the other half as violations of the platform’s non-consensual nudity policy. While the copyright violations were removed within a day, none of the images flagged as NCII had been removed three weeks after being reported.

“Violation of my body:” Perceptions of AI-generated non-consensual (intimate) imagery

📇 arXiv

Oct 2024

Natalie Grace Brigham, Miranda Wei, Tadayoshi Kohno, Elissa M. Redmiles

In this preprint, researchers at the University of Washington and Georgetown studied the attitudes of 315 US respondents towards AI-generated non-consensual intimate imagery (AIG-NCII). They found that vast majorities considered the creation and dissemination of AI-generated NCII “totally unacceptable.” That remained true whether the subject of the deepfake was a stranger or an intimate partner, though there was some fluctuation based on intent.

In an indication that generalized access could normalize AIG-NCII, respondents were notably more accepting of people seeking out this content.

Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries

📇 CHI '24

May 2024

Rebecca Umbach, Nicola Henry, Gemma Beard, Colleen Berryessa

Earlier this year, a separate group of researchers across four institutions shared the results of a mid-2023 survey of roughly 1,600 respondents in each of 10 countries that found a similar distinction between creation/dissemination and consumption. In the chart below, on a scale from -2 to +2, you can see the mean attitude towards criminalizing the behaviors listed in the left column. (The redder the square, the more respondents favored criminalization.)
