ACADEMIC LIBRARY

This is a regularly updated collection of academic studies and industry reports about digital deception. It currently includes short descriptions of 55 such studies and reports.

This library is organized into five clusters:

1. Prevalence and characteristics of misinformation
2. Effects of fact-checking interventions
3. Prevalence, effects, formats, and labeling of AI-generated deceptive content
4. Synthetic non-consensual intimate imagery
5. Other

A computational analysis of potential algorithmic bias on platform X during the 2024 US election

QUT ePrints

November 2024

Timothy Graham & Mark Andrejevic

An analysis by researchers at Queensland University of Technology and Monash University claims there was a significant increase in engagement with right-leaning accounts after Elon Musk’s endorsement of Donald Trump on July 13, 2024.

While this increase may have been organic, tied to the platform's increasingly partisan character, one metric caught my eye: views of Musk's own tweets went up by 138% compared to his average for the first part of the year.

If this was intentional, it would not be unprecedented. In 2023, Platformer reported that Musk had ordered X engineers to inflate the reach of his posts by a factor of 1,000. If he was willing to do that because (allegedly) his Super Bowl tweet performed worse than Joe Biden's, it is hard to imagine he wouldn't have done the same in the interest of an election he described in cataclysmic terms.

Reliability Criteria for News Websites

ACM Transactions on Computer-Human Interaction

Jan 2024

Hendrik Heuer and Elena L. Glassman

In a paper whose study design I can only describe as chaotic good, two computer scientists asked 23 local politicians from Germany and 20 journalists from the US to describe how they would rate three low-credibility news sources. The researchers found 11 criteria that respondents tended to use to assess the trustworthiness of the site in front of them. These included the website's content, reputation, and self-description.

While the two groups mentioned some criteria at similar rates (see chart below, where “experts” are the journalists and “laypeople” are the elected officials), two exceptions stand out.

18 out of 20 journalists looked for the author of an article and considered their biography. Only 9 out of 23 politicians did the same.

Equally interesting was the differential reliance on ads as a proxy for credibility. This was the single most-cited (negatively impacting) factor for politicians, but came up for only 4 of the 20 journalists. I wonder whether this is because the latter are more familiar with the reality of online business models for news and more inured to the dissonance of seeing a lousy ad next to a credible news article.

Breaking reCAPTCHAv2

arXiv

Sept 2024

Andreas Plesner, Tobias Vontobel, Roger Wattenhofer

In this preprint, researchers at ETH Zurich claim that advanced YOLO models can solve 100% of reCAPTCHAv2 bot-filtering tests (I swear those were all real words). The paper shows that VPN use, mouse movement, and user history all affect the likelihood of detection. The authors conclude that "we are now officially in the age beyond captchas."
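Without reproducing the paper's exact pipeline, the core of an image-grid captcha solver of this kind can be sketched as: classify each tile in the challenge grid, then select the tiles whose score for the target class clears a threshold. In this hypothetical sketch, `classify_tile` and `solve_grid` are illustrative names, and the toy classifier stands in for a real object-detection model such as YOLO:

```python
# Hypothetical sketch of the tile-selection loop behind image-grid
# captcha solvers: score each tile for the target class and "click"
# those above a confidence threshold. A real solver would plug in an
# object-detection model (e.g. YOLO) as classify_tile.

def solve_grid(tiles, target, classify_tile, threshold=0.5):
    """Return indices of tiles whose classifier score for `target`
    meets `threshold` (i.e. the tiles a solver would select)."""
    return [
        i for i, tile in enumerate(tiles)
        if classify_tile(tile).get(target, 0.0) >= threshold
    ]

# Toy stand-in: each "tile" is already a dict of class scores,
# so the identity function plays the role of the classifier.
tiles = [{"traffic light": 0.9}, {"bus": 0.8}, {"traffic light": 0.4}]
print(solve_grid(tiles, "traffic light", lambda t: t))  # → [0]
```

The hard part the paper addresses is everything around this loop: reCAPTCHAv2 also scores behavioral signals (mouse movement, VPN use, account history), which is why the tile classifier alone doesn't decide whether a bot passes.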

On the one hand, great! I won't miss these capricious and deranged puzzles. But reCAPTCHAv2 is one of the internet's main defenses against automated bots, so this is probably not the best thing to happen just as generative AI unloads hordes of imitation humans into our online spaces.


Indicator is your essential guide to understanding and investigating digital deception.
