A new study explores themes in Twitter discussions of deepfake videos related to the Russian invasion of Ukraine, highlighting the potential for real videos to be mistaken for deepfakes and for deepfakes to fuel conspiracy theories. John Twomey of University College Cork, Ireland, and colleagues present these findings in the open-access journal PLOS ONE on October 25, 2023.
Credit: John Noonan, Unsplash, CC0 (https://creativecommons.org/publicdomain/zero/1.0/)
Created using artificial intelligence, deepfake videos typically feature a person saying and doing things they never actually did in real life. Deepfake technology has advanced considerably, sparking concerns about its potential harms. Deepfakes related to the Russian invasion of Ukraine represent the first instances in which deepfakes have been used in attempts to influence a war.
To better understand the potential harms of deepfakes, Twomey and colleagues analyzed Twitter discussions about deepfakes related to the invasion. They used a qualitative approach known as thematic analysis to identify and understand patterns in the discussions, which included a total of 1,231 tweets from 2022.
The researchers found that many of the tweets expressed negative reactions to news about deepfakes. For instance, some tweets expressed worry, shock, or confusion about news related to a deepfake that falsely depicted Ukrainian President Volodymyr Zelensky surrendering to Russia. However, some tweets overlooked potential harms or had positive reactions to deepfakes directed against political rivals, especially deepfakes created as satire or entertainment.
Some tweets warned about the need to prepare for increased use of deepfakes, discussed how to detect them, or highlighted the role of the media and government in rebutting them. However, other tweets suggested that deepfakes had eroded users' trust to the point that they no longer believed any footage of the invasion. Some tweets linked deepfakes to users' apparent belief in conspiracy theories, such as claims that deepfakes of world leaders were being used as cover while the leaders were actually in hiding, or that the entire invasion was fake anti-Russian propaganda.
This analysis suggests that efforts to educate the public about deepfakes may unintentionally undermine trust in real videos. The authors note that their findings and future research could help inform efforts to mitigate the harms of deepfakes.
The authors add: “Much previous research on deepfakes has been concerned with potential future harms of the technology. However we have focused on how deepfakes are already impacting social media as we have seen during Russia’s invasion of Ukraine. Our research shows how deepfakes are undermining faith in real media and are being used to evidence deepfake conspiracy theories.”
#####
In your coverage please use this URL to provide access to the freely available article in PLOS ONE: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0291668
Citation: Twomey J, Ching D, Aylett MP, Quayle M, Linehan C, Murphy G (2023) Do deepfake videos undermine our epistemic trust? A thematic analysis of tweets that discuss deepfakes in the Russian invasion of Ukraine. PLoS ONE 18(10): e0291668. https://doi.org/10.1371/journal.pone.0291668
Author Countries: Ireland, UK, South Africa
Funding: This work was supported with the financial support of the Science Foundation Ireland grant 13/RC/2094_2 and co-funded under the European Regional Development Fund through the Southern & Eastern Regional Operational Programme to Lero – the Science Foundation Ireland Research Centre for Software (www.lero.ie, Award PP5004). These groups had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Journal: PLoS ONE
DOI: 10.1371/journal.pone.0291668
Method of Research: Observational study
Subject of Research: People
Article Title: Do deepfake videos undermine our epistemic trust? A thematic analysis of tweets that discuss deepfakes in the Russian invasion of Ukraine
Article Publication Date: 25-Oct-2023
COI Statement: The authors have read the journal's policy and have the following competing interests: Matthew Aylett is employed by CereProc as Chief Scientific Officer. This does not alter our adherence to PLOS ONE policies on sharing data and materials.