- New research from Utah Valley University highlights the rise and impact of AI-generated deepfake media.
- The study shows deepfakes are often perceived as credible, complicating trust in election security and online information more broadly.
- Study participants struggled to identify deepfakes from real media, raising concerns over misinformation and trust.
OREM — Utah Valley University President Astrid Tuminez grew up in the Philippines under martial law and saw firsthand the damage that distrust of election systems can do.
"During that period of time, there were no elections. And when there were elections, even when I was a little kid, they were always contested — very, very violently," Tuminez said. "I arrived in the United States as an immigrant and I couldn't believe people peacefully ceded power. I'd never known that in my country."
This is something she's never taken for granted.
Still, there's a sizable partisan gap in trust in how elections are administered, according to the Pew Research Center.
A new survey published last week showed that 73% of voters overall expect the election to be administered well, a figure that includes 90% of Vice President Kamala Harris' supporters but only 57% of former President Donald Trump's supporters.
While there are many reasons for this gap, one concern related to election security and disinformation is the presence of AI-generated deepfakes, according to research out of UVU presented on Monday.
From 2019 to 2023, the number of deepfakes online increased by 552%, with research indicating that deepfake content will continue to increase exponentially.
"The rise of deepfake content has made it harder to find reliable sources and has increased distrust in online information. This problem has major implications," said William Freedman, a research assistant at UVU's Gary R. Herbert Institute for Public Policy.
The interdisciplinary study tested 244 subjects across the U.S. with real and AI-generated deepfake video and audio to determine their ability to distinguish between the two.
To do so, the UVU Sales Marketing Applied Research Test Laboratory (SMARTLab) employed biometric methods, including eye tracking and facial coding, to analyze the results. The research also examined how deepfakes can influence elections and outlined necessary measures to ensure election safety and security.
So, with deepfake content inevitably on the rise, what did the study find?
Deepfakes tested in the study received equivalent or higher ratings than authentic content in categories including credibility, trustworthiness and persuasiveness. In other words, the study shows that deepfakes are often perceived to be at least as credible as real media.
"The question is no longer whether deepfakes can get that good (to be seen as credible). It is a question of who is capable of creating deepfakes that are that good and how long it will be before that capability becomes widespread," said Hope Fager, a strategic research team lead at the UVU Center of National Security Studies.
Another takeaway from the study is that, generally, people aren't proficient in identifying deepfakes. In fact, over 50% of participants rated deepfake content as "probably real" or "definitely real," even after being informed there was a chance that they had seen a deepfake.
"In this study, we simulated as best as possible the natural circumstances of scrolling through videos on social media. Generally, people's default reaction was to assume that they were seeing real content, the same way that they would in real life," Fager said.
Additionally, most participants were confident their assumptions about the content being real or fake were correct, suggesting retrospective deepfake identification is incredibly difficult.
"This means that, with a good deepfake, a stranger could fraudulently become law enforcement, a field expert, a personal friend or a politician," Fager said. "Based on our research, someone could adopt your identity, authority and expertise with at least a 50% accuracy rate."
Furthermore, the mere presence and prevalence of deepfakes create problems of their own, even when the media is real. Only 70% of participants who saw real videos believed they had seen a real video, according to the study. This suggests that the possibility that something could be a deepfake is enough to make people unsure of what they're seeing.
The final key takeaway from the study is that, even though people aren't very good at identifying deepfakes, they're non-consciously responding to them differently.
"The real video resulted in stronger natural emotions like fear, joy (and) surprise," Fager said. "The deepfake video, on the other hand, had higher levels of engagement and confusion. This confusion came from some imperfections in the video ... where the face and the eyes and the teeth aren't quite right. However, there is no indication that this confusion was felt on any sort of conscious level."
This, the study said, suggests a non-conscious response associated with the "uncanny valley" effect when viewing deepfake media. The problem is that it's not felt at a high enough level to actually inform people that what they're seeing is fake.
Given such striking findings, it's little wonder that today's political climate, with dangerous deepfake media added to the mix, is as volatile as it is.
Still, Utah leaders took time to reaffirm the state's — and the nation's — election integrity.
"We have to become reconditioned, I guess, into accepting the results of elections no matter what they are," Utah Lt. Gov. Deidre Henderson said. "We have a good system. We have a system that has some flaws that we are continually working to fix, to make better. We have a system run by humans that sometimes makes little mistakes. Sometimes they make big mistakes. But we have to get back in the habit of accepting the results of elections that we don't like. Without that, we're pretty lost as a country."