Artificial intelligence (AI) has become a major topic of discussion since the introduction of ChatGPT, the AI chatbot that brought the technology into the mainstream.
“This audio emerged in the final days before the polls, during a period when media and politicians were not supposed to comment, and it took time to verify that it had been manipulated,” said Gregory. He suggested that the upcoming November 5 US elections could also see an “October surprise” involving compromising content published on social media.
While the use of AI for malicious purposes is a growing concern, deepfake audio of politicians is particularly troubling, according to deepfake expert Henry Ajder.
“It’s still unclear if these deepfakes are realistic or persuasive enough to actually change voters’ opinions and influence their voting decisions,” Ajder told Anadolu, adding that “the changes we’ve seen in the landscape are significant.”
Complete Breakdown of Trust in Media
Ajder highlighted the risk of a “complete breakdown of trust in media” due to the increased use of AI-based manipulation.
Although it’s uncertain if deepfakes are influencing people, Ajder said there is “certainly a strong chance they will be.”
“The danger for me is a complete breakdown in trust in all media, whether it’s real or not,” he said, explaining that people are “increasingly doubting the authenticity of real media because they think it could be a deepfake.”
“The jury is still out on how effective deepfakes will be, but we’re seeing signs of the information ecosystem being undermined because people now know deepfakes are possible.”
Absence of Strong Regulations
Gregory, an AI expert, emphasized the lack of proper regulations to address manipulative AI content.
“There’s no strong regulation determining what is legitimate use of synthetic content, and safeguards against malicious use are absent,” he warned, adding that “there are no strong technical counter-measures.”
“Methods to detect deepfakes are not widely available or reliable and are not paired with the necessary skills and resources for most journalists. New approaches to show transparently how AI was used in content creation are not fully implemented across platforms and production tools,” he said.
“So, even with the current level of AI usage, we’re not well-prepared for an escalation in deceptive use of generative AI, or an increase in claims that real footage is AI-generated.”
Gregory cited “levels of polarization, the absence of robust legislation in some states, and inadequate technical mitigations, particularly in minority communities targeted with disinformation and voter suppression” as reasons for concern.
Risk of Being Misled
According to Gregory, at least some voters are at risk of being manipulated or misled by synthetic AI-generated content, with two main reasons for concern.
“First, most voters seek out content that confirms their positions, so deceptive AI content that reinforces their views or humanizing AI content that elevates their candidates will be welcome,” he said.
He also emphasized that AI technology is improving rapidly, making it increasingly difficult to detect synthetic content.
“Voters are encouraged to scrutinize images or listen closely to audio, but this strategy won’t work as AI improves. Publicly available detection tools found through a Google search often fail, producing frequent false positives and false negatives that only increase confusion and doubt,” he said.
Many share this fear, with Gregory noting that the World Economic Forum (WEF) has identified “misinformation and disinformation powered by AI” as the number one threat going forward.
“In reality, AI so far has been a complement to traditional campaigning and influence operations, not a replacement or significant multiplier,” he said.
Ajder noted that “the average Internet user doesn’t really have the ability to distinguish deepfakes from authentic content,” and added that “the technology has rapidly improved the quality” of synthetically generated voice audio.
“I can say from my experience in this field that it is increasingly difficult to listen to a clip of someone allegedly speaking and determine if it’s real or fake. For the average person, it’s even more difficult,” he added.
Data Harvesting
Another consequence of AI’s rapid progress is the unprecedented amount of data being harvested by tech companies, Gregory said.
“There’s an unprecedented degree of data harvesting going on right now as AI companies secure training data for their models,” he said.
“Many individuals use chatbots and other AI services without considering that they might be sharing private information that will be incorporated into datasets,” Gregory warned.
He pointed to India’s recent general elections as an example of AI’s potential manipulative use.
“Generative AI can scale up direct-to-voter candidate communication,” he said, adding that it could enhance “covert influence operations by creating more plausible text that sounds like a native speaker and producing diverse versions to use in potential campaigns.”
Easier to Deceive Voters
Regarding the possibility of an orchestrated attack in the context of the upcoming US elections, Gregory referred to recent reports by Microsoft, which found that AI is increasingly used for “meme and content creation.”
He noted that the US-based tech giant had studied the “integration of generative AI into campaigns ahead of the Taiwanese elections, as well as in the US,” and uncovered the “potential use of deceptive audio and videos with dubbed and lip-sync-matched content.”
He also cited research by the UK-based Center for Countering Digital Hate and rights group WITNESS, emphasizing how easy it is to create the types of images and scenes that fuel conspiracy theories and could be used for voter suppression or confusion.
“We’ve seen how AI-generated images can rapidly spread and be contextualized to frame issues. For example, recent images showing former President Donald Trump with Black voters, initially created to illustrate articles or as satire, have been recycled as deceptive images implying they are real,” he said.
Gregory added that such acts will “exploit the same vulnerabilities as previous attempts, compounded by the lack of adequate detection tools and resources in the most vulnerable communities.”
“While deepfake discussions often focus on the idea of a big, flashy fake that shifts an election, a more significant risk might be the volume of smaller-scale content that suppresses voting and reduces voter commitment and enthusiasm,” he concluded.