Will the detectability of AI deepfakes in campaigns have any impact?



Deepfake Audio: The Threat to Democracy


By Jim Saksa | (TNS) CQ-Roll Call

WASHINGTON — Imagine this in the lead-up to the 2024 election: fears about President Joe Biden's age seem to be confirmed when a leaked tape surfaces. The audio, supposedly recorded surreptitiously by a pocketed phone, captures the 80-year-old sounding confused, forgetting he is the president, then erupting in anger. The tape might be emailed anonymously to journalists or spread through social media. Or the uproar could come instead from audio of former President Donald Trump saying something his supporters find disqualifying.

Whether such recordings are genuine or the fabrications of rapidly advancing AI models, the politician caught on tape will inevitably dismiss them as fake and accuse the other side of lying, cheating and stealing their way to the White House. AI experts may be able to flag a forgery, but definitively proving that a recording is authentic remains all but impossible. And it is doubtful that evidence of provenance would matter to partisan voters, who are quick to reject information that challenges their worldviews.

These "deepfake" audio recordings, which sound authentic but can be fabricated from just a few short samples of someone's voice, are a potent tool for underhanded political tactics.

AI developers warn that the rapid improvement and wide availability of deepfake technology threaten the foundations of representative democracy. Sam Altman, the CEO of OpenAI, told a Senate Judiciary subcommittee in May that AI's ability to generate personalized disinformation was among his gravest concerns, and a United Nations adviser recently voiced deep worries about a deepfake "October surprise."

Campaigns in the GOP presidential primary have already put the technology to less nefarious uses. A political action committee supporting Florida Gov. Ron DeSantis' presidential campaign used AI to create a recording of Trump reading one of his posts from Truth Social, his social media platform. Another super PAC, backing Miami Mayor Francis Suarez, posted videos of an "AI Francis Suarez" touting the mayor's conservative achievements.

Manipulating media to deceive voters is not new, and it does not require AI: videos have long been deceptively edited and images doctored for political gain, and attack ads have long used unflattering photos to cast opponents in a negative light. But generative AI dramatically expands the ability of campaigns, and their rogue supporters, to create convincing fakes.

Generative AI can produce images and video that look real at first glance but reveal subtle abnormalities on closer inspection, such as hands in unnatural positions or physically implausible action. Computers alone cannot yet produce visually convincing video, but skilled editors can combine AI-generated content with conventional techniques to make fakes realistic enough to fool viewers, particularly on small screens. And as computing power continues to grow, AI's ability to produce compelling visual content will only improve. As Hany Farid, a generative AI expert at the University of California, Berkeley, puts it, "there's no putting the generative-AI genie back in the bottle."

Experts in generative AI propose a two-pronged defense against deepfakes. "Passive" detection analyzes a piece of media for telltale markers of AI generation; artificial audio, for example, often has an unnaturally regular cadence, and image generators struggle with perspective. "Active" measures, such as digital watermarks, would be embedded in a file's metadata at the moment of creation. Adobe, Microsoft and the BBC are leading the development of technical standards for certifying the authenticity of digital content through the Coalition for Content Provenance and Authenticity (C2PA), and member companies have pledged to implement such active detection protocols. Ensuring compliance remains a challenge, however: some AI companies are reluctant to take responsibility for preventing malicious use, arguing that it is up to users to behave responsibly. Industry standards will therefore need to be paired with collaboration between AI companies and governments to address the vulnerabilities that generative AI and disinformation create.
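To make the "passive" approach concrete, here is a minimal sketch, not drawn from the article and offered only as a toy heuristic, of what checking for an unnaturally regular cadence could look like. The file name, the 0.3 cutoff, and the interpretation of the score are illustrative assumptions; real detectors are far more sophisticated.

```python
# Toy "passive detection" heuristic (illustrative only, not a real detector).
# Premise from the article: synthetic speech can have an unnaturally regular
# cadence. We score cadence regularity as the coefficient of variation of the
# gaps between detected speech onsets; natural speech tends to vary more.
# Assumes librosa is installed and "clip.wav" is a hypothetical input file.
import numpy as np
import librosa

y, sr = librosa.load("clip.wav", sr=None)                # audio at native rate
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
gaps = np.diff(onsets)                                   # seconds between onsets

if len(gaps) < 2:
    raise SystemExit("too few onsets to analyze")

cv = gaps.std() / gaps.mean()                            # low CV = metronomic
print(f"onset-gap coefficient of variation: {cv:.3f}")
# 0.3 is an arbitrary illustrative threshold, not an established cutoff.
print("suspiciously regular" if cv < 0.3 else "within typical variation")
```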

