Days before a pivotal national election in Slovakia last month, an audio clip began circulating on social media that appeared to capture Michal Šimečka, leader of the Progressive Slovakia party, discussing plans to rig the vote. Another scandal soon followed when the leader of the UK’s Labour party was apparently caught on tape berating a staff member in a profanity-laden tirade. Both clips turned out to be fake, yet they continued to generate outrage on platforms such as Facebook. The rise of artificial intelligence (AI) has made it easier than ever to create believable audio, and a flood of fake clips is now circulating online.
Last week, the actor Tom Hanks warned his social media followers that bad actors were using his voice to promote dental plans, while fake news reports on TikTok falsely connected former president Barack Obama to the death of his personal chef. In response, a bipartisan group of senators has proposed the No Fakes Act, which would penalize the production and distribution of AI-generated replicas of someone’s voice or appearance without their consent.
The rapid advancement of voice cloning technology means that almost anyone can create sophisticated audio content from their bedroom. Fake audio is also harder to detect than manipulated images and videos, which often contain obvious flaws. Social media companies struggle to moderate AI-generated audio, and few safeguards prevent illicit use of the technology.
Earlier voice cloning software produced stilted, unrealistic voices, but with greater computing power and refined models it can now analyze millions of voices, spot patterns, and replicate them in seconds. Online tools, such as those offered by Eleven Labs, make it simple for anyone to create a deepfaked voice for a small monthly fee.
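To illustrate how low the barrier has become, here is a minimal sketch of what a hosted text-to-speech request can look like. The endpoint, header names, voice identifier, and parameters are hypothetical placeholders modeled loosely on commercial services such as Eleven Labs, not any vendor’s documented API.

```python
import requests

# Hypothetical endpoint and credentials: placeholders modeled loosely on
# commercial voice-cloning services, not a vendor's documented API.
API_URL = "https://api.example-voice-service.com/v1/text-to-speech"
API_KEY = "YOUR_API_KEY"        # typically issued with a paid monthly plan
VOICE_ID = "cloned-voice-123"   # a voice profile built from uploaded samples


def synthesize(text: str, out_path: str = "output.mp3") -> None:
    """Send text to the (hypothetical) service and save the returned audio."""
    response = requests.post(
        f"{API_URL}/{VOICE_ID}",
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)  # raw audio bytes returned by the service


if __name__ == "__main__":
    synthesize("A short sentence spoken in the cloned voice.")
```

The point is not any particular vendor’s interface but the workflow: a few uploaded voice samples, one authenticated HTTP request, and a convincing clip comes back in seconds.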
Although deepfake videos have long been a concern, it is AI-generated audio that is turning into a crisis. Fake audio can have real-world consequences, including violence, election fraud, and stolen identities. For many people, their voice is their livelihood, and AI threatens to take that away.
Recent examples highlight how common AI-generated audio scams have become. In Slovakia, fake audio clips circulated on social media in the lead-up to the national election and were also repurposed for political ads spreading disinformation. Whether these campaigns affected the outcome is uncertain, but the technique is likely to be employed in future elections across Europe.
Concerns about AI-generated content misleading voters extend beyond Europe. US politicians have sent a letter to the CEOs of Meta and Twitter expressing serious concerns about the use of AI-generated content in political ads. EU Commissioner Thierry Breton has also pressed Meta CEO Mark Zuckerberg to address the issue of deepfakes ahead of upcoming elections.
AI-generated conspiracy theories are also spreading on social media platforms such as TikTok. NewsGuard identified multiple accounts using AI text-to-speech software to spread misinformation; their videos, which falsely link notable figures to various controversies, have generated millions of views and likes. TikTok has taken action on some of these videos after being alerted to their content.
The challenge with AI-generated audio is that it lacks the visual glitches that often give away AI-generated videos or images. Companies that develop AI text-to-voice tools have software to identify AI-generated voice samples, but these detection systems are not widely available to the public. AI voice software has also become far better at replicating foreign languages, contributing to a growing number of deepfake campaigns in regions experiencing conflict or political instability.
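To give a sense of why detection is hard, the sketch below shows the kind of low-level spectral features a detector might compute as input to a trained classifier. It is illustrative only and assumes the librosa audio library; a handful of hand-picked features like these is nowhere near a working deepfake detector, which in practice requires a model trained and evaluated on large labeled corpora.

```python
import numpy as np
import librosa


def spectral_features(path: str, sr: int = 16000) -> np.ndarray:
    """Compute a small feature vector of the sort a detector might use.

    Illustrative sketch only: no single hand-picked feature reliably
    separates synthetic speech from genuine recordings.
    """
    y, _ = librosa.load(path, sr=sr)  # resample to 16 kHz mono
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr).mean()
    flatness = librosa.feature.spectral_flatness(y=y).mean()
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    return np.concatenate(([centroid, rolloff, flatness], mfcc))


# A real system would feed vectors like this (or raw spectrograms) into a
# classifier trained on labeled genuine and synthetic speech, e.g.:
# features = spectral_features("suspect_clip.wav")
```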