1 in 4 Deepfake Speech Samples Go Undetected by People

Recent research from University College London has revealed that humans struggle to identify deepfake speech created by artificial intelligence. Deepfakes are synthetic media in which a person’s face or voice is replaced with a convincing likeness of someone else. The study, published in the journal PLOS One, used a text-to-speech algorithm trained on publicly available datasets to produce 50 deepfake speech samples in English and Mandarin. Participants were able to detect the deepfake speech only 73% of the time, meaning roughly one in four fakes went unnoticed, and the detection rate improved only slightly after participants received training on recognizing deepfake voices.

Kimberly Mai, a PhD student in machine learning at UCL who co-authored the study, emphasized that humans are currently unreliable at detecting deepfake speech, even with training. Moreover, the study used relatively old algorithms, raising the question of whether people would fare even worse against speech generated with today’s most sophisticated technology.

This is the first study to examine humans’ ability to detect artificially generated speech in a language other than English, and detection rates were similar for English and Mandarin speakers. English speakers often pointed to breathing patterns, while Mandarin speakers focused on cadence, when judging whether a voice was authentic.

The researchers at UCL warn that deepfake technology is rapidly advancing, with the latest algorithms able to recreate a person’s voice using just a 3-second sample. To combat potential threats, they aim to develop stronger automated speech detectors.

Professor Lewis Griffin, senior author of the study, acknowledges the risks associated with deepfake technology but also highlights the positive possibilities it offers. As governments and organizations formulate strategies to tackle abuse of these tools, it is crucial to recognize the potential benefits they can provide.

In addition to the concerns surrounding deepfake speech, experts believe that deepfakes could play a dangerous role in the 2024 elections. Platforms such as TikTok have already moved to ban deepfakes of young people, and scams that use deepfakes to extort money or exploit victims with fabricated pornographic content continue to rise.
