Voice deepfakes: the new banking scam

This spring, Clive Kabatznik, a Florida investor, called his local Bank of America representative to discuss a large money transfer he was planning. Then he called again. Except the second call didn’t come from Kabatznik at all: a computer program had artificially generated his voice and tried to persuade the bank employee to send the money somewhere else.

Kabatznik and his banker were the victims of a cutting-edge scam that has caught the attention of cybersecurity experts: the use of artificial intelligence to generate deepfake voice recordings that mimic the voices of real people.

The problem is still so new that there isn’t a comprehensive account of how often it occurs. However, an expert from Pindrop, a company that monitors audio traffic for many major US banks, said it has seen AI-driven voice fraud attempts grow in both volume and sophistication this year. Another major voice authentication provider, Nuance, saw its first successful deepfake attack on a financial services client late last year.

In Kabatznik’s case, the fraud was detectable. However, the rapid development of technology, the falling costs of generative AI programs, and the widespread availability of voice recordings online have created the perfect conditions for voice-related scams aided by AI programs.

Customer data, such as bank account details sold on underground markets, helps fraudsters carry out these attacks. The job is even easier with wealthy clients, whose public appearances, including speeches, are often readily available online. And for everyday customers, finding audio samples can be as simple as searching platforms like TikTok and Instagram for the person’s name, since the fraudsters already have their banking information.

“There’s a lot of audio content out there,” said Vijay Balasubramaniyan, CEO and founder of Pindrop, which reviews voice verification systems for eight of the top 10 US credit institutions.

Over the past decade, Pindrop has reviewed recordings of more than 5 billion calls received by the call centers of the financial companies it serves. Those centers handle products like bank accounts, credit cards, and other services offered by major retail banks. Every call center receives calls from scammers, typically between 1,000 and 10,000 a year; it’s common to field 20 scam calls in a week, according to Balasubramaniyan.

So far, the fake voices generated by computer programs represent only “a handful” of these calls, according to Balasubramaniyan, and they didn’t start occurring until last year.

Most of the fake voice attacks that Pindrop has encountered have taken place in credit card call centers, where human representatives assist customers with their cards.

Balasubramaniyan played for a journalist an anonymized recording of one of these calls, which took place in March. Although it was a very rudimentary example, with the voice sounding robotic rather than human, the call illustrates how such scams could play out as AI makes it easier to imitate human voices.

The call starts with a bank employee greeting the customer. Then, the voice, which sounds automated, says, “My card has been declined.”

“May I ask who I’m speaking with?” the bank employee responds.

“My card has been declined,” the voice repeats.

The bank employee asks for the customer’s name again. There’s a brief pause, followed by the faint sound of keystrokes. According to Balasubramaniyan, the number of keystrokes corresponds to the number of letters in the customer’s name. The scammer types words into a program that then reads them.

In this case, the synthetic speech of the impostor led the employee to transfer the call to another department and flag it as potentially fraudulent, explained Balasubramaniyan.

Calls like this, which rely on text-to-speech technology, are relatively easy to combat: call centers can use detection software to pick up technical clues that the speech was generated by a machine.

“Synthetic speech leaves traces, and many anti-spoofing algorithms can detect them,” explained Peter Soufleris, Managing Director of IngenID, a voice biometrics technology provider.
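The article doesn’t describe how Pindrop or IngenID actually detect those traces, but a common approach in published anti-spoofing research is to summarize a recording with acoustic features and score it with a trained classifier. The sketch below is a minimal, hypothetical illustration in Python using librosa and scikit-learn; the MFCC features, the logistic-regression model, and the file paths are assumptions for demonstration, not any vendor’s real system.

```python
# Hypothetical sketch of a synthetic-speech (anti-spoofing) check.
# Assumptions: MFCC summary features + a simple binary classifier stand in
# for whatever proprietary features and models real vendors use.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def extract_features(wav_path: str) -> np.ndarray:
    """Summarize a call recording as a fixed-length acoustic feature vector."""
    audio, sr = librosa.load(wav_path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)  # shape: (20, frames)
    # Mean and standard deviation over time -> 40-dimensional vector
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_detector(genuine_paths, spoofed_paths) -> LogisticRegression:
    """Fit a toy detector on labeled genuine vs. machine-generated recordings."""
    X = np.vstack([extract_features(p) for p in genuine_paths + spoofed_paths])
    y = np.array([0] * len(genuine_paths) + [1] * len(spoofed_paths))
    return LogisticRegression(max_iter=1000).fit(X, y)

def is_probably_synthetic(model: LogisticRegression, wav_path: str,
                          threshold: float = 0.5) -> bool:
    """Flag a call if the model's spoof probability exceeds the threshold."""
    prob = model.predict_proba(extract_features(wav_path).reshape(1, -1))[0, 1]
    return prob > threshold
```

In practice, production systems use far richer features and models, but the basic idea is the same: machine-generated audio tends to carry statistical artifacts that a classifier trained on known examples can learn to flag.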

However, as with many security measures, it’s an arms race between attackers and defenders, and one that has shifted recently. Now a fraudster can simply speak into a microphone, or type a message, and have it quickly converted into the target’s voice.

Balasubramaniyan pointed out that Microsoft’s generative AI system VALL-E can create a voice imitation that says anything the user wants based on just a three-second audio sample.

In May, during an episode of 60 Minutes, security consultant Rachel Tobac used software to convincingly clone the voice of one of the program’s correspondents, Sharyn Alfonsi, and managed to trick a 60 Minutes employee into giving her Alfonsi’s passport number.

The whole attack took just five minutes, said Tobac, the CEO of SocialProof Security. The tool she used has been available for purchase since January.

Brett Beranek, Senior Director of Security and Biometrics at Nuance, a voice technology provider acquired by Microsoft in 2021, stated that while terrifying deepfake demonstrations are common at security conferences, actual attacks are still very rare. The only successful attack against a Nuance customer took the attacker more than a dozen attempts.

Beranek’s biggest concern is not attacks on call centers or automated systems like the voice biometric systems many banks have deployed. He is worried about scams where the person calling directly reaches an individual.

“I had a conversation earlier this week with one of our customers,” he said. “They said, ‘Hey, Brett, it’s great that we have our contact center secured, but what if someone calls our CEO directly on their cellphone and pretends to be someone else?'”

That’s what happened in Kabatznik’s case. According to the banker’s description, it seemed that the fraudster was trying to convince her to transfer the money to a new destination, but the voice was repetitive, talked over her, and used confusing phrases. The banker hung up.

“It was like he was talking to her, but it didn’t make sense,” Kabatznik said the banker had told him. (A Bank of America spokesperson declined to make the employee available for an interview).

After receiving two more similar calls in quick succession, the banker reported the incidents to Bank of America’s security team. Concerned about the security of Kabatznik’s account, she stopped responding to his calls and emails, even those coming from the real Kabatznik. The two reconnected only about ten days later, after Kabatznik arranged a visit to the bank’s office.

“We continuously train our team to identify and recognize scams and help our customers avoid them,” said William Halldin, a Bank of America spokesperson, who added that he couldn’t comment on specific customers or their experiences.

Although the attacks are becoming more sophisticated, they stem from a basic cybersecurity threat that has existed for decades: a data breach that reveals customers’ personal information. Between 2020 and 2022, personal data of more than 300 million people fell into the hands of hackers, resulting in losses of $8.8 billion, according to the Federal Trade Commission.

Once a batch of numbers is collected, hackers examine the information and link it to real individuals. The ones stealing the information are rarely the same individuals…
