Are Voice Authentication Security Systems Effective? Deepfake Attack Poses Alarming Threat


Computer scientists from the University of Waterloo have made a concerning discovery regarding the effectiveness of voice authentication security systems. 

They have identified a method of attack that can successfully bypass these systems with an alarming success rate of up to 99% after only six attempts.

(Photo: JUAN BARRETO/AFP via Getty Images)
Passengers use BIOMIG, the new biometric migration system, at El Dorado International Airport in Bogota on June 2, 2023. Colombian Migration launched the new biometric migration system for foreigners.

Deepfake Voiceprints

Voice authentication has become increasingly popular in various security-critical scenarios, such as remote banking and call centers, where it allows companies to verify the identity of their clients based on their unique “voiceprint.”

During the enrollment process of voice authentication, individuals are asked to repeat a designated phrase, which is used to extract a distinct vocal signature, or voiceprint, that is stored on a server.

In subsequent authentication attempts, a different phrase is used, and the characteristics extracted from it are compared against the stored voiceprint to decide whether to grant access.
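As a rough illustration of that enrollment-and-verification flow, the sketch below compares a speaker embedding extracted from a new utterance against a stored voiceprint using cosine similarity. The extract_embedding() function and the 0.8 similarity threshold are hypothetical placeholders, not details of any particular vendor's system.

```python
# Minimal sketch of voiceprint enrollment and verification.
# extract_embedding() is a hypothetical stand-in for whatever speaker-embedding
# model a real system uses; the similarity threshold is illustrative.
import numpy as np

def extract_embedding(audio: np.ndarray) -> np.ndarray:
    """Hypothetical: map raw audio to a fixed-length speaker embedding."""
    raise NotImplementedError("placeholder for a real speaker-embedding model")

def enroll(enrollment_audio: np.ndarray) -> np.ndarray:
    """Extract and return the voiceprint to be stored on the server."""
    return extract_embedding(enrollment_audio)

def verify(probe_audio: np.ndarray, stored_voiceprint: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Compare a new utterance against the stored voiceprint."""
    probe = extract_embedding(probe_audio)
    similarity = np.dot(probe, stored_voiceprint) / (
        np.linalg.norm(probe) * np.linalg.norm(stored_voiceprint))
    return similarity >= threshold
```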

However, the researchers at the University of Waterloo have found that voiceprints can be manipulated using machine learning-enabled “deepfake” software, which can generate highly convincing copies of someone’s voice using just a few minutes of recorded audio. 

In response, developers introduced “spoofing countermeasures” designed to differentiate human speech from machine-generated speech.
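In the same hypothetical terms, the sketch below shows how such a countermeasure might gate authentication: spoof_score() is an assumed stand-in for an anti-spoofing classifier, verify() refers to the voiceprint check sketched earlier, and the 0.5 cutoff is illustrative.

```python
# Minimal sketch of a spoofing countermeasure gating the voiceprint check.

def spoof_score(audio) -> float:
    """Hypothetical: return the probability that the audio is machine-generated."""
    raise NotImplementedError("placeholder for a real anti-spoofing model")

def authenticate(audio, stored_voiceprint) -> bool:
    # Reject the attempt outright if the countermeasure flags it as synthetic.
    if spoof_score(audio) > 0.5:
        return False
    # Otherwise fall back to the ordinary voiceprint comparison.
    return verify(audio, stored_voiceprint)
```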

The research team has developed a method that bypasses these spoofing countermeasures, enabling them to deceive most voice authentication systems within only six attempts.

They identified the markers in deepfake audio that expose its computer-generated nature and created a program to remove these markers, rendering the fake audio indistinguishable from real recordings.
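Purely as an illustration of that general idea, and not the Waterloo team's actual program, the sketch below loops over a few simple, hypothetical audio tweaks until the assumed spoof_score() countermeasure from the earlier sketch no longer flags the clip.

```python
# Illustrative only: a generic evasion loop, NOT the researchers' actual method.
# Assumes a spoof_score() oracle like the one sketched above; the candidate
# transforms are hypothetical examples of artifact-masking tweaks.
import numpy as np

def candidate_transforms(audio: np.ndarray):
    """Hypothetical signal tweaks an attacker might try on deepfake audio."""
    yield audio                                              # unchanged baseline
    yield np.convolve(audio, np.ones(3) / 3, mode="same")    # light smoothing
    yield audio + np.random.normal(0, 1e-3, audio.shape)     # faint added noise

def evade_countermeasure(deepfake_audio: np.ndarray, cutoff: float = 0.5):
    """Return the first variant the countermeasure no longer flags, if any."""
    for variant in candidate_transforms(deepfake_audio):
        if spoof_score(variant) <= cutoff:
            return variant
    return None
```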

In an evaluation of Amazon Connect’s voice authentication system, the researchers achieved a 10% success rate with a brief four-second attack, which rose to over 40% in under thirty…
