It turns out that deepfake audio can be even more dangerous than video! According to a Pindrop report, the number of audio deepfakes grew by 760% over the two years 2023-2024.


In an era of escalating attacks, personal awareness appears to be a key barrier protecting people from these threats. This means:

● limited trust in voice assistants,
● knowledge of the social engineering techniques used by fraudsters,
● control over the content you publish on the Internet.

On the systems side, the obvious answer is advanced biometric technologies and methodologies that detect deepfakes in real time.

For example, Pindrop uses a technique called acoustic fingerprinting as one of its capabilities. It involves creating a digital signature for each voice based on its acoustic properties, such as pitch, tone, and cadence; these signatures are then used to compare and match voices across calls and interactions. For more on deepfakes, listen to this podcast with Pindrop CEO Vijay Balasubramaniyan:
https://www.biometricupdate.com/202504/biometric-update-podcast-digs-into-deepfakes-with-pindrop-ceo
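
To make the idea concrete, here is a minimal, illustrative sketch of the general technique: pooling pitch and timbre statistics into a fixed-length "fingerprint" vector and comparing two voices with cosine similarity. This is not Pindrop's implementation (which is proprietary); the librosa-based features and the file names below are our own assumptions.

```python
# Toy acoustic-fingerprint sketch (NOT Pindrop's proprietary pipeline).
import numpy as np
import librosa

def voice_fingerprint(path: str, sr: int = 16000) -> np.ndarray:
    """Toy fingerprint: pitch and timbre statistics pooled over a recording."""
    y, sr = librosa.load(path, sr=sr)
    # Timbre: mean and spread of 20 MFCCs across all frames.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    # Pitch: fundamental-frequency track (pYIN); NaN frames are unvoiced.
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)
    f0 = f0[~np.isnan(f0)]
    # In practice these features should be normalized to a common scale;
    # we skip that here to keep the sketch short.
    return np.concatenate([
        mfcc.mean(axis=1),
        mfcc.std(axis=1),
        np.array([f0.mean(), f0.std()]),
    ])

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity in [-1, 1]; closer to 1 means more alike.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

ref = voice_fingerprint("reference_voice.wav")  # hypothetical file name
probe = voice_fingerprint("incoming_call.wav")  # hypothetical file name
print(f"voice similarity: {similarity(ref, probe):.2f}")
```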

As a reminder, Pindrop is a company based in Atlanta, USA. Its solutions are shaping the future of voice communications, setting the standard for identity, security, and trust in every voice interaction. More at pindrop.com

For several days now, echoes of the high-profile prank on President Duda, who spoke with the Russian pranksters Vladimir Kuznetsov (“Vovan”) and Alexei Stoljarov (“Lexus”) instead of President Macron, have not died down.

As part of our ongoing research and development work, we biometrically analyzed recordings of the pranksters’ voices and compared them with the voice of the real Macron (Polish and English versions). All voice samples were downloaded as individual recordings from the public domain on YouTube. Our goal was to verify the effectiveness of biometric systems in this specific fraud-identification scenario.

What did the BiometrIQ analysis show? The voice of one of the pranksters, “Lexus”, was just over 50% consistent with the voice of the real President of France and as much as 97% consistent with the voice of the fake president. The voice of the second prankster, “Vovan”, showed no similarity (0%) to the fake president.
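
For readers who want to reproduce a comparable experiment, here is a minimal sketch of how a percentage-style voice-similarity score can be obtained with an off-the-shelf speaker-embedding model. We use the open-source Resemblyzer encoder purely for illustration; it is not the BiometrIQ system, and the file names are hypothetical.

```python
# Minimal speaker-verification sketch with the open-source Resemblyzer
# library (NOT BiometrIQ's system). File names are hypothetical.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()  # pretrained speaker-embedding model

def embed(path: str) -> np.ndarray:
    # preprocess_wav resamples, trims silence, and normalizes the audio.
    return encoder.embed_utterance(preprocess_wav(path))

fake_president = embed("prank_call_president.wav")  # hypothetical file
candidates = {
    "Macron (real)": embed("macron_reference.wav"),
    "Lexus": embed("lexus_interview.wav"),
    "Vovan": embed("vovan_interview.wav"),
}

# Embeddings are L2-normalized, so the dot product is cosine similarity;
# scale to a percentage for readability.
for name, emb in candidates.items():
    score = 100 * float(np.dot(fake_president, emb))
    print(f"{name} vs fake president: {score:.0f}%")
```

In a production-grade system, raw scores like these would additionally be calibrated against impostor distributions before being reported as match percentages.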

This clearly demonstrates that, thanks to biometric analysis, we were able to:

=> detect, after only 1 minute of audio, that a fake president was taking part in the conversation

=> identify the person behind the fake president (Lexus)

=> confirm that the public domain is a very rich source of voice samples, which are not always used for noble purposes

=> reinforce the thesis that social engineering attacks are the most effective; in this case, the attackers picked the right moment, a period of heightened stress (the missile incident).