
Deepfake audio more dangerous than video
It turns out that deepfake audio can be even more dangerous than video. According to Pindrop's report, the number of audio deepfakes grew by 760% between 2023 and 2024.
In an era of increasing attacks, personal awareness remains a key line of defense against these threats. This means:
● limited trust in voice assistants,
● knowledge of the social engineering techniques used by fraudsters,
● control over the content you publish on the Internet.
On the systems side, the obvious approach is to deploy advanced biometric technologies and methods that detect deepfakes in real time.
For example, one of Pindrop's capabilities is a technique called acoustic fingerprinting. It involves creating a digital signature for each voice based on its acoustic properties, such as pitch, tone, and cadence. These signatures are then used to compare and match voices across calls and interactions. For more on deepfakes, check out this podcast with Vijay Balasubramaniyan, CEO of Pindrop. Link below:
https://www.biometricupdate.com/202504/biometric-update-podcast-digs-into-deepfakes-with-pindrop-ceo
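To make the idea of acoustic fingerprinting concrete, here is a minimal, illustrative Python sketch. It is NOT Pindrop's actual algorithm: the feature extraction (an autocorrelation-based pitch estimate, RMS energy as a rough "tone" proxy, and zero-crossing rate as a rough "cadence" proxy) and the tolerance-based matching rule are simplified assumptions chosen only to show the compare-and-match pattern the article describes.

```python
import math

def features(signal, sample_rate=8000):
    """Toy acoustic fingerprint: a small feature vector built from
    properties loosely analogous to pitch, tone, and cadence.
    Illustrative only -- not Pindrop's real method."""
    n = len(signal)
    # "Pitch" proxy: dominant period found as the autocorrelation peak.
    best_lag, best_corr = 0, 0.0
    for lag in range(20, min(400, n // 2)):
        corr = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    pitch = sample_rate / best_lag if best_lag else 0.0
    # "Tone" proxy: RMS energy of the signal.
    energy = math.sqrt(sum(x * x for x in signal) / n)
    # "Cadence" proxy: zero-crossing rate.
    zcr = sum(1 for i in range(1, n) if signal[i - 1] * signal[i] < 0) / n
    return [pitch, energy, zcr]

def match(fp_a, fp_b, tol=0.1):
    """Declare a match if every feature agrees within a relative tolerance."""
    return all(abs(x - y) <= tol * max(abs(x), abs(y), 1e-9)
               for x, y in zip(fp_a, fp_b))

def tone(freq, sample_rate=8000, n=2000):
    """Synthetic stand-in for a voice recording: a pure sine tone."""
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

# Two synthetic "voices": a 200 Hz speaker and a 300 Hz speaker.
fingerprint_a = features(tone(200))
print(match(fingerprint_a, features(tone(200))))  # same voice -> True
print(match(fingerprint_a, features(tone(300))))  # different voice -> False
```

A real system would replace these toy features with speaker embeddings learned from large speech corpora, but the workflow is the same: enroll a signature once, then compare incoming audio against it on every call.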
As a reminder, Pindrop is a company based in Atlanta, USA. Its solutions are leading the way for the future of voice communications, setting the standard for identity, security, and trust in every voice interaction. More at pindrop.com