How can voice-based fraud be detected effectively? How do you distinguish a real voice from a fake one, e.g. one generated by AI? The answer is simple: it takes advanced voice biometrics tools and a series of analyses. Here we publish two examples that we analyzed some time ago in our laboratory, where our proprietary algorithm allowed us to assess with very high probability whether a voice was real or fake and to what extent it matched the voice of a given person.

The analyses concern:

● recognizing the voice of one of the Russian pranksters who pretended to be President Macron in a call with President Duda

● assessing the similarity of the voices of actors Piotr Fronczewski and Filip Pławiak in the series Rojst, in which the two men play the same character (Kociołek) in adulthood and in youth, respectively.

We share with you the conclusions from these experiments.

Biometric comparison of the voices of Fronczewski and Pławiak

For this purpose, we used 25 seconds of speech in total from each character, composed of several fragments of their original lines taken from the original film soundtrack. What agreement did we achieve?
The results showed that the actors' voices are biometrically NOT consistent: Pławiak's utterance vs. Fronczewski's VoicePrint yielded only 15% agreement, and Fronczewski's utterance vs. Pławiak's VoicePrint only 11%. Interestingly, these differences are not noticeable by ear: to our ears, the voices of Pławiak and Fronczewski are almost identical. And that, ultimately, is the point.



For both characters, gender and nationality were recognized with minimal uncertainty (score of almost 100%). An age difference between the characters was also detected, estimated at 20 years.
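BiometrIQ's algorithm is proprietary, so we cannot show it here, but the kind of comparison described above is commonly built on embedding vectors: an utterance is converted into a numeric "voiceprint" and compared with a stored one using cosine similarity. The sketch below illustrates only that general idea; the function names, the 4-dimensional vectors and the mapping to a percentage are our own illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (range -1..1)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def agreement_percent(utterance_embedding, voiceprint):
    """Map similarity to a 0-100% agreement score (negative similarity clamps to 0)."""
    return max(0.0, cosine_similarity(utterance_embedding, voiceprint)) * 100

# Hypothetical 4-dimensional embeddings, for illustration only;
# real systems extract vectors with hundreds of dimensions from audio.
plawiak_utterance = [0.9, 0.1, 0.3, 0.2]
fronczewski_vp = [0.1, 0.8, 0.2, 0.9]
score = agreement_percent(plawiak_utterance, fronczewski_vp)
print(f"{score:.0f}% agreement")
```

Dissimilar vectors yield a low percentage even when, as with the two actors, a human ear would call the voices nearly identical.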

Analysis of the voices of the Russian pranksters Vladimir Kuznetsov (Vovan) and Alexei Stolyarov (Lexus), who impersonated President Macron


In this case, we biometrically analyzed the recordings of the pranksters’ voices and compared them with the voice of the real Macron (in both Polish and English versions). We downloaded all voice samples in the form of individual recordings from the public domain on YouTube. Our goal was to confirm the effectiveness of biometric systems for this specific situation – identifying fraud.

It turned out that the voice of one of the pranksters, "Lexus", was just over 50% consistent with the voice of the real President of France and as much as 97% consistent with the voice of the fake president. The voice of the other prankster, "Vovan", showed no similarity (0%) to the fake president.

This clearly shows that, thanks to biometric analysis, we managed to:

● detect, after only 1 minute, that a fake president was taking part in the conversation
● establish the identity of the fake president (Lexus)
● confirm that the public domain is a very good source of voice samples, which will not always be used for noble purposes
● reinforce the thesis that the most effective attacks are those using social engineering – in this case, choosing the right moment, when the President was under increased stress (the missile incident).
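The identification step above boils down to a simple decision rule: compare the suspect recording's agreement score against each enrolled voiceprint and accept the best match only if it clears a threshold. The scores below are the ones reported in our analysis; the function, dictionary structure and 0.8 threshold are illustrative assumptions, not our production logic.

```python
def identify_speaker(scores, threshold=0.8):
    """Return the enrolled identity with the highest agreement score,
    or None if no score clears the acceptance threshold."""
    best_name, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_name if best_score >= threshold else None

# Agreement scores of the fake president's voice, as fractions of 100%
scores_vs_fake_president = {
    "Macron (real)": 0.52,  # just over 50% match with the real president
    "Lexus": 0.97,          # 97% match with prankster Lexus
    "Vovan": 0.00,          # no similarity to prankster Vovan
}
print(identify_speaker(scores_vs_fake_president))  # → Lexus
```

With these numbers only "Lexus" clears the threshold, which is exactly the conclusion the analysis reached.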

These are just selected examples of the use of specialized biometric tools to confirm the identity of people. If implemented in the future, they may help detect voice-based abuse.

According to the Specops Breached Password Report 2025, as many as 230 million stolen traditional passwords met the standard complexity requirements (min. 8 characters, 1 capital letter, 1 digit and a special character). This means that the level of traditional security is insufficient and more effective protection tools are needed. Can a biometric password prove more secure? Absolutely yes!


Biometrics is one of the safer ways of logging in because it is based on people's biometric features, such as the face, the iris of the eye or the voice. Biometric identifiers are unique to a given person and distinguish them from everyone else.

The advantage of biometrics lies in its unrivaled accuracy and convenience. Unlike traditional methods such as passwords or PINs, which can be easily forgotten or stolen, biometric identifiers are inextricably linked to people. This inherent link between individuals and their biometric characteristics makes it much more difficult for unauthorized individuals to impersonate another person.


An example of secure login using biometrics is VoiceToken from BiometrIQ, a voice authentication tool that provides very strong two-step authentication. Here is a reminder of how it works.


As the user reads the words aloud, the system verifies both that the spoken words match the displayed pattern (first level) and that the speaker's voice biometrically matches their VoicePrint (second level).


Extremely high security is ensured by the algorithm for selecting the words to be read, which reduces the possibility of guessing the sequence of words displayed on the screen to almost zero. The Speech-To-Text (STT) mechanism combined with an innovative biometric engine guarantees high effectiveness, even against attacks based on speech synthesis.
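VoiceToken's internals are not public, so the following is only a minimal sketch of the two-level flow as described above: a hard-to-guess word challenge, a transcript check against it (level 1), and a voiceprint score check (level 2). The vocabulary, function names and 0.85 threshold are hypothetical.

```python
import random

def generate_challenge(vocabulary, n_words=4):
    """Pick a hard-to-guess sequence of words for the user to read aloud."""
    rng = random.SystemRandom()  # OS entropy, not a predictable seed
    return [rng.choice(vocabulary) for _ in range(n_words)]

def verify(challenge, transcript, voice_score, voice_threshold=0.85):
    """Two-level check: (1) the spoken words match the displayed challenge,
    (2) the speaker's voice matches the enrolled VoicePrint."""
    words_ok = [w.lower() for w in transcript] == [w.lower() for w in challenge]
    voice_ok = voice_score >= voice_threshold
    return words_ok and voice_ok

vocab = ["river", "candle", "orbit", "meadow", "copper", "violin"]
challenge = generate_challenge(vocab)
# In a real system the transcript comes from an STT engine and the
# voice_score from a biometric engine; here we fake both for illustration.
print(verify(challenge, challenge, voice_score=0.91))  # → True
```

A replayed recording fails level 1 (the word sequence changes every time), while a synthesized voice reading the right words is meant to fail level 2.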

 More about VoiceToken

Are you ready for changes?


We start the New Year with something interesting for fans of football and other live events. The typical, traditional ticket may soon be replaced by biometric identity verification. Research shows that nearly 50% of sports venues plan to introduce it.

MLB’s ticketless “Go-Ahead Entry” system with biometric verification can reduce waiting times to enter the facility by almost 70%.

That's why biometric authentication at sporting events could become a hit. The challenge, as always with large projects, is the financial aspect: the costs of system implementation and of investments in specialized equipment, training and tool integration. According to the PYMNTS report, it is this aspect that may determine whether smaller venues implement biometric solutions.

Go-Ahead Entry is MLB's patented ticketless entry system using biometrics provided by NEC. The statistics come from a 2023 pilot at Citizens Bank Park in Philadelphia, which showed that biometric lanes move 68% faster and admit 2.5 times more people than the fastest lane using physical or smartphone tickets.

We are curious how this situation will develop. Do you think this is a good idea?

You can find out more in the original article here


At the end of the year, Radio Lublin aired an interesting program on network security. It covered, among other things, issues of digital identity, privacy and its protection, as well as the rapidly growing amount of data on the Internet and the impact of this phenomenon on the development of tools, AI and the life of society. The conversation took place as part of the series "XX/XXI – a garden with forking paths", and the participant in this inspiring conversation was Andrzej Tymecki, Managing Director of BiometrIQ. Below we share some selected statements that briefly summarize this meeting.

👉 The amount of data is growing incredibly fast. To illustrate the point: if we recorded the data produced in the world in one day onto DVDs and stacked them one on top of another, the stack would stretch 106,000 km, which means it would be long enough to circle the Earth more than twice.

👉 The concept of privacy is constantly evolving as technology advances and new opportunities emerge. It is no longer just traditional personal data such as name, PESEL number or address, but also photos and biometric features such as the voice, fingerprints or the iris. This makes protecting privacy more and more difficult.

👉 Nowadays, identity theft is very easy to commit due to technological advances and the development of AI. Protecting against loss of privacy and identity theft requires caution when posting any content online.

👉 The right to be forgotten, established in EU law in 2014 and later codified in the GDPR, allows us to have our data deleted from databases.

👉 Advanced methods of protecting recordings, e.g. watermarks, may not be enough to protect against identity theft. What is also needed is the good will of the people on the other side (platform operators) who will verify the authenticity of recordings.

👉 Procedures are a very important element of security. Procedures cannot be replaced by technology; technology should support and facilitate their implementation.


👉 Data centers play a key role in security; there are as many as 144 of them in Poland. The United States is undoubtedly a power in this respect, with approximately 5,300 facilities.
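The "stack of DVDs" illustration above can be sanity-checked with simple arithmetic, assuming a standard DVD is 1.2 mm thick, holds 4.7 GB (single layer) and the Earth's equatorial circumference is about 40,075 km:

```python
# Back-of-envelope check of the "stack of DVDs" illustration.
DVD_THICKNESS_MM = 1.2            # standard DVD thickness
DVD_CAPACITY_GB = 4.7             # single-layer DVD capacity
EARTH_CIRCUMFERENCE_KM = 40_075   # equatorial circumference

stack_km = 106_000
n_dvds = stack_km * 1_000_000 / DVD_THICKNESS_MM   # km -> mm, then count
laps = stack_km / EARTH_CIRCUMFERENCE_KM           # trips around the Earth
data_eb = n_dvds * DVD_CAPACITY_GB / 1e9           # GB -> exabytes

print(f"{n_dvds:.2e} DVDs, {laps:.1f} laps, ~{data_eb:.0f} EB per day")
```

The numbers come out to roughly 88 billion DVDs, about 2.6 trips around the Earth (so "more than twice" holds), and an implied data volume in the hundreds of exabytes per day, which is consistent with commonly cited estimates of global daily data creation.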

You can read about the program itself, as well as the entire series, here

Voice payments? Why not! Nearly 50% of Poles would like to use innovative payment methods based on biometrics. Authorizing payments by voice was indicated by 7% of Poles, by face by 8%, and by iris by 9%. Payments using fingerprints and palms turned out to be the most popular, indicated by 20% of respondents.

This is according to the report "Payment preferences of Poles 2024", conducted this year (2024) by PolCard from Fiserv. You can read more about this study here

In our opinion, the use of biometric authorization will grow year by year, mainly due to its high effectiveness. Biometric security is simply safer than traditional methods such as passwords and is much more difficult to bypass. The main factors determining the development of biometric technologies will be:


👉 security
👉 privacy protection
👉 trust in technology and
👉 implementation costs

And you? Which method would you most like to use?

The incident involving Scarlett Johansson confirms that legal regulations and effective tools are currently the highest priority in preventing deepfakes, including voice-based attacks. The illegal use of famous people's voices to promote or discredit them is common and poses quite a challenge in the world of social media.

The dispute Johansson is having with OpenAI, which allegedly used her voice from the movie "Her" to create a ChatGPT voice assistant, is a perfect example of how easily a voice can be exploited and how difficult it is to prove that a voice belongs to one person and not another.

In short, despite Johansson declining to license her voice for a voice assistant, OpenAI presented a ChatGPT voice called "Sky" that was confusingly similar to hers.
The lack of legal protection in this area unfortunately does not work in the actress's favor. However, the case clearly draws attention to the need to protect the creative work of artists used to power artificial intelligence tools.

You can read about the use of S. Johansson’s voice in the original article

https://www.npr.org/2024/05/20/1252495087/openai-pulls-ai-voice-that-was-compared-to-scarlett-johansson-in-the-movie-her

We’re talking about this for a reason. When it comes to anti-deepfake tools, BiometrIQ, as a research company, specializes in creating algorithms that help detect fraud by comparing real voices with those generated by AI. Using proprietary tools, we can assess with very high certainty whether a voice has been faked or not. Using a biometric-based algorithm is certainly the most effective way to combat deepfakes on the Internet.

We also have an algorithm that helps mark recordings already at the stage of their creation, so that they cannot be effectively used for further conversion or voice synthesis. Such a tool would certainly help reduce cases of voice theft.
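The design of our marking tool is not public, but a classic technique in this space is spread-spectrum audio watermarking: a secret-keyed, low-amplitude pseudo-random pattern is added to the signal and later detected by correlation. The sketch below illustrates only that textbook idea, with made-up parameters; it is not BiometrIQ's algorithm.

```python
import random

def prn_sequence(key, length):
    """Keyed pseudo-random +/-1 sequence; the key acts as the secret."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(length)]

def embed(samples, key, strength=0.01):
    """Add a low-amplitude keyed pattern to the audio samples."""
    pattern = prn_sequence(key, len(samples))
    return [s + strength * p for s, p in zip(samples, pattern)]

def detect(samples, key, strength=0.01):
    """Correlate against the keyed pattern; ~1.0 if marked, ~0.0 if not."""
    pattern = prn_sequence(key, len(samples))
    corr = sum(s * p for s, p in zip(samples, pattern)) / len(samples)
    return corr / strength

audio = [0.1 * ((i % 7) - 3) for i in range(20_000)]  # stand-in signal
marked = embed(audio, key=12345)
print(round(detect(marked, key=12345), 2), round(detect(audio, key=12345), 2))
```

Without the key the pattern looks like faint noise, so the mark is hard to remove; a detector holding the key gets a strong correlation score, while unmarked audio (or a wrong key) scores near zero.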

Read more: Legal regulation can prevent deepfakes. Scarlett Johansson’s case

It's already happening: age estimation using biometrics at the point of sale of age-restricted goods in the UK. Will such solutions find their way to Poland?

Innovative Technology (ITL) has confirmed a partnership with the local authority Buckinghamshire & Surrey Trading Standards for the use of its biometric age assessment technology by retailers selling age-restricted goods.

ITL’s biometric age estimation products, MyCheckr and MyCheckr Mini, anonymously estimate age at the point of sale, thereby preventing minors from accessing alcohol and cigarettes.

https://www.biometricupdate.com/202402/global-demand-for-age-estimation-verification-drives-deals-for-itl-new-entrants
Read more: Age verification for purchases of limited sale items

Is this phenomenon regulated by law in any way? And if so, are these regulations sufficient to protect the author? After all, the voice is not only an element of a person's image but a unique biometric feature used for identification. Therefore, it should be protected in the same way as personal data.

Progress in artificial intelligence means that voice-based fraud (impersonating other people) is becoming more and more common, serving financial fraud, political attacks, data theft and unfair promotion. Do you remember the fake voice of US President Joe Biden used in automated telephone calls discouraging participation in the primary elections, or Taylor Swift's fake voice in an ad phishing for data under the guise of a cookware giveaway? These are examples of how quickly and effectively technology can be used for unfair purposes. Unfortunately, the scale of this type of abuse will only increase.

Voice synthesis is also increasingly used in film productions. Take the cult Top Gun: Maverick, in which Val Kilmer's voice was synthesized, or the last season of the Polish series Rojst, in which Filip Pławiak (the young Kociołek) speaks with the voice of Piotr Fronczewski (Kociołek). This raises the question of consent to the use of an actor's voice, a procedure that takes on new weight in the era of AI. While this aspect was taken into account and regulated for the production of Rojst, the question remains how to handle voices synthesized for actors after their death.

Having the appropriate tools at our disposal, we attempted a biometric comparison of the voices of Fronczewski and Pławiak. The results show that their voices are biometrically NOT consistent (Pławiak's utterance vs. Fronczewski's VoicePrint – only 15% agreement; Fronczewski's utterance vs. Pławiak's VoicePrint – 11%), yet interestingly these differences are not noticeable by ear: to our ears, the voices of Pławiak and Fronczewski are almost identical. And that is exactly the point.

For both characters, gender and nationality were recognized with minimal uncertainty (a score of almost 100%). An age difference between the characters was also detected, estimated at 20 years.
The study was carried out in our digital signal processing laboratory. For this purpose, we used 25 seconds of speech in total from each character, composed of several fragments of their original lines taken from the original film soundtrack.

The conclusions from this experiment show how helpful and effective biometrics can be in identifying a speaker, assessing the authenticity of their voice and, consequently, detecting voice-based fraud. Will this be enough to limit the unfair use of famous people's voices in the future? And most importantly, are we able to regulate the market so as to protect the voices of famous people after their death?

For several days now, echoes of the high-profile prank on President Duda, who instead of President Macron spoke with the Russian pranksters Vladimir Kuznetsov (Vovan) and Alexei Stolyarov (Lexus), have not died down.

As part of our current research and development activity, we biometrically analyzed the recordings of the pranksters’ voices and compared them with the voice of the real Macron (Polish and English versions). We downloaded all voice samples in the form of individual recordings from the public domain on YouTube. Our goal was to confirm the effectiveness of biometric systems for this specific situation – identifying fraud.

What did the BiometrIQ analysis show? It turned out that the voice of one of the pranksters, "Lexus", was just over 50% consistent with the voice of the real President of France and as much as 97% consistent with the voice of the fake president. The voice of the other prankster, "Vovan", showed no similarity (0%) to the fake president.

This clearly shows that, thanks to biometric analysis, we managed to:

=> detect, after only 1 minute, that a fake president was taking part in the conversation

=> establish the identity of the fake president (Lexus)

=> confirm that the public domain is a very good source of voice samples, which will not always be used for noble purposes

=> reinforce the thesis that attacks using social engineering are the most effective – in this case, choosing the right moment, when stress was heightened (the missile incident).