The Scarlett Johansson incident confirms that legal regulation and effective tools are now the highest priority when it comes to preventing deepfakes, in this case voice-based attacks. The illegal use of famous people’s voices to promote products or to discredit them is common and poses a serious challenge in the world of social media.

Johansson’s dispute with OpenAI, which allegedly used her voice from the movie “Her” to create a ChatGPT voice assistant, is a perfect example of how easily a voice can be appropriated and how difficult it is to prove that a voice belongs to one person rather than another.

In short, although Johansson refused to license her voice for a voice assistant, OpenAI presented a voice product called “Sky” that sounded confusingly similar to her.
The lack of legal protection in this area unfortunately does not work in the actress’s favor. The case does, however, draw clear attention to the need to protect the creative work of artists that is used to power artificial intelligence tools.

You can read about the use of S. Johansson’s voice in the original article:

https://www.npr.org/2024/05/20/1252495087/openai-pulls-ai-voice-that-was-compared-to-scarlett-johansson-in-the-movie-her

We’re talking about this for a reason. When it comes to anti-deepfake tools, BiometrIQ, as a research company, specializes in creating algorithms that help detect fraud by comparing genuine voices with AI-generated ones. Using our proprietary tools, we can assess with very high confidence whether a voice has been faked. A biometric algorithm is certainly the most effective way to combat deepfakes on the Internet.
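To illustrate the general idea (our own algorithms remain proprietary), here is a minimal sketch of such a comparison using an open-source speaker-embedding model; the model choice and file names are assumptions for the example, not our production pipeline:

```python
# A minimal sketch of biometric voice comparison, assuming an open-source
# speaker-embedding model (SpeechBrain's ECAPA-TDNN). This is a generic
# illustration, not BiometrIQ's proprietary algorithm; file names are
# placeholders.
from speechbrain.inference.speaker import SpeakerRecognition

verifier = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_ecapa",
)

# verify_files returns a cosine-similarity score and a same/different decision
score, same_speaker = verifier.verify_files("reference.wav", "suspect.wav")
print(f"similarity: {score.item():.3f}, same speaker: {bool(same_speaker)}")
```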

We also have an algorithm that marks recordings at the moment they are created, so that they cannot be effectively used for later voice conversion or synthesis. Such a tool would certainly help reduce cases of voice theft.
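The details of such marking are proprietary, but the underlying concept resembles audio watermarking: an inaudible signature keyed by a secret is embedded in the signal and later detected by correlation. A toy sketch of that concept, not our production method:

```python
# Toy spread-spectrum watermark: embed a low-amplitude pseudorandom carrier
# keyed by a secret seed, then detect it by normalized correlation. Real
# anti-synthesis marking is far more sophisticated; this only shows the idea.
import numpy as np

def embed_watermark(audio: np.ndarray, seed: int, strength: float = 0.002) -> np.ndarray:
    carrier = np.random.default_rng(seed).standard_normal(audio.shape)
    return audio + strength * carrier            # inaudible at low strength

def detect_watermark(audio: np.ndarray, seed: int, threshold: float = 0.01) -> bool:
    carrier = np.random.default_rng(seed).standard_normal(audio.shape)
    corr = np.dot(audio, carrier) / (np.linalg.norm(audio) * np.linalg.norm(carrier))
    return corr > threshold                      # keyed correlation stands out

fs = 16_000
speech = 0.1 * np.random.default_rng(0).standard_normal(10 * fs)  # stand-in signal
marked = embed_watermark(speech, seed=42)
print(detect_watermark(marked, seed=42))   # True  - watermark present
print(detect_watermark(speech, seed=42))   # False - clean recording
```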

Read more: Legal regulation can prevent deepfakes. Scarlett Johansson’s case

It’s already happening: biometric age estimation at the point of sale of age-restricted goods in the UK. Will such solutions come to Poland?

Innovative Technology (ITL) has confirmed a partnership with the local authority Buckinghamshire & Surrey Trading Standards for the use of its biometric age assessment technology by retailers selling age-restricted goods.

ITL’s biometric age estimation products, MyCheckr and MyCheckr Mini, anonymously estimate age at the point of sale, thereby preventing minors from accessing alcohol and cigarettes.

https://www.biometricupdate.com/202402/global-demand-for-age-estimation-verification-drives-deals-for-itl-new-entrants
Read more: Age verification for purchases of age-restricted items

Is this phenomenon regulated by law in any way? And if so, are these regulations sufficient to protect the author? After all, the voice is not only an element of a person’s likeness but also a unique biometric feature used for identification. It should therefore be protected in the same way as personal data.

Progress in artificial intelligence means that voice-based impersonation is becoming more and more common and is used for financial fraud, political attacks, data theft, and unfair promotion. Do you remember the fake voice of US President Joe Biden used in automated telephone calls discouraging participation in the primary elections, or the fake voice of Taylor Swift in a scam ad phishing for personal data under the guise of a cookware giveaway? These are examples of how quickly and effectively the technology can be put to dishonest use. Unfortunately, the scale of this type of abuse will only increase.

Voice cloning is also becoming more and more popular in film productions. Take the cult Top Gun: Maverick, where Val Kilmer’s voice was synthesized, or the latest season of the Polish series Rojst, in which Filip Pławiak (the young Kociołek) speaks with the voice of Piotr Fronczewski (the older Kociołek). This raises the question of consent to the use of an actor’s voice, a problem we increasingly face in the era of the AI invasion. While this aspect was taken into account and contractually settled for Rojst’s production, the question remains open for films that synthesize actors’ voices after their death. We also looked at the Rojst case from the perspective of biometric analysis.

Having the appropriate tools at our disposal, we attempted a biometric comparison of Fronczewski’s and Pławiak’s voices. The results show that their voices are biometrically NOT consistent (Pławiak’s utterance vs. Fronczewski’s voiceprint (VP): only 15% agreement; Fronczewski’s utterance vs. Pławiak’s VP: 11%), yet interestingly these differences are not noticeable to the ear: to a listener, the voices of Pławiak and Fronczewski sound almost identical. And that is exactly the point.

For both actors, gender and nationality were recognized with minimal uncertainty (scores close to 100%). An age difference between them was also detected, estimated at 20 years.
The study was carried out in our digital signal processing laboratory. For each actor we used a total of 25 seconds of speech, assembled from several fragments of their original lines taken from the film’s soundtrack.
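For readers curious how such percentage scores can arise, here is a hypothetical reconstruction of the setup with an open-source embedding model: one actor’s fragments are averaged into a voiceprint, and the other actor’s utterance is scored against it. The file names and the similarity-to-percentage mapping are illustrative assumptions, not the actual BiometrIQ pipeline:

```python
# Hypothetical reconstruction of the test setup: average one actor's
# fragments into a voiceprint (VP), then score the other actor's utterance
# against it. File names and the percentage mapping are assumptions.
import torch
from speechbrain.inference.speaker import SpeakerRecognition

model = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_ecapa",
)

def embedding(path: str) -> torch.Tensor:
    wav = model.load_audio(path)                     # mono tensor, 16 kHz
    return model.encode_batch(wav.unsqueeze(0)).squeeze()

def voiceprint(paths: list[str]) -> torch.Tensor:
    # simple enrollment: mean of per-fragment embeddings
    return torch.stack([embedding(p) for p in paths]).mean(dim=0)

def agreement(utterance: str, vp: torch.Tensor) -> float:
    cos = torch.nn.functional.cosine_similarity(embedding(utterance), vp, dim=0)
    return max(cos.item(), 0.0) * 100                # crude percentage mapping

vp_f = voiceprint(["fronczewski_1.wav", "fronczewski_2.wav"])
print(f"Pławiak utterance vs. Fronczewski VP: {agreement('plawiak.wav', vp_f):.0f}%")
```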

The conclusions from this experiment show how helpful and effective biometrics can be in identifying a speaker, assessing the authenticity of their voice and, consequently, detecting voice-based fraud. Will this be enough to limit the unfair use of famous people’s voices in the future? And most importantly, are we able to regulate the market so as to protect the voices of famous people after their death?