Can legal regulation prevent deepfakes? Scarlett Johansson's case

The incident involving Scarlett Johansson confirms that legal regulation and effective detection tools are now a top priority in preventing deepfakes, including voice-based attacks. The unauthorized use of famous people's voices, whether to promote products or to discredit the people themselves, is common and poses a serious challenge in the world of social media.

The dispute between Johansson and OpenAI, which allegedly used her voice from the movie "Her" to create a ChatGPT voice assistant, is a telling example of how easily a voice can be imitated and how difficult it is to prove that a voice belongs to one person rather than another.

In short, although Johansson declined to license her voice for a chat voice assistant, OpenAI presented a ChatGPT voice called "Sky" that was confusingly similar to hers.
The lack of legal protection in this area unfortunately does not work in the actress's favor. However, the case clearly highlights the need to protect artists' creative work when it is used to power artificial intelligence tools.

You can read more about the use of Scarlett Johansson's voice in the original article.

We are not mentioning this case without reason. When it comes to anti-deepfake tools, BiometrIQ, as a research company, specializes in algorithms that help detect fraud by comparing real voices with AI-generated ones. Using proprietary tools, we can determine with very high confidence whether a voice has been faked. Biometric-based algorithms of this kind are certainly among the most effective ways to combat deepfakes on the Internet.
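The general idea behind such a comparison can be illustrated with a toy sketch. This is not BiometrIQ's proprietary algorithm (which is not public); it only shows the common pattern: reduce each recording to a fixed-length "voiceprint" vector, then compare voiceprints with cosine similarity against a threshold. The band-averaged spectrum used here is a deliberately simplistic stand-in for real speaker embeddings.

```python
import numpy as np

def embed(signal, n_bands=64):
    # Toy "voiceprint": the magnitude spectrum averaged over coarse
    # frequency bands, normalized to unit length. Real systems use
    # learned speaker embeddings instead of raw spectral bands.
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    vec = np.array([b.mean() for b in bands])
    return vec / (np.linalg.norm(vec) + 1e-12)

def same_speaker(sig_a, sig_b, threshold=0.95):
    # Cosine similarity of unit vectors is just their dot product.
    return float(np.dot(embed(sig_a), embed(sig_b))) >= threshold

# Synthetic stand-ins for recordings: two takes with the same spectral
# profile (same "voice") and one with a different profile (a "fake").
t = np.linspace(0, 1, 16000, endpoint=False)
genuine = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
repeat  = np.sin(2 * np.pi * 220 * t + 0.1) + 0.3 * np.sin(2 * np.pi * 440 * t + 0.1)
fake    = np.sin(2 * np.pi * 330 * t) + 0.3 * np.sin(2 * np.pi * 990 * t)

print(same_speaker(genuine, repeat))  # True: matching spectral profile
print(same_speaker(genuine, fake))    # False: energy in different bands
```

A production system differs mainly in the embedding: instead of spectral bands it uses features trained to capture speaker identity and the artifacts that synthesis models leave behind, but the compare-and-threshold structure is the same.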

We also have an algorithm that, already at the recording stage, marks audio so that it cannot be effectively used for further voice conversion or synthesis. Such a tool would certainly help reduce cases of voice theft.
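One simple, well-known family of techniques in this space is spread-spectrum audio watermarking, sketched below under our own assumptions (the article does not describe the actual algorithm, and the sketch covers only marking and detection, not the adversarial perturbations that actively disrupt synthesis models). A low-level pseudorandom carrier, derived from a secret key, is added to the recording; later, correlating a suspect file against the keyed carrier reveals whether the mark is present.

```python
import numpy as np

def embed_watermark(signal, key, strength=0.1):
    # Add a low-amplitude pseudorandom carrier derived from a secret key.
    rng = np.random.default_rng(key)
    carrier = rng.standard_normal(len(signal))
    return signal + strength * carrier

def detect_watermark(signal, key, strength=0.1):
    # Correlate against the keyed carrier. For a marked file the
    # normalized correlation is close to `strength`; for an unmarked
    # file (or the wrong key) it is close to zero.
    rng = np.random.default_rng(key)
    carrier = rng.standard_normal(len(signal))
    score = np.dot(signal, carrier) / len(signal)
    return score > strength / 2

# Synthetic stand-in for a recording.
t = np.linspace(0, 1, 16000, endpoint=False)
clean = np.sin(2 * np.pi * 220 * t)
marked = embed_watermark(clean, key=42)

print(detect_watermark(marked, key=42))  # True: mark present, correct key
print(detect_watermark(clean, key=42))   # False: no mark embedded
print(detect_watermark(marked, key=7))   # False: wrong key, wrong carrier
```

The design choice worth noting is that detection requires the key: without it, the carrier looks like faint noise, which is what makes the mark hard for a voice-cloning pipeline to strip out.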