Ideas for using the latest technologies can be surprising. Deadbots (also known as griefbots or post-mortem avatars) have appeared on the market: replicas of deceased people that communicate with their loved ones in their native language and in their own voices. These are applications or computer programs built on data gathered from the Internet, intended to create the illusion of the deceased person's presence and to provide emotional support after the death of a loved one.

Although the goal seems right, scientists from the University of Cambridge point to a number of threats associated with this technology. Based on three analyzed scenarios (selling products using the image of the deceased, a parent avatar for a child, and purchasing a long-term deadbot subscription for loved ones), the main threats are:

=> the possibility of manipulating and influencing people in mourning

=> using a person's image after death without their prior consent (this aspect needs to be regulated by obtaining consent for such use, including consent regarding the voice)

=> monetization of the mourning experience and attempts by companies producing deadbots to circumvent regulations for sales purposes

=> the unfavorable impact of the technology on certain social groups, mainly children (pointing to the need for an age limit on the use of this type of solution)

According to Newseria, the scientists do not reject this solution outright. They point to its benefits:

=> public education, with the deadbot serving as an intergenerational exchange of stories and experiences (e.g. Holocaust survivors talking about what they lived through)

=> a source of income for families after the death of famous artists or journalists

Deadbots are another example of the need to implement legal regulations for services built on AI. This would prevent infringements related to the use of a person's image and voice after their death.

What do you think about deadbots? Are you convinced by services of this kind?

More here https://biznes.newseria.pl/news/deadboty-moga-byc,p262956223


Is this phenomenon regulated by law in any way? And if so, are these regulations sufficient to protect the person concerned? After all, the voice is not only an element of one's image but a unique biometric feature used for identification. It should therefore be protected in the same way as personal data.

Progress in artificial intelligence means that voice-based fraud (impersonating other people) is becoming more and more common, serving financial fraud, political attacks, data theft and unfair promotion. Do you remember the fake voice of US President Joe Biden used in automated telephone calls discouraging participation in the primary elections, or Taylor Swift's fake voice in an ad phishing for personal data under the guise of a cookware giveaway? These examples show how quickly and effectively the technology can be put to unfair use. Unfortunately, the scale of this type of abuse will only increase.

Voice synthesis is also becoming more and more popular in film productions. Take the cult Top Gun: Maverick, in which Val Kilmer's voice was synthesized, or the latest season of the Polish series Rojst, in which Filip Pławiak (the young Kociołek) speaks with the voice of Piotr Fronczewski (the older Kociołek). Voice de-aging of this kind is no longer a challenge in the era of AI. While this aspect was taken into account and contractually regulated for the needs of Rojst's production, the question remains open for films in which actors' voices are synthesized after their death. This is also where biometric analysis comes in.

Having the appropriate tools at our disposal, we attempted a biometric comparison of the voices of Fronczewski and Pławiak. The results show that their voices are biometrically NOT consistent (Pławiak's utterance vs. Fronczewski's voiceprint: only 15% agreement; Fronczewski's utterance vs. Pławiak's voiceprint: 11%), yet interestingly these differences are not noticeable by ear. To our ears, the voices of Pławiak and Fronczewski sound almost identical. And that is exactly the point.

For both characters, gender and nationality were recognized with minimal uncertainty (scores of almost 100%). An age difference between the characters was also detected, estimated at 20 years.
The study was carried out in our digital signal processing laboratory. We used a total of 25 seconds of speech from the two characters, composed of several fragments of their original lines taken from the original film soundtrack.
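To give a feel for what such a comparison involves, here is a minimal sketch of speaker verification based on voiceprint similarity. It uses the open-source resemblyzer library (not the tooling from our laboratory, whose scoring scale also differs from the percentages above), and the file names plawiak.wav and fronczewski.wav are hypothetical placeholders.

```python
# A minimal sketch of voiceprint comparison, assuming the open-source
# resemblyzer library; the audio file names are hypothetical examples.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

# Load and normalize the recordings (resampling, silence trimming).
wav_a = preprocess_wav("plawiak.wav")
wav_b = preprocess_wav("fronczewski.wav")

# Derive a fixed-length speaker embedding (a "voiceprint") from each recording.
vp_a = encoder.embed_utterance(wav_a)
vp_b = encoder.embed_utterance(wav_b)

# Cosine similarity between the voiceprints: values near 1.0 suggest the
# same speaker, noticeably lower values suggest different speakers.
similarity = float(np.dot(vp_a, vp_b) / (np.linalg.norm(vp_a) * np.linalg.norm(vp_b)))
print(f"Voiceprint similarity: {similarity:.2f}")
```

In practice, a verification system compares this similarity score against a calibrated threshold, which is why two voices that sound almost identical to the ear can still score low biometrically.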

The conclusions from this experiment show how helpful and effective biometrics can be in identifying a speaker, assessing the authenticity of their voice and, consequently, detecting voice-based fraud. Will this be enough to limit the unfair use of famous people's voices in the future? And most importantly, are we able to regulate the market so as to protect the voices of famous people after their death?