AI was ubiquitous at the exhibition. Many companies presented their latest systems for autonomous communication with people. The humanoid robot Ameca (shown at the Etisalat stand) aroused great interest as it interacted with visitors, and the interactive-agent stands (Amdocs) offered almost unbelievable quality of system-generated image and speech.

Google unveiled Gemini Live, its response to ChatGPT’s voice mode. Gemini Live includes a Share Screen With Live feature that lets Gemini interact with the image displayed on the phone’s screen. Deutsche Telekom indicated a possible direction for the development of phones by turning the entire device into a chatbot: the phone has no applications and acts as a personal assistant that communicates with the user by voice. The solution is built on a digital assistant from Perplexity AI, but it is also to be opened to, among others, Google Cloud AI, ElevenLabs, and Picsart. The South Korean startup Newnal presented a new mobile operating system that uses historical and current user data to create a personalized AI assistant, intended eventually to become an AI avatar that behaves just like the user.

All of the above solutions, like many others, rely on voice technologies for two-way communication. The direction indicated at MWC 2025 is clear: our actions will be supported by avatars and bots that communicate with us autonomously. Quick, machine-based confirmation of who we are talking to is therefore becoming more important than ever, because the quality of autonomous voice communication systems is now so high that a human can no longer reliably verify the speaker.
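To give a sense of how such machine confirmation works, here is a minimal sketch of speaker verification by comparing voice embeddings with cosine similarity. Everything here is an illustrative assumption: the embedding vectors are made up, and the 0.75 threshold is arbitrary. In real systems the embeddings come from a trained speaker-encoder model and the threshold is calibrated on evaluation data.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_speaker(enrolled_emb, probe_emb, threshold=0.75):
    """Accept the probe as the enrolled speaker if similarity clears the threshold."""
    return cosine_similarity(enrolled_emb, probe_emb) >= threshold

# Dummy embeddings: one probe close to the enrolled voice, one impostor.
enrolled = [0.9, 0.1, 0.4]
genuine_probe = [0.88, 0.12, 0.38]
impostor_probe = [0.1, 0.9, -0.3]

print(same_speaker(enrolled, genuine_probe))   # similarity near 1, accepted
print(same_speaker(enrolled, impostor_probe))  # low similarity, rejected
```

The design point is that verification reduces to a distance check in embedding space, which is why deepfaked voices that land close to a victim's embedding are such a concern.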

Photos by Andrzej Tymecki

 


At the end of the year, Radio Lublin hosted an interesting program on network security. It covered, among other things, digital identity, privacy and its protection, the rapidly growing amount of data on the Internet, and the impact of this growth on the development of tools and AI and on the life of society. The conversation took place as part of the series “XX/XXI – a garden with forking paths”, and the guest of this inspiring conversation was Andrzej Tymecki, Managing Director of BiometrIQ. Below we share selected statements that form a short summary of the meeting.

👉 The amount of data is growing incredibly fast. To illustrate: if we recorded the data produced in the world in a single day onto DVDs and stacked them one on top of another, the stack would stretch 106,000 km, enough to circle the Earth more than twice.
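As a sanity check on that illustration, one can redo the arithmetic under a few stated assumptions (a standard DVD is 1.2 mm thick and holds 4.7 GB, and the Earth's circumference is about 40,075 km; none of these figures come from the program itself):

```python
# Rough sanity check of the "DVDs around the Earth" illustration.
# Assumed figures (not from the source): DVD thickness 1.2 mm,
# DVD capacity 4.7 GB, Earth's circumference ~40,075 km.
DVD_THICKNESS_MM = 1.2
DVD_CAPACITY_GB = 4.7
EARTH_CIRCUMFERENCE_KM = 40_075

stack_km = 106_000                                  # stack length quoted in the program
discs = stack_km * 1_000_000 / DVD_THICKNESS_MM     # mm in the stack / mm per disc
daily_data_eb = discs * DVD_CAPACITY_GB / 1e9       # GB -> exabytes
laps_around_earth = stack_km / EARTH_CIRCUMFERENCE_KM

print(f"{discs:.2e} discs, ~{daily_data_eb:.0f} EB/day, {laps_around_earth:.2f} laps")
```

Under these assumptions the stack works out to roughly 400-odd exabytes per day and between two and three laps around the Earth, so the "more than twice" claim is consistent.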

👉 The concept of privacy is constantly evolving as technology advances and new possibilities emerge. It is no longer just traditional personal data such as name, PESEL number, or address, but also photos and biometric features such as voice, fingerprints, and iris. This makes protecting privacy increasingly difficult.

👉 Nowadays, identity theft is very easy to commit due to advances in technology and the development of AI. Protecting against loss of privacy and identity theft requires caution when posting any content online.

👉 The right to be forgotten, established in EU law in 2014 and now codified in the GDPR, allows us to request the deletion of our data from a database.

👉 Using advanced methods to protect recordings, e.g. watermarks, may not be enough to protect against identity theft. What is also needed is the good will of the people on the other side (platform operators), who must verify the authenticity of the recordings.
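As a toy illustration of the watermarking idea, the sketch below hides a bit pattern in the least significant bits of audio samples. This is purely didactic: LSB embedding is trivially destroyed by re-encoding, and real forensic audio watermarks use far more robust, perceptually shaped schemes.

```python
def embed_watermark(samples, bits):
    """Hide one watermark bit in the least significant bit of each sample."""
    marked = list(samples)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit   # clear the LSB, then set it to the bit
    return marked

def extract_watermark(samples, n_bits):
    """Read the watermark back out of the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]

# A few fake 16-bit PCM samples and a short watermark pattern.
audio = [1203, -4821, 301, 15998, -77, 6400, -12345, 2]
mark = [1, 0, 1, 1, 0, 0, 1, 0]

marked_audio = embed_watermark(audio, mark)
print(extract_watermark(marked_audio, len(mark)))  # recovers the embedded bits
```

Each sample changes by at most one quantization step, which is inaudible, yet the pattern survives in the file and can be checked later, which is exactly the role the statement above assigns to platform-side verification.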

👉 Procedures are a very important element of ensuring security. They cannot be replaced by technology; technology is there to support and facilitate their implementation.


👉 Data centers play a key role in security. Poland has as many as 144 of them, while the United States is undoubtedly a power in this respect, with approximately 5,300 facilities.

You can read about the program itself, as well as the entire series, here.

The language of biometrics is unfamiliar to most, even as the technology becomes ubiquitous.

People using biometric data do not necessarily know that it is “biometric data”.

A team of German researchers from the Bundeswehr University of Munich and the University of Duisburg-Essen conducted an online survey covering participants’ general understanding of physiological and behavioral biometrics and their perceived usefulness and security. Key research questions focused on literacy, perception and use, and usability and security.

Do people know what biometrics are?

What value do they see in using them?

What makes systems seem useful and secure?

The results show that although most participants could mention examples and claimed to use biometric technologies in their daily lives, they struggled to define and describe biometric data. Only about one-third of participants gave specific examples of the use of biometrics, such as fingerprints, facial recognition, ID cards, and signatures.

More: https://www.biometricupdate.com/202410/language-of-biometrics-is-unfamiliar-to-most-even-as-tech-becomes-ubiquitous

Ideas for using the latest technologies can be surprising. Deadbots (also known as griefbots or postmortem avatars) have appeared on the market: replicas of deceased people that communicate with their loved ones in their native language and in their own voices. These are applications or computer programs built on data obtained from the Internet, intended to create the illusion of the deceased person and provide emotional support after a loss.

And although the goal seems right, scientists from the University of Cambridge point to a number of threats associated with this technology. Based on three analyzed scenarios (selling products using the image of the deceased, a parent avatar for a child, and purchasing a long-term deadbot subscription for loved ones), the main threats are:

=> the possibility of manipulating and influencing people in mourning

=> the use of people's images after death without their prior consent (this needs to be regulated by obtaining consent for use, including of the voice)

=> the monetization of the experience of mourning and attempts by deadbot companies to circumvent regulations for sales purposes

=> the unfavorable impact of the technology on certain social groups, mainly children (hence the suggestion to introduce an age limit for this type of solution)

According to Newseria, scientists do not completely reject this solution. They indicate the benefits:

=> public education: deadbots as a means of intergenerational exchange of stories and experiences (e.g. Holocaust survivors talking about their experiences)

=> a source of income for families after the death of famous artists or journalists

Deadbots are another example of why legal regulation is needed for AI-based services. This would help avoid infringements related to the use of a person's image and voice after their death.

What do you think about deadbots? Are you convinced by this type of service?

More here: https://biznes.newseria.pl/news/deadboty-moga-byc,p262956223

Read more: Do deadbots have more threats or benefits?