Authentication based on lips and facial movements

Is it possible? A recent research experiment opens new possibilities in this area.

Researchers in the UK have created a dataset of the physical movements that generate speech sounds. The collection may be used in the future to develop speech recognition systems that synthesize the voices of people with speech impediments. It may also contribute to new methods for recognizing silent speech, and even to new forms of behavioral biometrics.

This means that in the future, voice-controlled devices such as smartphones may be able to read users' lips and authenticate access to banking and other sensitive applications by identifying the user's unique facial movements. In other words, a person could be authenticated based on the movements of their lips and face.
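To make the idea concrete, here is a minimal sketch of how such an authentication check could work in principle: an enrolled "template" of facial-movement feature vectors (one per video or radar frame) is compared frame by frame against a fresh sample, and access is granted only if the average similarity clears a threshold. All names, the feature representation, and the threshold value are hypothetical illustrations, not part of the researchers' actual system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (hypothetical features,
    e.g. lip-shape coordinates extracted from one frame)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def verify(template, sample, threshold=0.9):
    """Compare an enrolled movement template against a new sample,
    frame by frame, and accept if the mean similarity is high enough.
    Real systems would align sequences and use learned features."""
    n = min(len(template), len(sample))
    if n == 0:
        return False
    score = sum(cosine(t, s) for t, s in zip(template[:n], sample[:n])) / n
    return score >= threshold

# Toy usage: a matching movement pattern passes, a different one fails.
enrolled = [[1.0, 0.0], [0.0, 1.0]]
print(verify(enrolled, [[1.0, 0.0], [0.0, 1.0]]))  # same pattern → True
print(verify(enrolled, [[0.0, 1.0], [1.0, 0.0]]))  # different pattern → False
```

A production system would of course need sequence alignment (movements vary in speed), liveness detection, and features learned from data such as the dataset described here; this sketch only illustrates the template-matching principle.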

In this experiment, the database was built from lip reading and facial movement analysis. Continuous-wave radar was used to capture the movement of the skin on the face, tongue, and larynx of the study participants while they spoke. Among other instruments, the scientists used a laser speckle detection system with a high-speed camera to capture vibrations on the skin surface, as well as a Kinect V2 camera to record changes in the shape of the lips when forming various sounds.

The database, built from the analysis of 400 minutes of speech, will be made available to researchers free of charge to support further development of the technology.

The research group included scientists from the University of Dundee and University College London. The experiment also used technology from the Center for Communication, Sensing and Imaging at the University of Glasgow.