
Facial recognition reinforced with smiles, winks and other gestures

Facial recognition as a security mechanism has become increasingly common in recent years, mainly thanks to its adoption in some smartphones.

Like any technology, these systems have some basic vulnerabilities. Depending on how robust each implementation is, they can be fooled with photographs or by using the owner's face in unforeseen situations, such as while they are asleep. A new alternative has been proposed to address these weaknesses.

Gesture-based face unlock as an enhanced two-factor option

The proposal comes from DJ Lee, a professor of electrical and computer engineering at Brigham Young University, who argues that there is a better and more reliable way to use the face as an access mechanism for restricted systems.

The mechanism is called C2FIV, short for Concurrent Two-Factor Identity Verification. To validate an unlock request, the system must recognize both the facial identity of the person in front of the camera and a specific movement or gesture.

To set up this unlocking system, the user stands in front of the camera and records a short video, one or two seconds long, of a specific facial or lip movement, such as mouthing a secret phrase. The video is then processed on the device, where the facial features and the trajectory of the facial movement are extracted and stored for later identity verification.
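To make the enrollment step concrete, here is a minimal sketch in Python. The frame size, embedding dimension and the mean-pooled random-projection "encoder" are illustrative assumptions, not the actual C2FIV network; the point is only the shape of the pipeline: embed the clip's appearance plus its motion trajectory and store the result as a template.

```python
# Hypothetical enrollment sketch; the encoder below is a stand-in, not C2FIV's model.
import numpy as np

EMBED_DIM = 128
rng = np.random.default_rng(0)
# Stand-in for a learned encoder: a fixed random projection of flattened frames.
PROJECTION = rng.standard_normal((64 * 64, EMBED_DIM)).astype(np.float32)

def encode_clip(frames: np.ndarray) -> np.ndarray:
    """Map a short clip (T, 64, 64) to a single identity+gesture embedding."""
    flat = frames.reshape(frames.shape[0], -1)        # per-frame pixel vectors
    per_frame = flat @ PROJECTION                     # (T, EMBED_DIM) frame features
    motion = np.diff(per_frame, axis=0).mean(axis=0)  # crude motion-trajectory summary
    appearance = per_frame.mean(axis=0)               # crude facial-appearance summary
    embedding = appearance + motion
    return embedding / np.linalg.norm(embedding)

# Enrollment: record a 1-2 second clip, embed it, and store the template on-device.
enrollment_clip = rng.random((30, 64, 64), dtype=np.float32)  # ~30 frames of video
stored_template = encode_clip(enrollment_clip)
```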

In a preliminary study, Professor Lee and his PhD student Zheng Sun recorded 8,000 video clips of facial movements from 50 subjects, including blinking, jaw movements, smiling, raising the eyebrows and many other random facial gestures, to train the neural network. They then built a data set of positive and negative pairs of facial movements and trained the network to assign a higher score to the positive pair (the pair in which the request matches the previously enrolled recording).
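One common way to express "positive pairs must score higher than negative pairs" is a margin ranking objective. The sketch below, in PyTorch, is an assumption about how such a pairwise loss could look; the scoring head, margin and batch of random embeddings are placeholders, not details published for C2FIV.

```python
# Hedged sketch of pairwise training: matched pairs should outscore mismatched ones.
import torch
import torch.nn as nn

EMBED_DIM = 128
score_head = nn.Sequential(nn.Linear(2 * EMBED_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MarginRankingLoss(margin=0.5)  # margin value is an assumption
optimizer = torch.optim.Adam(score_head.parameters(), lr=1e-3)

def pair_score(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Similarity score for a (request, enrolled) embedding pair."""
    return score_head(torch.cat([a, b], dim=-1)).squeeze(-1)

# Dummy batch: request embeddings, their true enrollments, and impostor enrollments.
request = torch.randn(32, EMBED_DIM)
positive = torch.randn(32, EMBED_DIM)
negative = torch.randn(32, EMBED_DIM)

pos_scores = pair_score(request, positive)
neg_scores = pair_score(request, negative)
# target = 1 asks for pos_scores to exceed neg_scores by at least the margin.
loss = loss_fn(pos_scores, neg_scores, torch.ones_like(pos_scores))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```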

C2FIV relies on an integrated neural network framework that learns facial features and actions simultaneously. The framework models dynamic, continuous data such as facial movement, where every recorded frame must be considered, unlike approaches that describe a person from a single still photo.
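A recurrent layer is one standard way to model that kind of sequential data, where the whole clip, not a single frame, produces the embedding. The sketch below assumes per-frame feature vectors as input and uses a small GRU; the actual C2FIV architecture may differ.

```python
# Minimal sequence-embedding sketch (GRU over per-frame features); an illustration only.
import torch
import torch.nn as nn

class ClipEncoder(nn.Module):
    def __init__(self, frame_dim: int = 256, embed_dim: int = 128):
        super().__init__()
        self.rnn = nn.GRU(frame_dim, embed_dim, batch_first=True)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, frame_dim) -> one unit-norm embedding per clip
        _, last_hidden = self.rnn(frames)
        return nn.functional.normalize(last_hidden[-1], dim=-1)

encoder = ClipEncoder()
clip_features = torch.randn(4, 30, 256)   # 4 clips, 30 frames each
clip_embeddings = encoder(clip_features)  # (4, 128)
```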

With this framework, a user's actions and facial characteristics can be embedded and stored on a server or on the device itself. Once enrolled, when the system receives an unlock request it compares the newly generated embedding with the one stored in its database to validate the request. Identity verification is governed by a predefined threshold that measures the similarity between the new embedding and the previously stored one.
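The verification step itself reduces to a similarity check against that threshold. The sketch below uses cosine similarity and an arbitrary 0.8 cutoff as assumptions; in practice the metric and threshold would be tuned on validation data rather than fixed like this.

```python
# Sketch of threshold-based verification; the 0.8 cutoff is an assumed value.
import numpy as np

THRESHOLD = 0.8

def verify(request_embedding: np.ndarray, stored_embedding: np.ndarray) -> bool:
    """Accept the unlock request only if cosine similarity clears the threshold."""
    similarity = float(
        request_embedding @ stored_embedding
        / (np.linalg.norm(request_embedding) * np.linalg.norm(stored_embedding))
    )
    return similarity >= THRESHOLD

rng = np.random.default_rng(1)
stored = rng.standard_normal(128)
request = stored + 0.1 * rng.standard_normal(128)  # a near-matching attempt
print(verify(request, stored))                     # True for this near-duplicate
```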

Lee has already patented this technology. According to his own statements, the idea behind the project is not to compete with Apple, nor to push for this system to be implemented in smartphones. In its creator's plans, C2FIV has a broader vocation, including access to restricted areas in a workplace, logging into online banking, using ATMs and even doing away with car keys.
