
Product update: IDcentral’s Liveness Detection


Concepts like AI-based video editing and augmented video processing are no longer just buzzwords. Fraudsters now have easy access to plug-and-play products that let them use the power of deep learning and advanced AI to anonymize, mask, and alter images and videos. These spoofs, also known as presentation attacks, include printed photos, cutout masks, digital and video replay attacks, and 3D masks. The need for unambiguous and secure identification and authentication has motivated a massive deployment of biometric systems globally. But biometric systems still have security gaps that give fraudsters a chance to spoof a facial recognition system.

Liveness Detection

Facial liveness has emerged as a way to fight fraud and ensure the integrity of face biometrics as a means of authentication or identity verification. Whereas face recognition can accurately answer the question “Is this the right person?”, it doesn’t answer the question “Is this a real person?” That is the role of liveness detection.

Liveness detection for face recognition in biometrics is the ability of a computer system to determine whether the person in front of the camera is real and present. It checks whether that person is who he or she claims to be and is acting of his or her own free will, or whether someone else is attempting fraud by showing a picture or video of that person, or even wearing a fake face mask.
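As a sketch of the decision this definition describes — not IDcentral’s actual API — a liveness check can be thought of as mapping a model’s score to a live/spoof decision. The `LivenessResult` type, `check_liveness` function, and threshold below are hypothetical names for illustration:

```python
from dataclasses import dataclass

@dataclass
class LivenessResult:
    is_live: bool      # True if the subject appears to be a real, present person
    confidence: float  # liveness score in [0, 1]; higher means more likely live

def check_liveness(score: float, threshold: float = 0.5) -> LivenessResult:
    """Turn a raw liveness-model score into a live/spoof decision.

    `score` is assumed to come from an upstream liveness model; the
    threshold trades off spoof acceptance against friction for real users.
    """
    return LivenessResult(is_live=score >= threshold, confidence=score)
```

Raising the threshold rejects more spoofs but also risks rejecting more genuine users, which is the core tuning decision in any liveness system.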

How does liveness detection help?

By incorporating liveness detection and other sophisticated identity verification methods, businesses can significantly lower the risk of fraud to their organization and customers, no matter how fraudsters try to exploit a changing situation to find a loophole.

How does IDcentral’s Liveness detection work?

Face liveness detection is one of the key steps in user identity verification during online onboarding. During identity verification, an unauthorized user may try to bypass the system in several ways: for example, by capturing a user’s photo from social media and mounting an imposter attack with a printout of the user’s face or a digital photo shown on a mobile device, or by staging a more sophisticated video replay attack.

We studied these different methods of attack and created an in-house, large-scale dataset covering all of them to train a robust deep learning model.

We propose an ensemble method in which several deep learning models are trained to identify global features across the user’s face and the background region, along with local features, to detect any anomalies introduced by imposter attacks. This ensemble of models captures more robust local as well as global features, which helps the model generalize across domains.
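A minimal sketch of the score-fusion idea behind such an ensemble — the scoring functions and weights below are illustrative stand-ins, not our production models:

```python
# Toy ensemble: each "model" is a function mapping an image to a spoof
# probability. The real system combines deep models over the full frame
# (global features, including background) and face crops (local features);
# here simple stand-in scorers illustrate only the score-fusion step.

def ensemble_score(image, models, weights=None):
    """Weighted average of per-model spoof probabilities."""
    scores = [m(image) for m in models]
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical stand-ins for the global- and local-feature models:
global_model = lambda img: 0.8   # e.g. flags background anomalies
local_model = lambda img: 0.6    # e.g. flags texture/moire artifacts

fused = ensemble_score(None, [global_model, local_model])  # 0.7
```

Averaging is only one fusion strategy; learned weights or a meta-classifier over the per-model scores are common alternatives.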

Deep learning models have shown great success in solving many real-life problems that had proved impossible for classical machine learning algorithms, but this success comes at the cost of large data requirements. While building the liveness solution, we collected data for both real and attack scenarios across different conditions, e.g., camera type (mobile/web), user ethnicity, image acquisition environment, and illumination.

In comparison to other methods and solutions available, ours outperforms at catching fake users (minimizing false acceptance of spoof attempts) while creating negligible friction for real users (few false rejections).
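This trade-off is commonly quantified with the presentation-attack-detection error rates APCER (spoofs wrongly accepted) and BPCER (genuine users wrongly rejected). A minimal sketch, assuming higher scores mean “more live”:

```python
def apcer(spoof_scores, threshold):
    """Attack Presentation Classification Error Rate:
    fraction of spoof attempts wrongly accepted as live."""
    return sum(s >= threshold for s in spoof_scores) / len(spoof_scores)

def bpcer(live_scores, threshold):
    """Bona fide Presentation Classification Error Rate:
    fraction of genuine users wrongly rejected (user friction)."""
    return sum(s < threshold for s in live_scores) / len(live_scores)
```

Sweeping the threshold and plotting APCER against BPCER shows the full operating curve, from which a deployment picks the point matching its risk tolerance.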

Know more about IDcentral’s liveness detection.
