Facial Similarity reports
Start here
This is an introductory guide and overview for the Facial Similarity reports. This guide is primarily for users of API v3.
You'll find API documentation separately.
Introduction
Onfido offers 3 types of Facial Similarity report: Photo, Photo Fully Auto (API v3 only) and Video.
All Facial Similarity reports will compare the most recent photo or video provided by the applicant to the face on the most recent document provided. Where the document has two sides, we will search both sides of the document for a face. Facial Similarity reports aim to prove identity document ownership, so that only the owner of the identity document can use it to verify their identity, and access services.
Photo and Photo Fully Auto
Images and data are extracted from the identity document using machine learning, and the extracted face is then compared to a selfie taken by the user. These reports are suited to low-risk users or transactions and are available via the Onfido SDKs or the applicant form.
Both the Photo and Photo Fully Auto reports use a photo of the applicant. The photo needs to be a "live photo" taken at the time of check submission, so that the report can assess whether the holder of the identity document is the same person as the one pictured on it.
The Photo report uses algorithms and expert review by Onfido’s super recogniser analysts to return a result. If using the applicant form, the Photo report needs to be paired with a Document report.
The Photo Fully Auto report is intended to be paired with manual review by your own team, and does not include expert review by Onfido’s super recogniser analysts. This means faster results, but it also means more reports are likely to end with no face being detected, or with users being incorrectly rejected.
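For illustration, a check that pairs the Document report with a Facial Similarity Photo report can be created with a single API call. The sketch below is a minimal TypeScript example against the v3 `/checks` endpoint; the API token and applicant ID are placeholders, and `facial_similarity_photo_fully_auto` can be substituted as the report name for the fully automated variant.

```typescript
// Minimal sketch: create an API v3 check pairing a Document report with a
// Facial Similarity Photo report. The token and applicant ID are placeholders.
const API_TOKEN = process.env.ONFIDO_API_TOKEN; // your sandbox or live API token
const APPLICANT_ID = "<APPLICANT_ID>";          // created earlier via POST /v3/applicants

async function createFacialSimilarityCheck(): Promise<void> {
  const response = await fetch("https://api.onfido.com/v3/checks", {
    method: "POST",
    headers: {
      Authorization: `Token token=${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      applicant_id: APPLICANT_ID,
      // Swap in "facial_similarity_photo_fully_auto" for the fully automated variant.
      report_names: ["document", "facial_similarity_photo"],
    }),
  });

  const check = await response.json();
  console.log(`Created check ${check.id} with status ${check.status}`);
}
```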
Photo and Photo Fully Auto API response breakdown tree
The result for both Photo and Photo Fully Auto reports can be `clear` or `consider`, returned in the API response. This is determined by the results of individual breakdowns and sub-breakdowns.
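For example, the result and its breakdowns can be read back when you retrieve the report. The interfaces below are a simplified assumption about the shape of the v3 report object; confirm the exact breakdown fields against the API reference.

```typescript
// Minimal sketch: retrieve a report and print its overall result, breakdowns
// and sub-breakdowns. The interfaces are a simplified view of the v3 report object.
interface SubBreakdown {
  result: "clear" | "consider" | null;
}

interface Breakdown {
  result: "clear" | "consider" | null;
  breakdown?: Record<string, SubBreakdown>;
}

interface FacialSimilarityReport {
  id: string;
  result: "clear" | "consider" | null;
  breakdown?: Record<string, Breakdown>;
}

async function inspectReport(reportId: string): Promise<void> {
  const response = await fetch(`https://api.onfido.com/v3/reports/${reportId}`, {
    headers: { Authorization: `Token token=${process.env.ONFIDO_API_TOKEN}` },
  });
  const report = (await response.json()) as FacialSimilarityReport;

  console.log(`Overall result: ${report.result}`);
  for (const [name, breakdown] of Object.entries(report.breakdown ?? {})) {
    console.log(`  ${name}: ${breakdown.result}`);
    for (const [subName, sub] of Object.entries(breakdown.breakdown ?? {})) {
      console.log(`    ${subName}: ${sub.result}`);
    }
  }
}
```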
The following diagram shows how these breakdowns and sub-breakdowns are mapped to particular results:
Video
In addition to extracting images and data from identity documents, Facial Similarity Video provides added security for high-risk users or transactions. The user records themselves repeating numbers and performing randomised movements. The Facial Similarity Video report is available via the Onfido SDKs.
In order for a Facial Similarity Video report to complete automatically, the user needs to turn their head in the correct direction and correctly say the 3 randomly generated digits in one of our supported languages (see table below).
If the user does not say the correct digits, or speaks in a language that isn't supported, the live video will be reviewed by an analyst for evidence of spoofing.
| Language name | Language code |
| --- | --- |
| English | "en" |
| Spanish | "es" |
| Indonesian | "id" |
| Italian | "it" |
| German | "de" |
| French | "fr" |
| Portuguese | "pt" |
| Polish | "pl" |
| Japanese | "ja" |
| Dutch | "nl" |
| Romanian | "ro" |
| Basque | "eu" |
| Catalan | "ca" |
| Galician | "gl" |
SDK localization
We recommend that you localize the strings if you're using one of the Onfido SDKs, so the user is more likely to understand the liveness head turn and speaking instructions.
The Onfido voice processor will attempt to detect the language the user is speaking. This will be more successful if you pass the code for the expected language to the locale mechanism in any of the Onfido SDKs:

- iOS SDK - pass the `onfido_locale` parameter
- Android SDK - pass the `onfido_locale` parameter
- Web SDK - pass the `locale` parameter
Some string localizations are available out of the box, but the languages covered differ depending on the SDK.
You can also provide your own custom translations to your users.
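As an example, a Web SDK initialisation might hint the expected language as follows. This is a sketch under assumptions: recent `onfido-sdk-ui` versions expose the locale setting described above through the `language` option, and the face step's `requestedVariant: "video"` option requests the video flow; confirm both names against your SDK version's reference.

```typescript
import { init } from "onfido-sdk-ui";

// Minimal sketch of a Web SDK flow that requests the video variant of the face
// step and hints the expected spoken language. Option names are assumptions
// based on recent SDK versions; check them against the SDK reference.
init({
  token: "<SDK_TOKEN>",        // short-lived SDK token generated server-side
  containerId: "onfido-mount", // DOM element that hosts the flow
  language: "es",              // expected language for the spoken-digits step
  steps: [
    "welcome",
    "document",
    { type: "face", options: { requestedVariant: "video" } },
    "complete",
  ],
  onComplete: (data) => {
    console.log("SDK flow finished", data);
  },
});
```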
Video API response breakdown tree
The result for Video can be `clear` or `consider`, returned in the API response. This is determined by the results of individual breakdowns and sub-breakdowns.
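For instance, once a Video report has been retrieved (reusing the simplified `FacialSimilarityReport` interface sketched for the Photo report above), an illustrative helper can list which breakdowns and sub-breakdowns drove a `consider` result.

```typescript
// Illustrative helper: list the breakdowns and sub-breakdowns of a retrieved
// report that returned "consider", reusing the FacialSimilarityReport
// interface from the earlier sketch.
function considerBreakdowns(report: FacialSimilarityReport): string[] {
  const flagged: string[] = [];
  for (const [name, breakdown] of Object.entries(report.breakdown ?? {})) {
    if (breakdown.result === "consider") flagged.push(name);
    for (const [subName, sub] of Object.entries(breakdown.breakdown ?? {})) {
      if (sub.result === "consider") flagged.push(`${name}.${subName}`);
    }
  }
  return flagged;
}
```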
The following diagram shows how these breakdowns and sub-breakdowns are mapped to particular results: