Introduction
Visual face is a credential that captures the user's face with a visual camera. It differs from face information captured with an infrared camera and is available only on devices that support visual face.
Currently, FaceStation F2, BioStation 3, and BioEntry W3 support visual face as a user credential.
There are 3 different ways to add a visual face credential to a user.
1) Scan face on a device.
2) Upload an image.
3) Register via email.
This article will guide you through adding a visual face credential by scanning your face on a device via BioStar 2 API.
This article also includes sample code for a C# program built with the APIs mentioned here. You can reference it if you are integrating this functionality via the API and need some guidance.
If you'd like to learn about how to add visual face by other methods via BioStar 2 API, please check out the following articles.
Adding visual face credential by uploading an image - [BioStar 2 API] Add Visual Face Credential by Uploading an Image
Adding visual face credential by registering via email - BioStar 2 New Local API - Registering Visual Face(2) via send_email
Adding a visual face credential by scanning your face on a device requires 2 steps.
1) Scan face on a device.
2) Add the scanned template to a user.
Step 1. Scan Face on a Device
This API is used to enroll a face credential from a device.
GET /api/devices/<device id>/credentials/face
Path Variable:
Input the id of the device that will be used to scan the face in <device id>.
Query Parameters:
Parameter | Type | Required | Description |
---|---|---|---|
pose_sensitivity | Number | Y | Set the sensitivity for the position, angle, and distance of a face when registering the face. Set the sensitivity high if you wish to obtain a detailed face template. Range: 0 to 9. |
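As a quick sketch, the request URL above can be assembled in C# before sending it. The helper name `BuildFaceScanUri` and the device ID below are placeholders for illustration, not part of the BioStar 2 API:

```csharp
using System;

class FaceScanUriDemo
{
    // Builds the Step 1 scan URL for a given device and pose sensitivity (0 to 9).
    public static string BuildFaceScanUri(string baseUrl, string deviceId, int poseSensitivity)
    {
        return baseUrl + "/api/devices/" + deviceId
            + "/credentials/face?pose_sensitivity=" + poseSensitivity;
    }

    static void Main()
    {
        // Hypothetical device id; substitute the id of your own FaceStation F2 or BioStation 3.
        Console.WriteLine(BuildFaceScanUri("https://127.0.0.1", "546832586", 8));
    }
}
```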
Postman Request Example:
When you run the API, the device will prompt you to scan your face. Once you successfully scan your face, the API will return with a 200 OK status code and a response body.
The response body will include "template_ex_normalized_image" and "templates" values.
Please take note of these values, as they will be used when adding the scanned face template to a user credential.
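If you are handling the response in code rather than Postman, extracting those two values is straightforward. Below is a minimal sketch using System.Text.Json (the full sample later in this article uses Json.NET instead); the response body shown is a stand-in with placeholder strings, not a real device response:

```csharp
using System;
using System.Text.Json;

class ScanResponseDemo
{
    // Pulls the two values Step 2 needs out of a Step 1 response body.
    public static (string NormalizedImage, string TemplateEx) ExtractTemplates(string body)
    {
        using JsonDocument doc = JsonDocument.Parse(body);
        JsonElement face = doc.RootElement.GetProperty("credentials").GetProperty("faces")[0];
        string normalizedImage = face.GetProperty("template_ex_normalized_image").GetString();
        string templateEx = face.GetProperty("templates")[0].GetProperty("template_ex").GetString();
        return (normalizedImage, templateEx);
    }

    static void Main()
    {
        // Minimal stand-in for a Step 1 response; real values are long Base64 strings.
        string body = "{\"credentials\":{\"faces\":[{\"template_ex_normalized_image\":\"BASE64...\"," +
                      "\"templates\":[{\"credential_bin_type\":\"5\",\"template_ex\":\"TEMPLATE...\"}]}]}}";
        var (img, tpl) = ScanResponseDemo.ExtractTemplates(body);
        Console.WriteLine(img + " / " + tpl);
    }
}
```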
Postman Response Example:
Step 2. Add the Scanned Template to a User
This API is used to update a user. There are many more body parameters you can include depending on which user information you want to edit. This article only covers the parameters related to the visual face credential.
PUT /api/users/<id>
Path Variable:
Input the id of the user to whom you want to add the visual face in <id>.
Body Parameters:
Parameter | Type | Required | Description |
---|---|---|---|
visualFaces | Array | Y | Main container of Visual Face Credential |
:template_ex_normalized_image | Base64 | Y | Cropped image ready for visual face extraction |
:templates | Array | Y | Container of the templates |
::credential_bin_type | Number | Y | UNKNOWN = -1, FACE_TEMPLATE = 0, FACE_TEMPLATE_IMAGE = 1, FACE_RAW_IMAGE = 2, FACE_TEMPLATE_EX_VER_2 = 5 (for FaceStation F2), FACE_TEMPLATE_EX_VER_3 = 9 (for BioStation 3 and BioEntry W3), FACE_TEMPLATE_EX_NORMALIZED = 7, FACE_TEMPLATE_EX_PICTURE = 8 |
::template_ex | Raw | Y | Visual face template data |
::templateEx | Raw | Y | Visual face template data. Removed in v2.9.6. |
::template_ex_ir | Raw | Y | IR visual face template data. Removed in v2.9.6. |
::templateExIr | Raw | Y | IR visual face template data. Removed in v2.9.6. |
::template_ex_picture | Base64 | Y | Parameter for uploading picture raw data (only required if you are uploading an image) |
:useProfile | Boolean | N | true to use image as profile |
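To make the nesting of `visualFaces` and `templates` easier to see than in a hand-escaped JSON string, here is a sketch that builds the Step 2 request body with anonymous types and System.Text.Json. The helper name `BuildUpdateUserPayload` is hypothetical, and the placeholder strings stand in for the values captured in Step 1; `credential_bin_type` 5 follows the pre-v2.9.6 FaceStation F2 convention used elsewhere in this article:

```csharp
using System;
using System.Text.Json;

class UpdateUserPayloadDemo
{
    // Builds the Step 2 request body from the values captured in Step 1.
    public static string BuildUpdateUserPayload(string normalizedImage, string templateEx)
    {
        var payload = new
        {
            User = new
            {
                credentials = new
                {
                    visualFaces = new[]
                    {
                        new
                        {
                            template_ex_normalized_image = normalizedImage,
                            templates = new[]
                            {
                                // credential_bin_type 5 = FACE_TEMPLATE_EX_VER_2 (FaceStation F2)
                                new { credential_bin_type = "5", template_ex = templateEx }
                            }
                        }
                    }
                }
            }
        };
        return JsonSerializer.Serialize(payload);
    }

    static void Main()
    {
        // Placeholder template strings; real ones come from the Step 1 response.
        Console.WriteLine(BuildUpdateUserPayload("BASE64_IMAGE...", "TEMPLATE_EX..."));
    }
}
```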
Postman Request Example:
Copy over the data from the response body of Step 1.
As you can see below, the contents of "template_ex_normalized_image" and "templates" have been copied into the request body here.
If you have successfully called the API above and added a visual face credential to a user, you'll receive a 200 OK code.
Postman Response Example:
You can also check in BioStar 2 Web UI that a new visual face credential has been added to the user.
C# Console Application Example
* Note: The code below applies to v2.9.0 and earlier. If you'd like to implement it for other versions, please change the parameter information as necessary. *
Here is sample code for a C# program that uses the APIs above to register a visual face to a user by scanning the face on a device.
[ScanAndRegisterVisualFace]
- This method lists the users, prompts for the target user ID, and then starts the scan.
static async void ScanAndRegisterVisualFace()
{
    Console.WriteLine("*****ScanAndRegisterVisualFace Task Started******");
    if (sessionID == null)
    {
        Console.WriteLine("You must log in first!");
        return;
    }

    CookieContainer cookieContainer = new CookieContainer();
    HttpClientHandler handler = new HttpClientHandler();
    handler.CookieContainer = cookieContainer;
    HttpClient client = new HttpClient(handler);
    client.DefaultRequestHeaders.Add("bs-session-id", sessionID);
    cookieContainer.Add(new Uri("https://127.0.0.1"), new Cookie("bs-session-id", sessionID));

    ListUsers();
    Console.WriteLine("Select User ID for Visual Face Registration...");
    string userID = Console.ReadLine();

    HttpResponseMessage httpResponse = await client.GetAsync("https://127.0.0.1/api/users");
    if (httpResponse.IsSuccessStatusCode)
    {
        string httpResponseBody = await httpResponse.Content.ReadAsStringAsync();
        Console.WriteLine("Registering VISUAL FACE to the USER(" + userID + ")");
        ScanVisualFace(userID);
    }
    else
    {
        Console.WriteLine("Retrieving User List Failed");
        Console.WriteLine(httpResponse.ToString());
    }
}
[ScanVisualFace]
- This method scans the visual face via the selected device.
static async void ScanVisualFace(string UserID)
{
    Console.WriteLine("*****ScanVisualFace Task Started*****");
    CookieContainer cookieContainer = new CookieContainer();
    HttpClientHandler handler = new HttpClientHandler();
    handler.CookieContainer = cookieContainer;
    HttpClient httpClient = new HttpClient(handler);
    httpClient.DefaultRequestHeaders.Add("bs-session-id", sessionID);
    cookieContainer.Add(new Uri("https://127.0.0.1"), new Cookie("bs-session-id", sessionID));

    ListDevices();
    Console.WriteLine("Select Device ID for Visual Face Scanning... (SELECT FaceStation F2 or BioStation 3)");
    string deviceID = Console.ReadLine();
    string resourceAddress = "https://127.0.0.1/api/devices/" + deviceID + "/credentials/face?pose_sensitivity=0&nonBlock=true";
    Console.WriteLine(resourceAddress);

    HttpResponseMessage httpResponse = await httpClient.GetAsync(resourceAddress);
    Console.WriteLine("SCAN YOUR VISUAL FACE with the DEVICE(ID: " + deviceID + ")");
    if (httpResponse.IsSuccessStatusCode)
    {
        Console.WriteLine("Scan VISUAL FACE Successful.");
        string httpResponseBody = await httpResponse.Content.ReadAsStringAsync();
        Console.WriteLine(httpResponseBody);

        dynamic obj = JsonConvert.DeserializeObject(httpResponseBody);
        string template_ex_normalized_image = obj.credentials.faces[0].template_ex_normalized_image;
        string template_ex = obj.credentials.faces[0].templates[0].template_ex;
        string template_ex_ir = obj.credentials.faces[0].templates[1].template_ex_ir;
        RegisterVisualFaceToUser(UserID, template_ex_normalized_image, template_ex, template_ex_ir);
    }
    else
    {
        Console.WriteLine("Scan VISUAL FACE Failed.");
        Console.WriteLine(httpResponse.ToString());
    }
}
[RegisterVisualFaceToUser]
- This method receives the template values from the 'ScanVisualFace' method and places them into the 'template_ex_normalized_image', 'template_ex', and 'template_ex_ir' fields of the request payload.
static async void RegisterVisualFaceToUser(string UserID, string template_ex_normalized_image, string template_ex, string template_ex_ir)
{
    Console.WriteLine("*****RegisterVisualFaceToUser Task Started******");
    if (sessionID == null)
    {
        Console.WriteLine("You must log in first!");
        return;
    }

    CookieContainer cookieContainer = new CookieContainer();
    HttpClientHandler handler = new HttpClientHandler();
    handler.CookieContainer = cookieContainer;
    HttpClient httpClient = new HttpClient(handler);
    httpClient.DefaultRequestHeaders.Add("bs-session-id", sessionID);
    cookieContainer.Add(new Uri("https://127.0.0.1"), new Cookie("bs-session-id", sessionID));

    Console.WriteLine("Registering VISUAL FACE to USER(" + UserID + ") ...");
    string resourceAddress = "https://127.0.0.1/api/users/" + UserID;
    string payload = "{\"User\": {\"credentials\": {\"visualFaces\": [{\"template_ex_normalized_image\": \"" + template_ex_normalized_image
        + "\",\"templates\": [{\"template_ex\": \"" + template_ex + "\",\"credential_bin_type\": \"5\"},"
        + "{\"template_ex_ir\": \"" + template_ex_ir + "\",\"credential_bin_type\": \"6\"}]}]}}}";
    StringContent sc = new StringContent(payload, Encoding.UTF8, "application/json");

    HttpResponseMessage httpResponse = await httpClient.PutAsync(resourceAddress, sc);
    if (httpResponse.IsSuccessStatusCode)
    {
        Console.WriteLine(httpResponse.ToString());
        string httpResponseBody = await httpResponse.Content.ReadAsStringAsync();
        Console.WriteLine("***** VISUAL FACE is now registered to User " + UserID + " *****");
    }
    else
    {
        Console.WriteLine("Failed to Register VISUAL FACE to User(" + UserID + ")");
        Console.WriteLine(httpResponse.ToString());
    }
}
[Select User & Device]
[Scan your Visual Face]
[Registration Processed successfully]