Face Scan Web SDK - RE.DOCTOR



Using our Web Face Scan SDK makes it easy to integrate face scanning into a new or existing web application.


Please see permissions, initialization, configuration and video measurement for further steps.

Measurement preparation

After successfully installing, initializing and optionally configuring the SDK, your app should be ready to start the video measurement.

Before starting the measurement, the user should be instructed to keep a stable position, without talking or changing their facial expression.

Initial conditions

After initialization, the SDK will start to analyze the video stream from the camera.

In order for the measurement to begin, the user needs to position their face in the middle of the frame, in good lighting conditions. The embedded user interface displays face positioning hints, but you can also use the getFaceState() and getNormalizedFaceBbox() methods to check the current face position.

enum FaceState {
  OK = 0,
  NOT_CENTERED,
  TOO_CLOSE,
  TOO_FAR,
  UNSTABLE,
  INVALID,
}

const faceState = shenaiSDK.getFaceState();

interface NormalizedFaceBbox {
  x: number;
  y: number;
  width: number;
  height: number;
}

const normalizedFaceBbox = shenaiSDK.getNormalizedFaceBbox();

Starting the measurement

To allow the measurement to start, the user can click the START button in the embedded UI. Alternatively, you can achieve the same effect programmatically by setting the SDK operating mode to Measure:
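A minimal sketch of the programmatic start, under the assumption that the SDK exposes an OperatingMode enum and a setOperatingMode() method (these names are assumptions, not taken from this page):

```typescript
// Hypothetical names; the real SDK may expose different identifiers.
enum OperatingMode {
  POSITIONING = 0,
  MEASURE,
}

interface FaceScanSdk {
  setOperatingMode(mode: OperatingMode): void;
}

// Switch the engine into measurement mode programmatically:
function startMeasurement(sdk: FaceScanSdk): void {
  sdk.setOperatingMode(OperatingMode.MEASURE);
}
```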


Note that the actual measurement will not start until the user’s face is in the correct position.

During the measurement

After the measurement has started, you can monitor the stage and progress of the measurement.

Checking the measurement state

To see what is happening under the hood, you can call the getMeasurementState() method. It returns the following enum value describing the current state of the measurement.

enum MeasurementState {
  NOT_STARTED = 0,      // Measurement has not started yet
  WAITING_FOR_FACE,     // Waiting for face to be properly positioned in the frame
  RUNNING_SIGNAL_SHORT, // Measurement started: signal is too short for any conclusions
  RUNNING_SIGNAL_GOOD,  // Measurement proceeding: signal quality is good
  RUNNING_SIGNAL_BAD,   // Measurement stalled due to poor signal quality
  FINISHED,             // Measurement has finished successfully
  FAILED,               // Measurement has failed
}

In particular, you should wait for the FINISHED or FAILED values, because they indicate that the measurement has concluded.
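A minimal polling sketch built on these states (the MeasurementState enum mirrors the one above; the SDK instance is passed in as a parameter here, but in practice it would be the initialized instance from your setup code):

```typescript
enum MeasurementState {
  NOT_STARTED = 0,
  WAITING_FOR_FACE,
  RUNNING_SIGNAL_SHORT,
  RUNNING_SIGNAL_GOOD,
  RUNNING_SIGNAL_BAD,
  FINISHED,
  FAILED,
}

// A measurement has concluded once it is FINISHED or FAILED.
function isConcluded(state: MeasurementState): boolean {
  return state === MeasurementState.FINISHED || state === MeasurementState.FAILED;
}

// Poll the SDK once per second until the measurement concludes.
function waitForConclusion(
  sdk: { getMeasurementState(): MeasurementState }
): Promise<MeasurementState> {
  return new Promise((resolve) => {
    const timer = setInterval(() => {
      const state = sdk.getMeasurementState();
      if (isConcluded(state)) {
        clearInterval(timer);
        resolve(state);
      }
    }, 1000);
  });
}
```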

Tracking the measurement progress

To check the progress of the whole measurement you can call the getMeasurementProgressPercentage() method, returning a floating point number between 0 and 100.
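For display purposes, the returned percentage can be clamped and rounded; a small sketch (the value itself comes from getMeasurementProgressPercentage() as described above):

```typescript
// Clamp to the 0-100 range and format as a whole-number percentage.
function formatProgress(percentage: number): string {
  const clamped = Math.min(Math.max(percentage, 0), 100);
  return `${Math.round(clamped)}%`;
}
```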

Real-time metrics

During the measurement, you can query the SDK for real-time metrics. These metrics are updated every second.

Heart rate

You can query the real-time heart rate based on either the last 10 or 4 seconds of the measurement. The provided value is the average heart rate over the specified time interval and is expressed in beats per minute (BPM), rounded to the nearest integer.

const hr10s = shenaiSDK.getHeartRate10s();
const hr4s = shenaiSDK.getHeartRate4s();

The 10-second value will be more stable (like the value displayed on a typical smartwatch), while the 4-second value will more accurately reflect moment-to-moment fluctuations of the heart rate.


You can also query the SDK for real-time heartbeat intervals. The values returned are in milliseconds and are rounded to the nearest integer.
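An interbeat interval translates directly into an instantaneous heart rate; the conversion below is standard arithmetic (the SDK accessor for the intervals themselves is not shown in this document):

```typescript
// 60000 ms per minute divided by the interval length gives BPM.
function ibiToBpm(ibiMs: number): number {
  return Math.round(60000 / ibiMs);
}
// e.g. an 800 ms interval corresponds to 75 BPM
```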

Signal quality

You can query the SDK for real-time signal quality – the value can be used to provide additional feedback to the user. A higher value means a better signal quality.

const signalQuality = shenaiSDK.getCurrentSignalQualityMetric();

Measurement success

When the Face Scan SDK engine enters the FINISHED state, the measurement has concluded and the computed metrics are available.


The SDK outputs results once the required success conditions have been satisfied for 1 minute.

Success conditions

The following conditions are required for a successful measurement:

  • the extracted photoplethysmographic signal quality must be above a threshold that enables unambiguous interpretation
  • the user's face must be stable and properly positioned within the camera frame

Final metrics

Final measurement metrics are computed from all full heart cycles observed during the last 1 minute of the measurement. The measurement itself may take longer if some conditions were not satisfied (such as a lighting issue or the user leaving the camera's field of view).

Base metrics

IBI (interbeat intervals) are computed based on advanced filtering and analysis of the extracted dense photoplethysmographic signal. The start and end times of each detected heartbeat are provided, as well as its duration rounded to full milliseconds.

HR (heart rate) is computed based on the average duration of observed heart cycles.

HRV (heart rate variability) metrics are computed as statistical measures of the observed heart cycles. The SDNN and lnRMSSD metrics are provided.

BR (breathing rate) is computed based on advanced analysis of the observed heart cycles.


Breathing rate will only be available if the SDK has high confidence in the computed result, so it might not be returned for some measurements (for example, in cases of very slow, very fast, or highly irregular breathing). Breathing rate will always be returned if the measurement is done in the Relaxed precision mode.
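The HR and SDNN definitions above can be sketched directly from a list of heartbeat durations in milliseconds; the SDK computes these internally, so the functions below only illustrate the arithmetic:

```typescript
function mean(values: number[]): number {
  return values.reduce((a, b) => a + b, 0) / values.length;
}

// HR: 60000 ms per minute divided by the average heart-cycle duration.
function heartRateBpm(durationsMs: number[]): number {
  return Math.round(60000 / mean(durationsMs));
}

// SDNN: standard deviation of the heart-cycle durations.
function sdnnMs(durationsMs: number[]): number {
  const m = mean(durationsMs);
  const variance = mean(durationsMs.map((d) => (d - m) ** 2));
  return Math.sqrt(variance);
}
```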

Beta metrics

BP (blood pressure) is computed using a custom-trained deep-learning AI model (beta feature).

SI (stress index) is computed based on the statistical distribution of time intervals between the successive heartbeats detected during the video measurement. It is similar to the Baevsky stress index but has been adapted for short measurements (1 minute).

Accessing the results

The results can be obtained with the getMeasurementResults() call, which provides the metrics computed for the latest measurement (invalid if no measurement has finished successfully) in the following structure:

interface MeasurementResults {
  heart_rate_bpm: number;                       // Heart rate, rounded to 1 BPM
  hrv_sdnn_ms: number;                          // Heart rate variability, SDNN metric, rounded to 1 ms
  hrv_lnrmssd_ms: number;                       // Heart rate variability, lnRMSSD metric, rounded to 0.1 ms
  stress_index: number;                         // Stress index, rounded to 0.1, 0.0-10.0
  breathing_rate_bpm: number | null;            // Breathing rate, rounded to 1 BPM
  systolic_blood_pressure_mmhg: number | null;  // Systolic blood pressure, rounded to 1 mmHg
  diastolic_blood_pressure_mmhg: number | null; // Diastolic blood pressure, rounded to 1 mmHg
  heartbeats: Heartbeat[];                      // Heartbeat locations
  average_signal_quality: number;               // Average signal quality metric
}

interface Heartbeat {
  start_location_sec: number;
  end_location_sec: number;
  duration_ms: number;
}

const results = await shenaiSDK.getMeasurementResults();

Additional outputs

Some additional outputs are provided, which may be used to better instruct the user about how the measurement process works.

rPPG signal

You can access the final rPPG signal from the measurement by calling the getFullPpgSignal() method:

const ppgSignal = await shenaiSDK.getFullPpgSignal();

The signal will be returned as a list of floating point values, where each value represents the intensity of the signal at a given point in time. The signal is sampled at the camera frame rate, which is usually 30 FPS.
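To plot the signal against time, each sample index can be divided by the frame rate; a minimal sketch assuming the 30 FPS rate mentioned above:

```typescript
// Pair each sample with its timestamp in seconds (index / fps).
function toTimeSeries(signal: number[], fps = 30): Array<[number, number]> {
  return signal.map((value, i) => [i / fps, value]);
}
```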

Facial regions visualizations

You can access an image of the region of the face which was used to extract the rPPG signal, as well as the signal intensity map.

const faceImage = await shenaiSDK.getFaceTexturePng();
const signalImage = await shenaiSDK.getSignalQualityMapPng();

The images are returned as PNG-encoded byte arrays, which you can decode and display as you wish, for example alongside an explanation of the measurement results.


The facial texture image may be personally identifiable, so you should not save or upload it without the user's permission. No personally identifiable data leaves the SDK by itself, as all processing is done locally on the device.