3DiVi Face SDK
3.21.0
Interface object encapsulating data from TrackingCallback.
Classes
enum | SampleCheckStatus
The result of a sample check.
Public Attributes
int | stream_id
Integer id of the video stream (0 <= stream_id < streams_count).

int | frame_id
Integer id of the frame (as returned by VideoWorker.addVideoFrame).

Vector<RawSample> | samples
Vector of face samples found by the tracker. Most samples are from the frame frame_id, but some may be from previous frames. Use RawSample.getFrameID to determine which frame a sample actually belongs to.

Vector<Integer> | samples_track_id
Vector of face IDs (track_id). track_id is equal to sample.getID() for a sample given in any VideoWorker callback. (samples_track_id.size() == samples.size())

Vector<Boolean> | samples_weak
Since this is tracking, some samples may be false positives, so a sample is marked with the "weak" flag if it has not passed one of the rechecks (see samples_good_face_size, samples_good_angles, samples_depth_liveness_confirmed, samples_ir_liveness_confirmed, samples_detector_confirmed, samples_good_light_and_blur). "Weak" samples are not used for recognition. (samples_weak.size() == samples.size())

Vector<Float> | samples_quality
Quality value for the face sample, the same as from FaceQualityEstimator. (samples_quality.size() == samples.size())

Vector<SampleCheckStatus> | samples_good_light_and_blur
The result of checking the sample for good lighting conditions and absence of strong blur. (samples_good_light_and_blur.size() == samples.size())

Vector<SampleCheckStatus> | samples_good_angles
The result of checking the sample for absence of excessive yaw/pitch angles. (samples_good_angles.size() == samples.size())

Vector<SampleCheckStatus> | samples_good_face_size
The result of checking the sample for a suitable face size; see the min_template_generation_face_size parameter in the configuration file. (samples_good_face_size.size() == samples.size())

Vector<SampleCheckStatus> | samples_detector_confirmed
The result of checking the sample with the frontal face detector. (samples_detector_confirmed.size() == samples.size())

Vector<DepthLivenessEstimator.Liveness> | samples_depth_liveness_confirmed
The result of checking the sample with DepthLivenessEstimator; depth frames are required, see VideoWorker.addDepthFrame. See DepthLivenessEstimator.Liveness for details. (samples_depth_liveness_confirmed.size() == samples.size())

Vector<IRLivenessEstimator.Liveness> | samples_ir_liveness_confirmed
The result of checking the sample with IRLivenessEstimator; IR frames are required, see VideoWorker.addIRFrame. See IRLivenessEstimator.Liveness for details. (samples_ir_liveness_confirmed.size() == samples.size())

Vector<Boolean> | samples_track_age_gender_set
Flag indicating that age and gender were estimated for this track. (samples_track_age_gender_set.size() == samples.size())

Vector<AgeGenderEstimator.AgeGender> | samples_track_age_gender
Estimated age and gender for this track. (samples_track_age_gender.size() == samples.size())

Vector<Boolean> | samples_track_emotions_set
Flag indicating that emotions were estimated for this track. (samples_track_emotions_set.size() == samples.size())

Vector<Vector<EmotionsEstimator.EmotionConfidence>> | samples_track_emotions
Estimated emotions for this track. (samples_track_emotions.size() == samples.size())

Vector<ActiveLiveness.ActiveLivenessStatus> | samples_active_liveness_status
Face active liveness check status. See ActiveLiveness.ActiveLivenessStatus for details. (samples_active_liveness_status.size() == samples.size())
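All of the vectors above run parallel to samples, and the "weak" flag decides whether a sample is usable for recognition. The sketch below illustrates how a TrackingCallback consumer might filter by that flag. The real SDK classes (RawSample, VideoWorker) are not available here, so plain track IDs stand in for the sample objects; the class and method names are illustrative, not part of the SDK.

```java
import java.util.Vector;

// Sketch: filtering "weak" samples the way a TrackingCallback consumer would.
// samplesTrackId mirrors samples_track_id and samplesWeak mirrors samples_weak;
// the documented invariant samples_weak.size() == samples.size() is enforced.
public class WeakSampleFilter {
    // Returns the track IDs of samples that passed all rechecks
    // (i.e. whose samples_weak entry is false).
    public static Vector<Integer> strongTrackIds(
            Vector<Integer> samplesTrackId,
            Vector<Boolean> samplesWeak) {
        if (samplesTrackId.size() != samplesWeak.size())
            throw new IllegalArgumentException("parallel vectors must match in size");
        Vector<Integer> result = new Vector<>();
        for (int i = 0; i < samplesTrackId.size(); i++)
            if (!samplesWeak.get(i))      // skip samples that failed a recheck
                result.add(samplesTrackId.get(i));
        return result;
    }

    public static void main(String[] args) {
        Vector<Integer> ids = new Vector<>();
        ids.add(7); ids.add(8); ids.add(9);
        Vector<Boolean> weak = new Vector<>();
        weak.add(false); weak.add(true); weak.add(false);
        System.out.println(strongTrackIds(ids, weak)); // prints [7, 9]
    }
}
```

In real code the same index i would also be used to read samples_quality, samples_good_angles, and the other parallel vectors for the same sample.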