
VeriLook 3.2 Algorithm Demo


Copyright 2003-2007 Neurotechnologija

Table of Contents
1. Introduction
2. Requirements
3. Image quality control
   3.1. Pose
   3.2. Expression
      3.2.1. Examples of Non-Recommended Expressions
   3.3. Face changes
   3.4. Lighting
   3.5. Eyeglasses
   3.6. Web cameras
4. Liveness Detection
5. Application
   5.1. Main window
   5.2. Options dialog
   5.3. Menu commands
   5.4. Simple usage examples
6. Matching threshold and similarity


List of Figures
5.1. Main application window
5.2. Options dialog


Chapter 1. Introduction
The VeriLook 3.2 Demo application is designed to demonstrate the capabilities of the VeriLook face recognition engine. The program is a Windows 2000/XP compatible GUI application. The evaluation software supports image acquisition from an external video source (such as a web camera) via the DirectX library. It can also read face images from .bmp, .tif, .png, .jpg and .gif files. The application has three operation modes:

1. Enrollment. The software processes the face image, extracts features and writes them to the database.

2. Enrollment with features generalization. This mode generates a generalized face features collection from a number of face templates of the same person. Each face image is processed and its features are extracted. The feature collections are then analyzed and combined into one generalized collection, which is written to the database. Face recognition quality increases if faces are enrolled using this mode.

3. Matching. This mode matches a new face image against the face templates stored in the database.


Chapter 2. Requirements
- 128 MB of RAM, 1 GHz CPU, 2 MB of HDD space for the installation package.
- Microsoft Windows 2000/XP.
- DirectX 8.1 or later. You can download a DirectX upgrade from the Microsoft web site.
- Microsoft GDI+. This library is supplied with Windows XP and the Windows .NET Server family. If you are using an earlier Windows platform (Windows 98/Me or Windows NT 4.0/2000), you should download and install it from the Microsoft web site.
- Microsoft XML Parser (MSXML) 3.0 SP4. If it is not already on the system, you should download and install it from the Microsoft web site.
- Optionally, a video capture device (web camera).


Chapter 3. Image quality control


Face recognition is very sensitive to image quality, so maximum care should be taken during image acquisition.

3.1. Pose
The frontal pose (full-face) must be used. Rotation of the head must be less than +/- 5 degrees from frontal in every direction: nodded up/down, rotated left/right, and tilted left/right.
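The +/- 5 degree tolerance can be expressed as a simple validation check. The sketch below is illustrative only; the function name and the head-rotation angles (yaw, pitch, roll) are assumptions, e.g. supplied by a separate head-pose estimator, and are not part of any VeriLook API:

```python
# Illustrative sketch only: the angle inputs (yaw, pitch, roll) are
# hypothetical and would come from an external head-pose estimator.
MAX_ROTATION_DEG = 5.0  # frontal-pose tolerance from the guidelines above

def is_frontal_pose(yaw: float, pitch: float, roll: float) -> bool:
    """Return True if head rotation stays within +/-5 degrees of frontal
    in every direction (nodded, rotated, tilted)."""
    return all(abs(angle) <= MAX_ROTATION_DEG for angle in (yaw, pitch, roll))
```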

3.2. Expression
The expression should be neutral (non-smiling) with both eyes open, and mouth closed. Every effort should be made to have supplied images comply with this specification. A smile with closed jaw is allowed but not recommended.

3.2.1. Examples of Non-Recommended Expressions


1. A smile where the inside of the mouth is exposed (jaw open).
2. Raised eyebrows.
3. Closed eyes.
4. Eyes looking away from the camera.
5. Squinting.
6. Frowning.
7. Hair covering eyes.
8. Rim of glasses covering part of the eye.

3.3. Face changes


A beard, moustache and other changeable face features influence face recognition quality. If frequent face changes are typical for an individual, the face database should contain several variants, e.g. the face with a beard and the cleanly shaved face, enrolled with the same ID.

3.4. Lighting
Lighting must be equally distributed on each side of the face and from top to bottom. There should be no significant direction of the light and no visible shadows. Care must be taken to avoid "hot spots". These artifacts are typically caused when a single high-intensity, focused light source is used for illumination.

3.5. Eyeglasses
There should be no lighting artifacts on eyeglasses. This can typically be achieved by increasing the angle between the lighting, subject and camera to 45 degrees or more. If lighting reflections cannot be removed, then the glasses themselves should be removed. (However, this is not recommended, as face recognition typically works best when matching people wearing eyeglasses against themselves wearing the same eyeglasses.) Glasses must be clear and transparent so that the eyes and irises are clearly visible. Heavily tinted glasses are not acceptable.

3.6. Web cameras


As web cameras are becoming one of the most common personal video capturing devices, we have conducted a small video image quality check. Most cheap devices tend to provide 320x240 images of low quality, insufficient for biometric use. As a general rule, true 640x480 resolution (without interpolation) and a known brand name are recommended. Images should be enrolled and matched using the same camera, as devices have different optical distortions that can influence face recognition performance.


Chapter 4. Liveness Detection


The VeriLook algorithm is capable of differentiating live faces from non-live ones (e.g. photos). The "Use liveness check" checkbox and the "Liveness threshold" parameter in the options dialog control the behavior of the liveness check. When the "Use liveness check" checkbox is marked, the liveness check is performed during matching: the liveness score of the collected stream is calculated and checked against the threshold set in the "Liveness threshold" parameter. The liveness check requires a stream of consecutive images (it is intended to be used mainly with a video stream from a camera). The stream must be at least 10 frames long; the recommended length is 10 - 25 frames. Only one person's face should be visible in the stream. If the stream does not qualify as "live", an "Extraction failed" message is displayed in the log window. To maximize the liveness score of a face found in an image stream, the user should move his head around a bit, tilting it and moving closer to or further from the camera while slightly changing his facial expression. (For example, the user could start with his head panned as far left as possible while still detectable by the face detector, then pan it slowly right, slightly changing his facial expression, until it is panned as far right as possible.)
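The stream-length and threshold rules above can be sketched as follows. This is a minimal illustration, not VeriLook code; the per-frame scores and the averaging step are stand-ins for the engine's internal liveness computation:

```python
MIN_STREAM_LENGTH = 10  # the liveness check needs at least 10 consecutive frames

def passes_liveness_check(frame_scores, liveness_threshold):
    """Hypothetical sketch of the liveness decision: the stream must be
    long enough, and its overall liveness score must reach the threshold.
    A threshold of 0 accepts every stream as live.

    frame_scores: per-frame liveness scores (a stand-in for whatever the
    engine actually computes internally)."""
    if len(frame_scores) < MIN_STREAM_LENGTH:
        return False  # the demo reports "Extraction failed" in this case
    stream_score = sum(frame_scores) / len(frame_scores)
    return stream_score >= liveness_threshold
```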


Chapter 5. Application
The VeriLook demo application demonstrates the VeriLook face recognition algorithm using video and still images.

5.1. Main window


The main application window has a four-pane layout: the two top panes are used for image display and the two bottom panes for message logging. The application is controlled via menu commands and two toolbar buttons, which serve as shortcuts for the most frequently used commands.

Figure 5.1. Main application window


Main window panes:

1. Face detection pane: displays video or still images with the result of the face detection algorithm overlaid on the image.
2. Matching/enrollment pane: displays images enrolled in the face database or used for matching.
3. Application log: shows system information and application progress messages.
4. Match results pane: lists the ID of the database subject most similar to the matched image. Subjects are considered similar if their similarity value exceeds the matching threshold set via the Options dialog. This value is displayed in the second list view column.


5.2. Options dialog

Figure 5.2. Options dialog


Face confidence threshold: controls the requirements for face detection. The greater this value, the stricter the rules applied when looking for faces in an image.

Minimum IOD: minimum distance between the eyes.

Maximum IOD: maximum distance between the eyes.

Face quality threshold: controls how strict the rules are when determining the quality of a found face for extraction. If the face quality score does not exceed this threshold, the face is not used for extraction.

Matching threshold: threshold that separates identical and different subjects. The matching threshold is linked to the false acceptance rate (FAR, different subjects erroneously accepted as the same) of the matching algorithm. The higher the threshold, the lower the FAR and the higher the FRR (false rejection rate, same subjects erroneously accepted as different), and vice versa. You can find these thresholds in the "Matching threshold and similarity" table.

Matching attempts: specifies how many times the face database will be searched for a match for each newly detected face. Matching is terminated after the first subject with a similarity value greater than the matching threshold is found.

Use liveness check: controls whether the liveness check is used while matching.

Liveness threshold: controls the requirements for live faces. The greater this value, the stricter the rules applied to check whether the face in an image stream is live. (If this value is set to 0, all faces are considered live.)


Matching stream length: maximum number of frames to process with the face detection algorithm while matching a subject using a video camera. When the liveness check is used, this value must be at least 10 (the recommended range is 10 - 25).

Enroll stream length: maximum number of frames to process with the face detection algorithm while enrolling a subject using a video camera.

Generalization threshold: similarity value that has to be mutually exceeded by each pair of feature templates used for generalization.

Generalization image count: number of images to use for enrollment with generalization.

Save enrolled images*: write images of subjects enrolled in the face database to disk.

Flip video image horizontally: mirror the image received from the video camera horizontally.

File name as record ID: when enrolling still image files, use the file name without extension as the face database record identifier.

Important
* It is recommended to save all enrolled images to allow re-enrolling in case the internal feature template extraction algorithm changes in upcoming versions of the VeriLook SDK.
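The generalization-threshold rule above, that the similarity value must be mutually exceeded by every pair of templates, can be sketched like this. The similarity function is a placeholder for the engine's matcher, which the demo does not expose directly:

```python
from itertools import combinations

def templates_suitable_for_generalization(templates, similarity, threshold):
    """Sketch of the generalization-threshold rule: every pair of feature
    templates must exceed `threshold` similarity, checked in both
    directions since a matcher is not necessarily symmetric.

    `similarity(a, b)` is a placeholder for the engine's matching
    function; it is not part of any public demo API."""
    for a, b in combinations(templates, 2):
        if similarity(a, b) <= threshold or similarity(b, a) <= threshold:
            return False
    return True
```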

5.3. Menu commands


Menu commands:

- Source > Camera name: choose the selected camera as the video source.
- Source > File: select an image file as the source.
- Jobs > Enroll: enroll an image to the face database.
- Jobs > Enroll with generalization: enroll several images as one generalized template to the face database.
- Jobs > Match: search for a matching image in the face database.
- Tools > Face detection preview: view the face detection result overlaid on images.
- Tools > Save image: save an image to disk.
- Tools > Clear logs: clear the application log windows.
- Tools > Empty database: empty the face database.
- Tools > Options: display the options dialog.
- Help > About VeriLook: display information about the VeriLook demo application.

5.4. Simple usage examples


In this section, basic scenarios of using the VeriLook algorithm demo application are described step by step.

Enrolling from camera


1. First, the camera to be used as the capture device must be selected from the "Source" menu in the toolbar. After that, the camera video input should be visible in the upper left pane of the program.

2. Faces found in the video stream are outlined in the captured image by colorful rectangles: the green rectangle outlines the face that best fits the matching requirements of the VeriLook algorithm (this face also has its eyes marked by the program), and yellow rectangles show other faces found in the image.

3. To enroll a face from the video stream, use the "Enroll" button in the toolbar or select "Enroll" from the "Jobs" menu. For this operation to succeed, at least one face must be present in the image. The program will process a few frames, enroll a face from these frames into the demo program database, and show a dialog asking for the ID of the person being enrolled.

Matching from camera


1. First, the camera to be used as the capture device must be selected from the "Source" menu in the toolbar. After that, the camera video input should be visible in the upper left pane of the program.

2. Faces found in the video stream are outlined in the captured image by colorful rectangles: the green rectangle outlines the face that best fits the matching requirements of the VeriLook algorithm (this face also has its eyes marked by the program), and yellow rectangles show other faces found in the image.

3. To identify a face, click the "Match" button or select "Match" from the "Jobs" menu. The face that best suits the matching requirements of the VeriLook algorithm (it should be outlined by a green rectangle in the video input pane) will then be matched against the demo program database, and the most probable candidate will be displayed in the bottom right pane of the program window.

Enrolling from file


1. First, file input must be selected as the capture source from the "Source" menu in the toolbar.

2. To enroll a face from a file, press the "Enroll" button or select "Enroll" from the "Jobs" menu. A file open dialog will appear, in which the file to be opened must be selected. The image in the file will be displayed in the upper left pane of the window, with the found face outlined by a green rectangle (if the rectangle is absent, no face suitable for enrollment was found in the image), and a dialog asking for the ID of the person being enrolled will be shown. The outlined face will be enrolled into the demo program database.

Matching from file


1. First, file input must be selected as the capture source from the "Source" menu in the toolbar.

2. To identify a face from a file, press the "Match" button or select "Match" from the "Jobs" menu. A file open dialog will appear, in which the file to be opened must be selected. The image in the file will be displayed in the upper left pane of the window, with the found face outlined by a green rectangle (if the rectangle is absent, no face suitable for matching was found in the image). The outlined face will be matched against the demo program database, and the most probable candidate will be displayed in the bottom right pane of the window.

Enrolling with generalization


Generalization enables face feature extraction from multiple images of the same person, allowing more details to be precisely extracted and increasing the reliability of matching operations. To perform enrollment using generalization, follow these steps:

1. First, select the desired input, either file or web camera, from the "Source" menu in the toolbar.

2. From the "Jobs" menu in the toolbar, select "Enroll with generalization".

3. If the camera was chosen as the input source, the program will attempt a number of distinct face detections from the video stream. If file input was selected, the program will open a standard file open dialog asking for a number of images of the same person. The number of files the program will ask for, or try to capture from the video stream, is set by "Generalization image count" in the options dialog.

4. After the input images have been captured, the program will process all of them and extract generalized features. The last input image will be displayed in the top left pane of the window with the found face outlined by a green rectangle (if the rectangle is absent, no face suitable for enrollment was found in the images), and a dialog asking for the ID of the person being enrolled will be shown. The template generated from these input images will be enrolled into the program's database.


Chapter 6. Matching threshold and similarity


The VeriLook features matching algorithm provides the similarity value of two feature collections as its result. The higher the similarity, the higher the probability that the feature collections were obtained from the same person. The matching threshold is linked to the false acceptance rate (FAR, different subjects erroneously accepted as the same) of the matching algorithm. The higher the threshold, the lower the FAR and the higher the FRR (false rejection rate, same subjects erroneously accepted as different), and vice versa.

FAR        Threshold
1%         0.625
0.1%       0.650
0.01%      0.675
0.001%     0.700
0.0001%    0.725
0.00001%   0.750
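The table can be used directly as a lookup when configuring the demo. Below is a small sketch; the dictionary simply transcribes the table above, while the helper name is ours, not part of the SDK:

```python
# Matching thresholds from the FAR table above (FAR given as a percentage).
FAR_TO_THRESHOLD = {
    1.0:     0.625,
    0.1:     0.650,
    0.01:    0.675,
    0.001:   0.700,
    0.0001:  0.725,
    0.00001: 0.750,
}

def threshold_for_far(far_percent: float) -> float:
    """Return the matching threshold guaranteeing at most the given FAR.
    Picks the table row whose FAR does not exceed the requested one."""
    eligible = [far for far in FAR_TO_THRESHOLD if far <= far_percent]
    if not eligible:
        raise ValueError("requested FAR is below the table's range")
    # The largest eligible FAR gives the least strict threshold
    # that still meets the requirement.
    return FAR_TO_THRESHOLD[max(eligible)]
```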

In order to improve person identification, multiple matching attempts can be implemented. For details, refer to the demo program source code and documentation.
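One possible shape of such a multiple-attempts loop, mirroring the "Matching attempts" option described earlier: each attempt captures a fresh face and searches the database, stopping at the first hit. The capture and match functions are stand-ins; the real demo uses the VeriLook SDK internally:

```python
def identify(capture_face, database, match, threshold, attempts=3):
    """Sketch of the multiple-matching-attempts idea: capture a new face
    (e.g. from the video stream) up to `attempts` times and search the
    database, terminating at the first subject whose similarity exceeds
    the matching threshold.

    `capture_face()` and `match(features, record)` are placeholders for
    the capture and matching steps; they are not public demo APIs."""
    for _ in range(attempts):
        features = capture_face()  # a new frame yields a new feature template
        for subject_id, record in database.items():
            if match(features, record) > threshold:
                return subject_id  # first match terminates the search
    return None  # no subject exceeded the threshold on any attempt
```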

