
Digital Image Computing: Techniques and Applications

3D Face Reconstruction from 2D Images


A Survey

W.N. Widanagamaachchi (wathsy31@gmail.com)          A.T. Dharmaratne (atd@ucsc.cmb.ac.lk)
University of Colombo School of Computing, 35, Reid Avenue, Colombo 7, Sri Lanka.

Abstract

This paper surveys the topic of 3D face reconstruction using 2D images from a computer science perspective. Various approaches have been proposed as solutions to this problem, but most have their limitations and drawbacks. Shape from shading, shape from silhouettes, shape from motion and analysis by synthesis using morphable models are currently regarded as the main methods of attaining the facial information needed to reconstruct its 3D counterpart. Though this topic has gained considerable importance and popularity, a fully accurate facial reconstruction mechanism has not yet been identified, due to the complexity and ambiguity involved. This paper discusses the general approaches to 3D face reconstruction and their drawbacks, and concludes with an analysis of several implementations and some speculations about the future of 3D face reconstruction.

1 Introduction

Humans can perceive the 3D (3 Dimensional) shape of a 2D (2 Dimensional) image just by looking at it, even if the object in the image is completely new to the eye. The human brain plays a vital role in obtaining this 3D world through 2D images. After noticing the 2D image, the human eye signals the brain about the object through a nerve signal; after processing that signal, the brain creates the 3D shape of the 2D object. The appearance of the object, familiarity with the shapes of similar 3D objects and other such factors assist in creating the aforementioned 3D shape [10, 2]. Though this is an unconscious act for humans, when it is simulated with computers, efficient and effective ways have to be explored for identifying the object features that assist the reconstruction of the 3D face. This makes 3D shape reconstruction from 2D images a complex and problematic area.

The topic of 3D face reconstruction from 2D images has been derived and studied separately from the more general area of 3D shape reconstruction due to its depth and complexity.

Techniques for attaining facial information for 3D reconstruction are broadly categorized into three groups, namely pure image-based techniques, hybrid image-based techniques and 3D scanning techniques. Pure image-based techniques perform the reconstruction using only 2D images, without estimating the real 3D structure. In hybrid image-based techniques, both approximations and the data gained from images are used in the reconstruction process. 3D scanning techniques have the capability to capture the complete 3D structure, since scanned images provide both geometry and texture information of the face.

The human face is difficult to model even with normal 3D modeling software; hence the task of reconstructing it from features gained from 2D images, and making the result realistic and accurate, is without doubt even more intricate. The individual shape of and variation in the human face, the varying reflectance properties of the skin and the actual depth estimation of face components add to that intricacy [7]. Consequently, this topic has become one of the fundamental problems in computer vision at present [10].

The need for 3D face reconstruction has grown in applications like virtual reality simulations, plastic surgery simulations, face recognition, face morphing, 3D games, human-computer interaction and animations [7, 6]. Though extensive research has been carried out, a fully accurate facial reconstruction mechanism has not yet been proposed [8].

Early work on facial reconstruction focused only on producing realistic faces; today, however, accurate reconstructions for facial plastic surgery simulations and fast, simpler reconstructions for 3D games have also become a necessity.

The rest of the report is organized as follows. Section 2 provides a detailed description of the general approaches for 3D face reconstruction from 2D images.

In spite of the varied differences between implementations, there are some preliminary steps which should be included in any such reconstruction process. Section 3 is devoted to describing those steps. The limitations and complications faced in 3D face reconstruction are summarized in section 4. Though there are numerous reconstruction techniques, section 5 focuses only on a chosen few to highlight their divergent approaches. Finally, section 6 concludes with some speculations about the future of 3D face reconstruction from 2D images.

2 General Approaches

There are many approaches for reconstructing 3D faces, but the choice of approach may vary according to the application for which the reconstruction is used. The general approaches are shape from shading, shape from silhouettes, shape from motion and analysis by synthesis using morphable models [1].

The most successful approach to date is analysis by synthesis, in which the parameters of a 3D statistical model are adjusted to improve the agreement between the reconstructed face and the 2D face image. The errors in this approach are caused by 3D-2D alignment, shape differences, illumination differences and the quality of the dense correspondence among the 3D surfaces [1].

Despite the advances in depth estimation, shape from shading remains important because it sidesteps most of the shortcomings of depth estimation. Algorithms for recovering shape from shading are generally considered to yield very good results with global minimization, while local approaches are more erroneous but faster [7]. The Tsai-Shah algorithm used by Fanany et al. [7] is an example of the local approach.

A silhouette is an outline, shape or shadow of an object. Silhouettes provide accurate and robust data for reconstructions, since they depend only on the shape and pose of the object and are illumination-independent. These silhouettes, if extracted from the input images, provide accurate data for the reconstruction process. Both the Samaras et al. [14] approach and the Lee et al. [12] approach use silhouette images to recover shape.

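The survey does not describe how the silhouettes themselves are obtained in [14] or [12]; purely as an illustration, a minimal background-differencing sketch (assuming a known, static background image and an arbitrarily chosen threshold) could look like this:

import numpy as np

def extract_silhouette(image, background, threshold=30.0):
    # Binary silhouette: True wherever the image differs from the known
    # background by more than the (illustrative) threshold.
    diff = np.abs(image.astype(np.float64) - background.astype(np.float64))
    if diff.ndim == 3:               # colour input: take the largest channel difference
        diff = diff.max(axis=2)
    return diff > threshold
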
Just as humans use prior knowledge of similar objects to perceive 3D images, in a computer implementation a database and/or a generic 3D face model can be used as prior knowledge [10]. Normally, face images and/or depth maps and texture information are stored in such databases. The input image and these stored images are compared, and the corresponding images are exploited in determining the facial components of the input image. The depth maps of these corresponding images assist in estimating the depth of the face components.

However, it is very unlikely that the input image contains a face which resides in the database. Even if the input image contains such a face, it can have different lighting and viewing conditions. Therefore, techniques to exploit the stored images to produce novel 3D faces, and techniques to reconstruct a face under different lighting conditions, should be thoroughly explored.

The human face has a basic structure with features such as the nose, mouth and eyes, but within these features there are minor differences which make a person unique. Researchers have designated around 150 feature points (figure 1) that can be used to capture these minor differences [4, 13]. Approaches which use these feature points can perform automatic or user-driven feature point extraction.

As a result of recent research conducted by Microsoft [11], software was produced which automatically locates 83 feature points of the face, but the input image has some limitations: the image should be a frontal face with a neutral expression, taken under normal illumination.

Figure 1. Feature Points. [13]

Since the details of the face are extracted from input images, many considerations have to be made in deciding the number of images required and the viewpoint of those images. Some argue that implementations based on multiple images are more likely to obtain accurate reconstructions, since more data about the face can be grasped.

Compared with face images taken from arbitrary viewpoints, the frontal image captures all the face features. For this reason, most implementations based on a single image require the image to be a frontal face with a neutral expression.

Birkbeck et al. [3] took an approach which rotates the person on a turntable to acquire a set of images from different viewpoints, while Gong et al. [8] took images across views from minus 90 to plus 90 degrees at 10 degree increments, with the camera adjusted according to a magnetic sensor attached to the head.

In the Birkbeck et al. [3] approach, all steps from image capturing to 3D face reconstruction are performed through a GUI (Graphical User Interface) program. The shape is obtained from silhouettes, and the texture is generated with the use of conformal mapping to reduce the distortion which occurs when 3D surfaces are flattened into 2D space. At the time of rendering, the correct texture for each viewpoint is modulated from the textures.

Rasiwasia [13] took only two images into consideration - a frontal and a profile view. Since limitations on the input image's viewpoint cause inflexibility, researchers have recently focused on reconstructing faces from a single 2D image, where the image has no limitation in pose or expression and can be taken from an arbitrary viewpoint. Guan's [9] approach provides useful groundwork in that region.

3 Steps in a regular 3D face reconstruction approach

After considering all these approaches, a set of general steps can be derived which will be included in a regular 3D face reconstruction algorithm. The following is a list of the identified steps.

• Repairing the damaged areas (caused by noise, occlusion or shadows)

The input image's condition might not always be satisfactory; it may be damaged or corrupted. Noise pixels in the image, if they exist, might lead to inaccurate reconstructions. Shadows, poor lighting conditions and occlusions prevent accurate feature extraction of the face. For these reasons, these damaged areas need to be eliminated prior to reconstruction.

• Face localization

A few approaches, like Rasiwasia's method [13], involve predefined restrictions on the input images. Although these restrictions introduce inflexibility, they reduce the complexity and preclude other face localization difficulties.

Since input images in non-restricted approaches may contain other background elements apart from the human face, the face region should be identified and cropped. The distinctive color of human skin can be used as a guide in identifying the face region. This process is labeled as face localization.

In approaches where multiple images are taken as input, each input image has to be cut and resized to obtain the face regions. In addition, all these obtained image parts should be precisely aligned with each other.

• Facial component detection

After the face region is isolated, the components of the face can be easily identified. Image-based techniques, silhouettes and feature points can be used to detect these facial components. In identifying these facial components, recognizing the two corners of the eyes, the tip of the nose and the center and end points of the mouth would prove enough.

• Depth estimation

For an accurate and realistic reconstruction, both the location and the depth of the facial features of the reconstructed face should be equivalent to those of the real face. Constructing the depth map of the input image will assist in depth estimation.

• 3D face reconstruction

After the face components' locations and depths are identified, the 3D face can be reconstructed. A default 3D model can be deformed according to the real features to obtain the final 3D face. The texture should then be mapped onto the 3D face. This is an intricate process, since texture information gained from 2D space has to be mapped onto a 3D surface. Some approaches project the frontal image directly onto the 3D face, but if the approach takes multiple input images these images can be warped into the texture space to generate a more realistic effect. The above-mentioned Microsoft approach [11] projects the frontal image directly onto the 3D face, while Birkbeck et al. [3] warp the input images to the texture space. (A minimal sketch of the direct frontal projection is given after this list.)

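The surveyed papers do not give the projection step in detail; the following minimal sketch assumes an orthographic camera and a deformed mesh whose X and Y coordinates are already expressed in pixel units of the frontal image, which is a simplification (occlusion handling is ignored here).

import numpy as np

def project_frontal_texture(vertices, frontal_image):
    # Assign each mesh vertex the colour of the frontal-image pixel it projects
    # onto orthographically (X -> column, Y -> row).
    h, w = frontal_image.shape[:2]
    colors = np.zeros((len(vertices), 3), dtype=frontal_image.dtype)
    for i, (x, y, _z) in enumerate(vertices):
        col = int(np.clip(round(x), 0, w - 1))
        row = int(np.clip(round(y), 0, h - 1))
        colors[i] = frontal_image[row, col]
    return colors
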
4 Difficulties in 3D face reconstruction from 2D images

The uncertainty which lies in facial component detection can be eliminated by using multiple images, but it might not always be possible to attain that many images. Even if multiple images are available, factors like noise, occlusion, shadows and/or a lack of features in the images might prevent the system from using them. To make matters worse, multiple images might make the problem of time and effort even more obvious. The time issue is mainly caused by the pre-processing phase required.

As a result, most researchers' attention has narrowed down to single-image-based 3D face reconstructions. One image of a face does not provide sufficient information for a 3D reconstruction, even if it is a frontal image. If the implementation has limitations on viewpoint, the input image may not even contain all the facial components.

The human face belongs to a particular class of similar objects. This class can be used in making inferences about the human face to assist in generating other views of the face in the aforementioned circumstance. A database which is maintained within the implementation can facilitate making these inferences.

In maintaining a database, the main dilemma lies in deciding its size.

Unless the input 2D image's viewing conditions are known in advance, images of each face taken under different lighting and viewing conditions have to be stored; but large storage requirements, an increased probability of false matching and slower reconstructions make this option rather impractical. Basri and Hassner [2] presented a novel solution which answers this problem.

Feature points are a popular means of facial component detection, but using countless feature points in an application can lead to inefficiencies in the computational time taken. Therefore, approaches that involve a smaller number of feature points have gained recognition; the Blanz et al. [4] approach is one example. In recovering 3D facial information from multiple images, the relationship between feature points in different viewpoints should be maintained.

5 Recent work

Blanz et al. [4] put forth a reconstruction approach based on a small set of feature points, a reference face and a database. The locations of the feature points are set in the reference face so that it can be used to automatically extract feature points from the input image. Additional feature points are used for texture reconstruction. The reconstruction is carried out by merging the stored shapes and textures in the database to correspond to the positions and gray values of the actual feature points.

Since 2D shape information and texture information are considered individually, the reconstruction process has two alternatives for the texture of the 3D face - the standard texture of the reference face or the true texture. An 'x by x' mask is applied at each point and the mid value is taken as the texture information, in the hope of reducing errors caused by noise. In the experiments, 22 shape reconstruction feature points, 3 texture reconstruction feature points and a '3 by 3' mask have been used.

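The phrase 'mid value' is not defined in the survey; assuming it denotes the median gray value under the mask, this sampling step might be sketched as follows (the 3 by 3 mask matches the experiments mentioned above).

import numpy as np

def sample_texture_value(gray_image, x, y, mask_size=3):
    # Median gray value in a mask_size x mask_size window centred on the
    # feature point (x, y) -- an assumed reading of the 'mid value'.
    half = mask_size // 2
    h, w = gray_image.shape
    y0, y1 = max(0, y - half), min(h, y + half + 1)
    x0, x1 = max(0, x - half), min(w, x + half + 1)
    return float(np.median(gray_image[y0:y1, x0:x1]))
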
The limitation that the face should not have glasses, earrings or a beard is a setback of this approach. The resolution of the images is limited to 256 x 256 pixels, and colored images are converted to 8-bit gray-level images.

Basri and Hassner [2] present a MATLAB-based solution with an underlying database which has an update mechanism. The images are partitioned into classes under the assumption that similar-looking objects have similar shapes (e.g. fish, faces), and the database is created by storing these images in the same class along with their depth maps. Since the input image's viewing conditions are not known in advance, they store images of the same object under different viewing conditions in the database. Though this in principle results in an infinite example database, the problems which arise from it are eliminated by the use of an update scheme. Starting with an initial seed of the database, it updates on-the-fly during processing in such a way that the least-used examples are replaced with more suitable 3D objects with better viewing conditions. As a result, only a small, relevant subset of the database is accessible to a user at any given time.

In performing the depth estimation of the face, parts of the image are compared with the image parts in the database to match the intensity patterns (figure 2). The found intensity patterns are taken as the initial guess for the face's depth, and later a global optimization scheme is applied for depth refinement. When using a Pentium 4, 2.8 GHz computer with 2 GB RAM, for a 200 x 150 pixel image with 12 example images at a given time, the running time of this application is around 40 minutes.

The ability to handle a large database, and its applicability to a variety of objects irrespective of their viewing and lighting conditions, make this a successful approach.

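Basri and Hassner's algorithm is only summarized in the survey; the sketch below merely illustrates the idea of copying the depth of the best-matching example patch as an initial per-pixel guess. The patch size, the SSD similarity measure and the omission of the later global refinement are all simplifications, and the `examples` structure is invented for the illustration.

import numpy as np

def initial_depth_guess(image, examples, patch=5):
    # For every pixel, find the example patch whose intensity pattern is
    # closest (sum of squared differences) and copy its centre depth.
    # `examples` is a list of (intensity_patch, centre_depth) pairs whose
    # patches have the same size as the query window.
    half = patch // 2
    h, w = image.shape
    depth = np.zeros((h, w))
    for r in range(half, h - half):
        for c in range(half, w - half):
            window = image[r - half:r + half + 1, c - half:c + half + 1]
            ssd = [np.sum((window - ex) ** 2) for ex, _ in examples]
            depth[r, c] = examples[int(np.argmin(ssd))][1]
    return depth   # a global optimization pass would then refine this guess
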
Figure 2. Visualization of the Process. [2]

Fanany et al. [7] present a neural-network learning scheme for 3D face reconstruction. This system adjusts the polygon vertex parameters of an initial 3D shape based on depth maps of several images taken from multiple views. These depth maps are obtained with the Tsai-Shah shape-from-shading (SFS) algorithm. An appropriate initial 3D shape should be selected in order to improve model resolution and learning stability. The texturing is performed by mapping the texture of the face images onto the initial 3D shape.

The NN (Neural Network) scheme can store the vertices of a 3D polygonal object shape. These vertices of the object in 3D space are updated by the use of error back-propagation after comparing the projected images with the real images. Since the NN can generate only flat projected polygonal models as its output, a Gouraud smooth shading module is added to post-process the output of the NN. Hence the whole scheme is named the Smooth Projected Polygon Representation Neural Network (SPPRNN).

Vertex, color and camera are the three parameters of the projected polygon representation NN.

The Tsai-Shah SFS algorithm processes both the input images and the NN output images in order to reconstruct the 3D face based on the depth maps. These depth maps are considered partial 3D shapes rather than images.

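The Tsai-Shah algorithm itself is not reproduced in the survey; the following is only a toy, single-light-source illustration of the per-pixel Newton-style iteration behind that family of shape-from-shading methods, and it glosses over the discretization and stabilisation details of the published algorithm.

import numpy as np

def sfs_depth_map(E, light=(0.05, 0.05, 1.0), iterations=20):
    # Toy Tsai-Shah-style shape from shading: iteratively adjust the depth
    # map Z so that the Lambertian reflectance of its discrete gradients
    # matches the observed intensities E (2-D array scaled to [0, 1]).
    sx, sy, sz = np.asarray(light, dtype=float) / np.linalg.norm(light)
    Z = np.zeros_like(E, dtype=float)
    for _ in range(iterations):
        p = Z - np.roll(Z, 1, axis=1)            # discrete gradient along x
        q = Z - np.roll(Z, 1, axis=0)            # discrete gradient along y
        norm = np.sqrt(1.0 + p ** 2 + q ** 2)
        R = (sz - p * sx - q * sy) / norm        # Lambertian reflectance
        f = E - R                                # per-pixel brightness error
        dR_dp = -sx / norm - p * R / norm ** 2   # analytic partial derivatives
        dR_dq = -sy / norm - q * R / norm ** 2
        df_dZ = -(dR_dp + dR_dq)
        # regularised Newton step, so a vanishing derivative cannot blow up
        Z = Z - f * df_dZ / (df_dZ ** 2 + 1e-4)
    return Z
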
In the Samaras et al. [14] approach, 3D shape is extracted from multi-posed face images taken under arbitrary lighting, and the reconstruction process uses silhouette images. The accuracy of this reconstruction process depends on the number and location of the cameras used to capture the input images. A 3D face model is used as prior knowledge to assist in the reconstruction process.

The 3D face model is constructed from a set of 3D faces attained from 3D scanning technologies. The shape and pose parameters are estimated by minimizing the difference between the face model and the input images. Later, the illumination and spherical harmonic basis parameters are extracted from the recovered 3D shape.

Figure 3. Silhouette Extraction [14]

Rasiwasia [13] presents a simple and easily understood approach based on two orthogonal pictures - a frontal view and a profile view. The input images can be obtained with a stereo camera or a hand-held camera, but with the constraint of being taken in normal white light against a background free from any skin-colored objects. 35 feature points and a generic model are used in this reconstruction process. The complete system is implemented in MATLAB.

The user is asked to indicate four specific points in each image - eye, nose, mouth and ear. The transformations for aligning the two images are calculated based on those points. When aligning, the images are scaled, rotated and translated till the frontal and profile images are in a horizontal line.

θ = sin⁻¹(A / √(B² + C²)) − tan⁻¹(C / B)     (1)

Theta in (1) is the angle by which the profile image needs to be rotated, where:

A = desired Y difference calculated from the ear and nose points in the frontal image
B = actual X difference between the ear and nose in the profile image
C = actual Y difference between the ear and nose in the profile image

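For illustration, equation (1) can be evaluated directly once A, B and C have been measured; the numbers below are invented purely for the example.

import math

def profile_rotation_angle(A, B, C):
    # Equation (1): rotation needed so that the ear-to-nose Y difference in the
    # profile image matches the desired difference A measured in the frontal image.
    return math.asin(A / math.sqrt(B ** 2 + C ** 2)) - math.atan(C / B)

# Made-up pixel measurements: desired Y difference 40, actual X and Y
# differences between the ear and nose in the profile image 120 and 30.
theta = profile_rotation_angle(40.0, 120.0, 30.0)
print(math.degrees(theta))      # about 4.8 degrees for these numbers
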
The distinctive color of human skin is used to identify the face region within the image. A pixel's (R, G, B) value is classified as skin if it satisfies the following conditions:

R > 95 and G > 40 and B > 20 and
max{R, G, B} − min{R, G, B} > 15 and
|R − G| > 15 and
R − G > 20 and R − B > 20

Figure 4. Skin Detection [13]

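These conditions transcribe directly into a per-pixel test; a sketch:

def is_skin(r, g, b):
    # Skin classification rule exactly as listed above (per-pixel RGB test).
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15
            and r - g > 20 and r - b > 20)
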
In extracting the feature points, pure image-based techniques are used. The X and Y coordinates (Xf, Yf) of a feature point can be obtained from the frontal image, while the Z coordinate, along with the Y coordinate, (Yp, Zp) can be attained from the profile image. Since the images are aligned, both Y coordinates are approximately the same.

The final coordinates of all 35 feature points can therefore be obtained using (2):

(Xf[i], (Yf[i] + Yp[i]) / 2, Zp[i])   where i = 1, 2, ..., 35     (2)

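Merging the two aligned views according to (2) is a one-line operation per feature point; a sketch:

def merge_feature_points(frontal_pts, profile_pts):
    # Build 3-D feature points from aligned frontal (Xf, Yf) and profile
    # (Yp, Zp) measurements, following equation (2).
    return [(xf, (yf + yp) / 2.0, zp)
            for (xf, yf), (yp, zp) in zip(frontal_pts, profile_pts)]
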
Figure 5. Generic Eye Template [13]

A template matching algorithm (figure 5) and the Prewitt operator are used in extracting the feature points of the eyes from the frontal image, while horizontal and vertical histograms are used to detect the location of the mouth. After the feature points of the eyes have been extracted, a rectangular region (figure 6) is cropped out of the frontal face. This rectangular region's left and right boundaries are the farthest points of the eyes, and its upper boundary is the lower part of the eyes. The horizontal histogram is computed over this cropped region, and the first peak from the top above a certain threshold is used to identify the location of the mouth.

The center of the mouth is identified by drawing a vertical histogram in this localized mouth region.

Figure 6. Rectangular Region and the Horizontal Histogram for Mouth [13]

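Rasiwasia's exact implementation is not given in the survey; a rough numpy sketch of the histogram idea (row sums over the cropped region, first row above a threshold for the mouth, then a column histogram for the mouth centre) is shown below, with the threshold left as a free parameter.

import numpy as np

def locate_mouth(cropped_gray, threshold):
    # Rough sketch of the histogram-based mouth localization described above.
    # `cropped_gray` is the region below the eyes as a 2-D intensity array.
    row_hist = cropped_gray.sum(axis=1)            # horizontal histogram
    rows_above = np.where(row_hist > threshold)[0]
    if len(rows_above) == 0:
        return None
    mouth_row = int(rows_above[0])                 # first peak from the top
    band = cropped_gray[max(0, mouth_row - 5):mouth_row + 5]
    mouth_col = int(np.argmax(band.sum(axis=0)))   # vertical histogram peak
    return mouth_row, mouth_col
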
Though all 35 feature points can be automatically identified, at the end of the extraction process this method offers the capability for user modifications if required. The feature points that are found are then used to deform the generic model. This deformation is done in two steps - globally and locally. Finally, the texturing of the face is performed using the frontal image in such a manner that the actual features in the reconstructed face overlap with the features in the frontal image.

The following image (figure 7) presents some faces reconstructed with this approach.

Figure 7. Example Reconstructed Faces [13]

Recently, an automatic reconstruction based on a 3D generic face and a single image (irrespective of pose and expression) was presented by Guan [9]. The only condition imposed on the face image is that the head rotation lie in the interval of -30 to +30 degrees. This method is said to reconstruct 3D faces with standard and low-cost equipment. The features extracted from the images serve as geometric information which helps in deforming the 3D generic face. The feature points are detected by using Euclidean angles. It is assumed that the head is not rotated with respect to the X axis.

The texturing of the face (figure 8) is performed by orthogonally projecting the 2D images onto the 3D face. When the 2D image is orthogonally projected to form the texture, some vertices receive no corresponding color since they are occluded. Those vertices generate blank areas in the texture. As a result, a thin-plate relaxation method is used to interpolate those blank areas from the known colors.

Figure 8. 3D face reconstruction with an open mouth expression [9]

Gong et al. [8] put forth a multi-view nonlinear shape model which is 2D view-dependent but has no reference to 3D structures. They use a Kernel PCA (Principal Components Analysis) based on Support Vector Machines for nonlinear shape model transformation.

This method has found remedies for two main drawbacks which occur because of the large pose variations of the human face. Nonlinear shape transformations across views using Kernel PCA based on support vector machines are used to address the first problem, the highly nonlinear shape variation across views. The second drawback, the unreliable relationships among feature points across views (based solely on local gray-levels), is addressed by improving a nonlinear 2D active shape model with a pose constraint.

Figure 9. Shapes fitted to Images of an unknown face across Views using the view-context based nonlinear ASM (Active Shape Models) [8]

Darrell et al. [5] present a method based on cubical ray projection. This algorithm uses a novel data structure named the 'linked voxel space'. A voxel space is used to maintain an intermediate representation of the final 3D model. Since the connectivity of the meshes cannot be represented and converting a volumetric model to a mesh is difficult, a linked voxel space is used instead of a plain voxel space.

First, the 3D views obtained from stereo cameras are registered using a gradient-based registration algorithm. The result of this registration is a 3D mesh where each vertex corresponds to a valid image pixel. The location of each vertex in the mesh is calculated and mapped into a voxel. This voxel space is then reduced using a cubic ray projection merging algorithm, which merges the voxels that fall on the same projection ray.

Since this method uses stereo cameras to obtain synchronized range and intensity 3D views, texture alignment might not be a necessity.

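The survey gives only the outline of Darrell et al.'s data structure; purely as a toy illustration, a linked voxel that keeps explicit connectivity to its mesh neighbours, and a ray-merging step that combines those links, might be sketched as follows (all field and function names are invented for the example).

from dataclasses import dataclass, field

@dataclass(eq=False)          # identity comparison, so linked cells can reference each other
class LinkedVoxel:
    # A voxel that, unlike a plain occupancy voxel, remembers which
    # neighbouring voxels it is connected to in the mesh.
    x: int
    y: int
    z: int
    color: tuple = (0, 0, 0)
    neighbors: list = field(default_factory=list)

def merge_along_ray(voxels_on_ray):
    # Sketch of the cubic-ray-projection reduction: voxels falling on the same
    # projection ray are collapsed into one, and their links are combined.
    merged = voxels_on_ray[0]
    for v in voxels_on_ray[1:]:
        merged.neighbors.extend(n for n in v.neighbors if n not in merged.neighbors)
    return merged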

Figure 10. Final 3D mesh viewed from different directions [5]

6 Conclusion

The 2D image of a face is very sensitive to changes in head pose and expression, so a successful reconstruction approach should be able to extract the face details in spite of these changes. Approaches based on silhouettes and prior knowledge can be advantageous in addressing this problem. When reconstructing 3D faces from 2D images, the key sources of information are the intensity-based features and landmarks of the image. But intensity alone is not enough when low intensity, noise, occlusion, illumination variations and/or shadows are present in the input images. Anatomical landmarks are argued to be a more accurate source of information, but they are rather thin and difficult to locate.

Most traditional face reconstructions require a special setup, expensive hardware, predefined conditions and/or manual labor, which make them impractical for use in general applications. Though recent approaches have triumphed over some of these setbacks, quality and speed are still not up to the expected levels. More realistic 3D character modeling software could be used in reconstructing the final 3D face, or the default 3D model could be created with such software.

Strategies like supervised and unsupervised learning in neural networks can be applied to facial component identification. Fuzzy systems can be used in feature extraction processes for a more fruitful result.

Prior knowledge of a face under different viewing and lighting conditions can be stored in a database with efficient update schemes, which would eliminate the uncertainty involved in reconstruction from a single arbitrary image. The recent successful approaches should be continued and refined to adhere to the changing requirements of modern society. Limitations like not having a beard and not wearing earrings or glasses should also be eliminated.

Most present reconstructions are limited to reconstructing just the front area of the face. These reconstructions should be extended to reconstruct a face with realistic hair and ears. When an arbitrary image is given, the system should be able to draw the necessary inferences to obtain other views of the face.

The topic of 3D face reconstruction from 2D images has retained its significance in the computing world, and with recent developments, applications like human expression analysis and video conferencing have been added to the long list of its applications. Virtual hair and beauty salons are one future application where 3D reconstructed faces will prove valuable: having the opportunity of viewing the aftermath of a haircut or a facial before even getting it, and sometimes even viewing the face of a long-gone person, is without doubt a priceless reward. 3D face reconstruction can also be extended to produce aging software which has the capability to produce a younger or older face from the input image.

References

[1] S. Amin and D. Gillies. Analysis of 3D face reconstruction. In Proceedings of the 14th IEEE International Conference on Image Analysis and Processing, 2007.
[2] R. Basri and T. Hassner. Example based 3D reconstruction from single 2D images.
[3] N. Birkbeck, D. Cobzas, M. Jagersand, A. Rachmielowski, and K. Yerex. Quick and easy capture of 3D object models from 2D images.
[4] V. Blanz, B. Hwang, S. Lee, and T. Vetter. Face reconstruction from a small number of feature points.
[5] T. Darrell, L. Morency, and A. Rahimi. Fast 3D model acquisition from stereo images.
[6] E. Elyan and H. Ugail. Reconstruction of 3D human facial images using partial differential equations. Journal of Computers, 2(8), 2007.
[7] M. Fanany, I. Kumazawa, and M. Ohno. Face reconstruction from shading using smooth projected polygon representation NN.
[8] S. Gong, A. Psarrou, and S. Romdhani. A multi-view nonlinear active shape model using kernel PCA. BMVC99, pages 483-492.
[9] Y. Guan. Automatic 3D face reconstruction based on single 2D image. In Proceedings of the IEEE International Conference on Multimedia and Ubiquitous Engineering, 2007.
[10] F. Han and S. Zhu. Bayesian reconstruction of 3D shapes and scenes from a single image.
[11] Y. Hu, D. Jiang, S. Yan, H. Zhang, and L. Zhang. Automatic 3D reconstruction for face recognition. Journal of Pattern Recognition.
[12] J. Lee, R. Machiraju, B. Moghaddam, and H. Pfister. Silhouette-based 3D face shape recovery. Graphics Interface, 2003.
[13] N. Rasiwasia. The Avatar: 3-D face reconstruction from two orthogonal pictures with application to facial makeover.
[14] D. Samaras, S. Wang, and L. Zhang. Face reconstruction across different poses and arbitrary illumination conditions. AVBPA, LNCS, pages 91-101, 2005.