Face Recognition: Eigenface and Fisherface Performance Across Pose
Computer Vision Research in General
Computer vision has recently become a very active research area. Technological improvements in computers and imaging hardware have enabled many novel computer vision applications.
Automated computer vision systems that capture and analyze images have found mostly military applications to date: in surveillance, targeting, and biometrics. In the near future, computer vision is likely to be used in consumer applications such as:
Also, some useful commercial applications that are currently or soon-to-be in development include:
Computer vision makes use of multidisciplinary knowledge from many different fields, including optical physics, machine learning, pattern recognition, signal processing, and computer graphics. It can be useful to divide computer vision research into four levels:
While theoretical knowledge is well-developed for image formation and low-level vision, many mid-level and high-level vision problems lack theoretical descriptions or solutions. Because so much remains to be learned, the next several years should be an exciting time to research computer vision.
As a researcher beginning study in the computer vision field, I wanted to start with an established research topic. The facial recognition problem has been studied extensively, with research beginning in the 1960s and 1970s that was heavily influenced by early works authored by Bledsoe, Kelly, and Kanade.
Broad Problem Statement
Pictorially: given a database of photos and a new picture, can an automatic algorithm be developed that matches the identity of the person in the new picture to a previously stored image in the database?
While the broad goal is appropriate for a research group, the scope must be narrowed a bit for a class project. By assuming that the face detection problem is already solved, we can expect to use pre-processed (in scale, rotation, and alignment) face images as inputs to our face recognition software.
Our (Specific) Problem Statement
Why is Face Recognition Interesting?
Face recognition is interesting to study because it is an application area where computer vision research is being utilized in both military and commercial products. Much effort has been spent on this problem, yet there is still plenty of work to be done.
Basic research related to this field remains active. For example, work toward a fundamental theory describing how light and objects interact to produce images (via the plenoptic function) was published as recently as April 2004. Practical applications often grow out of improvements in theoretical understanding, and face recognition seems likely to continue benefiting from such advances.
Personally, I’m interested in this project because it’s a high-level pattern recognition problem in which humans are very adept, whereas it can be quite challenging to teach a machine to do it. The intermediate and final visual results are interesting to observe in order to understand failures and successes of the various approaches.
State-of-the-art face recognition research has progressed well beyond the eigenface and fisherface approaches we study in this project. The field has developed from the 1960s until now (2004), with many significant advances occurring within the last eight years.
Influential contributions (with our focus emphasized)
For more historical detail, refer to R. Chellappa's survey of this topic's early history, covering the 1960s through 1995.
We have implemented the eigenface and fisherface algorithms and tested them against two face databases, observing results across pose (out-of-plane face rotation). We evaluated performance against databases with both densely-sampled and sparsely-sampled facial poses.
Our new ideas include:
More detail describing the new ideas can be found in the new contributions section.
This section reviews the basic mathematics for the eigenface and fisherface approaches. We use MATLAB-like pseudo-code notation.
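As a concrete illustration of the eigenface computation, here is a minimal sketch in Python/NumPy (rather than our MATLAB implementation) using the Turk-Pentland "snapshot" trick; the function name and array shapes are our own illustrative choices:

```python
import numpy as np

def eigenfaces(X, Mp):
    """Compute the top-Mp eigenfaces from training images.

    X  : (N, d) array, one flattened training image per row
    Mp : number of principal components (eigenfaces) to keep
    """
    mean_face = X.mean(axis=0)
    A = X - mean_face                      # mean-subtracted images, (N, d)
    # Snapshot trick: eigen-decompose the small (N, N) matrix A A^T
    # instead of the huge (d, d) covariance matrix A^T A.
    L = A @ A.T
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:Mp]    # keep the Mp strongest components
    U = A.T @ vecs[:, order]               # map back to image space, (d, Mp)
    U /= np.linalg.norm(U, axis=0)         # unit-norm eigenfaces
    return mean_face, U, vals[order]
```

The returned columns of U form an orthonormal basis for face-space, ordered by eigenvalue strength.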
Results and Comparisons
Facial recognition software was developed using the MATLAB programming language by the MathWorks. This environment was chosen because it easily supports image processing, image visualization, and linear algebra.
The software was tested against two databases: ALAN & UMIST.
The UMIST database images, displayed below, have uniform lighting and poses varying from profile to frontal.
UMIST Results Using Eigenface in Densely Sampled Database
For these results, 20 recognition faces (one for each person) were randomly picked from the database, leaving 545 photos to use as training faces. Mp, the number of principal components to use, was chosen as 20.
The average face, eigenvalue strengths, and the 20 eigenfaces (eigenvectors) corresponding to the 20 strongest eigenvalues are displayed.
Once these eigenfaces have been found, they can be used as a basis for face-space. All of the training images are projected into this face-space and the projection weights are stored as training data.
To recognize a face, it is projected into face-space, producing weights that are then used as features in a nearest-neighbor classifier, which simply finds the minimum Euclidean distance between the recognition weights and the training weights.
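The projection and nearest-neighbor matching step can be sketched as follows (Python/NumPy; `recognize` and its argument names are illustrative, assuming a mean face and eigenface basis computed during training):

```python
import numpy as np

def recognize(x, mean_face, U, train_weights, train_labels):
    """Nearest-neighbor face recognition in face-space.

    x             : flattened probe image, shape (d,)
    mean_face, U  : PCA mean and (d, Mp) eigenface basis from training
    train_weights : (N, Mp) projections of the training images
    train_labels  : length-N identity labels for the training images
    """
    w = U.T @ (x - mean_face)                          # project probe into face-space
    dists = np.linalg.norm(train_weights - w, axis=1)  # Euclidean distances
    nearest = int(np.argmin(dists))
    return train_labels[nearest], dists[nearest]
```

The training weights themselves are produced the same way, by projecting each training image onto the basis U.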
The resulting plots follow. Note that blue text over a face denotes a correct match, while red text indicates an incorrect recognition.
All 20 of 20 images were correctly recognized, confirming the very good performance of eigenface with densely and uniformly sampled inputs. For this same database and setup, fisherface performs very similarly.
UMIST Results Using Fisherface in Sparsely Sampled Database
For these results, 20 recognition faces (one for each person) were randomly picked from the database, then 60 more photos were used as training faces. Three training faces were picked for each person: a frontal, side, and 45-degree view.
The resulting plots follow.
Out of the 20 faces, 16 were correctly classified on the first match. Also notice that this approach is fairly pose invariant: it often (13 times) picks out all 3 training images of the correct person from the database.
For comparison, the next plots show the same setup run using the eigenface algorithm. Note that only 14 of the 20 faces are correctly classified, and eigenface never retrieves all 3 training images of the correct person.
Clearly, the fisherface algorithm performs better under pose variation when only a few samples across pose are available in the training set.
ALAN Database Results Using Eigenface
The following images depict the original captured photos and the faces generated by manual pre-processing.
Pre-processing was used to attempt to remove differences among images in lighting (by normalizing skin tone), scale, alignment, and background.
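The pre-processing here was done by hand, but a crude photometric normalization along the same lines could look like the following sketch (Python/NumPy; the target mean and contrast values are arbitrary illustrative choices, not the values used in our experiments):

```python
import numpy as np

def normalize_face(img, out_mean=0.5, out_std=0.2):
    """Shift a cropped, aligned face image to a common mean intensity and
    contrast -- a simple automated stand-in for manual lighting/skin-tone
    normalization.  Pixel values are assumed to lie in [0, 1]."""
    img = img.astype(float)
    z = (img - img.mean()) / (img.std() + 1e-8)  # zero mean, unit variance
    return np.clip(out_mean + out_std * z, 0.0, 1.0)
```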
For these results, 11 recognition faces (one for each person) were randomly picked from the database, then the remaining 26 photos were used as training faces. The results before and after pre-processing are displayed.
While performance was still not perfect, the pre-processing improved the number of correct classifications from 6 to 9 (of 13). Fisherface performance was similar to eigenface for this database.
Our New Contributions
Eigenface vs. Fisherface for Training Data that is Densely and Sparsely Sampled Across Pose
As the UMIST results show, we found that when only a few training samples across pose are available, fisherface clearly outperforms eigenface. This is not unexpected, because fisherface minimizes the within-class scatter, and pose variation within each identity is exactly such within-class variation. However, this particular point has not been clearly stated in past literature.
Fisherface Algorithm Tweak
In experimenting with the fisherface approach, we also implemented a slightly modified version of the algorithm. Our change tweaks the principal component analysis dimension reduction step in the fisherface approach.
The original algorithm, as detailed by Belhumeur et al., suggests picking the number of principal components to keep in the PCA dimension-reduction step as Mp = N - c, where N is the number of training images and c is the number of unique identities. While this choice of Mp guarantees that the resulting within-class scatter matrix will be non-singular, it is not the only possible choice.
By choosing Mp less than N - c, we can further reduce the dimensionality before employing the fisherface approach. Our rationale is that the “face-space” does not need the higher dimensional eigenvectors to be well represented. This adds flexibility in making the trade between using the strongest principal components versus classifying based on the within-class and between-class scatter.
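A sketch of the fisherface pipeline with Mp exposed as a free parameter follows (Python/NumPy; illustrative, not our original MATLAB implementation). Setting Mp = N - c recovers the standard algorithm, while Mp < N - c applies the tweak described above:

```python
import numpy as np

def fisherfaces(X, labels, Mp=None):
    """PCA followed by Fisher LDA (the fisherface method), with the PCA
    dimension Mp exposed as a knob.

    X      : (N, d) flattened training images
    labels : length-N identity labels
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    N, c = len(X), len(classes)
    if Mp is None:
        Mp = N - c                          # the standard fisherface choice

    # PCA step: reduce to Mp dimensions (snapshot trick as before).
    mean = X.mean(axis=0)
    A = X - mean
    vals, vecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(vals)[::-1][:Mp]
    Wpca = A.T @ vecs[:, order]             # (d, Mp)
    Wpca /= np.linalg.norm(Wpca, axis=0)
    Y = A @ Wpca                            # training data in PCA space, (N, Mp)

    # LDA step: within- and between-class scatter in PCA space.
    Sw = np.zeros((Mp, Mp))
    Sb = np.zeros((Mp, Mp))
    mu = Y.mean(axis=0)
    for k in classes:
        Yk = Y[labels == k]
        muk = Yk.mean(axis=0)
        Sw += (Yk - muk).T @ (Yk - muk)
        diff = (muk - mu)[:, None]
        Sb += len(Yk) * (diff @ diff.T)

    # Solve the generalized eigenproblem Sb w = lambda Sw w.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1][:c - 1]   # at most c-1 useful directions
    Wfld = evecs[:, order].real
    return mean, Wpca @ Wfld                # (d, c-1) fisherface projection
```

Reducing Mp below N - c simply truncates the PCA basis before the scatter matrices are formed, trading some reconstruction fidelity for a smaller, better-conditioned LDA problem.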
This modification should improve upon both fisherface and eigenface by:
More thorough testing on larger databases would be necessary to verify these improvements.
Table comparing eigenface to fisherface (* denotes new result)
We find that both the eigenface and fisherface techniques work very well for a uniformly and densely sampled data set varied over pose. When a more sparse data set across pose is available, the fisherface approach performs better than eigenface.
Further Work Ideas
Given more time to improve and expand our results, we would suggest getting better databases, trying other recognition techniques, and using the formal FERET test methodology. The eventual goal might be a submission to the Face Recognition Vendor Test.
We could augment our results by:
Comments on ECE432
I enjoyed this class, ECE 432 Advanced Computer Vision. Adapting the lecture schedule to better fit the projects people chose was a good idea; it was very helpful to get a detailed explanation of each problem and solution from the professor.
One minor improvement I’d like to see made in the future would be a bit more intermediate feedback on the final project. Maybe some earlier demo days where we show running code to classmates and ask for input in class would be useful.
MATLAB Code and Tools Used
 W. W. Bledsoe, “The model method in facial recognition,” Panoramic Research Inc., Tech. Rep. PRI:15, Palo Alto, CA, 1964.
 M. D. Kelly, “Visual identification of people by computer,” Tech. Rep. AI-130, Stanford AI Proj., Stanford, CA, 1970.
 T. Kanade, Computer Recognition of Human Faces. Basel and Stuttgart: Birkhauser, 1977.
 M. Turk and A. Pentland, “Eigenfaces for recognition,” J. Cognitive Neuroscience, vol. 3, no. 1, 1991.
 M. Turk and A. Pentland, “Face recognition using eigenfaces,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 1991, pp. 586-591.
 P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. fisherfaces: recognition using class specific linear projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
 R. Gross, I. Matthews, and S. Baker, "Appearance-based face recognition and light-fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 4, pp. 449-465, April 2004.
 R. Chellappa, C. L. Wilson, and S. Sirohey, “Human and machine recognition of faces: A survey,” Proc. IEEE, vol. 83, pp. 705-740, 1995.
 R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification. New York: Wiley, 2001.
 D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach. Upper Saddle River, NJ: Prentice Hall, 2003.