EIGENFACES TUTORIAL PDF

Face recognition is the challenge of classifying whose face is in an input image, so we need an existing database of known faces to compare against. The simplest approach, comparing the input image against every face in the database, has several downsides. First of all, if we have a large database of faces, then doing this comparison for each face will take a while!

The cross-platform library sets its focus on real-time image processing and includes patent-free implementations of the latest computer vision algorithms. OpenCV is released under a BSD license, so it is used in academic projects and commercial products alike. OpenCV 2.4 introduced the FaceRecognizer class for face recognition. This document shows you how to perform face recognition with FaceRecognizer in OpenCV, with full source code listings, and gives you an introduction to the algorithms behind it.
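
As a first taste of that API, here is a minimal sketch of training and querying an Eigenfaces model. The cv2.face module ships with the opencv-contrib build, and the file names and tiny training set below are made up for illustration:

```python
import cv2
import numpy as np

# Hypothetical training data: grayscale face crops, all the same size,
# with one integer label per subject.
paths  = ["s1_1.pgm", "s1_2.pgm", "s2_1.pgm"]   # made-up file names
faces  = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths]
labels = np.array([0, 0, 1])

# EigenFaceRecognizer lives in the contrib module (opencv-contrib-python).
model = cv2.face.EigenFaceRecognizer_create()
model.train(faces, labels)

# predict() returns the best-matching label and a distance-based confidence
# (smaller means a closer match).
probe = cv2.imread("unknown.pgm", cv2.IMREAD_GRAYSCALE)
label, confidence = model.predict(probe)
print(label, confidence)
```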

If you have built OpenCV with the samples turned on, chances are good you have them compiled already! All code in this document is released under the BSD license, so feel free to use it for your projects. Experiments in [Tu06] have shown that even one-to-three-day-old babies are able to distinguish between known faces. So how hard could it be for a computer? It turns out we know little about human recognition to date.

Are inner features (eyes, nose, mouth) or outer features (head shape, hairline) used for a successful face recognition? How do we analyze an image, and how does the brain encode it? David Hubel and Torsten Wiesel showed that our brain has specialized nerve cells responding to specific local features of a scene, such as lines, edges, angles or movement.

Automatic face recognition is all about extracting those meaningful features from an image, putting them into a useful representation, and performing some kind of classification on them. Face recognition based on the geometric features of a face is probably the most intuitive approach. One of the first automated face recognition systems was described in [Kanade73]: marker points (position of eyes, ears, nose, ...) were used to build a feature vector. Recognition was performed by calculating the Euclidean distance between the feature vectors of a probe and a reference image.
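
The matching step of such a geometric system is simple enough to sketch. Assuming the feature vectors have already been extracted, the nearest reference by Euclidean distance wins:

```python
import numpy as np

def nearest_reference(probe_vec, reference_vecs):
    """Return the index of the reference feature vector closest to the
    probe (Euclidean distance), i.e. the predicted identity."""
    dists = [np.linalg.norm(probe_vec - ref) for ref in reference_vecs]
    return int(np.argmin(dists))
```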

Such a method is robust against changes in illumination by its nature, but has a huge drawback: the accurate registration of the marker points is complicated, even with state-of-the-art algorithms. Some of the latest work on geometric face recognition was carried out in [Bru92]. A 22-dimensional feature vector was used, and experiments on large datasets have shown that geometrical features alone may not carry enough information for face recognition.

The Eigenfaces method described in [TP91] took a holistic approach to face recognition: a facial image is a point in a high-dimensional image space, and a lower-dimensional representation is found where classification becomes easy. The lower-dimensional subspace is found with Principal Component Analysis, which identifies the axes with maximum variance.
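
In code, this subspace can be computed with OpenCV's PCA directly. A minimal sketch, assuming faces is a list of equally sized grayscale training images of size h × w (those names are not defined in the text above):

```python
import cv2
import numpy as np

# Each row of the data matrix is one vectorized training face.
# `faces`, `h` and `w` are assumed to exist (see the lead-in above).
data = np.asarray([f.flatten() for f in faces], dtype=np.float32)

# PCA finds the axes of maximum variance; reshaped back to h x w,
# the eigenvectors are the "eigenfaces".
mean, eigenvectors = cv2.PCACompute(data, mean=None, maxComponents=10)
eigenfaces = eigenvectors.reshape(-1, h, w)
```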

Imagine a situation where the variance is generated by an external source, let it be the light. The axes with maximum variance do not necessarily contain any discriminative information at all, so classification becomes impossible. To preserve discriminative information, the Fisherfaces method applies a class-specific projection based on Linear Discriminant Analysis: the basic idea is to minimize the variance within a class, while maximizing the variance between the classes at the same time.
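
Formally, in the Fisherfaces formulation of [BHK97], the projection W maximizes the ratio of the between-class scatter S_B to the within-class scatter S_W:

```latex
W_{opt} = \arg\max_{W} \frac{\lvert W^{T} S_{B} W \rvert}{\lvert W^{T} S_{W} W \rvert},
\quad
S_{B} = \sum_{i=1}^{c} N_{i}\,(\mu_{i} - \mu)(\mu_{i} - \mu)^{T},
\quad
S_{W} = \sum_{i=1}^{c} \sum_{x_{j} \in X_{i}} (x_{j} - \mu_{i})(x_{j} - \mu_{i})^{T}
```

Here the data fall into c classes, class i holds N_i samples X_i with mean mu_i, and mu is the total mean.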

Recently, various methods for local feature extraction have emerged. To avoid the high dimensionality of the input data, only local regions of an image are described; the extracted features are hopefully more robust against partial occlusion, illumination changes and small sample sizes.
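
Local Binary Patterns, the basis of OpenCV's LBPH recognizer, are one such local method (named here as a concrete example; the text above does not single one out). A minimal NumPy sketch of the basic 3×3 LBP operator:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 Local Binary Pattern: compare each interior pixel's eight
    neighbours against the centre and pack the results into an 8-bit code."""
    img = np.asarray(img, dtype=np.int32)
    centre = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(centre)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        code |= (neighbour >= centre).astype(np.int32) << bit
    return code
```

LBPH then divides the code image into a grid of regions and concatenates per-region histograms of these codes into the final feature vector.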

In the AT&T Facedatabase, all images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement). The Yale Facedatabase A, also known as Yalefaces, is a more appropriate dataset for initial experiments, because the recognition problem is harder. The database consists of 15 people (14 male, 1 female), each with 11 grayscale images sized 320 × 243 pixels.

There are changes in the light conditions (center light, left light, right light), facial expressions (happy, normal, sad, sleepy, surprised, wink) and glasses (glasses, no-glasses). The original images are not cropped and aligned; please look into the Appendix for a Python script that does the job for you. The Extended Yale Facedatabase B, on the other hand, is in my opinion too large for the experiments I perform in this document.

A first version of the Yale Facedatabase B was used in [BHK97] to see how the Eigenfaces and Fisherfaces methods perform under heavy illumination changes. In the demo applications I have decided to read the images from a very simple CSV file. Why? Because it is the simplest platform-independent approach I can think of. However, if you know a simpler solution, please ping me about it.

Each line of the CSV file is composed of a filename, followed by the separator ;, followed by the label (as an integer number) we assign to the image, e.g. /path/to/image.ext;0. Think of the label as the subject (the person) this image belongs to, so same subjects (persons) should have the same label. You can create the file in an editor of your choice; every sufficiently advanced editor can do this.
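
A minimal sketch of a reader for exactly this filename;label format (the helper name read_csv is mine):

```python
import csv
import cv2

def read_csv(filename, separator=";"):
    """Read lines of the form `/path/to/image.ext;0` into parallel lists
    of grayscale images and integer labels."""
    images, labels = [], []
    with open(filename, newline="") as f:
        for path, label in csv.reader(f, delimiter=separator):
            images.append(cv2.imread(path, cv2.IMREAD_GRAYSCALE))
            labels.append(int(label))
    return images, labels
```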

The problem with the image representation we are given is its high dimensionality. Two-dimensional p × q grayscale images span an m = pq-dimensional vector space, so an image with 100 × 100 pixels lies in a 10,000-dimensional image space already. The question is: are all dimensions equally useful for us? Principal Component Analysis (PCA) was independently proposed by Karl Pearson (1901) and Harold Hotelling (1933) to turn a set of possibly correlated variables into a smaller set of uncorrelated variables. The idea is that a high-dimensional dataset is often described by correlated variables, and therefore only a few meaningful dimensions account for most of the information. The PCA method finds the directions with the greatest variance in the data, called principal components.
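
Written out, the standard PCA recipe on n observations x_1, ..., x_n (our vectorized images) is:

```latex
\mu = \frac{1}{n} \sum_{i=1}^{n} x_{i}
\qquad\text{(mean)}
\\
S = \frac{1}{n} \sum_{i=1}^{n} (x_{i} - \mu)(x_{i} - \mu)^{T}
\qquad\text{(covariance matrix)}
\\
S v_{i} = \lambda_{i} v_{i}
\qquad\text{(eigenvectors and eigenvalues)}
```

The k eigenvectors with the largest eigenvalues become the columns of W; a face x is then projected as y = W^T (x - mu) and approximately reconstructed as x ≈ W y + mu.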

The green line in Figure 2 shows the projection of the point onto the axis.

Calculating PCA weights for a new facial image

As we saw in the previous post, to calculate the principal components of facial data, we convert the facial images into long vectors. Just as a tuple of three numbers represents a point in 3D, we can say that a vector of length 30,000 is a point in a 30,000-dimensional space. The axes of this high-dimensional space are perpendicular to each other, just like the x, y, and z axes of a 3D space are perpendicular to each other. And just like x, y, and z, the principal components (eigenvectors) form a new coordinate system in this high-dimensional space, with the new origin being the mean vector. Given a new image, here is how we can find the weights:

1. Vectorize the image: create a long vector from the image data.
2. Subtract the mean vector.
3. Project: take the dot product of the mean-subtracted vector with each eigenvector; these dot products are the weights.
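
The three steps above in code, as a sketch assuming mean and eigenvectors come from an earlier PCA step (e.g. cv2.PCACompute) with one eigenvector per row:

```python
import numpy as np

def pca_weights(img, mean, eigenvectors):
    """Project a face image onto the principal components: vectorize,
    subtract the mean face, then take one dot product per eigenvector."""
    x = img.flatten().astype(np.float32)   # step 1: vectorize
    x -= mean.flatten()                    # step 2: subtract the mean vector
    return eigenvectors @ x                # step 3: dot products = weights
```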

Eigenface using OpenCV (C++/Python)

History

The eigenface approach began with a search for a low-dimensional representation of face images. Sirovich and Kirby showed that principal component analysis could be used on a collection of face images to form a set of basis features. These basis images, known as eigenpictures, could be linearly combined to reconstruct images in the original training set. The reconstruction error is reduced by increasing the number of eigenpictures; however, the number needed is always chosen less than M, the number of images in the training set. In 1991, M. Turk and A. Pentland expanded these results and presented the eigenface method of face recognition.
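
A sketch of that reconstruction, reusing the mean, eigenvectors and pca_weights names assumed earlier (the rows of eigenvectors play the role of eigenpictures; larger k lowers the reconstruction error):

```python
import numpy as np

def reconstruct(img, mean, eigenvectors, k):
    """Approximate a face as the mean face plus a weighted sum of the
    first k eigenpictures."""
    w = pca_weights(img, mean, eigenvectors[:k])        # k weights
    approx = mean.flatten() + eigenvectors[:k].T @ w    # linear combination
    return approx.reshape(img.shape)
```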
