
Talk:Eigenface


To do

  • handling variations in facial expression and lighting conditions
  • other applications of eigenimaging, e.g. handwriting, voice, medical

As a mathematician, I think the first line is nonsense:

"Eigenfaces are eigenvectors in the high-dimensional vector space of possible faces of human beings."

You can't have eigenvectors without first defining an operator for which they are eigenvectors, see eigenvector. Without an operator, saying that a vector is an eigenvector has exactly zero meaning. - Andre Engels 22:01, 17 May 2004 (UTC)[reply]

I agree with you; eigenfaces are more a basis than eigenvectors.
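
For reference, the operator in question is normally the covariance matrix of the training set: in the usual Turk and Pentland formulation the training faces are flattened to vectors x_1, ..., x_M in R^N, and the eigenfaces are the eigenvectors of their sample covariance (a sketch of the standard definition, not a quote from the article):

    \mu = \frac{1}{M}\sum_{i=1}^{M} x_i, \qquad
    C = \frac{1}{M}\sum_{i=1}^{M} (x_i - \mu)(x_i - \mu)^{\mathsf{T}}, \qquad
    C\, u_j = \lambda_j\, u_j

In practice only the u_j with the largest eigenvalues λ_j are kept as eigenfaces.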

I hope what I've done with that second paragraph is ok. I understand eigenvectors and I've heard of eigenfaces before, but I didn't understand that first paragraph at all. My explanation is very hand-wavy, so feel free to make it more accurate (but keep it readable for non-mathematicians). - G 16:10, 20 May 2004 (UTC)[reply]

I agree; the first paragraph as it now appears is somewhat vague. Maybe I'll do something with it at some point. Michael Hardy 21:34, 20 May 2004 (UTC)[reply]

Number of Eigenfaces


Is the following explanation completely right?

For instance, if we are working with a 100 x 100 image, then this system will create 10,000 eigenfaces. Since most individuals can be identified using a database with a size between 100 and 150, most of the 10,000 can be discarded, and only the most important should remain.

Instead, shouldn't we give an explanation such as:

If we have a set of 100x100 images, and the set contains M such images, we can view each image as a point in a 100x100 = 10,000 = N-dimensional space. Thanks to the eigenface decomposition, we do not need all of these dimensions: any image that already belongs to the set can be represented exactly in terms of M (the number of images) eigenfaces. We can reduce the number of dimensions further to M' with a reasonable loss, where M' can be as low as 0.1M; that is, if we have 300 images with 10,000 pixels each, we can get reasonable results with only 300/10 = 30 eigenvectors.

I am not an expert in the subject, so I did not change the article, but the explanation there looks unclear or incorrect to me.

--spAs 12:41, 17 August 2007 (UTC)[reply]
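
A minimal numerical sketch of the dimension counting proposed above, assuming M = 300 training images of size 100x100; the array names and the use of random data are purely illustrative:

    import numpy as np

    # M training images of size 100x100, each flattened to an N = 10,000-dimensional vector
    M, H, W = 300, 100, 100
    N = H * W
    images = np.random.rand(M, N)          # stand-in for real face data, one image per row

    mean_face = images.mean(axis=0)
    A = images - mean_face                 # mean-subtracted data, shape (M, N)

    # Economy-size SVD: at most M non-trivial eigenfaces exist, never N = 10,000
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    eigenfaces = Vt                        # shape (M, N); row j is the j-th eigenface

    # Keep only M' << M components, e.g. M' = 0.1 * M as suggested above
    M_prime = 30
    weights = A @ eigenfaces[:M_prime].T                  # each image as M' coefficients
    approx = weights @ eigenfaces[:M_prime] + mean_face   # reconstruction from 30 numbers per face

The point is the same as in the comment above: 10,000 is the ambient dimension, while the number of useful eigenfaces is bounded by the number of training images and can be cut down further.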

Number of Eigenfaces


I agree with User:Spas; putting it as "If we are working with a 100x100 image, then this system will have 10,000 dimensions, resulting in 10,000 eigenvectors" would make it clearer for the technically uninitiated. —Preceding unsigned comment added by 122.170.25.146 (talk) 15:51, 18 August 2008 (UTC)[reply]

Issues


I think (I'm no expert) the article has the following problems:

  • It talks about the mean matrix A, as though the images are represented as matrices. In fact, the images are represented as vectors, and you should subtract the mean vector A (although a symbol that reads as a vector rather than a matrix would be more appropriate).
  • Since the dimension of the data (i.e. the collection of image-vectors) is 10,000, computing the covariance matrix directly would cost far too much memory (on the order of 50 GB). The actual algorithm uses a different way to find the eigenvectors (see the sketch below).

146.50.8.103 (talk) 16:16, 20 November 2008 (UTC)[reply]
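
On the second point, the usual workaround (due to Turk and Pentland) is to diagonalize the small M x M matrix A A^T instead of the N x N covariance A^T A, and then map its eigenvectors back up. A rough sketch under illustrative assumptions (random data, placeholder names):

    import numpy as np

    M, N = 300, 10_000
    A = np.random.rand(M, N)          # stand-in: M mean-subtracted images, one per row

    # The full covariance (A.T @ A) / M would be 10,000 x 10,000; this is only 300 x 300
    L = A @ A.T
    eigvals, V = np.linalg.eigh(L)    # eigenvectors v_i of A A^T, eigenvalues ascending

    # If (A A^T) v = lambda v, then (A^T A)(A^T v) = lambda (A^T v),
    # so each small eigenvector lifts to an N-dimensional eigenface
    U = A.T @ V
    U /= np.linalg.norm(U, axis=0)    # normalize each column

    order = np.argsort(eigvals)[::-1] # most significant eigenfaces first
    eigenfaces = U[:, order]

This keeps the memory cost proportional to M^2 rather than N^2, which is presumably the "different way" the comment refers to.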

They are faces, 2-D, not 1-D. To me, a vector is a linear series of terms, whereas a matrix has dimension 2: NxM. Can you clarify? --Ancheta Wis (talk) 16:48, 20 November 2008 (UTC)[reply]
The vector is the set of pixel values in order. If there are 10,000 pixels, you get a 10,000-element vector. The image is not represented in 2D internally; it is just a long series of pixels. Visualizing them in rows as an image has nothing to do with the way the algorithm works. - Rainwarrior (talk) 17:06, 20 November 2008 (UTC)[reply]
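
A tiny illustration of that flattening, in case it helps (array names are arbitrary):

    import numpy as np

    face = np.random.rand(100, 100)       # a 100x100 image, stand-in for a real face
    vec = face.reshape(-1)                # the same pixels as one 10,000-element vector
    assert vec.shape == (10_000,)
    back_to_2d = vec.reshape(100, 100)    # the 2-D layout is only needed for display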

Content of matrix T


In the section Practical Implementation, the matrix T is defined as having the flattened images in its rows, meaning each row represents an image. In the next section, Computing the eigenvectors, this assumption suddenly changes: now the matrix T has the images in its columns. For a reader who is new to the subject (such as me), this can be rather confusing if one doesn't read very attentively. Would it be possible to unify the definitions of T? --131.152.224.31 (talk) 12:40, 3 April 2012 (UTC)[reply]
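
For anyone comparing the two sections, the two conventions differ only by a transpose; with the images already mean-subtracted and S denoting the covariance matrix, the relationship is (sketched here, not quoted from the article):

    \text{rows are images, } T \in \mathbb{R}^{M \times N}: \quad
        S = \tfrac{1}{M}\, T^{\mathsf{T}} T \ (N \times N), \qquad
        \text{small matrix } T T^{\mathsf{T}} \ (M \times M)

    \text{columns are images, } T \in \mathbb{R}^{N \times M}: \quad
        S = \tfrac{1}{M}\, T T^{\mathsf{T}} \ (N \times N), \qquad
        \text{small matrix } T^{\mathsf{T}} T \ (M \times M)

Either convention works, but the article should indeed pick one and use it in both sections.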

Use in facial recognition


The statistics in the Use in facial recognition section are completely meaningless without some sort of context (i.e. how big the data set was). 81.98.54.232 (talk) 19:43, 18 October 2010 (UTC)[reply]