
What is the difference between the following two functions?

prepTransform.m

function [mu, trmx] = prepTransform(tvec, comp_count)
% Computes transformation matrix to PCA space
% tvec - training set (one row represents one sample)
% comp_count - count of principal components in the final space
% mu - mean value of the training set
% trmx - transformation matrix to comp_count-dimensional PCA space

% this is memory-hungry version
% commented out is the version proper for Win32 environment

tic;
mu = mean(tvec);
cmx = cov(tvec); 

%cmx = zeros(size(tvec,2));
%f1 = zeros(size(tvec,1), 1);
%f2 = zeros(size(tvec,1), 1);
%for i=1:size(tvec,2)
%  f1(:,1) = tvec(:,i) - repmat(mu(i), size(tvec,1), 1);
%  cmx(i, i) = f1' * f1;
%  for j=i+1:size(tvec,2)
%    f2(:,1) = tvec(:,j) - repmat(mu(j), size(tvec,1), 1);
%    cmx(i, j) = f1' * f2;
%   cmx(j, i) = cmx(i, j);
%  end
%end
%cmx = cmx / (size(tvec,1)-1);

toc
[evec, eval] = eig(cmx);
eval = sum(eval); % extract eigenvalues: column sums of the diagonal matrix

[eval, evid] = sort(eval, 'descend');
evec = evec(:, evid(1:size(eval,2))); % reorder eigenvectors by decreasing eigenvalue

% save 'nist_mu.mat' mu
% save 'nist_cov.mat' evec 
trmx = evec(:, 1:comp_count);

pcaTransform.m

function [pcaSet] = pcaTransform(tvec, mu, trmx)
% tvec - matrix containing vectors to be transformed
% mu - mean value of the training set
% trmx - pca transformation matrix
% pcaSet - output set transformed to PCA space

pcaSet = tvec - repmat(mu, size(tvec,1), 1);

%pcaSet = zeros(size(tvec));
%for i=1:size(tvec,1)
%  pcaSet(i,:) = tvec(i,:) - mu;
%end

pcaSet = pcaSet * trmx;

Which one is actually doing PCA?

If one is doing PCA, what is the other one doing?


1 Answer


The first function, prepTransform, is actually doing the PCA on your training data: it determines the new axes onto which your data will be projected in the lower dimensional space. It finds the eigenvectors of the covariance matrix of your data, then orders the eigenvectors so that the eigenvector with the largest eigenvalue appears in the first column of the eigenvector matrix evec and the eigenvector with the smallest eigenvalue appears in the last column. What's important with this function is that you can define how many dimensions you want to reduce the data down to: keeping only the first N columns of evec allows you to reduce your data to N dimensions. Discarding the other columns and keeping only the first N is what produces trmx in the code. The variable N corresponds to the comp_count input of prepTransform.
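
Here is a minimal sketch of that eigen-decomposition and sorting step, assuming a small random toy matrix Xtoy (the name is purely illustrative); it uses diag rather than the sum(eval) trick in the question, but both simply pull the eigenvalues off the diagonal matrix:

Xtoy = randn(5, 3);                      % toy training set: 5 samples, 3 features
mu = mean(Xtoy, 1);                      % per-feature mean (what prepTransform returns)
cmx = cov(Xtoy);                         % 3 x 3 covariance matrix
[evec, evals] = eig(cmx);                % eigenvectors and diagonal eigenvalue matrix
evals = diag(evals);                     % eigenvalues as a column vector
[evals, evid] = sort(evals, 'descend');  % largest variance first
evec = evec(:, evid);                    % reorder eigenvectors to match
N = 2;                                   % desired number of output dimensions
trmx = evec(:, 1:N);                     % keep the top-N principal directions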

The second function, pcaTransform, finally transforms data that is defined within the same domain as your training data, but not necessarily the training data itself (it could be, if you wish), onto the lower dimensional space defined by the eigenvectors of the covariance matrix. To perform the reduction of dimensions, or dimensionality reduction as it is popularly known, you subtract the mean of each feature from your data and multiply the result by the matrix trmx. Note that prepTransform outputting the mean of each feature in the vector mu is important, since you need it to mean-subtract your data when you finally call pcaTransform.
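
As a hedged sketch of that projection step, continuing from the toy sketch above (Xnew is a hypothetical stand-in for whatever samples you want to transform; mu and trmx come from prepTransform):

Xnew = randn(10, size(trmx, 1));           % 10 new samples, same feature count
Xc = Xnew - repmat(mu, size(Xnew, 1), 1);  % subtract the training mean per feature
scores = Xc * trmx;                        % 10 x comp_count matrix of PCA scores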


How to use these functions

To use these functions effectively, first compute the trmx matrix, which contains the principal components of your data, by defining how many dimensions you want to reduce your data down to; the same call also returns the mean of each feature in mu:

N = 2; % Reduce down to two dimensions for example
[mu, trmx] = prepTransform(tvec, N);

Next, you can perform dimensionality reduction on any data defined within the same domain as tvec (it can even be tvec itself, but it doesn't have to be) by:

pcaSet = pcaTransform(tvec, mu, trmx);

In terms of vocabulary, pcaSet contains what are known as the principal scores of your data, which is the term used for the transformation of your data to the lower dimensional space.
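
As an illustrative aside (not part of the question's code): because the retained eigenvectors are orthonormal, the scores can also be mapped back to the original space, giving the best N-dimensional approximation of the data; approx and recErr below are hypothetical names:

approx = pcaSet * trmx.' + repmat(mu, size(pcaSet, 1), 1);  % back-project and re-add the mean
recErr = norm(tvec - approx, 'fro');                        % reconstruction error; shrinks as N grows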

If I can recommend something...

Computing PCA through the eigenvector approach is known to be numerically unstable. I highly recommend you use the Singular Value Decomposition via svd on the covariance matrix, where the V matrix of the result already gives you the eigenvectors sorted in decreasing order of variance, which correspond to your principal components:

mu = mean(tvec, 1);
[~,~,V] = svd(cov(tvec));

Then perform the transformation by taking the data with the mean subtracted per feature and multiplying it by the first N columns of V:

N = 2;
X = bsxfun(@minus, tvec, mu); 
pcaSet = X*V(:, 1:N);

X is the mean-subtracted data. This achieves the same thing as pcaSet = tvec - repmat(mu, size(tvec,1), 1);, but instead of explicitly replicating the mean vector over each training example, bsxfun does the replication for you internally. As of MATLAB R2016b, implicit expansion makes even the explicit call to bsxfun unnecessary:

X = tvec - mu;
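
As a quick self-contained sanity check, assuming a random toy set (all names here are illustrative): the eig-based and SVD-based projections should agree up to the sign of each column, since eigenvectors are only defined up to sign:

tvec = randn(100, 5);                     % toy data: 100 samples, 5 features
mu = mean(tvec, 1);
X = tvec - repmat(mu, size(tvec, 1), 1);  % mean-subtracted data

[evec, evals] = eig(cov(tvec));           % eigenvector route
[~, evid] = sort(diag(evals), 'descend');
scoresEig = X * evec(:, evid(1:2));

[~, ~, V] = svd(cov(tvec));               % SVD route
scoresSvd = X * V(:, 1:2);

disp(max(abs(abs(scoresEig(:)) - abs(scoresSvd(:)))));  % near zero, up to sign flips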

Further Reading

If you want to fully understand the code that was written and the theory behind what it's doing, I recommend the following two Stack Overflow posts that I have written on the topic:

What does selecting the largest eigenvalues and eigenvectors in the covariance matrix mean in data analysis?

How to use eigenvectors obtained through PCA to reproject my data?

The first post sheds light on the code you presented, which performs PCA using the eigenvector approach. The second post touches on how you'd do it using the SVD towards the end of the answer. The answer I've written here is a mix of the two posts above.
