Principal component analysis (PCA) is a linear dimensionality reduction technique: it reduces a multivariate dataset to a smaller set of constructed variables while preserving as much information (variance) as possible. These constructed variables, called principal components, are linear combinations of the input variables.
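The definition above can be sketched in a few lines of NumPy (variable names are illustrative, not from any particular source):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))      # 200 samples, 5 input variables
Xc = X - X.mean(axis=0)            # center each variable

# Principal components are the eigenvectors of the covariance matrix,
# ordered by decreasing eigenvalue (variance explained).
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)      # ascending order
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]              # columns are principal components

# Reduce to 2 constructed variables (linear combinations of the inputs).
scores = Xc @ components[:, :2]
```

Each column of `scores` is one constructed variable: a linear combination of the five centered inputs.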

# Questions tagged [principal-component-analysis]

113 questions

**439** votes · **4** answers

### What is the intuitive relationship between SVD and PCA?

Singular value decomposition (SVD) and principal component analysis (PCA) are two eigenvalue methods used to reduce a high-dimensional data set into fewer dimensions while retaining important information. Online articles say that these methods are…

wickedchicken · 4,501
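A quick numerical check of the relationship the question asks about (a sketch, not from the thread): for centered data $X$, the right singular vectors of $X$ are the eigenvectors of its covariance matrix, and the eigenvalues are $\sigma_i^2/(n-1)$:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
Xc = X - X.mean(axis=0)

# Route 1: SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Route 2: eigendecomposition of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(Xc.T @ Xc / (len(Xc) - 1))
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # descending order

# Eigenvalues are the squared singular values over n - 1,
# and the rows of Vt match the eigenvectors (up to sign).
svd_eigvals = s**2 / (len(Xc) - 1)
```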

**10** votes · **2** answers

### Connection between PCA and linear regression

Is there a formal link between linear regression and PCA? The goal of PCA is to decompose a matrix into a linear combination of variables that contain most of the information in the matrix. Suppose for sake of argument that we're doing PCA on an…

user9576 · 335
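One concrete contrast (an illustrative sketch, not from the thread): ordinary least squares minimizes vertical residuals, while the first principal component minimizes orthogonal distances, so the two lines differ on the same data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=300)
y = 2.0 * x + rng.normal(scale=0.5, size=300)      # true slope 2
X = np.column_stack([x, y])
X = X - X.mean(axis=0)

# OLS slope of y on x: minimizes vertical residuals.
b_ols = (X[:, 0] @ X[:, 1]) / (X[:, 0] @ X[:, 0])

# First principal component: minimizes orthogonal residuals.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
b_pca = Vt[0, 1] / Vt[0, 0]
```

For positively correlated data, the PCA (total least squares) slope always lies at or above the $y$-on-$x$ OLS slope.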

**10** votes · **1** answer

### Why is SVD on $X$ preferred to eigendecomposition of $XX^\top$ in PCA?

In this post, J.M. mentioned that

> In fact, using the SVD to perform PCA makes much better sense numerically than forming the covariance matrix to begin with, since the formation of $XX^\top$ can cause loss of precision. This is detailed in…

S. P · 273
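The precision loss described in the quote can be demonstrated with the classic Läuchli matrix (an illustrative sketch): forming $X^\top X$ squares the condition number, pushing $\varepsilon^2$ below machine precision:

```python
import numpy as np

eps = 1e-9
# Lauchli matrix: nearly collinear columns.
# Exact singular values: sqrt(3 + eps^2), eps, eps.
X = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])

# Route 1: SVD of X recovers the tiny singular value.
s = np.linalg.svd(X, compute_uv=False)
svd_smallest = s[-1]

# Route 2: eigendecomposition of X^T X. Forming the Gram matrix squares
# the condition number: eps^2 = 1e-18 falls below double precision, so
# the tiny singular value is destroyed.
w = np.linalg.eigvalsh(X.T @ X)
gram_smallest = np.sqrt(max(w[0], 0.0))
```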

**10** votes · **4** answers

### Why do eigenvectors with the highest eigenvalues maximize the variance in PCA?

I'm learning Principal Component Analysis (PCA) and came to know that the eigenvectors of the covariance matrix of the data are the principal components, which maximize the variance of the projected data. I understand the intuition behind why we need…

Kaushal28 · 595
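A numerical sanity check of the claim in the question (illustrative only): the projection onto the top eigenvector has variance equal to the top eigenvalue, and no other unit direction does better:

```python
import numpy as np

rng = np.random.default_rng(3)
# Anisotropic data: most spread along the first axis.
X = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.5])
Xc = X - X.mean(axis=0)

cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
top = eigvecs[:, -1]                       # eigenvector of the largest one

# Variance of the data projected onto the top eigenvector ...
var_top = np.var(Xc @ top, ddof=1)

# ... versus projections onto many random unit directions.
dirs = rng.normal(size=(1000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
var_random = np.var(Xc @ dirs.T, axis=0, ddof=1)
```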

**8** votes · **3** answers

### How to prove PCA using induction?

In Deep Learning (Goodfellow, et al), the optimization objective of PCA is formulated as
$$D^* = \arg\min_D ||X - XDD^T||_F^2 \quad \text{s.t.} \quad D^T D=I$$
The book gives the proof for the $1$-dimensional case, i.e.
$$\arg\min_{d} || X - X…

Lifu Huang · 457
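The $d=2$ case of this objective can be checked numerically (a sketch, not the book's proof): the top-$2$ right singular vectors of $X$ minimize the Frobenius error, and the minimum equals the sum of the discarded squared singular values:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 6))

def recon_error(X, D):
    """Frobenius error ||X - X D D^T||_F^2 for an orthonormal D."""
    return np.linalg.norm(X - X @ D @ D.T, "fro") ** 2

# Optimal D for d = 2: the top-2 right singular vectors of X
# (equivalently, the top eigenvectors of X^T X).
_, s, Vt = np.linalg.svd(X, full_matrices=False)
D_opt = Vt[:2].T

# A random orthonormal competitor (QR of a Gaussian matrix).
Q, _ = np.linalg.qr(rng.normal(size=(6, 2)))

err_opt = recon_error(X, D_opt)
err_rand = recon_error(X, Q)
```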

**8** votes · **1** answer

### PCA produces sinusoids — what is the underlying cause?

Background
I'm analysing a data set of $M$ flow measurements (volume per time). The flows go gradually from zero mL/s to higher values and back to zero again; thus, their shapes ideally look like a Gaussian (or bell-shaped) curve. However, their…

Erik · 575
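One commonly cited mechanism (a hedged illustration, not the thread's answer): if the curves are shifts of a common shape, the covariance is approximately shift-invariant, hence circulant, and sinusoids are exact eigenvectors of symmetric circulant matrices:

```python
import numpy as np

N = 64
n = np.arange(N)

# A shift-invariant (stationary) covariance: each entry depends only on
# the circular lag, so the matrix is symmetric circulant.
lag = np.minimum((n[:, None] - n[None, :]) % N, (n[None, :] - n[:, None]) % N)
C = np.exp(-((lag / 8.0) ** 2))

# Sinusoids are exact eigenvectors of a symmetric circulant matrix.
k = 3
v = np.cos(2 * np.pi * k * n / N)
Cv = C @ v
lam = (v @ Cv) / (v @ v)        # the corresponding eigenvalue
```

So if the measured flows were exact circular shifts of one bell curve, sinusoid-like principal components would be expected.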

**7** votes · **2** answers

### Relationship between the singular value decomposition (SVD) and the principal component analysis (PCA). A radical result(?)

I was wondering if I could get a mathematical description of the relationship between the singular value decomposition (SVD) and the principal component analysis (PCA).
To be more specific I have some point which I don't understand very well, at…

Sergio Sarmiento · 432

**5** votes · **4** answers

### PCA and image compression

I have two questions related to principal component analysis (PCA):
How do you prove that the principal components matrix forms an orthonormal basis? Are the eigenvectors always orthogonal?
On the meaning of PCA. For my assignment, I have to…

Gonçalo Silva Santos · 103
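On the first question, the orthonormality follows from the spectral theorem: the covariance matrix is symmetric, so its eigenvectors can be chosen orthonormal. A quick check (illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 8))
Xc = X - X.mean(axis=0)

# The covariance matrix is symmetric, so eigh returns orthonormal
# eigenvectors: the principal components matrix is an orthonormal basis.
cov = np.cov(Xc, rowvar=False)
_, eigvecs = np.linalg.eigh(cov)

gram = eigvecs.T @ eigvecs      # identity matrix if orthonormal
```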

**4** votes · **2** answers

### Fast PCA: how to compute and use the covariance of $x$

I'm trying to understand the paper Fast principal component analysis using
fixed-point algorithm by Alok Sharma and Kuldip K. Paliwal (pp. 1151–1155), and
especially what is said about $\Sigma_x$, the covariance of $x$.
But before being specific, let me…

ubitux · 41
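Independent of the paper's specifics, $\Sigma_x$ is the ordinary sample covariance, which can be formed directly from centered data (a sketch with made-up data):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 3))   # each row is one observation of x

# Sample covariance: Sigma_x = 1/(n-1) * sum_i (x_i - mu)(x_i - mu)^T
mu = X.mean(axis=0)
Sigma_x = (X - mu).T @ (X - mu) / (len(X) - 1)
```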

**4** votes · **1** answer

### Why do eigenvectors arise as the solution of PCA?

I have very limited knowledge of linear algebra, so I don't have a geometric intuition behind PCA.
Why are the eigenvectors (which are simply defined as vectors whose direction doesn't change under a linear transformation) also the…

**4** votes · **1** answer

### What is the principal components matrix in PCA with SVD?

Doing PCA on a matrix via SVD yields three matrices, expressed as
$$
M = U \Sigma V^T
$$
where $M$ is our initial data with zero mean.
If we want to make a plot of the two principal components, we project the data onto the principal…

Paul Hunter · 141
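The projection the question describes has a convenient identity: since $M = U \Sigma V^T$, the scores $MV$ equal $U\Sigma$, so the 2-D plot coordinates can be read off either way (a sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.normal(size=(30, 5))
M = M - M.mean(axis=0)          # zero-mean data, as in the question

U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Two equivalent ways to get the 2-D coordinates for the plot:
scores_proj = M @ Vt[:2].T      # project onto the top-2 right singular vectors
scores_us = U[:, :2] * s[:2]    # read them off U and Sigma directly
```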

**3** votes · **1** answer

### How to compute principal components for a curve found from given XYZ points?

I have a certain XYZ set of points that makes up an object. I choose a random point, perform a nearest-radius analysis, and find its neighbors. From these neighbors, I get the green point-cloud curve, which can be seen in the figure.
My goal now is to…

Hamzalihi · 49
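A common recipe for this kind of neighborhood analysis (a sketch on synthetic points, not the asker's data): run PCA on the centered neighbors; the eigenvector with the smallest eigenvalue approximates the local surface normal, and the other two span the tangent plane:

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic neighborhood: points scattered near the plane z = 0,
# whose true normal is (0, 0, 1).
pts = rng.normal(size=(200, 3)) * np.array([1.0, 1.0, 0.01])

centered = pts - pts.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))

# Smallest-eigenvalue direction ~ surface normal; the other two
# eigenvectors span the tangent plane.
normal = eigvecs[:, 0]
```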

**3** votes · **1** answer

### PCA for data compression

I would like to use PCA (Principal Component Analysis) to compress a sequence of vectors, $v_0 \ldots v_n$.
My plan is to concatenate these vectors into a matrix: $M = [ v_0 \ldots v_n ]$
I will then use PCA to create a smaller representative set of…

Gary Snethen · 68
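The plan described above can be sketched as follows (illustrative names; `k` is the assumed number of retained components): project the centered columns onto the top-$k$ left singular vectors and reconstruct:

```python
import numpy as np

rng = np.random.default_rng(9)
# Columns play the role of the vectors v_0 ... v_n in M = [v_0 ... v_n].
M = rng.normal(size=(10, 50))

mean = M.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(M - mean, full_matrices=False)

k = 3                             # number of retained components
basis = U[:, :k]                  # representative directions
codes = basis.T @ (M - mean)      # k numbers per original vector
M_approx = mean + basis @ codes   # reconstruction from the compressed form
```

The reconstruction error equals the norm of the discarded singular values, so `k` can be chosen from the spectrum of `s`.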

**3** votes · **0** answers

### Unit vector after transformation with a length greater than that of the eigenvector with the highest eigenvalue

My question is a general one: can a unit vector, after a transformation, have a length greater than that of the eigenvector with the highest eigenvalue? Is there a proof?
The purpose is that I want to know whether, in principal component analysis, the…

Farhang Amaji · 131
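A useful distinction for this question (illustrative): the maximal stretch of a unit vector is the largest singular value, which for a general matrix can exceed the largest eigenvalue's magnitude; for a symmetric positive semidefinite matrix, such as a covariance matrix, the two coincide:

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.normal(size=(4, 4))

# The largest stretch ||A u|| over unit vectors u is the largest
# SINGULAR value of A, not (in general) the largest eigenvalue.
sigma_max = np.linalg.svd(A, compute_uv=False)[0]

u = rng.normal(size=4)
u /= np.linalg.norm(u)
stretch = np.linalg.norm(A @ u)

# For a symmetric PSD matrix (e.g. a covariance matrix), the largest
# eigenvalue and the largest singular value coincide.
C = A @ A.T
lam_max = np.linalg.eigvalsh(C)[-1]
sigma_C = np.linalg.svd(C, compute_uv=False)[0]
```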

**3** votes · **1** answer

### Why does singular value decomposition simultaneously diagonalize a symmetric matrix and its square?

So I took an online course on machine learning and in this course the instructor said that the eigenvectors of a covariance matrix (for principal components analysis) can be computed by a singular value decomposition.
Say the covariance matrix is…

chris kamper · 33
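The key fact behind the instructor's remark can be verified directly (a sketch): a symmetric matrix $C = Q \Lambda Q^\top$ and its square $C^2 = Q \Lambda^2 Q^\top$ share the same orthonormal eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(11)
B = rng.normal(size=(5, 5))
C = B @ B.T                      # symmetric PSD, like a covariance matrix

w, Q = np.linalg.eigh(C)

# C = Q diag(w) Q^T implies C^2 = Q diag(w^2) Q^T: the same orthonormal
# eigenvectors diagonalize both C and its square.
C2_from_eig = Q @ np.diag(w ** 2) @ Q.T
```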