The rotation matrix $$\pmatrix{ \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta}$$ has complex eigenvalues $\{e^{\pm i\theta}\}$ corresponding to eigenvectors $\pmatrix{1 \\i}$ and $\pmatrix{1 \\ -i}$. The real eigenvector of a 3d rotation matrix has a natural interpretation as the axis of rotation. Is there a nice geometric interpretation of the eigenvectors of the $2 \times 2$ matrix?

Rodrigo de Azevedo
    Sure --- if you're good at visualizing four dimensions. 2-dimensional complex space can be seen as 4-dimensional real space, and those two eigenvectors are axes of a "rotation" in that 4-dimensional space. I admit, it doesn't work all that well for me. – Gerry Myerson Nov 20 '12 at 04:38
  • Idk if it's that simple though. Because rotations happen in a 2D plane. In 3D space, there is a nice clear isomorphism between an axis of rotation and the plane orthogonal to it. That's not true in 4D, there is a whole 3D space orthogonal to any given axis. – Dion Silverman Mar 27 '21 at 10:21

6 Answers


Tom Oldfield's answer is great, but you asked for a geometric interpretation so I made some pictures.

The pictures will use what I called a "phased bar chart", which shows complex values as bars that have been rotated. Each bar corresponds to a vector component, with length showing magnitude and direction showing phase. An example:

[Image: example phased bar chart]

The important property we care about is that scaling a vector corresponds to the chart scaling or rotating. Other transformations cause it to distort, so we can use it to recognize eigenvectors based on the lack of distortions. (I go into more depth in this blog post.)

So here's what it looks like when we rotate <0, 1> and <i, 0>:

[Animations: rotating <0, 1> and <i, 0>]

Those diagrams are doing more than just scaling/rotating. So <0, 1> and <i, 0> are not eigenvectors.

However, they do incorporate horizontal and vertical sinusoidal motion. Any guesses what happens when we put them together?

Trying <1, i> and <1, -i>:

[Animations: rotating <1, i> and <1, -i>]

There you have it. The phased bar charts of the eigenvectors simply rotate (each component picks up the same phase) as the vector is turned. Other vectors' charts distort when you turn them, so those vectors aren't eigenvectors.
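The behaviour in the pictures can also be cross-checked numerically. Here is a minimal NumPy sketch (the angle 0.7 is an arbitrary choice) confirming that <0, 1> distorts under rotation while <1, i> and <1, -i> only pick up a phase:

```python
import numpy as np

theta = 0.7  # arbitrary angle for the check
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]], dtype=complex)

# <0, 1> is not an eigenvector: R @ <0, 1> is not a scalar multiple of <0, 1>.
v = np.array([0, 1], dtype=complex)
w = R @ v
print(np.allclose(w, (w[1] / v[1]) * v))  # False: the chart distorts

# <1, i> and <1, -i> only pick up the phases e^{+i theta} and e^{-i theta}.
for v, lam in [(np.array([1, 1j]), np.exp(1j * theta)),
               (np.array([1, -1j]), np.exp(-1j * theta))]:
    print(np.allclose(R @ v, lam * v))  # True both times
```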

Craig Gidney

Lovely question!

There is a kind of intuitive way to view the eigenvalues and eigenvectors, and it ties in with geometric ideas as well (without resorting to four dimensions!).

The matrix is unitary (more specifically, it is real, so it is called orthogonal), and so there is an orthogonal basis of eigenvectors. Here, as you noted, the eigenvectors are $\pmatrix{1 \\i}$ and $\pmatrix{1 \\ -i}$; let us call them $v_1$ and $v_2$. They form a basis of $\mathbb{C}^2$, and since $\mathbb{R}^2$ is a subset of $\mathbb{C}^2$, we can write any element of $\mathbb{R}^2$ in terms of $v_1$ and $v_2$ as well. (And we normally think of rotations as occurring in $\mathbb{R}^2$! Please note that $\mathbb{C}^2$ is a two-dimensional vector space with components in $\mathbb{C}$ and need not be considered as four-dimensional, with components in $\mathbb{R}$.)

We can then represent any vector in $\mathbb{R}^2$ uniquely as a linear combination of these two vectors, $x = \lambda_1 v_1 + \lambda_2v_2$, with $\lambda_i \in \mathbb{C}$. So, calling the linear map that the matrix represents $R$:

$$R(x) = R(\lambda_1 v_1 + \lambda_2v_2) = \lambda_1 R(v_1) + \lambda_2R(v_2) = e^{i\theta}\lambda_1 v_1 + e^{-i\theta}\lambda_2 v_2 $$

In other words, when working in the basis $\{v_1,v_2\}$: $$R \pmatrix{\lambda_1 \\\lambda_2} = \pmatrix{e^{i\theta}\lambda_1 \\ e^{-i\theta}\lambda_2}$$

And we know that multiplying a complex number by $e^{i\theta}$ is an anticlockwise rotation by $\theta$. So rotating a vector, when it is represented in the basis $\{v_1,v_2\}$, is the same as just rotating the individual components of the vector in the complex plane!
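For completeness, this change of basis can be checked numerically. A minimal sketch (angle chosen arbitrarily) confirming that in the basis $\{v_1, v_2\}$ the map $R$ is the diagonal matrix $\operatorname{diag}(e^{i\theta}, e^{-i\theta})$:

```python
import numpy as np

theta = 0.7  # arbitrary angle
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]], dtype=complex)

# Columns of P are the eigenvectors v1 = (1, i) and v2 = (1, -i).
P = np.array([[1, 1],
              [1j, -1j]])

# Changing to the basis {v1, v2} diagonalizes R.
D = np.linalg.inv(P) @ R @ P
print(np.allclose(D, np.diag([np.exp(1j * theta), np.exp(-1j * theta)])))  # True
```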

Tom Oldfield
    The vectors $v_1$ and $v_2$ are not a basis of ${\mathbf R}^2$ since they don't lie in there. That doesn't mean you can't write elements of ${\mathbf R}^2$ in terms of them, using complex coefficients, but calling them a basis is incorrect (also ${\mathbf R}^2$ is not a subspace of ${\mathbf C}^2$ as a complex vector space (you're using complex coefficients). – KCd Nov 20 '12 at 16:09
  • @KCd Thanks for the clarification, is this better now? – Tom Oldfield Nov 20 '12 at 16:19
  • Yes, it's fine. – KCd Nov 20 '12 at 21:15
    But why does this "rotation" of the coefficients, which actually goes in different directions for the two of them, lead to a _clockwise_ rotation of the vector in $\Bbb R^2$? – Marc van Leeuwen Oct 17 '13 at 15:36
  • @Marc The vector $(x,y)\in\mathbb{R}^2$ in the standard basis for $\mathbb{C}^2$ is represented in the basis in the answer by $(\frac12(x-iy),\frac12(x+iy))$. Thus a direct rotation of the coefficients $\lambda_1,\lambda_2$ doesn't directly lead to a rotation of the vector $(x,y)$, because we are rotating the coefficients but not the vectors themselves. The rotation this then leads to is due to how the rescaling of the orthogonal basis vectors affects the representation of the real vector in the usual basis, is entirely mechanical and due to how we swap between bases. – Tom Oldfield Oct 17 '13 at 18:48
  • (The change of basis matrix being the rotation matrix). The rotation of the coefficients should not actually be thought of as a rotation at all (in terms of vectors) since we are working in $\mathbb{C}^2$ as a $2$ dimensional vector space so it is really a rescaling of vectors. The rotation comes in depending on how we choose to picture things, I think. – Tom Oldfield Oct 17 '13 at 18:49

The simplest answer to your question is perhaps yes. The eigenvectors of a genuinely complex eigenvalue are necessarily complex; therefore, there is no real vector which is an eigenvector of the matrix. Ignoring, of course, the nice cases $\theta=0, \pi$, the rotation always does more than just rescale a vector.

On the other hand, if we view the matrix as a rotation on $\mathbb{C}^2$, then the eigenvectors you give show the directions in which the matrix acts as a rescaling operator in the complex space $\mathbb{C}^2$. I hope someone has a better answer; I would like to visualize complex two-space.
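Both points are easy to confirm numerically; a quick sketch (any angle other than $0$ or $\pi$ works, here 0.7):

```python
import numpy as np

theta = 0.7  # any angle other than 0 or pi
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

eigvals, eigvecs = np.linalg.eig(R)

# No real eigenvector: both eigenvalues have nonzero imaginary part.
print(np.all(np.abs(eigvals.imag) > 1e-12))  # True

# Along each (complex) eigenvector, R acts as a pure rescaling in C^2.
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(R @ v, lam * v))  # True both times
```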

James S. Cook

Here is a geometric interpretation, in which the eigenvector condition appears as a commutation relation: http://www-po.coas.oregonstate.edu/~rms/notes/rot_eig.html


A related curious fact is that, moving from affine to projective space, the two complex directions fixed by any rotation define two points at infinity (of projective coordinates $[0:\pm i:1]$) called the cyclic points.

Then it is easy to check that every circle passes through these cyclic points. This should be compared with Bézout's theorem, which says (in particular) that any two conics intersect in 4 points (counting multiplicities), and with the fact that two affine circles cannot intersect in more than 2 points.
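This can be verified directly. In the common convention $[x:y:z]$ with the line at infinity $z=0$, the cyclic points read $[1:\pm i:0]$; a small sketch (the centre and radius below are arbitrary sample values) shows that every circle contains them:

```python
# Homogenized circle (x - a z)^2 + (y - b z)^2 = (r z)^2; the centre (a, b)
# and radius r are arbitrary sample values for illustration.
def circle(x, y, z, a=3.0, b=-1.0, r=2.0):
    return (x - a * z) ** 2 + (y - b * z) ** 2 - (r * z) ** 2

# At the cyclic points [1 : i : 0] and [1 : -i : 0], every z-term vanishes
# and what remains is 1 + (±i)^2 = 0, independently of a, b, r.
print(circle(1, 1j, 0), circle(1, -1j, 0))  # both are 0
```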

Andrea Mori

One can also directly show that a real matrix $A$ with complex eigenvalues describes a rotation (possibly followed by a rescaling) in the basis of the real and imaginary parts of the corresponding eigenvectors. More precisely:

  1. Observe that if $A$ is real, and has a complex (non-real) eigenvalue $z\in\mathbb C\setminus\mathbb R$, with eigenvector $\mathbf v$, then $\mathbf v$ must also be complex (and not real). Furthermore, $\bar z$ must also be an eigenvalue, with eigenvector $\bar{\mathbf{v}}$ (complex conjugate of $\mathbf v$).

  2. Observe that $A\mathbf v=z\mathbf v$, decomposed into its real and imaginary components, corresponds to $$\begin{cases} A \mathbf v_R &= z_R \mathbf v_R - z_I \mathbf v_I, \\ A \mathbf v_I &= z_I \mathbf v_R + z_R \mathbf v_I. \end{cases}$$ Observe the similarity between this and the expression for a $2\times 2$ rotation matrix. Furthermore, write the polar decomposition of the eigenvalue as $z=r e^{i\theta}$. Then the above becomes $$\begin{cases} A \mathbf v_R &= r[\cos(\theta) \mathbf v_R - \sin(\theta) \mathbf v_I], \\ A \mathbf v_I &= r[\sin(\theta) \mathbf v_R + \cos(\theta) \mathbf v_I]. \end{cases}$$ This is clearly a rotation matrix (up to the scaling factor). Or rather, it would be, if $\mathbf v_R$ and $\mathbf v_I$ were orthogonal (as real vectors). When this is not the case, the matrix will act, in the original coordinates, as a rotation followed by a rescaling.

In conclusion, the real and imaginary parts of the eigenvector corresponding to a complex eigenvalue span a two-dimensional plane which $A$ leaves invariant, and $A$ acts as a two-dimensional rotation matrix with respect to some basis chosen in this plane. More explicitly, we are saying that $$A = r P R(\theta) P^{-1},$$ where $R(\theta)\in\mathbf{SO}(2)$ represents the rotation by an angle $\theta$, and $P$ is the matrix whose columns are $\mathbf v_R$ and $\mathbf v_I$.
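A numerical sanity check of this decomposition; the matrix $A$ below is an arbitrary example with complex eigenvalues $1 \pm i\sqrt 2$:

```python
import numpy as np

# An arbitrary real matrix with complex eigenvalues 1 ± i*sqrt(2).
A = np.array([[1.0, -2.0],
              [1.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
z = eigvals[0]            # complex eigenvalue z = r e^{i theta}
v = eigvecs[:, 0]         # its eigenvector v = v_R + i v_I
r, theta = abs(z), np.angle(z)

# P has columns v_R and v_I; R(theta) is the rotation matrix from the question.
P = np.column_stack([v.real, v.imag])
Rot = np.array([[np.cos(theta), np.sin(theta)],
                [-np.sin(theta), np.cos(theta)]])

# Verify A = r P R(theta) P^{-1}.
print(np.allclose(A, r * P @ Rot @ np.linalg.inv(P)))  # True
```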
