I'm looking for an easily understandable interpretation of the transpose of a square matrix $A$: an intuitive visual demonstration of how $A^{T}$ relates to $A$. I want to be able to instantly visualize in my mind what I'm doing to the space when transposing the vectors of a matrix.

From experience, understanding linear algebra concepts in two dimensions is often enough to understand them in any higher dimension, so an explanation for two-dimensional spaces should be enough, I think.

All explanations I found so far were not intuitive enough, as I want to be able to instantly imagine (and draw) what $A^{T}$ looks like given $A$. I'm not a mathematician, btw.

Here is what I found so far (but not intuitive enough for me):

- $(Ax)\cdot y=(Ax)^{T}y=x^{T}A^{T}y=x\cdot(A^{T}y)$

As far as I understand, the dot product is a projection (x onto y, or y onto x; both interpretations give the same result) followed by a scaling by the length of the other vector.

This would mean that mapping $x$ through $A$ and projecting $y$ onto the result is the same as mapping $y$ through $A^{T}$ and then projecting the unmapped $x$ onto $A^{T}y$.

So $A^{T}$ is the specific matrix $B$ such that $Ax\cdot y=x\cdot By$ for every pair of vectors $(x,y)$.

This doesn't tell me instantly what $A^{T}$, drawn as vectors, would look like based on $A$ drawn as vectors.
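For what it's worth, the defining identity above is easy to spot-check numerically. A minimal sketch with NumPy (the matrix and vectors are arbitrary examples, not anything special):

```python
import numpy as np

# Arbitrary example matrix and vectors, chosen only for illustration
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
x = np.array([1.0, -2.0])
y = np.array([4.0, 0.5])

# (Ax)·y should equal x·(A^T y) for every choice of x and y
lhs = np.dot(A @ x, y)
rhs = np.dot(x, A.T @ y)
print(np.isclose(lhs, rhs))  # True
```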

- "reassigning dimensions"

This one is hard to explain, so let me do it with a drawing:

This explanation is much more visual, but far too messy to do in my head instantly. There are also multiple ways I could have rotated and arranged the vectors around the result $A^{T}$, which is represented in the middle. Also, it doesn't feel like it makes me truly understand the transposition of matrices, especially in higher dimensions.

- some kind of weird rotation

Symmetric matrices can be decomposed into a rotation, a scaling along the eigenvectors ($\Lambda$), and a rotation back:

$A=R\Lambda R^{T}$

So in this specific case, the transpose is a rotation in the opposite direction of the original. I don't know how to generalize that to arbitrary matrices. I'm wildly guessing that if $A$ is not symmetric anymore, $R^{T}$ must also include some additional operations besides rotation.
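For a concrete symmetric matrix this decomposition can be sanity-checked with NumPy's `eigh`; the matrix below is an arbitrary symmetric example:

```python
import numpy as np

# Arbitrary symmetric example matrix
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# For symmetric A, eigh returns real eigenvalues and orthonormal eigenvectors
eigvals, R = np.linalg.eigh(A)
Lam = np.diag(eigvals)

# A = R Λ R^T, and R^T = R^{-1} because R is orthogonal
print(np.allclose(A, R @ Lam @ R.T))       # True
print(np.allclose(R.T, np.linalg.inv(R)))  # True
```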

Can anyone help me find a way to easily and instantly imagine/draw what $A^{T}$ looks like given $A$ in two-dimensional space? (In a way of understanding that generalizes to higher dimensions.)

**Edit 1:**
While working on the problem I was curious to see what B in

$BA=A^{T}$

looks like. $B$ would describe what needs to be done to $A$ in order to geometrically transpose it. My temporary result looks interesting, but I'm still trying to bring it to an interpretable form. If we assume the following indexing order

$$A= \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{bmatrix} $$

and $det(A)\neq0$ then

$$B=\frac{1}{det(A)} \begin{bmatrix} a_{11} a_{22} - a_{21}^2 & a_{11} (a_{21} - a_{12}) \\ a_{22} (a_{12} - a_{21}) & a_{11} a_{22} - a_{12}^2 \\ \end{bmatrix} $$

What's visible at first sight is that the factor $\frac{1}{det(A)}$ rescales the matrix so that $det(B)=1$ (in two dimensions a scalar factor $c$ multiplies the determinant by $c^{2}$).

$B$ must preserve the area anyway, since $BA=A^{T}$ and $det(A^{T})=det(A)$ force $det(B)=1$. It means that the matrix

$B'=\begin{bmatrix} a_{11} a_{22} - a_{21}^2 & a_{11} (a_{21} - a_{12}) \\ a_{22} (a_{12} - a_{21}) & a_{11} a_{22} - a_{12}^2 \\ \end{bmatrix}$

scales areas by $det(B')=det(A)^{2}$ while transposing.
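The closed form above can be sanity-checked numerically; a sketch (with an arbitrary invertible example matrix) that builds $B$ from the explicit entries and confirms $BA=A^{T}$ as well as the determinant claim:

```python
import numpy as np

a11, a12, a21, a22 = 2.0, 1.0, 0.5, 3.0   # arbitrary invertible example
A = np.array([[a11, a12],
              [a21, a22]])
detA = np.linalg.det(A)

# B' from the explicit entries derived above
Bp = np.array([[a11*a22 - a21**2, a11*(a21 - a12)],
               [a22*(a12 - a21), a11*a22 - a12**2]])
B = Bp / detA

print(np.allclose(B @ A, A.T))                  # BA = A^T
print(np.isclose(np.linalg.det(Bp), detA**2))   # det(B') = det(A)^2
```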

**Edit 2:**

The same matrix can be written as

$B'=\begin{bmatrix} \begin{bmatrix} a_{11} & a_{21} \\ \end{bmatrix} \begin{bmatrix} a_{22} \\ -a_{21} \\ \end{bmatrix} & \begin{bmatrix} a_{11} & a_{21} \\ \end{bmatrix} \begin{bmatrix} -a_{12} \\ a_{11} \\ \end{bmatrix} \\ \begin{bmatrix} a_{12} & a_{22} \\ \end{bmatrix} \begin{bmatrix} a_{22} \\ -a_{21} \\ \end{bmatrix} & \begin{bmatrix} a_{12} & a_{22} \\ \end{bmatrix} \begin{bmatrix} -a_{12} \\ a_{11} \\ \end{bmatrix} \\ \end{bmatrix}$

Which, with $a_{1}=\begin{bmatrix} a_{11} \\ a_{21} \\ \end{bmatrix}$ and $a_{2}=\begin{bmatrix} a_{12} \\ a_{22} \\ \end{bmatrix}$ being the columns of $A$, is

$B'=\begin{bmatrix} a_{1}^{T} \begin{bmatrix} a_{22} \\ -a_{21} \\ \end{bmatrix} & a_{1}^{T} \begin{bmatrix} -a_{12} \\ a_{11} \\ \end{bmatrix} \\ a_{2}^{T} \begin{bmatrix} a_{22} \\ -a_{21} \\ \end{bmatrix} & a_{2}^{T} \begin{bmatrix} -a_{12} \\ a_{11} \\ \end{bmatrix} \\ \end{bmatrix}= \begin{bmatrix} a_{1}\cdot \begin{bmatrix} a_{22} \\ -a_{21} \\ \end{bmatrix} & a_{1}\cdot \begin{bmatrix} -a_{12} \\ a_{11} \\ \end{bmatrix} \\ a_{2}\cdot \begin{bmatrix} a_{22} \\ -a_{21} \\ \end{bmatrix} & a_{2}\cdot \begin{bmatrix} -a_{12} \\ a_{11} \\ \end{bmatrix} \\ \end{bmatrix}$

I find the vectors $c_{1}=\begin{bmatrix} a_{22} \\ -a_{21} \\ \end{bmatrix}$ and $c_{2}=\begin{bmatrix} -a_{12} \\ a_{11} \\ \end{bmatrix}$ interesting. When I draw them, it looks like I only need to rotate each by 90 degrees in different directions to end up with the column vectors of the transpose.
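The 90-degree observation checks out numerically: rotating $c_{1}$ counter-clockwise by 90° gives the second column of $A^{T}$, and rotating $c_{2}$ clockwise gives the first. A sketch with arbitrary example entries:

```python
import numpy as np

a11, a12, a21, a22 = 2.0, 1.0, 0.5, 3.0   # arbitrary example entries
AT = np.array([[a11, a21],
               [a12, a22]])               # A transposed

c1 = np.array([a22, -a21])
c2 = np.array([-a12, a11])

Rccw = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotate 90° counter-clockwise
Rcw  = Rccw.T                               # rotate 90° clockwise

print(np.allclose(Rccw @ c1, AT[:, 1]))  # second column of A^T
print(np.allclose(Rcw @ c2, AT[:, 0]))   # first column of A^T
```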

**Edit 3:**

Maybe I'm fooling myself, but I think I'm getting closer. The matrix of these columns

$C= \begin{bmatrix} c_{1} & c_{2} \\ \end{bmatrix} = \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \\ \end{bmatrix}$

is related to $A^{-1}$ because:

$AC=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{bmatrix} \cdot \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \\ \end{bmatrix} = \begin{bmatrix} det(A) & 0 \\ 0 & det(A) \\ \end{bmatrix} =det(A) I$

So

$C=A^{-1}det(A)$
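This $C$ is what textbooks call the adjugate of $A$; the identity $AC=det(A)\,I$ is easy to confirm numerically (arbitrary invertible example matrix again):

```python
import numpy as np

a11, a12, a21, a22 = 2.0, 1.0, 0.5, 3.0   # arbitrary invertible example
A = np.array([[a11, a12],
              [a21, a22]])
C = np.array([[a22, -a12],
              [-a21, a11]])               # the adjugate of A
detA = np.linalg.det(A)

print(np.allclose(A @ C, detA * np.eye(2)))     # AC = det(A) I
print(np.allclose(C, detA * np.linalg.inv(A)))  # C = det(A) A^{-1}
```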

$B'$ can also be written like this:

$B'=\begin{bmatrix} \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \\ \end{bmatrix} \begin{bmatrix} a_{22} \\ -a_{21} \\ \end{bmatrix} & \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \\ \end{bmatrix} \begin{bmatrix} -a_{12} \\ a_{11} \\ \end{bmatrix} \\ \end{bmatrix} = \begin{bmatrix} A^{T}c_{1} & A^{T}c_{2} \\ \end{bmatrix}$

or like this

$B'=\begin{bmatrix} \begin{bmatrix} a_{11} & a_{21} \\ \end{bmatrix} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \\ \end{bmatrix} \\ \begin{bmatrix} a_{12} & a_{22} \\ \end{bmatrix} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \\ \end{bmatrix} \\ \end{bmatrix} = \begin{bmatrix} a_1^{T} \begin{bmatrix} c_{1} & c_{2} \\ \end{bmatrix} \\ a_2^{T} \begin{bmatrix} c_{1} & c_{2} \\ \end{bmatrix} \\ \end{bmatrix} = \begin{bmatrix} a_1^{T}C \\ a_2^{T}C \\ \end{bmatrix} = det(A) \begin{bmatrix} a_1^{T}A^{-1} \\ a_2^{T}A^{-1} \\ \end{bmatrix}$

Therefore for $BA=A^{T}$ we have

$B=\begin{bmatrix} a_1^{T}A^{-1} \\ a_2^{T}A^{-1} \\ \end{bmatrix}$
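Building $B$ row by row like this does reproduce $A^{T}$; a quick numerical sketch (arbitrary invertible example matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])   # arbitrary invertible example
Ainv = np.linalg.inv(A)

a1, a2 = A[:, 0], A[:, 1]    # column vectors of A
B = np.vstack([a1 @ Ainv,    # row i of B is a_i^T A^{-1}
               a2 @ Ainv])

print(np.allclose(B @ A, A.T))  # True
```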

**Edit 4:**

I think I will post my own answer soon. Going down the path of $A^{-1}$, I had the idea that one can exploit the symmetry of $AA^{T}$. Symmetry means that $AA^{T}$ decomposes more nicely:

$AA^{T} = R_{AA^{T}} \Lambda_{AA^{T}} (R^{-1})_{AA^{T}}$

Now if you multiply both sides from the left by $A^{-1}$, you'll get

$A^{T} = A^{-1} R_{AA^{T}} \Lambda_{AA^{T}} (R^{-1})_{AA^{T}}$

When I do an example with numbers, I can also see that in my example $R_{AA^{T}} = (R^{-1})_{AA^{T}}$. This is consistent with what follows, because a 2D orthogonal matrix with determinant $-1$ is a reflection, and a reflection is its own inverse.

$R_{AA^{T}}$ mirrors the space along the y axis and then rotates by some angle $\alpha$. So my suspicion right now is:

$A^{T}=A^{-1} R_{AA^{T}} \Lambda_{AA^{T}} R_{AA^{T}}$

Now if I define

$R_{AA^{T}}^{'} = \begin{bmatrix} \cos \alpha & -\sin \alpha \\ \sin \alpha & \cos \alpha \\ \end{bmatrix}$

to factor the mirroring out of the matrix $R_{AA^{T}}$, then I get

$A^{T}=A^{-1} R_{AA^{T}}^{'} \begin{bmatrix} -1 & 0 \\ 0 & 1 \\ \end{bmatrix} \Lambda_{AA^{T}} R_{AA^{T}}^{'} \begin{bmatrix} -1 & 0 \\ 0 & 1 \\ \end{bmatrix} $

So generally

$A^{T}=A^{-1} R_{\alpha} M_y \Lambda R_{\alpha} M_y$

With $M_y$ being the mirroring along the y axis, $R_{\alpha}$ some counter-clockwise rotation by $\alpha$, and $\Lambda$ some scaling.
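The always-valid part of this chain, $A^{T} = A^{-1} R_{AA^{T}} \Lambda_{AA^{T}} (R^{-1})_{AA^{T}}$, can be verified numerically for any invertible matrix. Whether the eigenvector matrix comes out as a pure rotation (determinant $+1$) or includes a mirror (determinant $-1$) depends on the sign conventions the eigensolver happens to pick; a sketch with an arbitrary example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])   # arbitrary invertible example

# Eigendecomposition of the symmetric matrix A A^T
eigvals, R = np.linalg.eigh(A @ A.T)
Lam = np.diag(eigvals)

# A^T = A^{-1} R Λ R^{-1} holds whenever A is invertible
Ainv = np.linalg.inv(A)
print(np.allclose(A.T, Ainv @ R @ Lam @ np.linalg.inv(R)))  # True

# det(R) is +1 (pure rotation) or -1 (rotation combined with a mirror)
print(np.linalg.det(R))
```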