I am auditing a Linear Algebra class, and today we were taught about the rank of a matrix. The definition was given from the row point of view:

"The rank of a matrix A is the number of non-zero rows in the reduced row-echelon form of A".

The lecturer then explained that if the matrix $A$ has size $m \times n$, then $\operatorname{rank}(A) \leq m$ and $\operatorname{rank}(A) \leq n$.

The way I had been taught about rank was that it was the smaller of

  • the number of rows bringing new information
  • the number of columns bringing new information.

I don't see how that would change if we transposed the matrix, so I said in the lecture:

"then the rank of a matrix is the same of its transpose, right?"

And the lecturer said:

"oh, not so fast! Hang on, I have to think about it".

As the class has about 100 students and the lecturer was just substituting for the "normal" lecturer, he was probably a bit nervous, so he just went on with the lecture.

I have tested "my theory" with one matrix and it works, but even if I tried with 100 matrices and it worked, I wouldn't have proven that it always works because there might be a case where it doesn't.

So my question is first whether I am right, that is, whether the rank of a matrix is the same as the rank of its transpose, and second, if that is true, how can I prove it?
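For what it's worth, here is the kind of numerical check I ran, extended to many random matrices (a sanity check only, not a proof; the shapes, entries, and seed are arbitrary):

```python
import numpy as np

# Sanity check, not a proof: compare rank(A) with rank(A^T)
# for a batch of random rectangular matrices of various shapes.
rng = np.random.default_rng(0)
for _ in range(100):
    m, n = rng.integers(1, 8, size=2)              # random shape, 1..7 rows/cols
    A = rng.integers(-5, 6, size=(m, n)).astype(float)
    assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
print("rank(A) == rank(A.T) held for all 100 random matrices")
```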

Thanks :)

Chris Tang
  • 365
  • 2
  • 13
  • 1,339
  • 4
  • 17
  • 20
  • 21
    Just a quick comment: the way you have defined rank is essentially the minimum of the row rank and the column rank. By that definition, it is obvious that rank is invariant under transposition. What is *not* obvious, but true and useful, is that "number of rows bringing new information" is equal to "number of columns bringing new information", so it is not necessary to take the minimum of the two. – Pete L. Clark Aug 13 '10 at 00:46
  • Probably the "sledgehammer" approach to a "walnut" problem, but I'd just have done a singular value decomposition of $A$ and $A^T$, note that one decomposition is expressible in terms of the other, and then show that the two diagonal matrices resulting from the two decompositions have the same rank (and nullity too). – J. M. ain't a mathematician Aug 13 '10 at 01:11
  • @J.M. if the lecturer has just explained that the rank must be smaller than the row size, I think it may be a bit early to assume SVD. – Willie Wong Jun 22 '11 at 11:52
  • You can easily prove that the rank is the largest size for which you can find a non-vanishing minor... And then use this result to prove that it is invariant under transposition... – N. S. Nov 14 '11 at 21:31

5 Answers


The answer is yes. This statement often goes under the name "row rank equals column rank". Knowing that, it is easy to search the internet for proofs.

Also any reputable linear algebra text should prove this: it is indeed a rather important result.

Finally, since you said that you had only a substitute lecturer, I won't castigate him, but this would be a distressing lacuna of knowledge for someone who is a regular linear algebra lecturer.

Pete L. Clark
  • He is the head of department from what I hear, but he is really young. I don't know what his field is, but I don't think it is linear algebra. I think that he probably knew the answer anyway 5 min after the lecture finished, but it was too late by then. Thanks for the answer. – Vivi Aug 13 '10 at 00:36
  • "The answer is yes": the mathematical fact is true. What is more debatable is whether she was right in seeing why it was true. It's an important result, but not a very obvious one. – leonbloy Jun 22 '11 at 11:30
  • @Vivi If he's the head of the department, he should know this information... The linear algebra you're taking (I'm assuming it's a first course) is information that many mathematicians use every day - I've heard "linear algebra is the one thing we can do well". People don't really "specialize" in linear algebra: there are related higher topics, but linear algebra is very well understood (hence the quote). All in all, it's just a bit disturbing that the head of the department isn't comfortable with his linear algebra. (Although it's possible he was just put on the spot and got a bit flustered) – Stahl Apr 20 '13 at 18:54
  • @Stahl: After more than two and a half years, I think we can forgive Vivi's linear algebra lecturer for his momentary lapse. :) – Pete L. Clark Apr 20 '13 at 21:47
  • @PeteL.Clark Perhaps! I didn't realize how old this question was... and at the moment, I'm not sure how I stumbled across it :P – Stahl Apr 20 '13 at 21:54
  • The link no longer works for me. – leo Sep 24 '14 at 06:14
  • @Stahl it could be the head of some other department, like engineering. Engineers do not always need to apply such a result, in contrast to more "useful" techniques like diagonalization. – SOFe Apr 16 '19 at 13:22
  • The link appears broken. Could you maybe replace it (or also just remove it). – quid Dec 30 '19 at 16:31
  • For now I just removed the broken link. As time permits you might restore one. – quid Dec 31 '19 at 16:50

There are several simple proofs of this result. Unfortunately, most textbooks use a rather complicated approach using row reduced echelon forms. Please see some elegant proofs in the Wikipedia page (contributed by myself):


or the page on rank factorization:


Another of my favorites is the following:

Define $\operatorname{rank}(A)$ to mean the column rank of $A$: $\operatorname{col\,rank}(A) = \dim \{Ax: x \in \mathbb{R}^n\}$. Let $A^{t}$ denote the transpose of $A$. First show that $A^{t}Ax = 0$ if and only if $Ax = 0$. This is standard linear algebra: one direction is trivial, the other follows from:

$$A^{t}Ax=0 \implies x^{t}A^{t}Ax=0 \implies (Ax)^{t}(Ax) = 0 \implies Ax = 0$$

Therefore, the columns of $A^{t}A$ satisfy the same linear relationships as the columns of $A$. It doesn't matter that they have different numbers of rows. They have the same number of columns and they have the same column rank. (This also follows from the rank-nullity theorem, if you have proved that independently, i.e. without assuming row rank = column rank.)

Therefore, $\operatorname{col\,rank}(A) = \operatorname{col\,rank}(A^{t}A) \leq \operatorname{col\,rank}(A^{t})$. (This last inequality follows because each column of $A^{t}A$ is a linear combination of the columns of $A^{t}$, so $\operatorname{col\,sp}(A^{t}A)$ is a subspace of $\operatorname{col\,sp}(A^{t})$.) Now apply the same argument to $A^{t}$ to get the reverse inequality, proving $\operatorname{col\,rank}(A) = \operatorname{col\,rank}(A^{t})$. Since $\operatorname{col\,rank}(A^{t})$ is the row rank of $A$, we are done.
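A small numerical illustration of the two key steps (the rank-deficient example matrix is mine, not from the proof):

```python
import numpy as np

# Illustrates: null(A^T A) = null(A), hence col rank(A^T A) = col rank(A).
A = np.array([[1., 2., 3.],
              [2., 4., 6.],    # twice the first row, so rank(A) = 2
              [0., 1., 1.]])
G = A.T @ A                    # the Gram matrix A^T A

# Same column rank...
assert np.linalg.matrix_rank(G) == np.linalg.matrix_rank(A) == 2

# ...and the same linear relations among columns: col1 + col2 - col3 = 0 in both.
x = np.array([1., 1., -1.])
assert np.allclose(A @ x, 0) and np.allclose(G @ x, 0)
```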

  • Welcome to math.SE! Please note that you can use TeX in your posts here by enclosing mathematics in `$` or `$$`. – Zhen Lin Nov 14 '11 at 21:20
  • I'm familiar with all the notation here except $Re^n$. What is that? – Joseph Garvin Jul 22 '17 at 17:41
  • @JosephGarvin it was a typo - the answerer meant $R^n$. – ttb Jan 29 '18 at 22:47
  • "one direction is trivial" - As soon as authors write something is trivial, I immediately don't know what they are talking about. The use of the word "direction" is highly niche, many people reading this don't mean you mean "If $Ax=0$, then $A^t Ax = 0$. Now prove the other direction, that $A^t Ax = 0$, then $Ax=0$." Better to leave words like trivial and obvious out of proofs entirely. – OrangeSherbet Apr 30 '20 at 08:34

Since you talked about reduced row-echelon form, I assume you know what elementary row and column operations are. The basic fact concerning these operations is the following:

Elementary (row or column) operations change neither the row rank nor the column rank of a matrix.

Now, given a nonzero matrix $A$, try the following:

  1. Bring $A$ to its reduced row-echelon form $R$ using elementary row operations.
  2. Bring $R$ to its reduced column-echelon form $B$ using elementary column operations.

Then $B$ is of the form $$ \begin{pmatrix} 1&&&0&\ldots&0\\ &\ddots&&\vdots&&\vdots\\ &&1&0&\ldots&0\\ 0&\ldots&0&0&\ldots&0\\ \vdots&&\vdots&\vdots&&\vdots\\ 0&\ldots&0&0&\ldots&0\\ \end{pmatrix}. $$ Now it is obvious that the row rank of $B$ is equal to the column rank of $B$ (which is equal to the number of ones in the above "reduced row-and-column-echelon form"). Hence the row rank of $A$ is equal to the column rank of $A$, i.e. the row rank of $A$ is equal to the row rank of $A^T$.
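This two-step reduction is easy to replay with SymPy (the example matrix is mine), using the fact that column operations on $R$ are just row operations on $R^T$:

```python
import sympy as sp

# Step 1: row-reduce A; Step 2: column-reduce the result
# (implemented as row-reducing the transpose, then transposing back).
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])
R = A.rref()[0]                 # reduced row-echelon form of A
B = R.T.rref()[0].T             # reduced "row-and-column-echelon" form

# B is diagonal with r ones, where r is the rank:
r = sum(1 for i in range(min(B.shape)) if B[i, i] == 1)
assert B == sp.diag(1, 1, 0)    # for this particular A
assert A.rank() == A.T.rank() == r == 2
```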


Yes, it is a fact. This is true over any commutative field. See for instance the first chapter of Emil Artin, Galois Theory for a very elementary argument.

If you need to phrase that argument in more conceptual terms, consider the matrices as linear transformations. If $A$ is the matrix, let $A^t$ be its transpose. Then $A^tA$ and $A$ have the same domain and the same null space, so the dimension theorem (rank + nullity = dimension of the domain) gives the result.

Your argument is true for real matrices only. For complex matrices they may not have the same null space.
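The complex caveat is easy to see concretely; a small sketch (the example matrix is mine) showing why the conjugate transpose $A^*$ is needed over $\mathbb{C}$:

```python
import numpy as np

# Over C, the plain transpose can fail: here A has rank 1 and trivial
# null space, yet A^T A is the zero matrix.
A = np.array([[1.0], [1.0j]])            # 2x1 complex matrix, rank 1
G_t = A.T @ A                            # plain transpose: [[1 + (1j)**2]] = [[0]]
G_h = A.conj().T @ A                     # conjugate transpose: [[2]]

assert np.allclose(G_t, 0)               # rank collapsed to 0: null spaces differ
assert np.linalg.matrix_rank(G_h) == np.linalg.matrix_rank(A) == 1
```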

  • George, this is the first maths subject I do. I have no maths background, and I haven't even learned about nullity yet, nor do I know what a commutative field is. I was expecting something more basic... But I appreciate the answer, and +1 for you! – Vivi Aug 13 '10 at 00:38
  • @Vivi: Do not worry, you can understand the reference I cited. The book of Emil Artin proves this right away in the beginning. It should be the second theorem or so, if I remember right. The book starts with the very definition of matrix. So you should be able to understand it. Rather, as Pete Clark says, it should be there in any respectable linear algebra book. So I suspect that your favorite book will contain it. –  Aug 13 '10 at 00:41
  • OK, I will check it out if the book is available in the library (which should be the case). Thanks again for the help :) – Vivi Aug 13 '10 at 00:43
  • @Vivi: It is perhaps out of print. If you can get a copy, it is theorem 4 at page 7: http://books.google.com/books?id=BdS1D5mymwYC&lpg=PP1&pg=PA7#v=onepage&q&f=true –  Aug 13 '10 at 00:51
  • Here are 2 links to pdf copies of Artin's book. Link 1: http://www-fourier.ujf-grenoble.fr/~marin/une_autre_crypto/Livres/Artin%20M.%20Galois%20theory%20%282ed,%20London,%201944%29%2886s%29.pdf --- Link 2: http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ndml/1175197041 – Pierre-Yves Gaillard Aug 13 '10 at 08:58
  • In a previous comment I gave 2 links to pdf copies of Artin's book. The 2nd link is broken. New link: https://projecteuclid.org/ebooks/notre-dame-mathematical-lectures/Galois-Theory/toc/ndml/1175197041 – Pierre-Yves Gaillard Feb 19 '22 at 08:37

(1) If $f:V\to W$ is a linear map and $f^*:W^*\to V^*$ is its transpose, then we have a canonical isomorphism $$\operatorname{Im}(f^*)\ \simeq\ \operatorname{Im}(f)^*.$$ In particular, if $f$ has finite rank, then $\operatorname{rank}(f^*)=\operatorname{rank}(f)$.

This can be seen as follows:

(2) If $$ V\ \overset{p}{\twoheadrightarrow}A\ \overset{i}{\rightarrowtail}\ W\quad\text{and}\quad V\ \overset{q}{\twoheadrightarrow}B\ \overset{j}{\rightarrowtail}\ W $$ are two diagrams of linear maps such that

(a) $i$ and $j$ are injective, $p$ and $q$ are surjective,

(b) $i\circ p=j\circ q$,

then there is a unique linear map $\varphi:A\to B$ such that $\varphi\circ p=q$ and $j\circ\varphi=i$. Moreover $\varphi$ is bijective. The proof is easy.

To prove that (2) implies (1), note that the three diagrams
$$ V\ \overset{p}{\twoheadrightarrow}\ \text{Im}(f)\ \overset{i}{\rightarrowtail}\ W, $$ $$ W^*\ \overset{i^*}{\twoheadrightarrow}\ \text{Im}(f)^*\ \overset{p^*}{\rightarrowtail}\ V^*, $$ $$ W^*\ \overset{q}{\twoheadrightarrow}\ \text{Im}(f^*)\ \overset{j}{\rightarrowtail}\ V^*, $$ where $p,i,q,j$ are the obvious maps, satisfy (a). As $p^*\circ i^*=f^*=j\circ q$, we see that (2) implies (1).

Assume the rank of $f:V\to W$ is infinite. The Erdős-Kaplansky Theorem then implies $$\operatorname{rank}(f^*)=\dim\operatorname{Im}(f)^*=|K|^{\operatorname{rank}(f)}>\operatorname{rank}(f),$$ where $K$ is the ground field and $|X|$ is the cardinality of $X$ for any set $X$.

More precisely, the Erdős-Kaplansky Theorem says $$ \dim(V^*)=|K|^{\dim(V)} $$ whenever $V$ is infinite dimensional, or, equivalently $$ \dim(K^S)=|K^S|, $$ where $S$ is an infinite set and $K^S$ is the set of families $(a_s)_{s\in S}$ with $a_s$ in $K$. In words:

The dimension of an infinite dimensional dual vector space is equal to its cardinality.

For a proof of the Erdős-Kaplansky Theorem, please see this answer.

Pierre-Yves Gaillard