14

Let $A$ be a fixed $n\times n$ matrix over a field $F$. We can look at the subspace $$W=\{X\in M_{n,n}(F); AX=XA=0\}$$ of the matrices which fulfill both $AX=0$ and $XA=0$.

Looking at these equations we see that every column of $X$ has to satisfy $A\vec c=\vec 0$. (Let us say we are working with column vectors.) Similarly, every row has to satisfy $\vec r^T A=\vec 0^T$. This tells us that the possible columns/rows of the matrix $X$ lie in a subspace of dimension $n-\operatorname{rank}A$ (the right/left null space of $A$).

At least in some cases it is almost immediately possible to find $W$ or at least $\dim W$.

  • Obviously, if $A$ is invertible, then $W=\{0\}$ and $\dim W=0$.
  • Another trivial case is when $A=0$, which gives us $W=M_{n,n}$ and $\dim W=n^2$.
  • A slightly less trivial but still simple case is $\operatorname{rank} A=n-1$. In this case the conditions on rows/columns give us one-dimensional spaces, so there are non-zero vectors $\vec r$, $\vec c$ such that each row has to be a multiple of $\vec r^T$ and each column has to be a multiple of $\vec c$. Up to a scalar multiple, there is only one way to get such a matrix, so $W$ is generated by the matrix $\vec c\vec r^T$ and $\dim W=1$.

The general case seems to be a bit more complicated. If we denote $k=n-\operatorname{rank}A$, we can use the same argument to see that there are $k$ linearly independent vectors $\vec c_1,\dots,\vec c_k$ such that the columns have to be linear combinations of these vectors. Similarly, each row can be chosen only from the span of linearly independent vectors $\vec r_1,\dots,\vec r_k$. (This is again just a direct consequence of $A\vec c=\vec 0$ and $\vec r^TA=\vec 0^T$.)

Using these vectors we can get $k^2$ matrices $$A_{ij}=\vec c_i \vec r_j^T$$ for $i,j\in\{1,2,\dots,k\}$. Unless I missed something, showing that these matrices are linearly independent is not too difficult. So we should get that $$\dim W \ge k^2 = (n-\operatorname{rank}A)^2.$$ It is not obvious to me whether these matrices actually generate $W$. (And perhaps something can be said about the dimension of $W$ without exhibiting a basis.)
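If it helps to experiment, the construction above is easy to test numerically. Below is a small sketch (my own addition, not part of the question) that builds the matrices $A_{ij}=\vec c_i\vec r_j^T$ from null-space bases of a random singular matrix and checks that they lie in $W$ and are linearly independent; the helper `null_space` is a hypothetical utility implemented via the SVD.

```python
import numpy as np

def null_space(A, tol=1e-10):
    # orthonormal basis of the right null space of A, via the SVD
    _, s, vh = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vh[rank:].T

rng = np.random.default_rng(0)
n = 5
# random n x n matrix of rank 3, so k = n - rank(A) = 2
A = rng.standard_normal((n, 3)) @ rng.standard_normal((3, n))

C = null_space(A)      # columns c_1, ..., c_k span the right null space
R = null_space(A.T)    # columns r_1, ..., r_k span the left null space
k = C.shape[1]

# the k^2 matrices A_ij = c_i r_j^T all satisfy A X = X A = 0 ...
basis = [np.outer(C[:, i], R[:, j]) for i in range(k) for j in range(k)]
for X in basis:
    assert np.allclose(A @ X, 0) and np.allclose(X @ A, 0)

# ... and they are linearly independent: stacked vectorizations have rank k^2
M = np.stack([X.ravel() for X in basis])
assert np.linalg.matrix_rank(M) == k * k
```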

You may notice that in the three trivial examples above (with $k=0,1,n$) we got the equality $\dim W=(n-\operatorname{rank}A)^2$.

Another possible way to look at this problem is to use the linear map $$f\colon M_{n,n} \to M_{n,n}\oplus M_{n,n},\qquad X\mapsto(AX,XA).$$ Then $W=\operatorname{Ker} f$, so we are basically asking for the dimension of the kernel of this map. By rank–nullity, to find $\dim W$ it would be sufficient to find $\dim\operatorname{Im} f$. However, this does not seem to be easier than the original formulation of the problem.

It is also possible to see this as a system of $n^2$ linear equations with $n^2$ unknowns $x_{11}, x_{12}, \dots, x_{nn}$. If we try to use this line of thinking, the difficult part seems to be determining how many of those equations are linearly dependent.
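That system can be written down explicitly with Kronecker products: with column-major vectorization, $AX=0$ becomes $(I\otimes A)\operatorname{vec}X=\vec 0$ and $XA=0$ becomes $(A^T\otimes I)\operatorname{vec}X=\vec 0$. Here is a quick numerical check of the conjectured nullity (my own sketch, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, 4)) @ rng.standard_normal((4, n))  # rank 4

# stack the 2n^2 equations A X = 0 and X A = 0 in the n^2 unknowns vec(X)
I = np.eye(n)
S = np.vstack([np.kron(I, A),      # A X = 0
               np.kron(A.T, I)])   # X A = 0

nullity = n * n - np.linalg.matrix_rank(S)
k = n - np.linalg.matrix_rank(A)
assert nullity == k * k            # matches (n - rank A)^2 in this example
```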

Question: What can be said about the dimension of the subspace $W$? Is it equal to $(n-\operatorname{rank}A)^2$? Is it determined just by the rank of $A$? If not, what are the best possible bounds we can get if we know only the rank of $A$ and have no further information about $A$?


Motivation for this question was working on an exercise which asked for the dimensions of the spaces $W_1$, $W_2$, $W_1\cap W_2$ and $W_1+W_2$, where the spaces $W_1$ and $W_2$ were determined by the conditions $AX=0$ and $XA=0$, respectively. Since the matrix $A$ was given, in this exercise it was possible to find a basis of $W_1\cap W_2$ explicitly. (And the exercise was probably intended just to make the students accustomed to some basic computations, such as finding a basis, using Grassmann's formula, etc.) Still, I was wondering how much we can say just from knowing the rank of $A$, without going through all the computations.

Batominovski
Martin Sleziak

4 Answers

8

There are invertible matrices $P$ and $Q$ such that $A=PJQ$ where $J=\pmatrix{I_r&0\\0&0}$ with $I_r$ an identity matrix of size $r=\operatorname{rank}(A)$. Then $AX=0$ iff $PJQX=0$ iff $J(QXP)=0$. Likewise $XA=0$ iff $(QXP)J=0$. Let $Y=QXP$. Then $YJ=JY=0$ iff $Y=\pmatrix{0&0\\0&*}$. Since $X\mapsto QXP$ is an invertible linear map, the dimension of admissible $X$ equals the dimension of admissible $Y$, which is $(n-r)^2$.
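As an illustration (my addition, not part of the answer), a factorization $A=PJQ$ can be obtained numerically from the SVD: write $A=U\Sigma V^T$ and absorb the nonzero singular values into $Q$. Any $Y$ supported on the lower-right block then gives a solution $X=Q^{-1}YP^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))  # rank 2

# A = P J Q with P, Q invertible and J = diag(I_r, 0), built from the SVD
U, s, Vt = np.linalg.svd(A)
r = int((s > 1e-10).sum())
D = np.diag(np.concatenate([s[:r], np.ones(n - r)]))
P, Q = U, D @ Vt
J = np.zeros((n, n)); J[:r, :r] = np.eye(r)
assert np.allclose(P @ J @ Q, A)

# any Y supported on the lower-right (n-r) x (n-r) block yields a solution
Y = np.zeros((n, n))
Y[r:, r:] = rng.standard_normal((n - r, n - r))
X = np.linalg.inv(Q) @ Y @ np.linalg.inv(P)
assert np.allclose(A @ X, 0) and np.allclose(X @ A, 0)
```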

Angina Seng
3

Yes, the dimension is always $(n - \operatorname{rank}(A))^2$. Here's one justification.


For the convenience of eigenvalue stuff, I assume that $F$ is algebraically closed, or at least that we can appeal to the existence of its algebraic closure.

Let $V_0$ denote the subspace $V_0 = \{X: AX = XA\}$. That is, $V_0$ is the solution space to the Sylvester equation $AX - XA = 0$. By using some vectorization tricks, we can see that $V_0$ is spanned by the matrices of the form $xy^T$ such that $Ax = \lambda x$ and $A^Ty = \lambda y$ for some $\lambda \in \bar F$. We can see that $\dim(V_0) = \sum_k d_k^2$ where $d_k$ is the geometric multiplicity of the $k$th eigenvalue.

Some care is required in showing that this basis spans $V_0$ for a non-diagonalizable $A$. One way to show that this happens is to compute the kernel of $I \otimes A - A^T \otimes I$, taking $A$ to be in Jordan canonical form.
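At least for a diagonalizable $A$, this kernel computation is easy to carry out numerically. Here is a small sanity check I added (not part of the answer), using a symmetric, hence diagonalizable, $A$ with geometric multiplicities $3, 1, 1$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
# symmetric (hence diagonalizable) A with eigenvalue multiplicities 3, 1, 1
Qmat, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Qmat @ np.diag([2.0, 2.0, 2.0, -1.0, 0.0]) @ Qmat.T

# vec(AX - XA) = (I ⊗ A - A^T ⊗ I) vec(X), with column-major vectorization
K = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))
dim_V0 = n * n - np.linalg.matrix_rank(K)
assert dim_V0 == 3**2 + 1**2 + 1**2   # sum of squared geometric multiplicities
```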

The space $W$ that you're looking for is the intersection of $V_0$ with the kernel of $X \mapsto AX$. This is spanned by the matrices $xy^T$ such that $x \in \ker(A)$ and $y \in \ker(A^T)$. Your conclusion follows.

Ben Grossmann
  • Hi Omno, I just read your post. I think that your proof is valid only for $A$ diagonalizable. Otherwise, the $xy^T$ do not span $V_0$ because $\dim(V_0)> \sum_k d_k^2$ (you must use squares of differences of dimensions of iterated kernels). –  Dec 23 '18 at 12:54
  • @loupblanc hence the “some care is required” paragraph – Ben Grossmann Dec 23 '18 at 16:22
  • Yes, of course. –  Dec 23 '18 at 17:43
  • It is possible that I missed (or misunderstood) something, but this seems to be a counterexample for the claim about $V_0$ in the case of matrices which are not diagonalizable: [For non-diagonalizable matrices, the dimension of centralizer can be different from $\sum\limits_{j=1}^k d_j^2$](https://math.stackexchange.com/q/3270502). – Martin Sleziak Jun 23 '19 at 10:14
  • @MartinSleziak At a first glance, it appears that you're right. When I get the chance, I'll consider our posts more thoroughly. – Ben Grossmann Jun 23 '19 at 12:15
2

Here is a generalized version where you may be dealing with infinite dimensional vector spaces. For a given linear map $T:V\to V$ on a vector space $V$, I have a description of all linear maps $S:V\to V$ such that $ST=TS=0$.

Let $V$ be a vector space over a field $F$ and let $T:V\to V$ be a linear transformation. Define $L_T:\operatorname{End}_F(V)\to \operatorname{End}_F(V)\oplus \operatorname{End}_F(V)$ via $$L_T(S)=(ST,TS).$$ We claim that there exists an isomorphism $\varphi: \ker L_T\to \operatorname{Hom}_F(\operatorname{coim} T,\ker T)$ of vector spaces, where $\operatorname{coim} T$ is the coimage of $T$: $$\operatorname{coim} T=V/\operatorname{im}T.$$

Observe that $\operatorname{im}S\subseteq \ker T$ and $\operatorname{im}T\subseteq \ker S$ for all $S\in\ker L_T$. Let $\pi:V\to \operatorname{coim}T$ be the canonical projection $v\mapsto v+\operatorname{im}T$. For $S\in \ker L_T$, we see that $S:V\to\ker T$ factors through $\pi$, i.e., $S=\tilde{S}\circ \pi$ for a unique linear map $\tilde{S}:\operatorname{coim}T\to\ker T$.
We define $\varphi:\ker L_T\to \operatorname{Hom}_F(\operatorname{coim} T,\ker T)$ in the obvious manner: $S\mapsto \tilde{S}$. This map is clearly an isomorphism, with inverse $$\varphi^{-1}(X)=X\circ\pi$$ for all $X\in \operatorname{Hom}_F(\operatorname{coim} T,\ker T)$. The claim is now justified.

The nullity $\operatorname{null} T$ of $T$ is the dimension of the kernel of $T$. The corank $\operatorname{cork}T$ of $T$ is the dimension of $\operatorname{coim} T$. In the case $\operatorname{null}T<\infty$ or $\operatorname{cork}T<\infty$,
$$\operatorname{Hom}_F(\operatorname{coim} T,\ker T)\cong (\ker T)\otimes_F (\operatorname{coim}T)^*,$$ where the isomorphism is natural, so $$\operatorname{null}L_T=\dim_F \ker L_T=(\operatorname{null}T)\big(\dim_F(\operatorname{coim}T)^*\big)$$ in this case. In particular, if $\operatorname{cork}T<\infty$, we have $(\operatorname{coim}T)^*\cong \operatorname{coim}T$, so that $$\operatorname{null}L_T=(\operatorname{null}T)\big(\dim_F(\operatorname{coim}T)^*\big)=(\operatorname{null}T)(\dim_F\operatorname{coim}T)=(\operatorname{null}T)(\operatorname{cork}T).$$ Particularly, when $V$ is finite dimensional, we have $\operatorname{cork}T<\infty$, and by the rank-nullity theorem, we get $\operatorname{cork}T=\operatorname{null}T=\dim_F V-\operatorname{rank}T$, and so $$\operatorname{null}L_T=\dim_F \ker L_T=(\dim_F V-\operatorname{rank}T)^2$$ as the OP conjectures. (But if $V$ is infinite dimensional, for any pair $(m,k)$ of non-negative integers, there exists $T\in\operatorname{End}_F(V)$ with nullity $m$ and corank $k$.)

Here is an example of $T:V\to V$ with nullity $m$ and corank $k$ when $V$ is infinite dimensional. Pick a basis $B$ of $V$. Since $B$ is infinite, it has a countable subset $\{b_1,b_2,b_3,\ldots\}$. Let $Y$ be the span of $\{b_1,b_2,b_3,\ldots\}$ and $Z$ the span of $B\setminus\{b_1,b_2,b_3,\ldots\}$. Then, $V=Y\oplus Z$. Define $T:V\to V$ as follows: $$T\left(\sum_{i=1}^\infty s_i b_i+z\right)=\sum_{i=1}^\infty s_{m+i} b_{k+i}+z$$ for all $s_1,s_2,s_3,\ldots\in F$ with only finitely many non-zero terms and for all $z\in Z$. We have $\ker T=\operatorname{span}\{b_1,b_2,\ldots,b_m\}$ and $V=(\operatorname{im} T)\oplus \operatorname{span}\{b_1,b_2,\ldots,b_k\}$, so $T$ has nullity $m$ and corank $k$.

The situation is not so straightforward when $T$ has infinite corank. If $\operatorname{null}T<\infty$, then we already know that $$\operatorname{null}L_T= (\operatorname{null}T)\big(\dim_F(\operatorname{coim}T)^*\big)\,.$$ From this mathoverflow thread, $\dim_F(\operatorname{coim}T)^*=|F|^{\operatorname{cork}T}$. So, we have two cases when $\operatorname{null}T$ is finite but $\operatorname{cork}T$ is infinite: $$\operatorname{null}L_T= \begin{cases}0&\text{if}\ \operatorname{null}T=0,\\ |F|^{\operatorname{cork}T}&\text{if}\ 0<\operatorname{null}T<\infty.\end{cases}$$ If both $\operatorname{null}T$ and $\operatorname{cork}T$ are infinite, we can use the result from the same mathoverflow thread to prove that $$\operatorname{null}L_T=\dim_F\operatorname{Hom}_F(\operatorname{coim} T,\ker T)=\max\left\{|F|^{\operatorname{cork}T},(\operatorname{null}T)^{\operatorname{cork}T}\right\}.$$


Even more generally, let $U$ and $V$ be vector spaces over $F$. For $R\in\operatorname{End}_F(U)$ and $T\in\operatorname{End}_F(V)$, define $L_{R}^T:\operatorname{Hom}_F(U,V)\to\operatorname{Hom}_F(U,V)\oplus \operatorname{Hom}_F(U,V)$ by $$L_R^T(S)=(SR,TS).$$ (That is, when $U=V$, we have $L_T=L_T^T$.) Then, there exists an isomorphism of vector spaces $$\varphi:\ker L_R^T\to \operatorname{Hom}_F(\operatorname{coim}R,\ker T).$$ In particular, if $U$ and $V$ are both finite dimensional, then $$\operatorname{null} L_R^T=\dim_F\ker L_R^T=(\operatorname{cork}R)(\operatorname{null} T)=(\dim_FU-\operatorname{rank}R)(\dim_FV-\operatorname{rank}T).$$ In general, $$\operatorname{null}L_R^T=\begin{cases}(\operatorname{cork} R)(\operatorname{null}T)&\text{if}\ \operatorname{cork}R<\infty,\\ 0&\text{if}\ \operatorname{null} T=0,\\ |F|^{\operatorname{cork}R}&\text{if}\ 0<\operatorname{null} T<\infty\ \wedge\ \operatorname{cork}R=\infty,\\ \max\left\{|F|^{\operatorname{cork}R},(\operatorname{null} T)^{\operatorname{cork}R}\right\}&\text{if}\ \operatorname{null}T=\infty\ \wedge\ \operatorname{cork}R=\infty. \end{cases}$$
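In the finite-dimensional case, the rectangular formula $\operatorname{null} L_R^T=(\operatorname{cork}R)(\operatorname{null}T)$ is again easy to confirm numerically. The following sketch (mine, not part of the answer) vectorizes the two conditions $SR=0$ and $TS=0$:

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 5, 4                                                    # dim U = p, dim V = q
R = rng.standard_normal((p, 2)) @ rng.standard_normal((2, p))  # rank 2
T = rng.standard_normal((q, 3)) @ rng.standard_normal((3, q))  # rank 3

# S : U -> V is a q x p matrix; with column-major vec,
# SR = 0 becomes (R^T ⊗ I_q) vec(S) = 0 and TS = 0 becomes (I_p ⊗ T) vec(S) = 0
M = np.vstack([np.kron(R.T, np.eye(q)),
               np.kron(np.eye(p), T)])

nullity = p * q - np.linalg.matrix_rank(M)
cork_R = p - np.linalg.matrix_rank(R)          # corank of R
null_T = q - np.linalg.matrix_rank(T)          # nullity of T
assert nullity == cork_R * null_T
```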


This is my old proof that $\operatorname{null}L_T=(\operatorname{null}T)(\operatorname{cork}T)$ when $T$ has finite nullity and finite corank. Suppose that $T$ has finite nullity $m$ and finite corank $k$; I claim that $L_T$ then has finite nullity $mk$.

For $S\in\ker L_T$, we see that $\operatorname{im} S\subseteq \ker T$ and $\operatorname{im} T\subseteq \ker S$. Because $T$ has finite nullity $m$, it follows that $S$ has finite rank $r\leq m$. Therefore, $$S=v_1\otimes \phi_1+v_2\otimes \phi_2+\ldots+v_r\otimes \phi_r$$ for some linearly independent $v_1,v_2,\ldots,v_r\in \ker T$ and for some linearly independent $\phi_1,\phi_2,\ldots,\phi_r\in V^*=\operatorname{Hom}_F(V,F)$. Since $v_1,v_2,\ldots,v_r$ are linearly independent, $$\ker S=\bigcap_{i=1}^r\ker \phi_i.$$ Therefore, $\operatorname{im} T$ must be contained in $\ker \phi_i$ for all $i=1,2,\ldots,r$.

Since $T$ has finite corank $k$, $W=V/\operatorname{im} T$ is a finite dimensional vector space of dimension $k$. Note that each $\phi_i$ vanishes on $\operatorname{im} T$ and hence factors through the canonical projection $\pi:V\to V/\operatorname{im} T=W$. That is, $\phi_i=\psi_i\circ \pi$ with $\psi_i\in W^*=\operatorname{Hom}_F(W,F)$. We can now conclude that each $S\in \ker L_T$ is of the form $$\sum_{i=1}^r v_i\otimes (\psi_i\circ \pi),$$ where $v_1,v_2,\ldots,v_r\in \ker T$ are linearly independent and $\psi_1,\psi_2,\ldots,\psi_r\in W^*=\left(V/\operatorname{im} T\right)^*$ are linearly independent.

Define the linear map $f:(\ker T)\otimes_F W^*\to\ker L_T$ in the obvious manner: $$v\otimes \psi\mapsto v\otimes (\psi\circ\pi).$$ By the observation in the previous paragraph, $f$ is surjective. By choosing a basis of $\ker T$, say $\{x_1,x_2,\ldots,x_m\}$, we see that an element in $\ker f$ must take the form $$\sum_{i=1}^m x_i\otimes \alpha_i$$ for some $\alpha_i\in W^*$. Since $x_1,\ldots,x_m$ are linearly independent, we must have that $\alpha_i\circ \pi=0$ for all $i$. But this means $\alpha_i=0$ as $\pi$ is surjective. Thus, $\ker f=\{0\}$, and so $f$ is injective. Hence, $$\ker L_T\cong (\ker T)\otimes_F W^*=(\ker T)\otimes_F (V/\operatorname{im} T)^*.$$ This establishes the assertion that $L_T$ has nullity $mk$.

Batominovski
0

One can consider $$U=\{(A,B)\in M_n\times M_n : AB=BA=0\},\qquad V=\{(A,B)\in M_n\times M_n : AB=0\}.$$

$U$ and $V$ are closed algebraic sets, stratified by $\operatorname{rank}(A)$.

Let $W_r$ be the algebraic set of matrices of rank $r$; since $\dim(W_r)=r(2n-r)$ and the fiber over each $A\in W_r$ has dimension $(n-r)^2$, the dimension of a stratum of $U$ is $(n-r)^2+r(2n-r)=n^2$. In particular, all strata have the same dimension and $\dim(U)=n^2$.

You'd think $V$ has about the same dimension as $U$, for example $\dim(V)=\dim(U)+O(n)$. This is not the case; recall that, when $AB=0$, we may still have $\operatorname{rank}(BA)=n/2$.

Using Lord Shark the Unknown's post, we obtain that the dimension of a stratum of $V$ is $d_r=[r(n-r)+(n-r)^2]+r(2n-r)=n^2+nr-r^2$, which depends on $r$.

Since $\max_r d_r$ is attained at $r=\lfloor n/2\rfloor$, we deduce that $\dim(V)=\lfloor 5n^2/4\rfloor$.
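The maximization is elementary to verify: $d_r=n^2+nr-r^2$ is a downward parabola in $r$, and the optimal integer $r$ near $n/2$ gives $\lfloor 5n^2/4\rfloor$. A quick check (my addition):

```python
# d_r = n^2 + n r - r^2 maximized over integer r in [0, n]
def max_stratum_dim(n):
    return max(n * n + n * r - r * r for r in range(n + 1))

# agrees with floor(5 n^2 / 4) for small n
assert all(max_stratum_dim(n) == (5 * n * n) // 4 for n in range(1, 60))
```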

Now we can seek the singular locus of $U$ or $V$.