
I am trying to prove that:


The matrix $C = \left(\begin{smallmatrix}A& 0\\0 & B\end{smallmatrix}\right)$ is diagonalizable if and only if $A$ and $B$ are diagonalizable.


If $A\in GL(\mathbb{C}^n)$ and $B\in GL(\mathbb{C}^m)$ are diagonalizable, then it is easy to check that $C\in GL(\mathbb{C}^{n+m})$ is diagonalizable. But if I suppose that $C$ is diagonalizable, then there exists $S = [S_1, S_2, \ldots, S_{n+m}]$, $S_i\in\mathbb{C}^{m+n}$, such that $S^{-1}CS = \mbox{diag}(\lambda_i)$. Now $CS_i = \lambda_iS_i$, and if $S_i = \left(\begin{smallmatrix}x_i\\y_i\end{smallmatrix}\right)$ with $x_i\in\mathbb{C}^n$ and $y_i\in\mathbb{C}^m$, then $$Ax_i = \lambda_ix_i\quad\mbox{ and }\quad By_i = \lambda_iy_i.$$ So, if I can justify that $\{x_1,\ldots,x_{n+m}\}$ contains exactly $n$ linearly independent vectors and $\{y_1,\ldots,y_{n+m}\}$ contains $m$ linearly independent vectors, I will have proved that $A$ and $B$ are diagonalizable, but I don't know how to prove that. Does anyone have an idea? Thanks in advance.
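For concreteness, here is a small numerical sketch of this setup (the $2\times 2$ blocks are arbitrary diagonalizable matrices I made up, just to see the eigen-equations appear; numpy only):

```python
import numpy as np

# Arbitrary diagonalizable blocks (distinct eigenvalues 2,3 and 5,4).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[5.0, 0.0],
              [1.0, 4.0]])
n, m = A.shape[0], B.shape[0]

# C = block diag(A, B)
C = np.block([[A, np.zeros((n, m))],
              [np.zeros((m, n)), B]])

lam, S = np.linalg.eig(C)           # columns of S are the eigenvectors S_i
for i in range(n + m):
    x, y = S[:n, i], S[n:, i]       # split S_i = (x_i, y_i)
    assert np.allclose(A @ x, lam[i] * x)   # A x_i = lambda_i x_i
    assert np.allclose(B @ y, lam[i] * y)   # B y_i = lambda_i y_i
```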

Max Lipton
FASCH

4 Answers


Short answer: the minimal polynomial of $C$ is the monic lcm of the minimal polynomials of $A$ and $B$. And a square matrix is diagonalizable if and only if its minimal polynomial splits (which is automatic in $\mathbb{C}$ of course) with only simple roots. In other words, as pointed out by DonAntonio: if and only if its minimal polynomial is the product of pairwise distinct monic linear factors. Over the field under consideration, of course.

Now I'll give a detailed argument without explicit use of minimal polynomials.

Fact: a square matrix $M$ with coefficients in a field $K$ is diagonalizable if and only if there exists a nonzero polynomial $p(X)\in K[X]$ which splits over $K$ with simple roots and such that $p(M)=0$.

Proof: if $M$ is diagonalizable and if $\{\lambda_1,\ldots,\lambda_k\}$ is the set of its (non-repeated) eigenvalues, then $p(X)=(X-\lambda_1)\cdots(X-\lambda_k)$ annihilates $M$. Conversely, if such a polynomial $p(X)$ with the $\lambda_j$ pairwise distinct annihilates $M$, we have (by Bézout, essentially, since the factors $X-\lambda_j$ are pairwise coprime): $K^n=\mbox{Ker } p(M)=\bigoplus_{j=1}^k\mbox{Ker } (M-\lambda_j I_n)$. Diagonalizability follows easily. QED.
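A quick numerical illustration of the Fact, with hypothetical $2\times 2$ examples of my choosing (a sanity check, not part of the proof):

```python
import numpy as np

# M is diagonalizable with eigenvalue set {2, 3}.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)

# p(X) = (X - 2)(X - 3) splits with simple roots and annihilates M.
assert np.allclose((M - 2 * I) @ (M - 3 * I), 0)

# A Jordan block J is not diagonalizable: (X - 2) alone does not kill it;
# only (X - 2)^2 does, and that polynomial has a double root.
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])
assert not np.allclose(J - 2 * I, 0)
assert np.allclose((J - 2 * I) @ (J - 2 * I), 0)
```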

Now for every polynomial $p(X)$, you have $$ p(C)=\left(\matrix{p(A)&0\\0&p(B)}\right) $$ This gives you the annoying direction: if $p$ splits with simple roots and $p(C)=0$, then $p(A)=0$ and $p(B)=0$, so $C$ diagonalizable implies $A$ and $B$ diagonalizable.
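One can check this block identity numerically (a sketch with arbitrary test matrices and an arbitrary test polynomial of my choosing):

```python
import numpy as np

# Arbitrary square blocks; any matching sizes work.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[5.0, 0.0], [1.0, 4.0]])
Z = np.zeros((2, 2))
C = np.block([[A, Z], [Z, B]])

def p(M):
    # An arbitrary test polynomial, p(X) = X^2 - 4X + 1.
    return M @ M - 4 * M + np.eye(M.shape[0])

# p(C) is block diagonal with blocks p(A) and p(B),
# so p(C) = 0 forces p(A) = 0 and p(B) = 0.
assert np.allclose(p(C), np.block([[p(A), Z], [Z, p(B)]]))
```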

The converse is easier. Take $P$ and $Q$ invertible such that $PAP^{-1}$ and $QBQ^{-1}$ are diagonal. Then $$ R:=\left(\matrix{P&0\\0&Q}\right) $$ is invertible with $$ R^{-1}=\left(\matrix{P^{-1}&0\\0&Q^{-1}}\right)\qquad \mbox{and}\qquad RCR^{-1}=\left(\matrix{PAP^{-1}&0\\0&QBQ^{-1}}\right) $$ is diagonal.
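A numerical sketch of this construction, again with arbitrary small diagonalizable blocks (note that numpy's `eig` returns $V$ with $AV = V\Lambda$, so one takes $P = V^{-1}$):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[5.0, 0.0], [1.0, 4.0]])
Z = np.zeros((2, 2))
C = np.block([[A, Z], [Z, B]])

# Diagonalize each block separately: P A P^{-1} and Q B Q^{-1} diagonal.
_, VA = np.linalg.eig(A)
_, VB = np.linalg.eig(B)
P, Q = np.linalg.inv(VA), np.linalg.inv(VB)

R = np.block([[P, Z], [Z, Q]])
D = R @ C @ np.linalg.inv(R)
assert np.allclose(D, np.diag(np.diag(D)))   # R C R^{-1} is diagonal
```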

Note: you can also do the converse with the fact above. Just take the lcm of the minimal polynomials of $A$ and $B$.

Julien
  • What do you mean by "polynomial which splits with simple roots"? – Pedro M. Apr 03 '13 at 12:12
  • @PedroMilet Splits means $p(x)=\lambda(x-a_1)\cdots(x-a_n)$. Simple roots means $a_i\neq a_j$ for every $i\neq j$. – Julien Apr 03 '13 at 12:13
  • Got it. A polynomial such that every root is simple. – Pedro M. Apr 03 '13 at 12:19
  • @PedroMilet In $\mathbb{C}$, yes. As every polynomial splits. But the result is true over any field. And then you need to add the assumption that it splits. – Julien Apr 03 '13 at 12:24
  • The important theorem julien means appears sometimes (perhaps "most" of the times?) in the following equivalent form: a square matrix is diagonalizable iff its minimal polynomial is the product of *different* linear factors (understood that "is the product..." over the field we're talking about, of course) – DonAntonio Apr 03 '13 at 13:24
  • This argument can be applied to show that, assuming $A$ and $C$ are square matrices, if the matrix $M=\begin{pmatrix} A & B \\ 0 & C\end{pmatrix}$ is diagonalizable, then both $A$ and $C$ are diagonalizable. Because $\det(xI-M)=\det(xI-A)\det(xI-C)$ and your argument may then be applied to $M$, $A$ and $C$. Am I right? (I don't know if the converse holds here though.) – baronbrixius Nov 04 '15 at 14:41
  • @baronbrixius You are right and the converse does not hold. Any size 2 Jordan block is a counterexample to the converse. – Marc van Leeuwen Oct 09 '16 at 04:18

When $C$ is diagonalisable, we have $S^{-1}CS=\Lambda$, or equivalently, $CS=S\Lambda$, for some invertible matrix $S$ and some diagonal matrix $\Lambda=\operatorname{diag}(\lambda_1,\ldots,\lambda_{m+n})$. Hence $$ \pmatrix{A&0\\ 0&B}\pmatrix{x_j\\ y_j}=\lambda_j\pmatrix{x_j\\ y_j} $$ where $\pmatrix{x_j\\ y_j}$ is the $j$-th column of $S$. Consequently $Ax_j=\lambda_jx_j$. As $S$ is invertible, the top $n$ rows of $S$, i.e. the matrix $(x_1,x_2,\ldots,x_{m+n})$, must have full row rank, i.e. rank $n$. Since row rank is always equal to column rank, it follows that the $n\times n$ submatrix $X=(x_{k_1},x_{k_2},\ldots,x_{k_n})$ is invertible for some $k_1,\ldots,k_n\in\{1,2,\ldots,m+n\}$. Hence the equality $Ax_j=\lambda_jx_j$ implies that $AX=XD$, where $D=\operatorname{diag}(\lambda_{k_1},\ldots,\lambda_{k_n})$. So $X^{-1}AX=D$, i.e. $A$ is diagonalisable. For the same reason, $B$ is diagonalisable too.
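Here is a rough numerical sketch of this column-selection argument (my own illustration with arbitrary blocks; scipy's QR with column pivoting is used merely as one convenient way to locate $n$ linearly independent columns among the $x_j$):

```python
import numpy as np
from scipy.linalg import qr

A = np.array([[2.0, 1.0], [0.0, 3.0]])   # arbitrary diagonalizable blocks
B = np.array([[5.0, 0.0], [1.0, 4.0]])
n, m = 2, 2
Z = np.zeros((n, m))
C = np.block([[A, Z], [Z, B]])

lam, S = np.linalg.eig(C)                # S is invertible: C is diagonalizable
top = S[:n, :]                           # top n rows of S: the vectors x_1..x_{m+n}

# QR with column pivoting picks n linearly independent columns x_{k_1},...,x_{k_n}.
_, _, piv = qr(top, pivoting=True)
k = piv[:n]
X = top[:, k]                            # invertible n x n submatrix
D = np.diag(lam[k])

assert np.allclose(A @ X, X @ D)                  # A X = X D
assert np.allclose(np.linalg.inv(X) @ A @ X, D)   # X^{-1} A X is diagonal
```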

Remark. As the OP asks for a way to find $n$ linearly independent vectors among $x_1,\ldots, x_{m+n}$, this answer directly addresses the OP's question. However, in my opinion, the preferred way to show that $A$ and $B$ are diagonalizable is to employ minimal polynomials. See Julien's answer for instance. Sadly, undergraduate linear algebra courses nowadays are not quite as "algebraic" as they used to be.

fadi77
user1551

Let $Y$ and $Z$ be vector spaces and let $X = Y \oplus Z$ be their direct sum. Let $B$ and $C$ be linear transformations of $Y$ and $Z$ respectively and let $A = B \oplus C$ be the corresponding "block diagonal transformation" of $X$. It's easy to see that

Fact 1: $\ker(A) = \ker(B) \oplus \ker(C)$.

Denoting the identity transformation of a vector space $V$ by $I_V$ and noting that $I_X = I_Y \oplus I_Z$, one sees, for any scalar $\lambda$, that $A - \lambda I_X = (B - \lambda I_Y) \oplus (C-\lambda I_Z)$. So, applying the above fact, we also have

Fact 2: $\ker(A - \lambda I_X) = \ker(B - \lambda I_Y) \oplus \ker(C-\lambda I_Z)$.

which says exactly that the eigenspace of $A$ associated to a scalar $\lambda$ is the direct sum of the corresponding eigenspaces of $B$ and $C$.

A linear transformation is diagonalizable if and only if its eigenspaces sum to the total space. So, $A$ is diagonalizable if and only if

$$\bigoplus_{\lambda} \ker(A - \lambda I_X) = \left( \bigoplus_\lambda \ker(B - \lambda I_Y) \right) \oplus \left( \bigoplus_\lambda \ker(C - \lambda I_Z) \right) = X.$$

Given subspaces $U \subset Y$ and $V \subset Z$, we have $U \oplus V = X$ if and only if $U = Y$ and $V=Z$. So, we conclude that $A$ is diagonalizable if and only if $B$ and $C$ are diagonalizable.
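A small numerical check of Fact 2, following this answer's naming ($A$ is the block transformation, $B$ and $C$ its blocks; the matrices are arbitrary examples of mine, chosen with a repeated eigenvalue to make the dimension count non-trivial):

```python
import numpy as np

B = np.array([[2.0, 0.0], [0.0, 2.0]])   # eigenvalue 2 with multiplicity 2
C = np.array([[2.0, 0.0], [0.0, 5.0]])   # eigenvalues 2 and 5
Z = np.zeros((2, 2))
A = np.block([[B, Z], [Z, C]])           # A = B (+) C in this answer's notation

def eig_dim(M, lam):
    """dim ker(M - lam I), via rank-nullity."""
    n = M.shape[0]
    return n - np.linalg.matrix_rank(M - lam * np.eye(n))

for lam in (2.0, 5.0):
    # each eigenspace of A is the direct sum of the eigenspaces of B and C
    assert eig_dim(A, lam) == eig_dim(B, lam) + eig_dim(C, lam)
```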

Mike F

You can also define $f_C$ as the linear operator on $V=\mathbb{C}^{n+m}$ sending $x$ to $C\cdot x$. Then $C$ is diagonalizable iff $f_C$ is diagonalizable iff there exists a basis of eigenvectors of $f_C$.

Suppose $f_C$ is diagonalizable. Denote a basis of eigenvectors by $v_1,\ldots,v_{n+m}$, with $f_C(v_i)=\lambda_i v_i$. Let $U=\operatorname{span}(e_1,\ldots,e_n)$ and $W=\operatorname{span}(e_{n+1},\ldots,e_{n+m})$, which are two subspaces of $V$. Note that $V=U\oplus W$ and that $U$ and $W$ are $f_C$-invariant subspaces. Each $v_i$ can be written as $u_i+w_i$ with $u_i\in U$ and $w_i\in W$. Since the projections onto $U$ and $W$ along this decomposition commute with $f_C$ (by invariance) and send the spanning set $\{v_i\}$ to spanning sets, we get $\operatorname{span}(u_1,\ldots,u_{n+m})=U$, $\operatorname{span}(w_1,\ldots,w_{n+m})=W$, $f_C(u_i)=\lambda_i u_i$ and $f_C(w_i)=\lambda_i w_i$. Therefore, $f_C\upharpoonright_U$ and $f_C\upharpoonright_W$ are both diagonalizable, which means that $A$ and $B$ are diagonalizable.

Here is this statement in the language of linear operators, which can be proved exactly as above: suppose $V$ is a vector space over a field $\mathbb{F}$, $f:V\to V$ is a linear operator, and $U,W$ are $f$-invariant subspaces of $V$ such that $V=U\oplus W$. Then $f$ is diagonalizable iff $f\upharpoonright_U$ and $f\upharpoonright_W$ are diagonalizable.
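A quick numerical sketch of this decomposition (arbitrary example matrices of my choosing; the rank checks verify that the components $u_i$ and $w_i$ span $U$ and $W$):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[5.0, 0.0], [1.0, 4.0]])
n, m = 2, 2
C = np.block([[A, np.zeros((n, m))],
              [np.zeros((m, n)), B]])

lam, V = np.linalg.eig(C)     # eigenvector basis v_1, ..., v_{n+m}
U_comp = V[:n, :]             # u_i: component of v_i in U
W_comp = V[n:, :]             # w_i: component of v_i in W

# Each component is an eigenvector (possibly zero) for the same lambda_i...
for i in range(n + m):
    assert np.allclose(A @ U_comp[:, i], lam[i] * U_comp[:, i])
    assert np.allclose(B @ W_comp[:, i], lam[i] * W_comp[:, i])
# ...and the components span U and W respectively.
assert np.linalg.matrix_rank(U_comp) == n
assert np.linalg.matrix_rank(W_comp) == m
```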

epimorphic
Louiseeeee