
Possible Duplicate:
What’s an intuitive way to think about the determinant?

Could anyone give an intuitive explanation of the determinant? I mostly know what the determinant means and I can calculate it, etc., but I have never gotten a real-world, intuitive explanation of the general formula for the determinant.

How is the formula derived? Where does it come from? What I'm essentially asking is: prove the general formula for calculating the determinant of an n-by-n matrix and explain its meaning.

Any help is much appreciated! =)

jjepsuomi
  • In addition to the nice pointer by Mike S., you might also review http://math.stackexchange.com/questions/81521/development-of-the-idea-of-the-determinant – Amzoti Dec 18 '12 at 16:48

1 Answer


So, let's recall the basic properties of the determinant (which I assume in the following); they seem to me very reasonable from a real-world point of view. A small $n = 2$ sanity check follows the list:

  1. It is linear in each column, that is, for $1 \le i \le n$, $x_1, \ldots, x_n \in K^n$, $\lambda \in K$, $x_i' \in K^n$, we have $$ \det (x_1, \ldots, x_i+\lambda x_i', \ldots, x_n) = \det(x_1, \ldots, x_n) + \lambda \det (x_1, \ldots, x_i', \ldots, x_n) $$
  2. It is alternating, that is $\det(x_1, \ldots, x_n) = 0$ if $x_i = x_j$ for some $i \ne j$.
  3. $ \det(e_1, \ldots, e_n) = 1$.
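For instance, for $n = 2$ these properties already encode the shear invariance of area: by 1., 2. and 3., $$ \det(e_1 + e_2, e_2) = \det(e_1, e_2) + \det(e_2, e_2) = 1 + 0 = 1, $$ i.e. shearing the unit square does not change its area.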

We will use 1., 2., and 3. to derive the formula in what follows. So the intuitive reading is: what do we get if we compute the volume of a general parallelepiped using only the standard cube and these invariance properties?

From 1. and 2. we can conclude that swapping two columns changes the sign: \begin{align*} 0 &= \det (x_1, \ldots, x_i + x_j, \ldots, x_i + x_j, \ldots, x_n)\\ &= \det(x_1, \ldots, x_i, \ldots, x_i, \ldots, x_n) + \det(x_1, \ldots, x_i, \ldots, x_j, \ldots, x_n)\\ &\quad + \det(x_1, \ldots, x_j, \ldots, x_i, \ldots, x_n) + \det(x_1, \ldots, x_j, \ldots, x_j, \ldots, x_n) \\ &= \det(x_1, \ldots, x_i, \ldots, x_j, \ldots, x_n) + \det(x_1, \ldots, x_j, \ldots, x_i, \ldots, x_n), \end{align*} hence $$ \det(x_1, \ldots, x_i, \ldots, x_j, \ldots, x_n) = -\det(x_1, \ldots, x_j, \ldots, x_i, \ldots, x_n). $$ From this, writing a $\sigma \in S_n$ as a product of transpositions, one concludes by induction on the number of transpositions $$ \det(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) = \mathrm{sgn}\, \sigma \cdot \det(x_1, \ldots, x_n). $$

For general $x_1, \ldots, x_n \in K^n$, we now write them in the standard basis (we want to use 3.), say $x_j = \sum_{i=1}^n x_{ij}e_i$. Using multilinearity, \begin{align*} \det(x_1, \ldots, x_n) &= \det\left(\sum_{i_1=1}^n x_{i_11}e_{i_1}, \ldots, \sum_{i_n=1}^n x_{i_nn}e_{i_n}\right)\\ &= \sum_{i_1=1}^n \cdots \sum_{i_n=1}^n \prod_{j=1}^n x_{i_jj}\det(e_{i_1}, \ldots, e_{i_n}). \end{align*}

Now $\det(e_{i_1}, \ldots, e_{i_n})$ is, by 2., different from zero only if $j \mapsto i_j$ is a permutation, so only $n!$ of the above $n^n$ summands remain. Using what we found above for permutations, we get \begin{align*} \det(x_1, \ldots, x_n) &= \sum_{\pi \in S_n} \prod_{j=1}^n x_{\pi(j)j} \det(e_{\pi(1)}, \ldots, e_{\pi(n)})\\ &= \sum_{\pi \in S_n} \prod_{j=1}^n x_{\pi(j)j} \,\mathrm{sgn}\,\pi \cdot \det(e_1, \ldots, e_n)\\ &= \sum_{\pi \in S_n} \mathrm{sgn}\,\pi \cdot \prod_{j=1}^n x_{\pi(j)j}. \end{align*}
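For $n = 2$, $S_2$ contains only the identity (sign $+1$) and the transposition swapping $1$ and $2$ (sign $-1$), so the formula reduces to the familiar $$ \det\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix} = x_{11}x_{22} - x_{21}x_{12}. $$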

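If you want to see the formula in action, here is a minimal Python sketch (not from the answer above, just an illustration; the matrix `A` is an arbitrary example) that evaluates the Leibniz sum directly over all $n!$ permutations:

```python
import itertools

def sign(perm):
    # sgn(perm) = (-1)^(number of inversions), for a permutation
    # given as a tuple of 0-based indices.
    inversions = sum(1 for a in range(len(perm))
                       for b in range(a + 1, len(perm))
                       if perm[a] > perm[b])
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    # Leibniz formula: sum over all pi in S_n of sgn(pi) * prod_j A[pi(j)][j],
    # with A[i][j] playing the role of x_{ij} above.  This does O(n * n!)
    # work, so it demonstrates the formula rather than being a practical
    # algorithm (Gaussian elimination computes the same value in O(n^3)).
    n = len(A)
    total = 0
    for pi in itertools.permutations(range(n)):
        term = sign(pi)
        for j in range(n):
            term *= A[pi[j]][j]
        total += term
    return total

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
print(det_leibniz(A))  # 25, matching cofactor expansion along the first row
```

For $n = 3$ the sum has $3! = 6$ terms, which is exactly the rule of Sarrus.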
martini