I taught myself linear algebra from books by Friedberg, Gilbert Strang, Anton, and others. I dare say I studied all of it eagerly.

Studying on my own, I could not intuitively understand the definition of the determinant (its even–odd, permutation-sign form). I could only memorize the definition, and then use it (or try to use it) to solve related homework problems and other exercises.

As you know, the purpose of a determinant is literally to determine whether a given system of equations has a unique solution or not.

In other words, the "determinant" will determine whether the row vectors (and equivalently, column vectors) of a given square matrix are independent or not.

If those are mutually independent, then they can geometrically represent an $n$-dimensional quantity (for example, area in 2 dimensions or volume in 3D). If not, some of them are dependent, so they cannot form the $n$-dimensional quantity, and correspondingly the determinant is zero.
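As a concrete sketch of the 2-D case (with arbitrarily chosen sample vectors, purely for illustration), the $2\times2$ determinant of the matrix whose rows are two independent vectors equals, up to sign, the area of the parallelogram they span:

```python
# Two sample row vectors (chosen for illustration only).
v = (3, 0)
w = (1, 2)

# 2x2 determinant of the matrix with rows v and w.
det = v[0] * w[1] - v[1] * w[0]

# Since v lies along the x-axis, the parallelogram has base |v| = 3
# and height 2 (the second component of w), hence area 6.
print(abs(det))  # 6
```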

Adding a multiple of one row to another is an elementary row operation that does not change the value of the determinant. The picture below illustrates an intuitive understanding of that, too.


Writing the sides of the parallelogram as the rows (or columns) of a square matrix, this transformation takes it to another matrix with the same determinant.

The matrix can be transformed to Gauss–Jordan form; in that case, the row/column vectors are mutually orthogonal because their inner products are all zero. (I tried this with the Gram–Schmidt process; intuitively, however, the result is surely the same.)

Since those vectors are orthogonal, it is clear that simply multiplying the diagonal entries directly gives the aforementioned $n$-dimensional quantity, so that product is the determinant in this case.
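This procedure can be sketched in a few lines (sample matrix chosen arbitrarily): row-reduce using only the determinant-preserving operation "add a multiple of one row to another," then multiply the diagonal entries.

```python
from fractions import Fraction

# Sample 2x2 matrix (illustration only); Fraction keeps arithmetic exact.
A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(3)]]

# Eliminate the entry below the pivot by adding a multiple of row 0
# to row 1 -- an operation that leaves the determinant unchanged.
factor = A[1][0] / A[0][0]
A[1] = [x - factor * y for x, y in zip(A[1], A[0])]

# The matrix is now triangular, so the determinant is the product
# of the diagonal entries.
det = A[0][0] * A[1][1]
print(det)  # 5, matching 2*3 - 1*1
```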

I understand the determinant in this manner, and it makes sense intuitively.

However, the textbook definition mentioned above (stated in the "even–odd" manner) looks very strange to me.

What is the motivation for that definition? And can it be derived in general from my intuition about $n$-dimensional quantities? I succeeded in doing so for the $2\times2$ and $3\times3$ cases, but I cannot see any general relation.

It seems to me that the definition of the determinant appears as if by magic, without enough motivating logic.

I was wondering if you could help me.

Thank you in advance.

The Vee
    I guess your queries are answered here: http://math.stackexchange.com/questions/668/whats-an-intuitive-way-to-think-about-the-determinant, http://math.stackexchange.com/questions/250534/geometric-meaning-of-the-determinant-of-a-matrix?rq=1. – StubbornAtom Nov 14 '16 at 17:56
    That image is illegible. Also, could you quote the exact definition? – StubbornAtom Nov 14 '16 at 17:58
  • The image shows the 2x2 determinant as the area of the parallelogram formed by the two vectors. – Momo Nov 14 '16 at 18:00
  • Yeah, the image shows that the operation (adding to one row a multiple of another) preserves the "determinant" value. – jotkey Nov 14 '16 at 18:01
  • My question is a little different. I already understand their "n-dim quantity" concept; I want to know the "motivation" for the even–odd definition, and the relation between the even–odd definition and the n-dim quantity concept. – jotkey Nov 14 '16 at 18:03
  • I think what you are saying is: you understand that the determinant of a $n \times n$ matrix is the (signed) $n$-dimensional volume formed by the columns of the matrix. You also understand how certain matrix operations transform this volume, and therefore are able to calculate this volume by transforming it into a "rectangular prism" aligned with the coordinate axes. But you don't understand where cofactor expansion comes from. Is that correct? – Ian Nov 14 '16 at 18:12
    You could also watch [these videos](https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab) about the subject. He discusses the determinant about halfway in the series. – Arthur Nov 14 '16 at 18:13
  • @Ian Exactly. My guess is that the cofactor expansion should be derivable from the former concept; however, I cannot find the relation between them. – jotkey Nov 14 '16 at 18:14
  • @Arthur Which one? All of them? – jotkey Nov 14 '16 at 18:16
  • By the way, the "odd-even thing" is the best thing since sliced bread. Without it, solving a linear system would have been a disaster. Look at the [Permanent](https://en.wikipedia.org/wiki/Permanent) to see what happens when you take it out. – Momo Nov 14 '16 at 18:20
  • @jotkey It's a series, and they come in an order, and most of them assume that you've seen the ones that come before. If you have a spare hour some time, it's a good way to spend that hour. – Arthur Nov 14 '16 at 18:23
  • I've never seen such a scrappy picture on this site! I don't know whether to feel insulted, or to admire your chutzpah. (Actually I do know...) – TonyK Nov 14 '16 at 21:39
  • I cannot find out from your question what you refer to as the "even-odd definition". – celtschk Nov 14 '16 at 22:03
  • I upvoted this question because of the funny pictures. How did those things survive in a community which will correct you every single comma and downvote your post accordingly to show you how inferior you are? – D1X Nov 14 '16 at 22:14
  • Worth reading: http://www.maa.org/sites/default/files/images/upload_library/22/Ford/Axler139-154.pdf – polfosol Nov 15 '16 at 06:12

3 Answers


Look at the properties that signed volume has. Think of it as a function $d : \mathbb{R}^n \times \cdots \times \mathbb{R}^n \to \mathbb{R}$.

(i) It is multilinear (linear in each argument separately). (ii) If you swap two arguments, the sign switches. (iii) $d(e_1,\ldots,e_n) = 1$.

From these properties alone you can derive the textbook formula.

So, if you think of the above as defining the determinant, the definition is far from weird.


Here is a quick sketch of how we obtain the formula:

As Ian noted in the comments, (iii) says that the determinant of the identity matrix is one.

From (ii), we can show that if two of the parameters to $d$ are the same then the result is zero.

Take a square matrix $A$. Then the $j$th column is $\sum_{\sigma=1}^n A_{\sigma,j} e_\sigma$.

Using (i) we have $\det A = \sum_{\sigma_1 =1}^n \cdots \sum_{\sigma_n =1}^n A_{\sigma_1,1} \cdots A_{\sigma_n,n} d(e_{\sigma_1},...,e_{\sigma_n})$.

Now note that $d(e_{\sigma_1},\ldots,e_{\sigma_n}) = 0$ whenever any index is repeated. Hence only the terms in which $(\sigma_1,\ldots,\sigma_n)$ has no repeated index survive, and we can replace the sum $\sum_{\sigma_1 =1}^n \cdots \sum_{\sigma_n =1}^n$ by $\sum_{\sigma \in S}$, where $S$ is the set of permutations $\sigma: \{1,\ldots,n\} \to \{1,\ldots,n\}$ and $\sigma_j$ denotes $\sigma(j)$.

Hence we have $\det A = \sum_{\sigma \in S} A_{\sigma_1,1} \cdots A_{\sigma_n,n}\, d(e_{\sigma_1},\ldots,e_{\sigma_n})$.

Using (ii) & (iii), we can show that $d(e_{\sigma_1},\ldots,e_{\sigma_n}) = \operatorname{sgn} \sigma$, and we end up with $\det A = \sum_{\sigma \in S} \operatorname{sgn}(\sigma)\, A_{\sigma_1,1} \cdots A_{\sigma_n,n}$.
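The resulting formula can be checked with a short brute-force sketch, a direct transcription of the sum above (not an efficient algorithm, since it enumerates all $n!$ permutations):

```python
from itertools import permutations

def sgn(perm):
    # Sign of a permutation, computed from its inversion count.
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(A):
    # Leibniz formula: sum over all permutations sigma of
    # sgn(sigma) * A[sigma(1),1] * ... * A[sigma(n),n].
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sgn(sigma)
        for j in range(n):
            term *= A[sigma[j]][j]
        total += term
    return total

print(det([[1, 2], [3, 4]]))                   # -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))  # 24
```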


Take a system of equations where the coefficients are variables, e.g.

$$a x + b y = e$$ $$c x + d y = f$$

Solve it:

$$x = \frac{d e - b f}{a d - b c}, \qquad y = \frac{a f - c e}{a d - b c}$$

Notice that each expression has the same denominator, namely $a d - b c$. This can be proven to hold for an arbitrary $n \times n$ system, and the thing in the denominator is the determinant of the system. [1]
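A quick numerical sketch of the $2 \times 2$ case above (coefficient values chosen arbitrarily for illustration):

```python
from fractions import Fraction

# Sample coefficients for ax + by = e, cx + dy = f (illustration only).
a, b, c, d = Fraction(2), Fraction(1), Fraction(1), Fraction(3)
e, f = Fraction(5), Fraction(10)

# The closed-form solution; note the shared denominator a*d - b*c.
denom = a * d - b * c
x = (d * e - b * f) / denom
y = (a * f - c * e) / denom

# Substituting back confirms both equations hold.
print(a * x + b * y == e, c * x + d * y == f)  # True True
```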

This can be made into a formal definition; see for instance the expository paper,

  • Garibaldi, Skip. “The Characteristic Polynomial and Determinant Are Not Ad Hoc Constructions.” The American Mathematical Monthly, vol. 111, no. 9, 2004, pp. 761–778. http://www.jstor.org/stable/4145188.

[1] To be entirely fair, we could also have chosen the negative of what we normally call the "determinant." If you like to think of the determinant as a signed volume, this choice is equivalent to whether we choose a left-handed or right-handed orientation on space. Of course such a choice is completely arbitrary, but it makes little difference as long as we all agree on the same one.

Daniel McLaury
  • I think this motivation is more natural than the others, such as signed area/volume. – Eric Nov 15 '16 at 12:22

Another way to interpret the determinant arises naturally from the alternating product construction on vector spaces. Briefly, given a vector space $V$, then $\Lambda^n V$ is a vector space defined to be generated by formal expressions of the form $x_1 \wedge \ldots \wedge x_n$ for $x_1, \ldots, x_n \in V$, subject to the relations:

  1. The wedge product is linear in each of the terms, i.e. \begin{equation} x_1 \wedge \ldots \wedge (\lambda_1 x_i + \lambda_2 x_i') \wedge \ldots \wedge x_n = \lambda_1 (x_1 \wedge \ldots \wedge x_i \wedge \ldots \wedge x_n) + \lambda_2 (x_1 \wedge \ldots \wedge x_i' \wedge \ldots \wedge x_n). \end{equation}
  2. The wedge product is zero if any two adjacent terms are equal: \begin{equation} x_1 \wedge \ldots \wedge y \wedge y \wedge \ldots \wedge x_n = 0. \end{equation} (Note that since also $\ldots \wedge (y+z) \wedge (y+z) \wedge \ldots = 0$, this implies that \begin{equation} \ldots \wedge y \wedge z \wedge \ldots = -(\ldots \wedge z \wedge y \wedge \ldots). \end{equation} This is the reason for the name "alternating product" or "antisymmetric product".)

Now, it turns out that if $V$ is an $n$-dimensional vector space, then $\Lambda^k V$ is an $\binom{n}{k}$-dimensional vector space; and in particular $\Lambda^n V$ is a 1-dimensional vector space. Also, for any linear transformation $T : V \rightarrow W$, it is easy to define a corresponding linear transformation $\Lambda^k T : \Lambda^k V \rightarrow \Lambda^k W$ such that $(\Lambda^k T)(x_1 \wedge \ldots \wedge x_k) = Tx_1 \wedge \ldots \wedge Tx_k$.

Now, the interpretation of the determinant is as follows: given a linear operator $T : V \to V$ on an $n$-dimensional vector space, then $\det T$ is simply defined to be the unique scalar such that $\Lambda^n T$ is equal to multiplication by $\det T$. And for a matrix $A \in M_{n \times n}(F)$, $\det A$ is the determinant of the corresponding linear operator on $F^n$.

This definition has some distinct advantages: for example, it's clear from it why the determinant is multiplicative: $\det(T \circ U) = \det(T) \det(U)$. It also gives a relatively natural proof that the determinant of a singular linear operator is 0: just choose a basis including a vector in the null space. On the other hand, actually proving the dimension of $\Lambda^k V$ turns out to be most straightforward using the determinant as a tool - which would make it a circular definition. However, even in that "bootstrapping" phase, keeping this other definition in mind can definitely help in motivating a formulation of the actual initial definition of determinant.
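One payoff of this viewpoint, multiplicativity, is easy to spot-check numerically. Here is a minimal sketch for the $2 \times 2$ case, with arbitrarily chosen sample matrices:

```python
def det2(M):
    # Determinant of a 2x2 matrix: ad - bc.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    # Product of two 2x2 matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Sample matrices (illustration only).
A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]

# det(AB) == det(A) * det(B)
print(det2(matmul2(A, B)) == det2(A) * det2(B))  # True
```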

Daniel Schepler
  • Great answer. Suggested edit: "Now, it turns out that if $V$ is an $n$-dimensional vector space, ***then*** $Λ^kV$ is an $\tbinom{n}{k}$ -dimensional vector space" – justadzr Mar 05 '20 at 05:47