In Michael Artin's *Algebra*, the discussion of the determinant starts from the standard recursive expansion by minors. Artin defines the determinant as a function $\delta$ from a square matrix to a real number, and then lists three characterizing properties of this function in Theorem 1.4.7 (page 20, second edition), quoted below.

"Theorem 1.4.7 Uniqueness of the Determinant. There is a unique function $\delta$ on the space of $n\times n$ matrices with the properties below, namely the determinant.

  1. With $I$ denoting the identity matrix, $\delta(I)=1$.
  2. $\delta$ is linear in the rows of the matrix $A$.
  3. If two adjacent rows of a matrix $A$ are equal, then $\delta(A) = 0$."

In his book, Artin does not explain why $\delta$ should have these properties. I suppose that, historically, people went through a period of trial and error before such an abstract concept was proposed and accepted. Can anyone refer me to a source revealing how these properties were arrived at, especially the second and third properties? Thank you! Regards.

    Munkres' Analysis on Manifolds has a nice couple pages about how the determinant captures the general idea of volume in $n$-dimensions. Some would say, determinants are signed-volumes. If two edges are co-linear then the $n$-piped is degenerate hence its $n$-volume is zero. On the other hand, the size of a cube has a linearity property where the scalar multiplication by a negative number has to do with reversing the handedness of the $n$-piped. Finally, the unit $n$-cube should have $n$-volume of one. The development of this spans at least 100 years, mostly 19-th century. Imho. – James S. Cook Dec 22 '13 at 04:35
    This video says it all: http://www.youtube.com/watch?v=xX7qBVa9cQU – bolbteppa Dec 22 '13 at 04:48
  • Many thanks to both of you. – LaTeXFan Dec 22 '13 at 05:24
    Oh, you should certainly read the comments etc... if you haven't already at http://math.stackexchange.com/q/21614/36530 much wisdom there. – James S. Cook Dec 22 '13 at 07:57
  • The third property can be used to show that $\delta$ is alternating, that is, if you swap any two rows of the matrix, then $\delta$ changes sign. – copper.hat Dec 22 '13 at 08:19

1 Answer


Well, this is an old question, but I'll try to answer it anyway. Each of the three requirements is necessary to obtain a unique function. So, let's construct this function using the properties we have!

Let $A \in M_{n,n}(\mathbb{F})$ and write $A = (A_1, A_2, \dots, A_n)$, where $A_i$ is the $i$-th column of $A$ and $[A]_{ij} = a_{ij}$. (The theorem states the properties for rows; the argument below is phrased in terms of columns, but it works verbatim with rows in place of columns.)

Note that we can write $$A_i = \sum_{k = 1}^n a_{ki}E_{k},$$ where $$E_i = \begin{pmatrix} 0 \\0 \\ \vdots \\1 \\\vdots\\0\end{pmatrix} $$ is the column vector with a $1$ in the $i$-th row and $0$ elsewhere.


Then, applying linearity (property 2) in each column in turn:
$$\delta(A) = \delta (A_1, \dots , A_n)$$ $$= \delta \left(\sum_{k_1 = 1}^n a_{k_11}E_{k_1}, \dots , \sum_{k_n = 1}^n a_{k_nn}E_{k_n}\right)$$ $$= \sum_{k_1 = 1}^n a_{k_11} \dots \sum_{k_n = 1}^n a_{k_nn}\delta \left(E_{k_1}, \dots , E_{k_n}\right)$$

Now, $\delta(E_{k_1}, \dots, E_{k_n}) = 0$ whenever two of the columns are equal: property 3 gives this directly for adjacent columns, and repeated adjacent swaps (each of which changes the sign of $\delta$, as noted in the comments) extend it to any pair. So a term in the sum is nonzero only when the indices $k_1, k_2, \dots, k_n$ are pairwise distinct, i.e. a permutation of $\{1, 2, \dots, n\}$. So let $\sigma \in S_n$ and write $\sigma(i) = k_i$. Then the sum above becomes:
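As a quick sanity check of this counting step, one can enumerate all $n^n$ index tuples appearing in the multilinear expansion and confirm that only the $n!$ tuples with pairwise-distinct entries (exactly the permutations) survive. A small Python sketch for $n = 3$:

```python
from itertools import product, permutations

n = 3
# All n^n index tuples (k_1, ..., k_n) produced by the multilinear expansion.
all_tuples = list(product(range(n), repeat=n))
# Only tuples with pairwise-distinct indices give a nonzero delta-term.
surviving = [t for t in all_tuples if len(set(t)) == n]

print(len(all_tuples))   # 27 = 3^3
print(len(surviving))    # 6  = 3!
print(sorted(surviving) == sorted(permutations(range(n))))  # True
```

So for $n = 3$ the expansion starts with $27$ terms, of which only $6$ remain, one per element of $S_3$.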

$$\sum_{\sigma \in S_n} a_{\sigma(1)1} \dots a_{\sigma(n)n}\,\delta \left(E_{\sigma(1)}, \dots , E_{\sigma(n)}\right)$$ Sorting the columns $E_{\sigma(1)}, \dots, E_{\sigma(n)}$ back into the order $E_1, \dots, E_n$ takes a sequence of transpositions, each of which flips the sign of $\delta$, so $\delta(E_{\sigma(1)}, \dots, E_{\sigma(n)}) = \operatorname{sgn}(\sigma)\,\delta(I_n)$. The sum is therefore $$ \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\,a_{\sigma(1)1} \dots a_{\sigma(n)n}\,\delta \left(I_n\right) $$

and since $\delta(I_n) = 1$ by property 1, this equals $$\sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\,a_{\sigma(1)1} \dots a_{\sigma(n)n}$$

So there is at most one function with the three properties above, and each property was used in the derivation. It remains to show that this formula actually satisfies the three properties (this establishes existence of the function).
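For concreteness, here is a minimal Python sketch of the resulting permutation-sum (Leibniz) formula, with the sign of a permutation computed by counting inversions; the helper names are my own:

```python
from itertools import permutations
from math import prod

def sgn(p):
    # Sign of a permutation: (-1)^(number of inversions).
    inversions = sum(p[i] > p[j]
                     for i in range(len(p))
                     for j in range(i + 1, len(p)))
    return -1 if inversions % 2 else 1

def leibniz_det(a):
    # sum over sigma in S_n of sgn(sigma) * a[sigma(1)][1] * ... * a[sigma(n)][n]
    n = len(a)
    return sum(sgn(p) * prod(a[p[i]][i] for i in range(n))
               for p in permutations(range(n)))

print(leibniz_det([[1, 2], [3, 4]]))                    # -2
print(leibniz_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```

Note that this sums $n!$ terms, so it is only practical for small $n$; numerical software computes determinants by row reduction instead. But as the derivation shows, it is the unique function satisfying Artin's three properties.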