Can someone explain to me why we call the determinant of a matrix the "determinant"? Does the name have any meaning? Does it actually determine something, for example?

It "determines" the factor by which (oriented) volumes get multiplied by a linear transformation. (But that doesn't seem like a very good reason for the name). – Michael Hardy Oct 25 '16 at 01:12

It also determines whether the corresponding system of linear equations has a solution. – J126 Oct 25 '16 at 01:14

Systems of equations are called "overdetermined" or "underdetermined" when they have more or fewer equations than unknowns, respectively. When the number of equations equals the number of unknowns, the determinant is nonzero precisely when there is a unique solution, and zero when there are infinitely many solutions or none. – JMoravitz Oct 25 '16 at 01:14

So basically the determinant determines different things... hence the name! – Learn_and_Share Oct 25 '16 at 01:16

Can anyone deterministically determine what it is that the determinant determines? :) – David Oct 25 '16 at 01:18

(I'm determined to find out...) – David Oct 25 '16 at 01:19

https://en.wikipedia.org/wiki/Determinant#History – Will Jagy Oct 25 '16 at 03:58

@MedNait: http://math.stackexchange.com/questions/194579/what-is-the-origin-of-the-determinant-in-linear-algebra, http://math.stackexchange.com/questions/81521/development-of-the-idea-of-the-determinant, http://math.stackexchange.com/questions/668/whats-an-intuitive-way-to-think-about-the-determinant/ – Moo Oct 25 '16 at 03:59
3 Answers
Here is some information about the origin of the term determinant. The term was introduced for the first time in $1801$ by C. F. Gauss in his Disquisitiones Arithmeticae, XV, p. 2, in connection with a form of second degree.
The following is from The Theory of Determinants in the historical order of development (1905) by Thomas Muir.
[Muir, p. 64]: Gauss writes the form as \begin{align*} axx+2bxy+cyy \end{align*} and for shortness speaks of it as the form $(a,b,c)$.
The function of the coefficients $a,b,c$, which was found by Lagrange to be of notable importance in the discussion of the form, Gauss calls the determinant of the form, the exact words being
[Gauss, 1801] Numerum $bb - ac$, a cuius indole proprietates formae $(a,b,c)$ imprimis pendere in sequentibus docebimus, determinantem huius formae uocabimus. (The number $bb - ac$, on whose nature the properties of the form $(a,b,c)$ chiefly depend, as we shall show in what follows, we shall call the determinant of this form.)
and Muir continues:
 [Muir, p.64] ... Here then we have the first use of the term which with an extended signification has in our day come to be so familiar. It must be carefully noted that the more general functions, to which the name came afterwards to be given, also repeatedly occur in the course of Gauss' work, ...
Besides the historical reasons, which are covered in previous answers, here is another take on why we would say the determinant "determines" something. This is clearly not the origin of the word, I think, but it gives you another curious answer to the question.
I believe an interesting way of looking at it begins with alternating multilinear forms: mappings $t: V \times \overset{n}{\dots} \times V \to \mathbb{R}$ (where $V$ is a vector space and $\mathbb{R}$ can be replaced by any other scalar field) that are linear in every entry and evaluate to $0$ whenever two or more entries are equal; this last property is what "alternating" means. For example, if $V = \mathbb{R}$, then $t(1, 2, \dots, n-1, 2) = 0$ because the entry $2$ appears twice. There are plenty of places where you can read about multilinear mappings (see Wikipedia, for example; read that first and then come back). Here I will focus on the fact that interests us.
Suppose that you have a basis of $V$, say $\lbrace \vec{u}_1, \dots, \vec{u}_n \rbrace$. Now take any family of vectors you like in the input space $\lbrace \vec{x}_1, \dots, \vec{x}_n \rbrace$ and note that those vectors can be expressed in terms of the basis we chose
\begin{equation} (\vec{x}_1, \dots, \vec{x}_n) = (\vec{u}_1, \dots, \vec{u}_n) \begin{pmatrix} a_1^1 & a_2^1 & \dots & a_n^1 \\ a_1^2 & a_2^2 & \dots & a_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ a_1^n & a_2^n & \dots & a_n^n \end{pmatrix} \end{equation}
Now we are ready to expand $t(\vec{x}_1, \dots, \vec{x}_n)$ using the multilinearity of $t$:
\begin{align*} &t(\vec{x}_1, \dots, \vec{x}_n) = t(\sum_{i_1=1}^n a^{i_1}_1 \vec{u}_{i_1}, \dots, \sum_{i_n=1}^n a^{i_n}_n \vec{u}_{i_n}) =\\ &= \sum_{i_1=1}^n a^{i_1}_1 \dots \sum_{i_n=1}^n a^{i_n}_n\cdot t(\vec{u}_{i_1}, \dots,\vec{u}_{i_n}) = \sum_{i_1,\dots, i_n=1}^n a^{i_1}_1 \dots a^{i_n}_n \cdot t(\vec{u}_{i_1}, \dots,\vec{u}_{i_n}) \end{align*}
In this sum we place no restrictions on the indices, so there will be terms with a repeated $i_j$, and each such term equals $0$ because $t$ is alternating, e.g. $t(\vec{u}_{i_1}, \dots, \vec{u}_{i_j}, \dots, \vec{u}_{i_j}, \dots, \vec{u}_{i_n}) = 0$. Therefore the only nonzero terms in this sum are those in which $i_1, \dots, i_n$ are all different; in other words, those where the indices form a permutation of $(1, \dots, n)$.
\begin{align*} t(\vec{x}_1, \dots, \vec{x}_n) &= \sum_{\sigma \in \mathcal{S}_n} a^{\sigma(1)}_1 \dots a^{\sigma(n)}_n \cdot t(\vec{u}_{\sigma(1)}, \dots,\vec{u}_{\sigma(n)}) = \\ &= \sum_{\sigma \in \mathcal{S}_n} a^{\sigma(1)}_1 \dots a^{\sigma(n)}_n \cdot \text{sig}(\sigma) \cdot t(\vec{u}_1, \dots,\vec{u}_n) \end{align*}
Finally, we see that the way $t$ acts upon any arbitrary family of vectors has a part common to every family of vectors ($t$ acting on the basis), and a differentiating part particular to that set of vectors (their coordinates with respect to the basis identify them uniquely). We call that differentiating part the determinant of that set of vectors, because it is indeed what determines the value of $t(\vec{x}_1, \dots, \vec{x}_n)$.
\begin{equation}\operatorname{det}_{\lbrace \vec{u}_i \rbrace}(\vec{x}_1, \dots, \vec{x}_n) = \sum_{\sigma \in \mathcal{S}_n} a^{\sigma(1)}_1 \dots a^{\sigma(n)}_n \cdot \operatorname{sig}(\sigma) \end{equation}
You see that the definition of the determinant of a matrix is exactly the same: it can be seen as the determinant of the vectors that form the columns of the matrix. The final part of this reasoning would be to show that there is a unique alternating multilinear form $d$ satisfying $d(\vec{u}_1, \dots, \vec{u}_n) = 1$ for a given basis, and thus $d(\vec{x}_1, \dots, \vec{x}_n) = \operatorname{det}_{\lbrace \vec{u}_i \rbrace}(\vec{x}_1, \dots, \vec{x}_n)$, so the determinant is itself an alternating multilinear form. We then get all the nice properties of multilinear forms for our concept of determinant.
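The permutation-sum formula above is easy to try out directly. Here is a minimal Python sketch (the helper names `det` and `sign` are mine, not from the answer) that computes $\operatorname{det}_{\lbrace \vec{u}_i \rbrace}$ from the coordinate columns and checks the alternating behaviour:

```python
from itertools import permutations
from math import prod

def sign(p):
    """Sign of a permutation (given as a tuple) via its inversion count."""
    inversions = sum(p[i] > p[j]
                     for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inversions % 2 else 1

def det(vectors):
    """Permutation-sum determinant of n coordinate vectors in R^n:
    sum over sigma of sign(sigma) * a^{sigma(1)}_1 * ... * a^{sigma(n)}_n,
    where vectors[j] holds the coordinates of the (j+1)-th vector
    in the chosen basis."""
    n = len(vectors)
    return sum(sign(p) * prod(vectors[j][p[j]] for j in range(n))
               for p in permutations(range(n)))

x = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(det(x))                   # -3
print(det([x[1], x[0], x[2]]))  # swapping two vectors flips the sign: 3
print(det([x[0], x[0], x[2]]))  # a repeated vector gives 0
```

The last two lines illustrate the alternating property the derivation relies on: exchanging two vectors negates the result, and a repeated vector kills it.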

This question asks about why the determinant is called the way it is. Are you sure your answer addresses that? – QuantumSpace Jul 01 '19 at 11:11

I would say so, @EpsilonDelta. The second to last paragraph closes that argument. My point is that the concept of determinant of a matrix can be derived this way (my last paragraph tries to link multilinear forms with determinants of matrices briefly) and in this derivation we see why it makes sense to call it determinant. The historical reason might be different as it is pointed out in one of the other comments, but disregarding it, this is another view as to why we would say it determines something. I hope you agree on this. If not, should I delete this answer? – miguelsxvi Jul 01 '19 at 11:27

I interpreted the question as the historic origin of the determinant. Now I see your point, but maybe make this somewhat clearer in your answer. +1 – QuantumSpace Jul 01 '19 at 13:03

It determines whether a linear system of equations has a solution.
A linear system of $m$ equations in $n$ variables, denoted $A\mathbf{x}=\mathbf{b}$, has a unique solution precisely when $\operatorname{rank}(A) = \operatorname{rank}(A^{\#}) = n$. (The rank of a matrix is the number of nonzero rows in its row-echelon form, and the augmented matrix $A^{\#}$ is $A$ extended with $\mathbf{b}$ as its last column.)
For a $1\times1$ matrix $A = [a_{11}]$, the determinant is $a_{11}$, and the system it represents has a unique solution precisely when $a_{11} \ne 0$ (this is the trivial case $a_{11}x = b$).
For $2\times2$ $A$, represented
$$ A=\left[ \matrix{ a_{11} & a_{12} \\ a_{21} & a_{22} } \right] $$
the row-echelon form (assuming $a_{11} \ne 0$) is
$$ \left[ \matrix{ a_{11} & a_{12} \\ 0 & a_{22} - \frac{a_{21}a_{12}}{a_{11}} } \right] $$
so as long as $a_{22} - \frac{a_{21}a_{12}}{a_{11}} \ne 0$, or equivalently $a_{11}a_{22} - a_{12}a_{21} \ne 0$, the system has a unique solution.
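This $2\times2$ condition can be checked with a short Python sketch (`solve_2x2` is a hypothetical helper name): when $a_{11}a_{22} - a_{12}a_{21} \ne 0$, Cramer's rule produces the unique solution; otherwise there is no solution or infinitely many.

```python
from fractions import Fraction

def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve the 2x2 system A x = b by Cramer's rule.

    Returns (x, y) exactly when the determinant is nonzero,
    i.e. when the system has a unique solution; None otherwise.
    """
    det = a11 * a22 - a12 * a21  # the quantity derived above
    if det == 0:
        return None  # no solution or infinitely many
    x = Fraction(b1 * a22 - b2 * a12, det)
    y = Fraction(a11 * b2 - a21 * b1, det)
    return x, y

# x + 2y = 5, 3x + 4y = 6: det = 1*4 - 2*3 = -2, so a unique solution exists.
print(solve_2x2(1, 2, 3, 4, 5, 6))  # (Fraction(-4, 1), Fraction(9, 2))
# x + 2y = 1, 2x + 4y = 3: det = 0, no unique solution.
print(solve_2x2(1, 2, 2, 4, 1, 3))  # None
```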
Extending this pattern to a general $n \times n$ matrix $A$,
$$ \det(A) = \sum \sigma \left( p_1, p_2, \ldots, p_n \right)a_{1 p_1} a_{2 p_2} \cdots a_{n p_n} $$
where the summation is over the $n!$ distinct permutations $\left( p_1, p_2, \ldots, p_n \right)$ of the integers $1, 2, 3, \ldots, n$ and
$$ \sigma \left( p_1, p_2, \ldots, p_n \right) = \begin{cases} +1 & \text{if } \left( p_1, p_2, \ldots, p_n \right) \text{ has even parity}, \\ -1 & \text{if } \left( p_1, p_2, \ldots, p_n \right) \text{ has odd parity.} \end{cases} $$
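The parity function $\sigma$ can be computed by counting inversions: a permutation has even parity exactly when the number of out-of-order pairs is even. A minimal Python sketch (the name `parity` is mine):

```python
from itertools import permutations

def parity(p):
    """sigma(p): +1 if the permutation p (a tuple) has even parity,
    -1 if odd, determined by counting inversions, i.e. pairs of
    positions whose entries are out of natural order."""
    inversions = sum(p[i] > p[j]
                     for i in range(len(p)) for j in range(i + 1, len(p)))
    return 1 if inversions % 2 == 0 else -1

# The 3! = 6 permutations of (1, 2, 3) split evenly between parities:
for p in permutations((1, 2, 3)):
    print(p, parity(p))
```

Exactly half of the $n!$ permutations are even and half are odd (for $n \ge 2$), so the signs $+1$ and $-1$ each appear $n!/2$ times in the determinant sum.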
This is explained well in Chapter 3 of "Linear Analysis and Differential Equations" by Goode and Annin (Amazon Link).
Determinants can be used to calculate areas and volumes in a geometric sense, but the term itself originated from its use in determining whether or not systems of equations have solutions.
NOTE: NOT to be confused with the discriminant, which "discriminates" (i.e. distinguishes between) the types and numbers of solutions to a polynomial equation.