So, let's recall the basic properties of the determinant (which I assume in what follows); they seem to me very reasonable from a real-world point of view:

1. It is linear in each column, that is, for $1 \le i \le n$, $x_1, \ldots, x_n \in K^n$, $\lambda \in K$, $x_i' \in K^n$, we have
$$
\det (x_1, \ldots, x_i+\lambda x_i', \ldots, x_n) = \det(x_1, \ldots, x_n) + \lambda \det (x_1, \ldots, x_i', \ldots, x_n) $$
2. It is alternating, that is, $\det(x_1, \ldots, x_n) = 0$ whenever $x_i = x_j$ for some $i \ne j$.
3. It is normalized: $\det(e_1, \ldots, e_n) = 1$, where $e_1, \ldots, e_n$ is the standard basis of $K^n$.

We will use 1., 2. and 3. to derive the formula in what follows. So the intuitive explanation might be: if we want to compute the volume of a general parallelepiped using only the standard cube and these invariance properties, what do we get?
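Before diving in, here is a small numerical illustration of property 1. (linearity in a single column), treating numpy's determinant as a black box; this is just an aside, not part of the derivation, and all variable names are my own:

```python
import numpy as np

# Spot check of property 1. (linearity in one column) on a random matrix,
# columns of A playing the role of x_1, ..., x_n.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
xprime = rng.standard_normal(3)  # plays the role of x_i'
lam = 2.5                        # plays the role of lambda
i = 1                            # the column we vary

lhs = A.copy()
lhs[:, i] = A[:, i] + lam * xprime  # columns x_1, ..., x_i + lam*x_i', ..., x_n

rhs = A.copy()
rhs[:, i] = xprime                  # columns x_1, ..., x_i', ..., x_n

print(np.isclose(np.linalg.det(lhs),
                 np.linalg.det(A) + lam * np.linalg.det(rhs)))  # prints True
```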

From 1. and 2. we can conclude that swapping two columns changes the sign (the first and last equalities below use 2., the middle one uses linearity in both columns):
\begin{align*}
0 &= \det (x_1, \ldots, x_i + x_j, \ldots, x_i + x_j, \ldots, x_n)\\
&= \det(x_1, \ldots, x_i, \ldots, x_i, \ldots, x_n) + \det(x_1, \ldots, x_i, \ldots, x_j, \ldots, x_n)\\ & {} + \det(x_1, \ldots, x_j, \ldots, x_i, \ldots, x_n)
+ \det(x_1, \ldots, x_j, \ldots, x_j, \ldots, x_n) \\
&= \det(x_1, \ldots, x_i, \ldots, x_j, \ldots, x_n) + \det(x_1, \ldots, x_j, \ldots, x_i, \ldots, x_n),
\end{align*}
hence
$$ \det(x_1, \ldots, x_i, \ldots, x_j, \ldots, x_n) = -\det(x_1, \ldots, x_j, \ldots, x_i, \ldots, x_n). $$
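Purely as a numerical sanity check (again using numpy's built-in determinant rather than the axioms), swapping two columns of a concrete matrix indeed flips the sign:

```python
import numpy as np

# Swapping two columns of a random matrix negates the determinant.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

B = A.copy()
B[:, [1, 3]] = B[:, [3, 1]]  # swap columns 1 and 3

print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))  # prints True
```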
From this, writing $\sigma \in S_n$ as a product of transpositions, one concludes by induction on the number of transpositions that
$$ \det(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) = \mathrm{sgn}\, \sigma \cdot \det(x_1, \ldots, x_n). $$
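This permutation rule can also be checked exhaustively for small $n$; a quick illustrative sketch (the `sgn` helper, which computes the sign by counting inversions, is my own and is not needed for the proof):

```python
import numpy as np
from itertools import permutations

def sgn(p):
    # Sign of a permutation given as a tuple, via inversion count.
    inv = sum(1 for a in range(len(p)) for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inv % 2 else 1

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
d = np.linalg.det(A)

for p in permutations(range(3)):
    permuted = A[:, list(p)]  # column j of this matrix is x_{p(j)}
    assert np.isclose(np.linalg.det(permuted), sgn(p) * d)
print("all 6 permutations check out")
```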
For general $x_1, \ldots, x_n \in K^n$, we now write them in the standard basis (we want to use 3.), say $x_j = \sum_{i=1}^n x_{ij}e_i$. Using multilinearity, we get
\begin{align*}
\det(x_1, \ldots, x_n) &= \det\left(\sum_{i_1=1}^n x_{i_11}e_{i_1}, \ldots, \sum_{i_n=1}^n x_{i_nn}e_{i_n}\right)\\
&= \sum_{i_1=1}^n \cdots \sum_{i_n=1}^n \prod_{j=1}^n x_{i_jj}\det(e_{i_1}, \ldots, e_{i_n})
\end{align*}
Now, by 2., $\det(e_{i_1}, \ldots, e_{i_n})$ is nonzero only if $j \mapsto i_j$ is a permutation (otherwise two of the $e_{i_j}$ coincide), so only $n!$ of the above $n^n$ summands survive. Using what we found above for permutations, we get
\begin{align*}
\det(x_1, \ldots, x_n) &= \sum_{i_1=1}^n \cdots \sum_{i_n=1}^n \prod_{j=1}^n x_{i_jj}\det(e_{i_1}, \ldots, e_{i_n})\\
&= \sum_{\pi \in S_n} \prod_{j=1}^n x_{\pi(j)j} \det(e_{\pi(1)}, \ldots, e_{\pi(n)})\\
&= \sum_{\pi \in S_n} \prod_{j=1}^n x_{\pi(j)j} \mathrm{sgn}\,\pi \cdot \det(e_1, \ldots, e_n)\\
&= \sum_{\pi \in S_n} \mathrm{sgn}\,\pi \cdot \prod_{j=1}^n x_{\pi(j)j}
\end{align*}
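The formula we just derived can be implemented directly and compared against a library determinant; a minimal sketch (the names `sgn` and `leibniz_det` are my own):

```python
import numpy as np
from itertools import permutations

def sgn(p):
    # Sign of the permutation p, computed via inversion count.
    inv = sum(1 for a in range(len(p)) for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inv % 2 else 1

def leibniz_det(A):
    # Sum over all permutations pi of sgn(pi) * prod_j A[pi(j), j].
    n = A.shape[0]
    return sum(sgn(p) * np.prod([A[p[j], j] for j in range(n)])
               for p in permutations(range(n)))

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
print(np.isclose(leibniz_det(A), np.linalg.det(A)))  # prints True
```

Of course this has $n!$ terms and is hopeless for large $n$; it is only meant to confirm the formula numerically.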