You say the variance is $\dfrac{\sum_{i=1}^ n(x_i-\bar x)^2}{n-1}$.

What if I told you the variance is $\dfrac{\sum_{i=1}^n(x_i-\bar x)^2} n$?

You can find both in textbooks. The fact is, dividing by $n-1$ rather than $n$ (Bessel's correction) is properly done (if at all) ONLY when one is estimating the **population** variance from a finite **sample** $x_1,\ldots,x_n$ that is not the whole population. If $x_1,\ldots,x_n$ is the whole population and each point is equally probable, then the variance of that population is given by the **second** expression above, **not** the first.
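Python's standard library happens to implement both conventions, which makes the distinction easy to see. A quick sketch (the data below are made up purely for illustration):

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # illustrative data, mean = 5

# Population variance: divide by n (the data ARE the whole population).
pop_var = statistics.pvariance(data)   # sum((x - mean)^2) / n

# Sample variance: divide by n - 1 (the data are a sample from a larger population).
samp_var = statistics.variance(data)   # sum((x - mean)^2) / (n - 1)

print(pop_var)    # 32/8  = 4.0
print(samp_var)   # 32/7  ≈ 4.571
```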

Now here's the important point:

\begin{align}
& \operatorname{var}(X_1+\cdots+X_n) \\[8pt]
= {} & \operatorname{var}(X_1) + \cdots + \operatorname{var}(X_n) \tag 1
\end{align}
if $X_1,\ldots,X_n$ are independent random variables.
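Identity $(1)$ can be checked exactly for small cases by exhaustive enumeration. A sketch, using independent fair coin tosses as the $X_i$ (my choice of example, matching the coin tosses discussed below):

```python
from itertools import product
from fractions import Fraction

def var(dist):
    """Exact variance of a discrete distribution given (value, prob) pairs."""
    mean = sum(v * p for v, p in dist)
    return sum((v - mean) ** 2 * p for v, p in dist)

half = Fraction(1, 2)
n = 5

# One X_i is Bernoulli(1/2): var(X_i) = 1/4.
single = [(0, half), (1, half)]
print(var(single))      # 1/4

# Enumerate all 2^n equally likely outcomes of (X_1, ..., X_n)
# and take the variance of the sum directly.
outcomes = [(sum(bits), half ** n) for bits in product([0, 1], repeat=n)]
print(var(outcomes))    # 5/4, i.e. exactly n * var(X_i), as identity (1) says
```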

**That does not work with mean absolute deviation.** (Identity $(1)$ also fails for the version of variance with $n-1$ instead of $n$.)
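The failure for mean absolute deviation is already visible with two fair coins. A sketch (the two-coin example is mine, not the author's): each coin has $\operatorname{E}|X_i - \tfrac12| = \tfrac12$, but the sum of two coins takes the values $0,1,2$ with probabilities $\tfrac14,\tfrac12,\tfrac14$ and has mean absolute deviation only $\tfrac12$, not $1$.

```python
from fractions import Fraction

def mad(dist):
    """Mean absolute deviation E|X - E[X]| of a discrete distribution."""
    mean = sum(v * p for v, p in dist)
    return sum(abs(v - mean) * p for v, p in dist)

half = Fraction(1, 2)
q = Fraction(1, 4)

coin = [(0, half), (1, half)]       # one fair coin: mad = 1/2
two = [(0, q), (1, 2 * q), (2, q)]  # sum of two independent fair coins

print(mad(coin) + mad(coin))  # 1
print(mad(two))               # 1/2  -- NOT equal: additivity fails for MAD
```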

Now suppose $n=1800$ and each $X_i$ is the number of "heads" seen on the $i$th coin toss, so $X_i$ is either $0$ or $1$. Then the sum is the number of "heads" in $1800$ tosses. What is the probability that that number is at least $890$ but not more than $905$? To answer that, one approximates the distribution of the number of "heads" by the normal distribution with the same expected value and the same variance. Without the identity $(1)$, one would not know what that variance is! Abraham de Moivre discovered all this in the $18$th century. And that is why standard deviations rather than mean absolute deviations are used.
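The computation sketched above can be carried out with nothing but the error function. A sketch (the continuity correction of $\pm\tfrac12$ is a standard refinement, not something the text above spells out): the variance from identity $(1)$ is $1800\cdot\tfrac14 = 450$, so the approximating normal has mean $900$ and standard deviation $\sqrt{450}$.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of the normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

n = 1800
mu = n * 0.5                  # expected number of heads: 900
sigma = math.sqrt(n * 0.25)   # sqrt of the variance from identity (1): sqrt(450)

# P(890 <= heads <= 905), with a continuity correction of 1/2 on each side
p = normal_cdf(905.5, mu, sigma) - normal_cdf(889.5, mu, sigma)
print(round(p, 3))            # about 0.29
```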