I've been studying descriptive statistics and I'm having a hard time grasping the intuition behind the standard deviation. To get a practical feel for it, I'm working with a simple distribution: the 20 integers from 1 to 20. I know the mean is 10.5 and the mean absolute deviation is 5, which is pretty intuitive.

Now when I compute the standard deviation I get 5.77, which still makes some sense if I think of it as an average Euclidean deviation from the mean. I imagine adding up squared (orthogonal) distances and averaging them, $\frac{\sum(x_i-\bar x)^2}{n}$, then taking the square root at the end to get an actual average distance. The formula makes sense from a Euclidean perspective. With all that said, my questions:
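Just to show my work, here's how I'm computing those numbers (plain Python, nothing fancy):

```python
# The 20 integers from 1 to 20.
data = list(range(1, 21))
n = len(data)

mean = sum(data) / n                           # 10.5
mad = sum(abs(x - mean) for x in data) / n     # mean absolute deviation: 5.0
var = sum((x - mean) ** 2 for x in data) / n   # population variance (divide by n)
sd = var ** 0.5                                # ≈ 5.77

print(mean, mad, round(sd, 2))
```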

1) Why would a Euclidean average distance be more accurate than the mean absolute deviation? I'd actually expect the absolute deviation to be more accurate, since it doesn't impose any direction on the values. By taking the Euclidean distance, I'm effectively saying every deviation sits at a 90° angle to the others, and that doesn't sound right. So why the Euclidean distance? (I'm aware of this article, but if someone could explain what "efficiency" actually means here, that would be very helpful: https://www.leeds.ac.uk/educol/documents/00003759.htm)
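To try to make "efficiency" concrete for myself, I sketched a small simulation (the sample size, trial count, and seed are arbitrary choices of mine): draw many normal samples, estimate the true spread once via the SD and once via the rescaled mean absolute deviation (for a normal distribution, $E|X-\mu| = \sigma\sqrt{2/\pi}$), and see which estimate fluctuates less from sample to sample.

```python
import math
import random

random.seed(0)
sigma = 1.0          # true spread of the population
n, trials = 100, 2000

sd_ests, mad_ests = [], []
for _ in range(trials):
    xs = [random.gauss(0, sigma) for _ in range(n)]
    m = sum(xs) / n
    # SD-based estimate of sigma
    sd_ests.append(math.sqrt(sum((x - m) ** 2 for x in xs) / n))
    # MAD-based estimate, rescaled so it also targets sigma
    mad = sum(abs(x - m) for x in xs) / n
    mad_ests.append(mad / math.sqrt(2 / math.pi))

def spread(estimates):
    # How much an estimator itself varies across repeated samples.
    mu = sum(estimates) / len(estimates)
    return math.sqrt(sum((e - mu) ** 2 for e in estimates) / len(estimates))

print(spread(sd_ests), spread(mad_ests))
```

If I understand the efficiency argument correctly, the first number should come out smaller: with normally distributed data, the SD-based estimate bounces around less than the MAD-based one, which is what "more efficient" is supposed to mean.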

2) If the advantage of the SD is all the math we have developed around the normal distribution (the 68%, 95%, 99.7% rule, and so on), wouldn't it be better to just restate those results in terms of the mean absolute deviation?
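For what it's worth, I tried the restatement myself: for a normal distribution the mean absolute deviation equals $\sigma\sqrt{2/\pi} \approx 0.798\,\sigma$, so the familiar percentages just become different numbers when measured in mean-deviation units. A quick stdlib check, using the error function for the normal CDF:

```python
import math

def coverage(z):
    # P(|X| < z * sigma) for a standard normal, via the error function.
    return math.erf(z / math.sqrt(2))

mad_over_sigma = math.sqrt(2 / math.pi)  # ≈ 0.798

for k in (1, 2, 3):
    print(k, round(coverage(k * mad_over_sigma) * 100, 1))
```

So instead of 68/95/99.7 at 1/2/3 SDs, you get roughly 57/89/98 at 1/2/3 mean deviations. The rule can clearly be rewritten, which makes me wonder even more why the SD version won out.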

3) I'll probably post a separate question about this in the future, but when calculating the standard error, the standard deviation seems to get even worse, since we need corrections for finite populations. Does this make any sense?
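To show what I mean by the correction: the usual standard error $s/\sqrt{n}$ gets multiplied by the finite-population correction $\sqrt{(N-n)/(N-1)}$ when you sample without replacement from a finite population. A sketch with made-up numbers ($N$, $n$, and $s$ below are hypothetical values I picked):

```python
import math

N, n, s = 1000, 100, 5.77  # hypothetical: population size, sample size, sample SD

se_plain = s / math.sqrt(n)               # ordinary standard error
fpc = math.sqrt((N - n) / (N - 1))        # finite-population correction factor
se_corrected = se_plain * fpc             # always <= se_plain

print(round(se_plain, 3), round(fpc, 3), round(se_corrected, 3))
```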