Let's do it in a few steps.

Firstly, there is a solution: you minimize a continuous function over a nonempty compact domain, so a solution exists by the Extreme Value Theorem.

Secondly, the solution is unique: the goal function is strictly convex and the feasible set is convex, so there can be at most one minimizer.

Thirdly, it must satisfy your KKT (or Fritz John) conditions, since a standard constraint qualification holds: at every feasible point, the gradients of the binding constraints are linearly independent.

Fourthly, how can you think about solving the KKT conditions?
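For concreteness, suppose the program is to maximize $\sum_{i=1}^n \log(\alpha_i + x_i)$ subject to $x_1 + \cdots + x_n = 1$ and $x_i \geq 0$ (the usual water-filling setup; your exact objective may differ, but this matches the stationarity condition used below). Writing $\lambda$ for the multiplier of the sum constraint and $\mu_i$ for the multiplier of $x_i \geq 0$, the KKT system is
$$
\frac{1}{\alpha_i + x_i} = \lambda - \mu_i, \qquad \mu_i \geq 0, \qquad \mu_i x_i = 0 \quad (i = 1, \ldots, n), \qquad \sum_{i=1}^n x_i = 1, \qquad x_i \geq 0.
$$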

The stationarity conditions for the different coordinates look a lot alike, and comparing them helps you see that large $\alpha_i$ lead to small $x_i$. Formally, if $\alpha_i \geq \alpha_j$, then $x_i \leq x_j$.

Why? Well, the inequality $x_i \leq x_j$ is surely true if $x_i$ is zero, so suppose it is positive. Then, by complementary slackness, its multiplier $\mu_i$ must be zero, and
$$
\frac{1}{\alpha_i + x_i} = \lambda - \mu_i = \lambda \geq \lambda - \mu_j = \frac{1}{\alpha_j + x_j}.
$$
So $\alpha_j + x_j \geq \alpha_i + x_i$. Rewriting gives $x_i - x_j \leq \alpha_j - \alpha_i \leq 0$, so $x_i \leq x_j$.

If we assume without loss of generality that
$$
\alpha_1 \leq \alpha_2 \leq \cdots \leq \alpha_n,
$$
it follows that
$$
x_1 \geq x_2 \geq \cdots \geq x_n.
$$
So the first few coordinates are positive, and the remaining ones might be zero. Formally, there is some $k$ in $\{1, \ldots, n\}$ such that $x_1, \ldots, x_k$ are positive and $x_{k+1}, \ldots, x_n$ are zero. But which $k$?

This is a bit of simple linear algebra. For each possible $k$, you know exactly what the coordinates $x_1, \ldots, x_k$ are (and the others are zero!): by complementary slackness, $\mu_1 = \cdots = \mu_k = 0$, so
$$
\frac{1}{\alpha_1 + x_1} = \frac{1}{\alpha_i + x_i}
$$
implies that $x_i = x_1 + (\alpha_1 - \alpha_i)$ for $i = 1, \ldots, k$. And
$$
1 = x_1 + \cdots + x_k = k x_1 + \sum_{i=1}^k (\alpha_1 - \alpha_i).
$$
Rewriting gives
$$
x_1 = \frac{1 + \sum_{i=1}^k (\alpha_i - \alpha_1)}{k}.
$$
Since we expressed the other $x_i$ in terms of $x_1$, we now know all coordinate values. Also remember that $x_k$ was the smallest of the positive coordinates, so we must have
$$
x_k = x_1 + \alpha_1 - \alpha_k = \frac{1+\sum_{i=1}^k (\alpha_i - \alpha_k)}{k} > 0,
$$
or, equivalently, since $k$ is positive,
$$
1+\sum_{i=1}^k (\alpha_i - \alpha_k) > 0,
$$
which tells you pretty precisely what the candidates for $k$ are. Substituting these very few candidates into the goal function will show you that you need to choose $k$ as large as possible. I will leave those computations to you. The main insights are that large $\alpha_i$ lead to small $x_i$ and that you can easily express the vector $(x_1, \ldots, x_n)$ in terms of the $\alpha_i$ and $k$; in particular, you don't need to worry too much about the exact values of the Lagrange multipliers.
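If you want to check the computations numerically, here is a small Python sketch of the recipe above. It assumes the goal is to maximize $\sum_i \log(\alpha_i + x_i)$ over the simplex (adjust if your objective differs) and simply takes the largest $k$ satisfying the positivity condition; the solution is returned in the order of the sorted $\alpha_i$.

```python
def water_fill(alpha):
    """Solve max sum_i log(alpha_i + x_i) s.t. sum(x) = 1, x >= 0,
    via the closed form derived above (alphas need not be pre-sorted)."""
    a = sorted(alpha)  # alpha_1 <= ... <= alpha_n
    n = len(a)
    # Largest k with 1 + sum_{i=1}^k (alpha_i - alpha_k) > 0;
    # k = 1 always qualifies, so the search below cannot fail.
    k = next(k for k in range(n, 0, -1)
             if 1 + sum(a[i] - a[k - 1] for i in range(k)) > 0)
    # x_1 = (1 + sum_{i=1}^k (alpha_i - alpha_1)) / k.
    x1 = (1 + sum(a[i] - a[0] for i in range(k))) / k
    # x_i = x_1 + (alpha_1 - alpha_i) for i <= k, zero afterwards.
    return [x1 + a[0] - a[i] if i < k else 0.0 for i in range(n)]
```

For example, `water_fill([2.0, 0.5, 1.0])` gives $k = 2$ and the vector $(0.75, 0.25, 0)$ for the sorted $\alpha$'s $(0.5, 1, 2)$: the coordinates sum to one and decrease as $\alpha_i$ increases, as the argument above predicts.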