Unfortunately, determining a solution with the smallest number of non-zeros is intractable (the problem is NP-hard in general). It can be expressed as the following mixed-integer linear program with binary indicator variables:
\begin{array}{ll}
\text{minimize}_{x,y} & \sum_i y_i \\
\text{subject to} & A x = b \\
& 0 \leq x \leq M y \\
& y \in \{0,1\}^n
\end{array}
where $M$ is a large number known to bound the largest feasible values of $x$. A common heuristic is to solve
\begin{array}{ll}
\text{minimize}_{x} & \sum_i x_i \\
\text{subject to} & A x = b \\
& x \geq 0 \\
\end{array}
This will tend to produce a solution with many zero entries, but with no guarantee that the number of non-zeros is actually minimal. There are a variety of other heuristics one can employ. For instance, iterative reweighting schemes solve a sequence of problems of the form
\begin{array}{ll}
\text{minimize}_{x} & \sum_i d_i^{(k)} x_i \\
\text{subject to} & A x = b \\
& x \geq 0 \\
\end{array}
The first iteration uses $d_i^{(1)}\equiv 1$; i.e., the same problem as above. This produces a solution $x^{(1)}$. For each subsequent iteration, you choose
$$d^{(k+1)}_i = 1 / (x_i^{(k)} + \epsilon)$$
where $\epsilon$ is small. This puts extra weight on small values of $x$ to drive more of them to zero.
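A minimal sketch of the reweighting loop, assuming scipy's `linprog` as the LP solver (the instance below is a hypothetical one with a unique sparsest solution):

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, b, iters=10, eps=1e-6):
    """Iteratively reweighted L1 heuristic for a sparse solution of
    A x = b, x >= 0.  The weights d_i = 1/(x_i + eps) put extra weight
    on small entries, driving more of them to zero."""
    n = A.shape[1]
    d = np.ones(n)                       # d^(1) = 1: plain L1 problem
    for _ in range(iters):
        res = linprog(c=d, A_eq=A, b_eq=b, bounds=[(0, None)] * n)
        x = res.x
        d = 1.0 / (x + eps)              # d^(k+1)_i = 1/(x_i^(k) + eps)
    return x

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
x = reweighted_l1(A, b)
print(np.round(x, 6))  # -> [0. 1. 0.]
```

On this toy instance the first LP already finds the sparsest solution and the weights simply lock it in; the reweighting matters on instances where the plain $\ell_1$ solution has small but nonzero entries.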

Another approach is a *homotopy method*, for instance
\begin{array}{ll}
\text{minimize}_{x} & \sum_i x_i^{p_k} \\
\text{subject to} & A x = b \\
& x \geq 0 \\
\end{array}
For $k=1$, you choose $p_1=1$; i.e., the original linear program. Then you solve a sequence of problems with $p_k\rightarrow 0$, using the previous solution as the initial point for the next. For $p_k<1$ the objective is non-convex, so there is no guarantee that the computed solution is globally optimal. I personally like iterative reweighting better.
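For completeness, a sketch of the homotopy loop. This assumes scipy's SLSQP solver; the smoothing constant $\epsilon$, the schedule for $p_k$, and the instance itself are all illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance; the sparsest nonnegative solution is (0, 1, 0).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
n = A.shape[1]
eps = 1e-8                               # keeps x^p differentiable at x = 0

x = np.full(n, 0.5)                      # a feasible starting point
for p in [1.0, 0.8, 0.6, 0.4, 0.2]:     # p_k decreasing toward 0
    res = minimize(lambda v: np.sum((np.clip(v, 0.0, None) + eps) ** p), x,
                   method="SLSQP",
                   bounds=[(0.0, None)] * n,
                   constraints=[{"type": "eq", "fun": lambda v: A @ v - b}])
    x = res.x                            # warm start for the next p_k
print(np.round(x, 4))
```

The warm start is what makes the homotopy work: each nearly-converged solution keeps the next, more non-convex problem from wandering off to a poor local minimum, though nothing rules that out entirely.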