Let's say I have one point drawn randomly from a normal distribution with mean $\mu_1$ and standard deviation $\sigma_1$. Let's say I have another point drawn, in much the same way, from a second normal distribution with mean $\mu_2$ and standard deviation $\sigma_2$.

How can I compute the probability, given $\mu_1$, $\mu_2$, $\sigma_1$, and $\sigma_2$, that my first point will be larger than the second?

I am somewhat interested in the reasoning behind an "analytic" answer (or as analytic as one can get with the normal distribution, which isn't very), but more importantly I am looking for an algorithm for computing this probability, as it will be used in a simulation/model.
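For concreteness, here is a brute-force Monte Carlo sketch of the quantity I'm after (the function name, sample count, and use of `random.gauss` are just my choices for illustration); I'm hoping for something exact or at least much cheaper than this for the actual model:

```python
import random

def prob_first_larger(mu1, sigma1, mu2, sigma2, n=100_000):
    """Monte Carlo estimate of P(X1 > X2) for two independent normals."""
    hits = sum(
        random.gauss(mu1, sigma1) > random.gauss(mu2, sigma2)
        for _ in range(n)
    )
    return hits / n

# Sanity check: identical distributions, so the answer should be near 0.5
print(prob_first_larger(0.0, 1.0, 0.0, 1.0))
```

With identical parameters the estimate hovers around 0.5 by symmetry, and pushing $\mu_1$ far above $\mu_2$ drives it toward 1, which matches the intuition for what the closed-form answer should do.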

Does anyone know where I could get started on reasoning through this?

Note: for the actual computation, using a lookup table of the percentage of the curve lying within a given multiple of the standard deviation is feasible in my situation.