Let's say I have one point that will be drawn randomly from a normal distribution with mean $\mu_1$ and standard deviation $\sigma_1$, and another point drawn, in much the same way, from a second normal distribution with mean $\mu_2$ and standard deviation $\sigma_2$.

How can I compute the probability, given $\mu_1$, $\mu_2$, $\sigma_1$, and $\sigma_2$, that my first point will be larger than the second?

I am somewhat interested in the reasoning behind an "analytic" answer (or as analytic as you can get with the normal distribution, which isn't much), but more importantly I am looking for an algorithm for computing this probability, as it will be used in a simulation/model.

Does anyone know where I could get started on reasoning through this?

Note: For actual computation, having a table of the fraction of the curve within a given multiple of the standard deviation is feasible in my situation.

Justin L.
  • Just to be sure: do you consider the two draws to be strictly independent, i.e. the mechanism simulated by the second draw is not modified or influenced by the value of the first? If this is the case, the problem can be rephrased very simply, while perhaps not realistic. Also note that there is [Cross-Validated](http://stats.stackexchange.com/), the SE site for statistical Q&A. – ogerard May 20 '11 at 09:07
  • @ogerard -- yes, they are independent in the manner you describe. – Justin L. May 20 '11 at 09:44
  • So my answer applies. – Shai Covo May 20 '11 at 10:17

1 Answer


Suppose that $X_1 \sim {\rm N}(\mu_1,\sigma_1^2)$ and $X_2 \sim {\rm N}(\mu_2,\sigma_2^2)$ are independent. Then, $$ {\rm P}(X_1 > X_2 ) = {\rm P}(X_1 - X_2 > 0) = 1 - {\rm P}(X_1 - X_2 \le 0). $$ Now, by independence, $X_1 - X_2$ is normally distributed with mean $$ \mu := {\rm E}(X_1 - X_2) = \mu_1 - \mu_2 $$ and variance $$ \sigma^2 := {\rm Var}(X_1 - X_2) = \sigma_1^2 + \sigma_2^2. $$ Hence, $$ \frac{{X_1 - X_2 - \mu}}{{\sigma}} \sim {\rm N}(0,1), $$ and so $$ {\rm P}(X_1 - X_2 \le 0) = {\rm P}\bigg(\frac{{X_1 - X_2 - \mu }}{\sigma } \le \frac{{0 - \mu }}{\sigma }\bigg) = \Phi \Big( \frac{-\mu }{\sigma }\Big), $$ where $\Phi$ is the distribution function of the ${\rm N}(0,1)$ distribution. Thus, $$ {\rm P}(X_1 > X_2 ) = 1 - {\rm P}(X_1 - X_2 \le 0) = 1 - \Phi \Big( \frac{-\mu }{\sigma }\Big). $$ Since $1 - \Phi(-x) = \Phi(x)$ by symmetry of the standard normal, this is equivalently $$ {\rm P}(X_1 > X_2) = \Phi \bigg( \frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^2 + \sigma_2^2}} \bigg). $$
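Since the question asks for an algorithm, here is a minimal sketch of the formula above in Python, using only the standard library's `math.erf` to evaluate $\Phi$ via $\Phi(x) = \tfrac{1}{2}\big(1 + \operatorname{erf}(x/\sqrt{2})\big)$ (the function name is illustrative, not from the post):

```python
import math

def prob_first_greater(mu1, sigma1, mu2, sigma2):
    """P(X1 > X2) for independent X1 ~ N(mu1, sigma1^2), X2 ~ N(mu2, sigma2^2)."""
    mu = mu1 - mu2                      # mean of X1 - X2
    sigma = math.hypot(sigma1, sigma2)  # sqrt(sigma1^2 + sigma2^2)
    # Standard normal CDF: Phi(x) = (1 + erf(x / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(mu / (sigma * math.sqrt(2.0))))
```

This avoids any table lookup; by symmetry, `prob_first_greater(mu1, s1, mu2, s2) + prob_first_greater(mu2, s2, mu1, s1) = 1`.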

Shai Covo
  • So, the problem reduces to calculating the probability that a standard normal variable is less than a given $a$. – Shai Covo May 20 '11 at 09:46
  • I really like this answer; however, I can't seem to get the results to align with the empirical results I am generating. Perhaps it's an issue with my random number generator? For $\sigma=1$ for both and $\mu_1 = 0$ and $\mu_2 = 1$, I am consistently getting around 23.9% of all simulated selections resulting in the former being higher; however, your method seems to give an expected result of 15.9%; did I use yours wrong? – Justin L. May 20 '11 at 16:51
  • Nevermind, it turns out I was using the wrong value for $\sigma$ -- $1$ instead of $\sqrt{2}$. Everything works like a charm now :) – Justin L. May 20 '11 at 21:12
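The discrepancy in the comments (15.9% vs. 23.9%) came from using $\sigma = 1$ instead of $\sigma = \sqrt{\sigma_1^2 + \sigma_2^2} = \sqrt{2}$. A quick Monte Carlo check, sketched below with illustrative variable names (not part of the original post), confirms that the analytic formula matches simulation for the parameters discussed:

```python
import math
import random

random.seed(0)
mu1, sigma1, mu2, sigma2 = 0.0, 1.0, 1.0, 1.0

# Empirical estimate: fraction of trials where the first draw exceeds the second.
n = 200_000
hits = sum(random.gauss(mu1, sigma1) > random.gauss(mu2, sigma2) for _ in range(n))
empirical = hits / n

# Analytic value: Phi((mu1 - mu2) / sqrt(sigma1^2 + sigma2^2))
mu, sigma = mu1 - mu2, math.hypot(sigma1, sigma2)
analytic = 0.5 * (1.0 + math.erf(mu / (sigma * math.sqrt(2.0))))
# empirical and analytic should agree to within Monte Carlo error (~0.001 here),
# both near the 23.9% reported in the comments.
```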