$e$ and $\pi$ often show up in mathematics in a variety of areas. At times there's an intuitive and logical explanation, and at other times there isn't.

One interesting thing about the Gaussian, though: its Fourier transform is itself also a Gaussian. Since the frequency domain describes cyclic/periodic behavior, this says something about how any system that follows a Gaussian distribution behaves over time.

In particular, a stationary Gaussian process has the property that a set of samples taken from one period of time should statistically resemble a set taken from a different period of time.

The Wikipedia article on the normal distribution says:

More generally, a normal distribution results from exponentiating a quadratic function (...):
$f(x)=e^{a x^2 + b x + c}$

... where $a$ ends up being negative. What you're looking at is a function that computes a probability density, not one that generates the random variable itself. Though I give it without any motivation, an expression given in the Wikipedia article is somewhat illuminating (it's the Box–Muller transform, from the section near the end on sampling from the Gaussian distribution):

$\begin{align*}X&=\sqrt{-2 \ln(U)} \cos(2\pi V)\\Y&=\sqrt{-2\ln(U)} \sin(2\pi V)\end{align*}$

... where $U$ and $V$ are uniformly distributed on $(0,1]$.

This reveals the cyclic components I mentioned, and the radicand uses $\ln(x)$, the inverse of $e^x$.
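As a sanity check, here's a minimal Python sketch of that sampling recipe (the variable and function names are my own); it mirrors the formulas above and confirms the output looks like a standard normal:

```python
import math
import random

def box_muller(rng):
    """Turn two uniform samples on (0, 1] into two independent
    standard-normal samples, following the formulas above."""
    u = 1.0 - rng.random()  # shift [0, 1) to (0, 1] so ln(u) is defined
    v = rng.random()
    r = math.sqrt(-2.0 * math.log(u))    # the damping/radial term (uses ln)
    x = r * math.cos(2.0 * math.pi * v)  # the cyclic terms (use pi)
    y = r * math.sin(2.0 * math.pi * v)
    return x, y

rng = random.Random(0)
samples = [s for _ in range(50_000) for s in box_muller(rng)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # both should land close to 0 and 1
```

Note the small shift of `u` away from zero: $\ln(0)$ is undefined, which is exactly why the article specifies $U$ on $(0,1]$ rather than $[0,1)$.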

Looking at a plot of the function for $X$ above (with a few terms removed) shows that the samples almost always take values near zero, with only a very small portion reaching large magnitudes (the radical blows up as $U \to 0$). This is important because a Gaussian process exhibits behavior near its mean value most of the time, and the argument I'm making supports this statement.
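That claim is easy to check numerically. The sketch below (reusing the formula for $X$ above) counts how often a sample lands more than three units from zero; for a standard normal that should happen only about 0.3% of the time:

```python
import math
import random

rng = random.Random(42)
n = 100_000
far = 0  # count of samples with |x| > 3
for _ in range(n):
    u = 1.0 - rng.random()  # uniform on (0, 1], keeps ln(u) finite
    v = rng.random()
    x = math.sqrt(-2.0 * math.log(u)) * math.cos(2.0 * math.pi * v)
    if abs(x) > 3.0:
        far += 1
print(f"{100 * far / n:.2f}% of samples beyond |x| = 3")
```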

Therefore, my interpretation (and I obviously give it without any proof) is this:

- $e$ shows up as a consequence of the samples being damped by the $\ln(U)$ term.
- $\pi$ shows up because the samples exhibit cyclic behavior.

There are technical reasons they show up, as pointed out by others. Probably far more than have been listed. However, I'm a firm believer that math isn't *just* about being able to give a proof for something, but rather understanding what the math actually describes, and then being able to apply the concepts behind the proof to come up with the result.