Suppose we want to show that $$ n! \sim \sqrt{2 \pi} n^{n+(1/2)}e^{-n}$$

Instead we could show that $$\lim_{n \to \infty} \frac{n!}{n^{n+(1/2)}e^{-n}} = C$$ where $C$ is a constant. Maybe $C = \sqrt{2 \pi}$.

What is a good way of doing this? Could we use L'Hopital's Rule? Or maybe take the log of both sides (e.g., compute the limit of the log of the quantity)? So for example do the following $$\lim_{n \to \infty} \log \left[\frac{n!}{n^{n+(1/2)}e^{-n}} \right] = \log C$$
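For a quick numerical sanity check of this plan (not a proof), one can compute the log of the ratio with Python's `math.lgamma` and watch it settle; the numbers do hover near $\sqrt{2\pi}$:

```python
import math

# Quick numerical check (not a proof): does n! / (n^(n+1/2) e^{-n}) settle
# to a constant?  Work in logs via math.lgamma to avoid overflow.
def log_ratio(n):
    # log of n! / (n^(n+1/2) e^{-n})
    return math.lgamma(n + 1) - ((n + 0.5) * math.log(n) - n)

for n in (10, 100, 1000, 10000):
    print(n, math.exp(log_ratio(n)))
print("sqrt(2*pi) =", math.sqrt(2 * math.pi))
```

Working in logs avoids overflow for large $n$; of course this only suggests what $C$ should be, it proves nothing.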

Gerry Myerson
  • Keith Conrad has a good explanation of this. http://www.math.uconn.edu/~kconrad/blurbs/analysis/stirling.pdf – jspecter Dec 28 '11 at 16:54
  • Stirling's formula is a pretty hefty result, so the tools involved are going to go beyond things like routine application of L'Hopital's rule, although I am sure there is a way of doing it that involves L'Hopital's rule as a step. I've just scanned the link posted by jspecter and it looks good and reasonably elementary. Another pretty elementary treatment is Terry Tao's: http://terrytao.wordpress.com/2010/01/02/254a-notes-0a-stirlings-formula/. The two treatments I've taken the time to work through both involved doing residue calculus on the gamma function. – Ben Blum-Smith Dec 28 '11 at 17:10
  • James: are you interested in proving that C exists or in computing its value? For the former, pretty standard procedures based on an estimation of the ratio of two consecutive terms are enough. – Did Dec 28 '11 at 17:33
  • There's a proof along this line in [The Number $\pi$](http://www.ams.org/bookstore-getitem/item=tnp) by Eymard and Lafon. (A very nice book, btw.) – lhf Dec 31 '11 at 02:12
  • There is an exercise, or a sequence of exercises, in Spivak's *Calculus* that guides you through a proof. I don't have it on hand or know the details, but I believe it goes through the Euler–Maclaurin formula. – Jonas Meyer Dec 31 '11 at 03:39
  • Short elementary proofs due to [Diaconis and Freedman](http://ocw.nctu.edu.tw/course/fourier/supplement/Stirling.pdf) and to [Patin](http://ocw.nctu.edu.tw/course/fourier/supplement/Stirling-1.pdf) were published in *The American Mathematical Monthly*. – Did Jan 01 '12 at 21:31
  • Patin's paper is already mentioned in @Byron's answer. – Did Jan 02 '12 at 09:28
  • Dan Romik, Stirling's Approximation for n!: The Ultimate Short Proof?, at [JSTOR](http://www.jstor.org/stable/2589351). – Pierre-Yves Gaillard Jan 03 '12 at 18:13
  • It is dismaying that nobody has mentioned in this thread that Abraham de Moivre was the first to show that $$\lim_{n\to\infty} \frac{n!}{n^{n+1/2}/e^n}$$ is a positive number and to calculate the number numerically, and then James Stirling showed that it is $\sqrt{2\pi}$. – Michael Hardy Jan 26 '16 at 20:28

13 Answers


A proof I found a while ago entirely relies on creative telescoping. Since $\frac{1}{n^2}-\frac{1}{n(n+1)}=\frac{1}{n^2(n+1)}$,

$$\begin{eqnarray*} \sum_{n\geq m}\frac{1}{n^2}&=&\sum_{n\geq m}\left(\frac{1}{n}-\frac{1}{(n+1)}\right)+\frac{1}{2}\sum_{n\geq m}\left(\frac{1}{n^2}-\frac{1}{(n+1)^2}\right)\\&+&\frac{1}{6}\sum_{n\geq m}\left(\frac{1}{n^3}-\frac{1}{(n+1)^3}\right)-\frac{1}{6}\sum_{n\geq m}\frac{1}{n^3(n+1)^3}\tag{1}\end{eqnarray*} $$ hence, by the series representation for $\psi(z)=\frac{d}{dz}\log\Gamma(z)$ (where $\Gamma(z)$ is the analytic continuation of $\int_{0}^{+\infty}t^{z-1}e^{-t}\,dt$, defined for $\text{Re}(z)>0$): $$ \psi'(m)=\sum_{n\geq m}\frac{1}{n^2}\leq \frac{1}{m}+\frac{1}{2m^2}+\frac{1}{6m^3}\tag{2}$$ and in a similar fashion: $$ \psi'(m) \geq \frac{1}{m}+\frac{1}{2m^2}+\frac{1}{6m^3}-\frac{1}{30m^5}.\tag{3}$$ Integrating twice, we have that $\log\Gamma(m)$ behaves like: $$ \log\Gamma(m)\approx\left(m-\frac{1}{2}\right)\log(m)-\color{red}{\alpha} m+\color{blue}{\beta}+\frac{1}{12m}\tag{4}$$ where $\color{red}{\alpha=1}$ follows from $\log\Gamma(m+1)-\log\Gamma(m)=\log m$.

That gives Stirling's approximation up to a multiplicative constant.

$\color{blue}{\beta=\log\sqrt{2\pi}}$ then follows from Legendre's duplication formula and the well-known identity:

$$ \Gamma\left(\frac{1}{2}\right)=2 \int_{0}^{+\infty}e^{-x^2}\,dx = \sqrt{\pi}.\tag{5}$$

Addendum: if we apply creative telescoping like in the second part of this answer, i.e. by noticing that $k(x)=\frac{60x^2-60x+31}{60x^3-90x^2+66x-18}$ gives $k(x)-k(x+1)=\frac{1}{x^2}+O\left(\frac{1}{x^8}\right)$, we arrive at

$$\begin{eqnarray*} m!&\approx& 2^{\frac{37-32m}{42}}e^{\frac{1}{84} \left(42-\sqrt{35} \pi -84 m+2 \sqrt{35} \arctan\left[\sqrt{\frac{5}{7}} (2m-1)\right]\right)} \\ &\cdot&\sqrt{\pi}\, m\,(2m-1)^{\frac{8}{21}(2m-1)}\left(m^2-m+\frac{3}{5}\right)^{\frac{5}{84}(2m-1)}\tag{6}\end{eqnarray*} $$ that is much more accurate than the "usual" Stirling's inequality, but also way less "practical".
However, it might be fun to plug in different values of $m$ in $(6)$ to derive bizarre approximate identities involving $e,\pi,\sqrt{\pi}$ and values of the arctangent function, like

$$ \sqrt{\pi}\,\exp\left[-\frac{1}{42}\left(147+\sqrt{35} \arctan\frac{1}{\sqrt{35}}\right)\right]\approx 2^{\frac{19}{6}}3^{\frac{1}{6}}5^{\frac{5}{12}} 7^{-\frac{37}{12}}.\tag{7}$$
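For anyone who wants to see $(4)$ with $\alpha=1$ and $\beta=\log\sqrt{2\pi}$ in action, here is a quick numerical comparison against Python's `math.lgamma` (a sanity check, not part of the proof):

```python
import math

# Numerical check of (4) with alpha = 1, beta = log(sqrt(2*pi)):
# log Gamma(m) ~ (m - 1/2) log m - m + log sqrt(2*pi) + 1/(12 m)
def approx_log_gamma(m):
    return (m - 0.5) * math.log(m) - m + 0.5 * math.log(2 * math.pi) + 1 / (12 * m)

for m in (5, 20, 100):
    print(m, math.lgamma(m), approx_log_gamma(m))
```

The error shrinks like $m^{-3}$, consistent with the next term of the asymptotic series.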

Jack D'Aurizio
  • (+1) Nice rewriting of $\sum\limits_{n\ge m}\frac1{n^2}$. That makes for a short derivation of the approximation. My [original derivation](http://math.stackexchange.com/revisions/95454/1) of the asymptotic series was once much shorter than my current post, but I was nudged to add more details. – robjohn Aug 25 '15 at 14:34
  • Could you please elaborate a bit more on how to obtain $\beta$? I really don't understand how Legendre's duplication formula gives the value for $\beta$. – Itachi Mar 24 '19 at 20:36
  • @Sebitas: consider the asymptotic expansion of $\log\Gamma(2m)-2\log\Gamma(m)-m\log(4)$ given by $(4)$ and compare it with $$ \lim_{n\to +\infty}\frac{\sqrt{n}}{4^n}\binom{2n}{n}=\frac{1}{\sqrt{\pi}}$$ ensured by Wallis product. – Jack D'Aurizio Mar 24 '19 at 21:00
  • yes, this explains everything, thanks! – Itachi Mar 24 '19 at 21:22
  • This is fun, but I don't think it is an actual proof. We need to integrate the equation $(\tfrac{d}{dx})^2 \log \Gamma(x) = \psi'(x)$ over real $x$, but the equation $\psi'(m) = \sum_{n \geq m} \tfrac{1}{n^2}$ only holds for $m$ an integer. How do you obtain the necessary bounds on $\psi'$ between integers? – David E Speyer Apr 27 '19 at 10:21
  • @DavidESpeyer: the log-convexity of $\Gamma$ ensures that the bounds for $\psi'$ hold for non-integer parameters, too. – Jack D'Aurizio Apr 28 '19 at 20:22
  • Good, that works. – David E Speyer Apr 29 '19 at 00:19
  • What is $\psi$? It was never introduced... – amsmath Sep 03 '19 at 19:20
  • @amsmath Psi is the digamma function, the logarithmic derivative of the Gamma function. – Jack D'Aurizio Sep 03 '19 at 19:54
  • @JackD'Aurizio Since this was a newbie question, I think this should certainly be said in the answer. – amsmath Sep 03 '19 at 20:19
  • @amsmath: now fixed. – Jack D'Aurizio Sep 04 '19 at 14:55
  • I was really pleasantly surprised to see this answer, since I have never encountered anything like it before. It definitely would win the prize for "most elementary" and probably "shortest and cleverest." But I do want to point out that for someone who is familiar with mathematical methods at an advanced undergrad level, I think that either MarkFischler's answer or the Laplace's method answers (such as robjohn's or paulgarret's) would win the prize for "most intuitive." – sasquires Dec 05 '19 at 18:03

Inspired by the two references below, here's a short proof stripped of motivation and details.

For $t>0$, define

$$ g_t(y) = \begin{cases} \displaystyle \left(1+\frac{y}{\sqrt{t}}\right)^{\!t} \,e^{-y\sqrt{t}} & \text{if } y>-\sqrt{t}, \\ 0 & \text{otherwise}. \end{cases} $$

It is not hard to show that $t\mapsto g_t(y)$ is decreasing for $y\geq 0$ and increasing for $y\leq 0$. The limit is $g_\infty(y)=\exp(-y^2/2)$, so dominated convergence gives

$$ \lim_{t\to\infty}\int_{-\infty}^\infty g_t(y)\,dy = \int_{-\infty}^\infty \exp(-y^2/2)\,dy = \sqrt{2\pi}. $$

Use the change of variables $x=y\sqrt{t}+t$ to get

$$ t! = \int_0^\infty x^t e^{-x}\,dx = \left(\frac{t}{e}\right)^t \sqrt{t} \int_{-\infty}^\infty g_t(y)\,dy. $$
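Both the limit and the final identity are easy to check numerically; here is a rough Python sketch (standard library only; a plain midpoint rule on a truncated range suffices because $g_t$ decays quickly):

```python
import math

def g(t, y):
    # the function g_t(y) from above, computed in logs for stability
    r = y / math.sqrt(t)
    if r <= -1.0:
        return 0.0
    return math.exp(t * math.log1p(r) - y * math.sqrt(t))

def integral_g(t, lo=-30.0, hi=30.0, steps=200000):
    # plain midpoint rule; g_t decays fast enough that [-30, 30] suffices here
    h = (hi - lo) / steps
    return sum(g(t, lo + (i + 0.5) * h) for i in range(steps)) * h

for t in (10, 100, 1000):
    print(t, integral_g(t))
print("sqrt(2*pi) =", math.sqrt(2 * math.pi))

# the exact identity t! = (t/e)^t sqrt(t) * integral, checked in logs at t = 20
t = 20
print(math.lgamma(t + 1),
      t * (math.log(t) - 1) + 0.5 * math.log(t) + math.log(integral_g(t)))
```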


[1] J.M. Patin, A very short proof of Stirling's formula, American Mathematical Monthly 96 (1989), 41-42.

[2] Reinhard Michel, The $(n+1)$th proof of Stirling's formula, American Mathematical Monthly 115 (2008), 844-845.

Antonio Vargas

Here is a derivation of Stirling's Formula as given by

$$\bbox[5px,border:2px solid #C0A000]{\sqrt{2\pi n}\left(\frac ne\right)^n\left(1+\frac{1}{12(n+1)}\right) \le n!\le \sqrt{2\pi n}\left(\frac ne\right)^n\left(1+\frac{1}{12(n-2)}\right)}\tag 1$$

In the ensuing development, we will use $(i)$ the formula for the trapezoidal rule for twice differentiable functions, $(ii)$ Wallis's Formula for $\pi$, and $(iii)$ standard inequalities.


To prove $(1)$, we begin with the trapezoidal rule formula

$$\int_a^b f(x)\,dx=\frac12\left(f(a)+f(b)\right)(b-a)-\frac1{12}f''(\xi)(b-a)^3\tag 2$$

for which $f''(x)$ exists for $x\in[a,b]$ and $\xi\in (a,b)$. Letting $f(x)=\log(x)$ in $(2)$ shows that

$$\begin{align} \int_1^n \log(x)\,dx&=\sum_{m=1}^{n-1} \int_m^{m+1}\log(x)\,dx\\\\ &=\sum_{m=1}^{n-1} \left(\frac12 (\log(m)+\log(m+1))+\frac1{12}\frac{1}{\xi_m^2}\right)\tag 3 \end{align}$$

where $m<\xi_m<m+1$. Denoting $\frac1{12}\sum_{m=1}^{n-1}\frac{1}{\xi_m^2}$ by $C_n$, we see that $\lim_{n\to \infty}C_n=C<\infty$.

Now, we can rewrite $(3)$ as

$$n\log(n)-n+1=\log(n!)-\frac12\log(n)+C_n$$

which becomes

$$\bbox[5px,border:2px solid #C0A000]{n!=e^{1-C_n}\sqrt{n}\left(\frac{n}{e}\right)^n }\\\\\tag 4$$


To evaluate the limit $\lim_{n\to \infty}e^{1-C_n}$, we rely on Wallis's Product for $\pi$ as given by

$$\frac{\pi}{2}=\lim_{n\to \infty}\frac{2^{4n}(n!)^4}{(2n+1)((2n)!)^2}\tag 5$$

Using $(4)$ in $(5)$ shows that

$$\frac{\pi}{2}=\lim_{n\to \infty}\frac{n^2}{(2n)(2n+1)}\left(e^{1-2C_n+C_{2n}}\right)^2$$

from which we find that

$$\bbox[5px,border:2px solid #C0A000]{\lim_{n\to \infty}e^{1-C_n}=\sqrt{2\pi}}\\\\\tag 6$$


We are now able to write the factorial

$$n!=\sqrt{2\pi n}\left(\frac ne\right)^n e^{C-C_n} \tag 7$$

Since $C_n\to C$, we have Stirling's Formula

$$\bbox[5px,border:2px solid #C0A000]{n!\sim \sqrt{2\pi n}\left(\frac ne\right)^n }$$

as was to be shown!

DERIVING BOUNDS FOR $\displaystyle n!$:

It is easy to see that since $\frac{1}{(m+1)^2}<\frac1{\xi_m^2}<\frac{1}{m^2}$, then

$$C-C_n<\frac1{12}\int_{n-1}^\infty \frac{1}{x^2}\,dx=\frac{1}{12(n-1)}$$


and

$$C-C_n>\frac1{12}\int_{n+1}^\infty \frac{1}{x^2}\,dx=\frac{1}{12(n+1)}$$

Next, for $0<x<1/2$, the Mean-Value Theorem guarantees that

$$1+x<e^x<1+x+x^2$$
Hence, we see that for $n\ge 3$

$$1+\frac{1}{12(n+1)}<e^{C-C_n}<1+\frac{1}{12(n-1)}+\frac{1}{144(n-1)^2}<1+\frac{1}{12(n-2)}\tag 8$$

Finally, using $(8)$ in $(7)$ yields the coveted bounds in $(1)$

$$\bbox[5px,border:2px solid #C0A000]{\sqrt{2\pi n}\left(\frac ne\right)^n\left(1+\frac{1}{12(n+1)}\right) \le n!\le \sqrt{2\pi n}\left(\frac ne\right)^n\left(1+\frac{1}{12(n-2)}\right)}$$
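As a sanity check, the two-sided bound in $(1)$ can be verified directly for small $n$; a short Python sketch:

```python
import math

# Check the two-sided bound (1) for small n (the derivation needs n >= 3)
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in range(3, 15):
    lower = stirling(n) * (1 + 1 / (12 * (n + 1)))
    upper = stirling(n) * (1 + 1 / (12 * (n - 2)))
    assert lower <= math.factorial(n) <= upper
    print(n, lower, math.factorial(n), upper)
```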

Mark Viola
  • This appears to be very elementary. +1 The proof I studied was based on Euler–Maclaurin summation. – Paramanand Singh Sep 26 '17 at 07:15
  • @ParamanandSingh Indeed. This development is quite straightforward, which is the reason for my posting it. And thank you for the up vote! – Mark Viola Sep 26 '17 at 08:03
  • @robjohn The strength of this development was in its relative simplicity, not the order of the bounds. – Mark Viola Jun 20 '19 at 03:48
  • Also, a more precise lower bound for $n$ is $n>2$. – Absurd Dec 05 '19 at 15:56
  • I'm having trouble understanding how you got $(3)$. Using $f(x) = \log(x)$ I get $\int_1^n \log(x)\, dx = \frac{1}{2}\log(n)(n -1) + \frac{1}{12}\xi^{-2}(n -1)^3$ – Samantha Wyler Feb 14 '21 at 17:42
  • @SamanthaWyler The limits of integration that led to $(3)$ are $m$ and $m+1$, not $1$ and $n$. – Mark Viola Feb 14 '21 at 17:47
  • According to the formula in $(2)$, $a=1$ and $b=n$. Even if we replaced $n$ with $m+1$ and $1$ with $m$, I still don't see how we get $(3)$. – Samantha Wyler Feb 14 '21 at 18:06
  • The left-hand side of $(2)$ has limits $1$ and $n$. Look at the right-hand side. Now apply the Trapezoidal Rule, $\int_{m}^{m+1} f(x)\,dx=\frac12 (f(m)+f(m+1)) -\frac1{12}f''(\xi_m)$, where $\xi_m\in (m,m+1)$. – Mark Viola Feb 14 '21 at 18:25
  • I see why I was confused: in the first equality of $(3)$ they simply split up the integral into a sum. Then they apply the trapezoidal rule to get the second equality of $(3)$. – Samantha Wyler Feb 15 '21 at 16:59
  • Now I'm having trouble seeing whether $(3)$ is equal to $\log(n!)$ or to $\log(e^{1 - C_n}\sqrt{n}(\frac{n}{e})^n)$. – Samantha Wyler Feb 15 '21 at 17:03
  • $\int_1^n \log(x)\, dx = n \log(n) - n$, which I'm pretty sure does not equal $\log(n!)$. – Samantha Wyler Feb 15 '21 at 17:33
  • @samanthawyler You're right. But I didn't state that $\int_1^n \log(x)\,dx$ was equal to $\log(n!)$. Please work through the arithmetic. HINT: $\log(n!)=\sum_{m=1}^n \log(m)$. – Mark Viola Feb 15 '21 at 19:23
  • I kinda see what you're saying. $(3)$ is equal to $\frac{1}{2}\log((n - 1)!) + \log(n!) + C_n$. So by solving the integral on the right side of $(3)$ and subtracting on both sides we get $\log(n!) = n\log(n) - n - \frac{1}{2}\log((n - 1)!) - C_n = \log(n^n) + \log(e^{-n - C_n}) + \log\left(((n - 1)!)^{-\frac{1}{2}}\right) = \log\left(e^{-C_n}\sqrt{\frac{1}{(n - 1)!}}\left(\frac{n}{e}\right)^n\right)$. – Samantha Wyler Feb 15 '21 at 20:14
  • I keep getting $\log\left(e^{-C_n}\sqrt{\frac{1}{(n - 1)!}}\left(\frac{n}{e}\right)^n\right)$ instead of $\log(e^{1 - C_n}\sqrt{n}(\frac{n}{e})^n)$. – Samantha Wyler Feb 15 '21 at 20:25

The way I've usually shown Stirling's Asymptotic Approximation starts with the formula $$ \begin{align} n! &=\int_0^\infty x^ne^{-x}\;\mathrm{d}x\\ &=\int_0^\infty e^{-x+n\log(x)}\;\mathrm{d}x\\ &=\int_0^\infty e^{-nx+n\log(nx)}n\;\mathrm{d}x\\ &=n^{n+1}\int_0^\infty e^{-nx+n\log(x)}\;\mathrm{d}x\\ &=n^{n+1}e^{-n}\int_{-1}^\infty e^{-nx+n\log(1+x)}\;\mathrm{d}x\\ &=n^{n+1}e^{-n}\int_{-\infty}^\infty e^{-nu^2/2}x'\;\mathrm{d}u\tag{1} \end{align} $$ Where $u^2/2=x-\log(1+x)$. That is, $u(x)=x\sqrt{\frac{x-\log(1+x)}{x^2/2}}$, which is analytic near $x=0$: the singularity of $\frac{x-\log(1+x)}{x^2/2}$ at $x=0$ is removable, and since $\frac{x-\log(1+x)}{x^2/2}$ is near $1$ when $x$ is near $0$, $\sqrt{\frac{x-\log(1+x)}{x^2/2}}$ is analytic near $x=0$. Since $u(0)=0$ and $u'(0)=1$, the Lagrange Inversion Theorem says that $x$ is an analytic function of $u$ near $u=0$, with $x(0)=0$ and $x'(0)=1$.

Note that since $u^2/2=x-\log(1+x)$, we have $$ u(1+x)=xx'\tag{2} $$ from which it follows that $\lim\limits_{u\to+\infty}\dfrac{x'(u)}{u}=1$ and $\lim\limits_{u\to-\infty}\dfrac{x'(u)}{ue^{-u^2/2}}=-1$. That is, $x'(u)=O(u)$ for $|u|$ near $\infty$.

Because $x$ is an analytic function with $x(0)=0$ and $x'(0)=1$, $$ x=\sum_{k=1}^\infty a_ku^k\tag{3} $$ where $a_1=1$. Then, looking at the coefficient of $u^n$ in $(2)$, we get $$ \begin{align} a_{n-1} &=\sum_{k=1}^na_{n-k+1}ka_k\\ &=\sum_{k=1}^na_k(n-k+1)a_{n-k+1}\\ &=\frac{n+1}{2}\sum_{k=1}^na_ka_{n-k+1}\tag{4} \end{align} $$ Therefore, we get the recursion $$ a_n=\frac{a_{n-1}}{n+1}-\frac{1}{2}\sum_{k=2}^{n-1}a_ka_{n-k+1}\tag{5} $$ Thus, we get the beginning of the power series for $x(u)$ to be $$ x=u+\tfrac{1}{3}u^2+\tfrac{1}{36}u^3-\tfrac{1}{270}u^4+\tfrac{1}{4320}u^5+\tfrac{1}{17010}u^6-\tfrac{139}{5443200}u^7+\tfrac{1}{204120}u^8+\dots\tag{6} $$ Integration by parts yields the following two identities: $$ \int_{-\infty}^\infty e^{-nu^2}|u|^{2k+1}\;\mathrm{d}u=\frac{k!}{n^{k+1}}\tag{7a} $$ and $$ \int_{-\infty}^\infty e^{-nu^2}u^{2k}\;\mathrm{d}u=\frac{(2k-1)!!}{2^kn^{k+1/2}}\sqrt{\pi}\tag{7b} $$ Furthermore, we have the following tail estimates: $$ \begin{align} \int_{|u|>a}e^{-nu^2}\;\mathrm{d}u &=\int_a^\infty e^{-nu^2}u^{-1}\;\mathrm{d}u^2\\ &=\int_{a^2}^\infty e^{-nu}u^{-1/2}\;\mathrm{d}u\\ &\le\frac{1}{a}\int_{a^2}^\infty e^{-nu}\;\mathrm{d}u\\ &=\frac{1}{na}\int_{na^2}^\infty e^{-u}\;\mathrm{d}u\\ &=\frac{1}{na}e^{-na^2}\tag{8a} \end{align} $$ and for $k\ge1$, $$ \begin{align} \int_{|u|>a}e^{-nu^2}|u|^{k}\;\mathrm{d}u &=\int_a^\infty e^{-nu^2}u^{k-1}\;\mathrm{d}u^2\\ &=\int_{a^2}^\infty e^{-nu}u^{(k-1)/2}\;\mathrm{d}u\\ &=n^{-(k+1)/2}\int_{na^2}^\infty e^{-u}u^{(k-1)/2}\;\mathrm{d}u\\ &=n^{-(k+1)/2}\int_{na^2}^\infty e^{-u/2}e^{-u/2}u^{(k-1)/2}\;\mathrm{d}u\\ &\le n^{-(k+1)/2}\int_{na^2}^\infty e^{-u/2}\left(\frac{k-1}{e}\right)^{(k-1)/2}\;\mathrm{d}u\\ &=\frac{2}{n}\left(\frac{k-1}{ne}\right)^{(k-1)/2}e^{-na^2/2}\tag{8b} \end{align} $$ The tail estimates in $(8)$ are $O\left(\frac{1}{n}e^{-na^2/2}\right)$ for all $k\ge0$. That is, they decay faster than any power of $n$.

Define $$ f_k(u)=x'(u)-a_1-2a_2u-\dots-2ka_{2k}u^{2k-1}\tag{9} $$ Since $x(u)$ is analytic near $u=0$, $f_k(u)=O\left(u^{2k}\right)$ near $u=0$. Since $x'(u)=O(u)$ for $|u|$ near $\infty$, $f_k(u)=O\left(u^{2k-1}\right)$ for $|u|$ near $\infty$. Therefore, $$ \begin{align} \int_{-\infty}^\infty e^{-nu^2/2}f_k(u)\;\mathrm{d}u &=\int_{|u|< a}e^{-nu^2/2}f_k(u)\;\mathrm{d}u + \int_{|u|> a}e^{-nu^2/2}f_k(u)\;\mathrm{d}u\\ &\le\int_{|u|< a}e^{-nu^2/2}C_1u^{2k}\;\mathrm{d}u + \int_{|u|> a}e^{-nu^2/2}C_2|u|^{2k-1}\;\mathrm{d}u\\ &=O\left(\frac{1}{n^{k+1/2}}\right)\tag{10} \end{align} $$ Therefore, using $\mathrm{(7b)}$ and $(10)$, we get $$ \begin{align} n! &=n^{n+1}e^{-n}\int_{-\infty}^\infty e^{-nu^2/2}\left(1+\tfrac{1}{12}u^2+\tfrac{1}{864}u^4-\tfrac{139}{777600}u^6+f_4(u)\right)\;\mathrm{d}u\\ &+\;n^{n+1}e^{-n}\int_{-\infty}^\infty e^{-nu^2/2}\left(\tfrac{2}{3}u-\tfrac{2}{135}u^3+\tfrac{1}{2835}u^5+\tfrac{1}{25515}u^7\right)\;\mathrm{d}u\\ &=\sqrt{2n}\;n^ne^{-n}\left(\int_{-\infty}^\infty e^{-u^2}\left(1+\tfrac{1}{6n}u^2+\tfrac{1}{216n^2}u^4-\tfrac{139}{97200n^3}u^6\right)\;\mathrm{d}u+O\left(\frac{1}{n^4}\right)\right)\\ &=\sqrt{2\pi n}\;n^ne^{-n}\left(1+\frac{1}{12n}+\frac{1}{288n^2}-\frac{139}{51840n^3}+O\left(\frac{1}{n^4}\right)\right)\tag{11} \end{align} $$

robjohn
  • The change of variable $x\to u$ and the step where you develop $x$ as a series in $u$ require some more justification, I believe. Are we supposed to understand that $x$ in $(-1,0)$ corresponds to $u\lt0$ and that $x\gt0$ corresponds to $u\gt0$? – Did Dec 31 '11 at 18:20
  • @Didier: Yes; that was the intent. Since $u^2=x^2+x^3f(x)$, where $f$ is analytic near $0$, we can write $u=x(1+xf(x))^{1/2}=x(1+xg(x))$ where $g$ is analytic near $0$. Now, the setting should be right to use the inverse function theorem on $u=x+x^2g(x)$. I am not claiming that the power series for $x$ in terms of $u$ extends to all $u$, but as $n\to\infty$, we use a smaller and smaller neighborhood of $0$. – robjohn Dec 31 '11 at 18:58
  • Sorry but I do not think I can buy this as a full proof. – Did Dec 31 '11 at 21:32
  • @Didier: What is bothering you about my answer? Reverting $u^2/2=x-\log(1+x)$ should not be a problem. The power series may not converge everywhere, but all we need for the asymptotic expansion is that the even part of $x'$ is $1+\frac{1}{12}u^2+\frac{1}{864}u^4-\frac{139}{777600}u^6+O(u^8)$, which seems clear since $x'=u(1+1/x)$. – robjohn Dec 31 '11 at 23:12
  • The main problem is that you apply tools valid for [formal power series](http://en.wikipedia.org/wiki/Formal_power_series#Inverting_series), to get results on functions coinciding with the sums of [power series](http://en.wikipedia.org/wiki/Power_series). Such a move is tempting and there are results which allow it, but in a strict setting only (I recommend pondering the first paragraph of the WP page on formal power series). Imagine for a moment that the recursion is solved by $a_k\sim k^k$. What next? Would you consider that the asymptotics $a_0+a_1/n+a_2/n^2+O(1/n^3)$ .../... – Did Jan 01 '12 at 11:20
  • .../... is valid? Likewise: what are the dots in (6) and (8)? Some remainders of convergent series? The next terms of some asymptotic series? Something else? [Divergent series](http://fr.wikipedia.org/wiki/S%C3%A9rie_divergente) are a venerable subject but, here again, some caution is needed. Finally: when you write that *Thus, $x$ has a power series in $u$ like $x=0+u+\ldots$*, this probably means that, IF $x$ coincides with the sum of such a power series, then the beginning of the power series must be $0+u+$some higher order terms (perfectly correct). But you use it to .../... – Did Jan 01 '12 at 11:20
  • .../... assert that $x$ IS a power series in $u$ (which is an entirely different matter). // All in all, it might be possible to address these points and to reach a rigorous solution based on this method, but your post does not do that, yet. – Did Jan 01 '12 at 11:20
  • @Didier: I applied the tools valid for formal power series to get the coefficients of $x$ as a power series in $u$. I was not relying on them to show that $x$ was analytic in $u$; that is handled by Lagrange Inversion. I have now mentioned that explicitly in my answer. The dots in $(6)$ are the next terms in the power series for $x$. The first two sets of dots in $(9)$ are the next terms in the power series, while the last set of dots are the next terms in the asymptotic expansion. I have rearranged and amended my answer; does it now seem more complete? – robjohn Jan 03 '12 at 21:22
  • It is definitely better, but, according to me, not a full proof yet. [An answer](http://math.stackexchange.com/a/96308/6179) to the main post explains why. – Did Jan 04 '12 at 09:33
  • Aaaaahh... There are still some minor typos in the last version, but nothing the interested readers cannot correct themselves, I believe. I will soon delete my other post. Well done, @robjohn. – Did Jan 05 '12 at 15:15
  • @Didier: I would like to correct any typos. I might as well post a better answer, so if you don't mind pointing out some, I will fix them. – robjohn Jan 05 '12 at 16:43
  • Typo the singular, actually (sorry): *Since $x=O(u)$ for $|u|$ near $\infty$* should read *Since $x'=O(u)$ for $|u|$ near $\infty$* (right after (9)). You could also replace some $x$ and $x'$ by $x(u)$ and $x'(u)$ respectively, starting from the last equation in (1). Or not. – Did Jan 05 '12 at 17:06
  • @Didier: thanks! I fixed the typo and expressed the fact that $x$ is a function of $u$ by changing $x$ to $x(u)$ in some places. – robjohn Jan 05 '12 at 17:34
  • Hi @robjohn, if I want to use Stirling's approximation $n!$ is approx. $\sqrt{2\pi n}(\frac{n}{e})^n$, but I have to show a remainder term in my proof is in $O(\log(n))$, then what remainder term does Stirling's approximation give? For example is $n! = \sqrt{2 \pi n} (\frac{n}{e})^n + O(n)$? Thanks, – User001 Dec 06 '15 at 02:54
  • I am basically proving out Stirling's approximation, although the problem statement doesn't actually say it is, so I hadn't realized until an MSE contributor told me about it. But the estimate required seems a bit sharper than what I have found on MSE -- I need to show that the partial sums of $\log(j)$ satisfy $\sum \log(j) = \log(n!) = n\log n - n + 1 + O(\log n)$, whereas a proof I found on MSE gives an estimate of $O(n)$ for the remainder term. – User001 Dec 06 '15 at 02:59
  • So, if I cannot do that - I've probably been trying too long at this point - then I will use Stirling's approximation instead. (My current attempt, not using the Stirling approx. for $n!$, is looking at the difference between the sum and the integral of $\log(j)$, which almost works, but showing the $O(\log(n))$ is difficult.) So I am just wondering what the big-O term $n!$ comes with, so that I know how to proceed with the arithmetic. Thanks, @robjohn – User001 Dec 06 '15 at 03:06
  • @LebronJames: if you look at $(11)$, you see that the "remainder term" of Stirling's approximation is $O\left(\frac{n^n}{e^n\sqrt{n}}\right)$. – robjohn Dec 06 '15 at 09:36
  • Hi @robjohn, thanks for your response. Can I ask you one final question on this, if you don't mind? I was able to show the $n\log n - n + O(\log(n))$, using the Abel summation formula. But, in comparing my work to another student's solution, it uses $\sum \log(j) = \log 1 + \log 2 + \dots + \log n = \log(n!) = \log(\sqrt{2 \pi n} (\frac{n}{e})^n) = \dots$ and it continues from here. Is this student's solution incorrect? – User001 Dec 07 '15 at 01:22
  • I feel that this might be a case of using the result to prove the result. On Wikipedia's page, it says that $\sum \log(j) = n\log n - n + O(\log n)$ and the approximation $\sqrt{2 \pi n} (\frac{n}{e})^n$ are 'variants' of each other. What do you think? Thanks @robjohn, – User001 Dec 07 '15 at 01:22
  • @LebronJames: what is it you are trying to prove? First you ask if $n!=\sqrt{2\pi n}n^ne^{-n}+O(n)$, which is false. I gave you the "remainder term", which is *much* larger than $O(n)$. Then it appears you are looking at the log of $n!$: $\sum\limits_{k=1}^n\log(k)=n\log(n)-n+O(\log(n))$, which *is* true. – robjohn Dec 07 '15 at 07:59
  • Hi @robjohn, so I am trying to prove the $n\log(n) - n + O(\log(n))$, but on the L.H.S., I was wondering whether I could start with $\sum \log(k) = \log(n!) = \log(\sqrt{2 \pi n}\, n^n e^{-n})$, and then proceed from here to show the $n\log(n) - n + O(\log(n))$. Would this be circular logic, using Stirling's approximation on the L.H.S., when the R.H.S. is *also* Stirling's approximation, at least according to Wikipedia... – User001 Dec 08 '15 at 02:18
  • I've already proved it successfully, using Abel's summation instead, but was wondering whether the technique I saw in a solution is legitimate, @robjohn. I always try to think of a second or maybe third solution to a problem, out of curiosity, before moving on to the next problem...thanks, – User001 Dec 08 '15 at 02:23
  • But here Wikipedia isn't so explicit, or maybe I just don't know Stirling's formula well enough. Is it saying that the $n!$ approximation and the $n\log n - n + O(\log n)$ are equivalent, when it says that they are "variants" of each other? If they are equivalent, then using the $n!$ approximation to show the $n\log n - n + O(\log n)$ is probably an incorrect approach. So, I'm just a little confused about this part, @robjohn, thanks, – User001 Dec 08 '15 at 02:25

Instead of a proof, how about a string of hints? This comes from Maxwell Rosenlicht's Introduction to Analysis (a great little easy-to-read text which is dirt cheap -- it's a Dover paperback).

  • Chapter VI: Problem #22 Show that for $n=1,2,3,\dots$ we have that $$ 1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}-\ln(n)$$ is positive and it decreases as $n$ increases. Thus this converges to a number between 0 and 1 (Euler's constant).

  • Chapter VII: Problem #39 For $n=0,1,2,\dots$ let $I_n=\int_0^{\pi/2} \sin^n(x)\,dx$. Show that

(a) $\frac{d}{dx}\left(\cos(x)\sin^{n-1}(x)\right) = (n-1)\sin^{n-2}(x)-n\sin^n(x)$

(b) $I_n = \frac{n-1}{n}I_{n-2}$ if $n \geq 2$

(c) $I_{2n} = \frac{1\cdot 3\cdot 5\cdots (2n-1)}{2\cdot 4\cdot 6\cdots (2n)}\frac{\pi}{2}$ and $I_{2n+1} = \frac{2\cdot 4\cdot 6\cdots (2n)}{3\cdot 5\cdot 7\cdots (2n+1)}$ for $n=1,2,3,\dots$

(d) $I_0, I_1, I_2, \dots$ is a decreasing sequence having the limit zero and $$ \lim_{n \to \infty} \frac{I_{2n+1}}{I_{2n}}=1$$

(e) Wallis' product: $$\lim_{n \to \infty} \frac{2\cdot 2\cdot 4\cdot 4\cdot 6\cdot 6 \cdots (2n)\cdot (2n)}{1\cdot 3\cdot 3\cdot 5\cdot 5\cdot 7 \cdots (2n-1)\cdot (2n+1)}=\frac{\pi}{2}$$

  • Chapter VII: Problem #40

(a) Show that if $f:\{ x\in \mathbb{R}\;|\; x\geq 1\} \to \mathbb{R}$ is continuous, then $$ \sum\limits_{i=1}^n f(i) = \int_1^{n+1} f(x)\,dx + \sum\limits_{i=1}^n\left(f(i)-\int_i^{i+1} f(x)\,dx\right)$$

(b) Show that if $i>1$, then $\ln(i)-\int_i^{i+1}\ln(x)\,dx$ differs from $-1/2i$ by less than $1/6i^2$. [Hint: Work out the integral using the Taylor series for $\ln(1+x)$ at the point $0$.]

(c) Use part (a) with $f=\ln$, part (b), and Problem #22 from Chapter VI to prove that the following limit exists: $$\lim_{n \to \infty} \left(\ln(n!)-\left(n+\frac{1}{2}\right)\ln(n)+n\right)$$

(d) Use part (e) of Problem #39 to compute the above limit, then obtaining: $$ \lim_{n\to \infty} \frac{n!}{n^ne^{-n}\sqrt{2\pi n}}=1$$ (i.e. Stirling's Formula)
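For anyone working through these hints, here is a quick numerical look at part (e) of Problem #39 (a sketch, not one of Rosenlicht's exercises): the partial products of Wallis' product do crawl up to $\pi/2$.

```python
import math

# Partial products of Wallis' product from part (e) of Problem #39:
# they increase toward pi/2
def wallis(n):
    p = 1.0
    for k in range(1, n + 1):
        p *= (2 * k) * (2 * k) / ((2 * k - 1) * (2 * k + 1))
    return p

for n in (10, 100, 10000):
    print(n, wallis(n))
print("pi/2 =", math.pi / 2)
```

The convergence is slow (the error decays like $1/n$), which is consistent with the correction terms in part (d).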

Bill Cook

Here are a couple of "proofs" of Stirling's formula. They are quite elegant (in my opinion), but not rigorous. One could write down a real proof from these, but as they rely on some hidden machinery, the result would be quite heavy.

1) A probabilistic non-proof

We start from the expression $e^{-n} n^n/n!$, of which we want to find an equivalent. Let us fix $n$, and let $Y$ be a random variable with a Poisson distribution of parameter $n$. By definition, for any integer $k$, we have $\mathbb{P} (Y=k) = e^{-n} n^k/k!$. If we take $k=n$, we get $\mathbb{P} (Y=n) = e^{-n} n^n/n!$. The sum of $n$ independent random variables with a Poisson distribution of parameter $1$ has a Poisson distribution of parameter $n$; so let us take a sequence $(X_k)$ of i.i.d. random variables with a Poisson distribution of parameter $1$. Note that $\mathbb{E} (X_0) = 1$. We have:

$$\mathbb{P} \left( \sum_{k=0}^{n-1} (X_k - \mathbb{E} (X_k)) = 0 \right) = \frac{e^{-n} n^n}{n!}.$$

In other words, $e^{-n} n^n/n!$ is the probability that a centered random walk with Poissonian steps of parameter $1$ is at $0$ at time $n$. We have tools to estimate such quantities, namely local central limit theorems. They assert that:

$$\frac{e^{-n} n^n}{n!} = \mathbb{P} \left( \sum_{k=0}^{n-1} (X_k - \mathbb{E} (X_k)) = 0 \right) \sim \frac{1}{\sqrt{2 \pi n \text{Var} (X_0)}},$$

a formula which is closely linked with the Gaussian integral and diffusion processes. Since the variance of $X_0$ is $1$, we get:

$$n! \sim \sqrt{2 \pi n} n^n e^{-n}.$$
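The local limit estimate above (the statement, not its proof) is easy to test numerically; a quick Python sketch:

```python
import math

# P(Poisson(n) = n) = e^{-n} n^n / n!, computed in logs, versus 1/sqrt(2 pi n)
def p_poisson_at_mean(n):
    return math.exp(n * math.log(n) - n - math.lgamma(n + 1))

for n in (10, 100, 10000):
    print(n, p_poisson_at_mean(n), 1 / math.sqrt(2 * math.pi * n))
```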

The catch is of course that the local central limit theorems are in no way elementary results (except for the simple random walks, and that is if you already know Stirling's formula...). The methods I know to prove such results involve Tauberian theorems and residue analysis. In some way, this probabilistic stuff is a way to disguise more classical approaches (in my defense, if all you have is a hammer, everything looks like a nail).

I think one could get higher order terms for Stirling's formula by computing more precise asymptotics for Green's function in $0$, which requires the knowledge of higher moments for the Poisson distribution. Note that the generating function for a Poisson distribution of parameter $1$ is:

$$\mathbb{E} (e^{t X_0}) = e^{e^t-1},$$

and this "exponential of exponential" will appear again in a moment.

2) A generating functions non-proof

If you want to apply analytic methods to problems related to sequences, generating functions are a very useful tool. Alas, the series $\sum_{n \geq 0} n! z^n$ is not convergent for non-zero values of $z$. Instead, we shall work with:

$$e^z = \sum_{n \geq 0} \frac{z^n}{n!};$$

we are lucky, as this generating function is well-known. Let $\gamma$ be a simple loop around $0$ in the complex plane, oriented counter-clockwise. Let us fix some non-negative integer $n$. By Cauchy's formula,

$$\frac{1}{n!} = \frac{1}{n!} \frac{\text{d}^n e^z}{\text{d} z^n}\bigg|_{z=0} = \frac{1}{2 \pi i} \oint_\gamma \frac{e^z}{z^{n+1}} \text{d} z.$$

We choose for $\gamma$ the circle of radius $n$ around $0$, with its natural parametrization $z (t) = n e^{it}$:

$$\frac{1}{n!} = \frac{1}{2 \pi i} \int_{- \pi}^{\pi} \frac{e^{n e^{it}}}{n^{n+1} e^{(n+1)it}} nie^{it} \text{d} t = \frac{e^n}{2 \pi n^n} \int_{- \pi}^{\pi} e^{n (e^{it}-it-1)} \text{d} t = \frac{e^n}{2 \pi \sqrt{n} n^n} \int_{- \pi \sqrt{n}}^{\pi \sqrt{n}} e^{n \left(e^{\frac{i\theta}{\sqrt{n}}}-\frac{i\theta}{\sqrt{n}}-1\right)} \text{d} \theta,$$

where $\theta =t \sqrt{n}$. Hitherto, we have an exact formula; note that we meet again the "exponential of exponential". Now comes the leap of faith. For $x$ close to $0$, the value of $e^x-x-1$ is roughly $x^2/2$. Moreover, the bounds of the integral get close to $- \infty$ and $ \infty$. Hence, for large $n$, we have:

$$\frac{1}{n!} \sim \frac{e^n}{2 \pi \sqrt{n} n^n} \int_{- \infty}^{+ \infty} e^{\frac{n}{2} \left(\frac{i\theta}{\sqrt{n}}\right)^2} \text{d} \theta = \frac{e^n}{2 \pi \sqrt{n} n^n} \int_{- \infty}^{+ \infty} e^{-\frac{\theta^2}{2}} \text{d} \theta = \frac{e^n}{\sqrt{2 \pi n} n^n}.$$

Of course, it is not at all trivial to prove that these asymptotic equivalences are rigorous. Indeed, if one applies this method to bad generating functions (e.g. $(1-z)^{-1}$), one can get false results. However, this can be done for a class of admissible functions, and the exponential is one of them.
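Here is a quick numerical sanity check of the exact contour-integral identity above (a minimal Python sketch of mine, not part of the argument). Since the integrand is smooth and $2\pi$-periodic in $t$ for integer $n$, the plain trapezoidal rule converges extremely fast:

```python
import cmath
import math

def inv_factorial_contour(n, steps=4096):
    # Evaluate the exact identity
    #   1/n! = e^n / (2*pi*n^n) * Integral_{-pi}^{pi} exp(n(e^{it} - it - 1)) dt
    # with the periodic trapezoidal rule (spectrally accurate here).
    h = 2 * math.pi / steps
    total = 0.0 + 0.0j
    for k in range(steps):
        t = -math.pi + k * h
        total += cmath.exp(n * (cmath.exp(1j * t) - 1j * t - 1))
    integral = (total * h).real  # the imaginary part vanishes by symmetry
    return math.exp(n) / (2 * math.pi * n ** n) * integral

print(inv_factorial_contour(10) * math.factorial(10))  # ≈ 1.0
```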

I have learnt this method thanks to Don Zagier. It is also explained in Generatingfunctionology, Chapter $5$ (III), where the author credits Hayman. The original reference seems to be A generalisation of Stirling's formula (Hayman, 1956), but I can't read it now.

One of the advantages of this method is that it makes it very easy to get the next terms in the asymptotic expansion of $n!$: you just have to expand the function $e^x-x-1$ further at $0$. Another advantage is that it is quite general, as it can be applied to many other sequences.

D. Thomine
    I'd like to have a clarification on the following equation: $$\mathbb{P} \left( \sum_{k=0}^{n-1} (X_k - \mathbb{E} (X_k)) = 0 \right) \sim \frac{1}{\sqrt{2 \pi n \text{Var} (X_0)}}$$ Does it use an approximation of the standard normal cumulative distribution function? I can't see which formulation of the central limit theorem is used. Is it a De Moivre-Laplace like formula? – Alessandro Cuttin Mar 25 '13 at 15:13
    @Alessandro Cuttin: No, it is an instance of a local central limit theorem. These are proved via spectral methods, much like the standard proof of the CLT, but the machinery involved is usually heavier. – D. Thomine Aug 11 '13 at 17:56

Depending on one's preferences, one might care that it is possible to understand Stirling's formula (perhaps really due to Laplace?) in a usefully more general context, namely, as an example of a "Laplace's method" or "stationary phase" sort of treatment of asymptotics of integrals.

This is available on-line on my page http://www.math.umn.edu/~garrett/m/v/, with the file being http://www.math.umn.edu/~garrett/m/v/basic_asymptotics.pdf

One might object to certain very-classical treatments which make Gamma appear as a singular thing. While I agree that it is of singular importance in applications within and without mathematics, the means of establishing its asymptotics are not.

paul garrett

The best derivation of Stirling's approximation I have seen starts from Euler's Summation Formula, valid for integers $a\leq b$ and $m\geq 1$: $$ \sum_{a\leq k<b}f(k) = \int_a^b f(x)\,dx +\sum_{k=1}^m \frac{B_k}{k!} f^{(k-1)}(b) -\sum_{k=1}^m \frac{B_k}{k!} f^{(k-1)}(a) \\+ (-1)^{m+1} \int_a^b \frac{B_m(\{x\})}{m!}f^{(m)}(x)\,dx$$

Here, $f^{(n)}(x)$ refers to the $n$-th derivative of $f$ evaluated at $x$, $B_k$ is the $k$-th Bernoulli number, and $B_m(\{x\})$ is the $m$-th Bernoulli polynomial evaluated at the fractional part of $x$.

The "remainder term" $(-1)^{m+1} \int_a^b \frac{B_m(\{x\})}{m!}f^{(m)}(x)\,dx$ lies between $0$ and the first discarded term.

Applying Euler's Summation Formula to $f(x) = \ln(x)$, and using $a=1, b=n$, we obtain $$ \sum_{1\leq k<n} \ln k = n\ln n - n +\sigma - \frac{\ln n}{2} + \sum_{k=1}^m \frac{B_{2k}}{2k(2k-1)n^{2k-1}} + \varphi_{m,n}\frac{B_{2m+2}}{(2m+2)(2m+1)n^{2m+1}} $$ where $\varphi_{m,n}$ depends on $n$ and on the number of terms $m$, but is always between zero and one. $\sigma$ is a constant which is not determined by the application of the formula. The best method I have seen for determining that $\sigma = \ln\sqrt{2\pi}$ is in Byron Schmuland's answer above, although in Concrete Mathematics they derive it by applying the summation formula to $\sum_k\binom{2n}{k}$.

Adding $\ln n$ to both sides, we get $$ \ln n! = n\ln n - n + \frac{\ln n}{2} + \ln \sqrt{2\pi} + \frac1{12n}-\frac1{360n^3} + \frac\epsilon{n^5} $$ with an error term having $0 \leq \epsilon \leq \frac1{1260}$.

Lastly, exponentiate and series expand the terms of order $\frac1n$ or less to get $$ n! = \sqrt{2\pi n}\left(\frac{n}{e}\right)^n \left( 1 + \frac1{12n} + \frac1{288n^2} - \frac{139}{51840n^3} -\frac{571}{2488320 n^4} + \frac{\rho}{n^5}\right) $$ with $-0.00001 < \rho < 0.00078$

At this level of approximation, the result, rounded to the nearest integer, is exact up to $n=11$ (of course, the relative error falls as $\frac1{n^5}$ with a small coefficient, and so is extremely small).
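The expansion above is easy to check numerically (a small sketch of mine; it reproduces the claim that rounding is exact up to $n=11$):

```python
import math

def stirling_series(n):
    # sqrt(2 pi n) (n/e)^n times the correction factor quoted above
    base = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    corr = (1 + 1 / (12 * n) + 1 / (288 * n ** 2)
            - 139 / (51840 * n ** 3) - 571 / (2488320 * n ** 4))
    return base * corr

for n in range(1, 12):
    print(n, math.factorial(n), round(stirling_series(n)))
```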

Mark Fischler
  • This is the first proof I ever encountered. If you have seen the Euler-Maclaurin formula before, it is really straightforward. Even if you haven't, then the idea of the Euler-Maclaurin formula is very intuitive. On a couple of occasions in undergrad and grad school, I was able to convince fellow physicists that this is a proof within a few minutes (although of course there are a lot of details that you can't fill in for a true proof in a three-minute discussion). – sasquires Dec 05 '19 at 17:55

If you're familiar with

$$\mathop {\lim }\limits_{n \to \infty } \frac{{\left( {2n} \right)!!}}{{\left( {2n - 1} \right)!!}}\frac{1}{{\sqrt n }} = \sqrt \pi $$

Then you can use

$$\eqalign{ & \mathop {\lim }\limits_{n \to \infty } \frac{{\left( {2n} \right)!{!^2}}}{{\left( {2n} \right)!}}\frac{1}{{\sqrt n }} = \sqrt \pi \cr & \mathop {\lim }\limits_{n \to \infty } \frac{{{2^{2n}}{{\left( {n!} \right)}^2}}}{{\left( {2n} \right)!}}\frac{1}{{\sqrt n }} = \sqrt \pi \cr} $$

Now you can check that

$$\alpha = \mathop {\lim }\limits_{n \to \infty } \frac{{n!{e^n}}}{{{n^n}\sqrt n }} = \mathop {\lim }\limits_{n \to \infty } \frac{{\left( {2n} \right)!{e^{2n}}}}{{{{\left( {2n} \right)}^{2n}}\sqrt {2n} }}$$

exists. Then square the first expression and divide by the latter to get

$$\alpha = \mathop {\lim }\limits_{n \to \infty } \frac{{{{\left( {n!} \right)}^2}{e^{2n}}}}{{{n^{2n}}n}}\frac{{{{\left( {2n} \right)}^{2n}}\sqrt {2n} }}{{\left( {2n} \right)!{e^{2n}}}} = \mathop {\lim }\limits_{n \to \infty } \frac{{{{\left( {n!} \right)}^2}{2^{2n}}\sqrt 2 }}{{\left( {2n} \right)!\sqrt n }} = \sqrt {2\pi } $$

Thus you have that

$$\mathop {\lim }\limits_{n \to \infty } \frac{{n!{e^n}}}{{{n^n}\sqrt {2n} }} = \sqrt \pi $$


$$n! \sim {n^n}{e^{ - n}}\sqrt {2\pi n} $$
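A quick numerical check of the last limit (my own sketch, computed in log space with `math.lgamma` so that large $n$ does not overflow):

```python
import math

def alpha(n):
    # n! e^n / (n^n sqrt(2n)), computed as exp of a log sum to avoid overflow;
    # the argument above shows this tends to sqrt(pi) from above
    return math.exp(math.lgamma(n + 1) + n - n * math.log(n)
                    - 0.5 * math.log(2 * n))

for n in (10, 1000, 100000):
    print(n, alpha(n))
print(math.sqrt(math.pi))
```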

    You're following almost verbatim the argument contained in the analysis book of Rey Pastor *et al.*, while totally avoiding the heart of the matter, namely, providing the non-trivial details about the existence of the limit of $\alpha_n = n!\, e^n / n^n \sqrt{n}$ as $n\to \infty$, either thoroughly discussed in the enlightening next paragraph on the boundedness of the quotient $\alpha_{n+1} / \alpha_n$, or left as a very instructive yet tricky exercise at the end of Rey Pastor's first volume. – Fitzcarraldo Jan 18 '21 at 23:05
  • @Fitzcarraldo Are you suggesting I have copied this answer from that book? I am confused. – Pedro Jan 19 '21 at 10:11

This answer derives Stirling's Asymptotic Approximation by writing the log of a factorial as a Riemann-Stieltjes Integral.

Theorem $\bf{1}$: (Asymptotic Approximation of the Central Binomial Coefficients) $$ \lim_{n\to\infty}\frac{\sqrt{\pi n}}{4^n}\binom{2n}{n}=1\tag1 $$

Note that Theorem $1$ is equivalent to Wallis' Product for $\pi$.

Proof: The orthogonality of $e^{ikx}$ on $[0,2\pi]$ gives $$ \begin{align} \int_0^{2\pi}\cos^{2n}(x)\,\mathrm{d}x &=\frac1{4^n}\sum_{k=0}^{2n}\binom{2n}{k}\int_0^{2\pi}e^{i(2n-k)x}e^{-ikx}\,\mathrm{d}x\\ &=\frac{2\pi}{4^n}\binom{2n}{n}\tag{1a} \end{align} $$ In this answer, it is shown geometrically that for $0\le x\lt\frac\pi2$, we have $\sin(x)\le x\le\tan(x)$.

Since $\sin^2(x)\le x^2$, $\cos^2(x)=1-\sin^2(x)\ge1-x^2$. Furthermore, $e^{x^2}\ge1+x^2$. Thus, $$ \cos^2(x)\,e^{x^2}\ge1-x^4\tag{1b} $$ For $0\le x\le\frac\pi2$, we have $\frac{\mathrm{d}}{\mathrm{d}x}\cos^2(x)\,e^{x^2}=2(x-\tan(x))\cos^2(x)\,e^{x^2}\le0$. Therefore, $$ \cos^2(x)\,e^{x^2}\le1\tag{1c} $$ $\text{(1b)}$ and $\text{(1c)}$ give $$ 1-x^4\le\cos^2(x)\,e^{x^2}\le1\tag{1d} $$ Using $[a,b]_\#$ to denote a number between $a$ and $b$, Bernoulli's Inequality, $\text{(1d)}$, and $x^4\le\frac{x^3+x^5}2$ yield $$ \begin{align} \sqrt{n}\int_0^{2\pi}\cos^{2n}(x)\,\mathrm{d}x &=4\sqrt{n}\int_0^{\pi/2}\cos^{2n}(x)\,\mathrm{d}x\\ &=4\sqrt{n}\int_0^{\pi/2}\left[\cos^2(x)\,e^{x^2}\right]_\#^{\normalsize n}e^{-nx^2}\,\mathrm{d}x\\ &=4\sqrt{n}\int_0^{\pi/2}\left[1-nx^4,1\right]_\#e^{-nx^2}\,\mathrm{d}x\\ &=4\int_0^{\pi\sqrt{n}/2}\left[1-\frac{x^4}n,1\right]_\#e^{-x^2}\,\mathrm{d}x\\ &=4\int_0^{\pi\sqrt{n}/2}e^{-x^2}\,\mathrm{d}x-\left[0,\frac3n\right]_\#\tag{1e} \end{align} $$ The Squeeze Theorem applied to $\text{(1e)}$ gives $$ \begin{align} \lim_{n\to\infty}\sqrt{n}\int_0^{2\pi}\cos^{2n}(x)\,\mathrm{d}x &=4\int_0^\infty e^{-x^2}\,\mathrm{d}x\\[6pt] &=2\sqrt\pi\tag{1f} \end{align} $$ Combining $\text{(1a)}$ and $\text{(1f)}$ yields $(1)$.


Theorem $\bf{2}$: $$ \log\left(\frac{n!\,e^n}{\sqrt{n}\,n^n}\right)=1-\sum_{k=1}^{n-1}\int_0^1\frac{\left(x-\frac12\right)^2}{k^2+k+x(1-x)}\,\mathrm{d}x\tag2 $$

Theorems $3$ and $4$ follow fairly simply from Theorems $1$ and $2$.

Proof: ( Euler-Maclaurin applied to $\log(n!)$ ) $$ \begin{align} \log(n!) &=\sum_{k=1}^n\log(k)\tag{2a}\\ &=\int_1^{n^+}\log(x)\,\mathrm{d}\lfloor x\rfloor\tag{2b}\\ &=\int_1^n\log(x)\,\mathrm{d}x-\int_1^{n^+}\log(x)\,\mathrm{d}\!\left(\{x\}-\tfrac12\right)\tag{2c}\\ &=n\log(n)-n+1+\tfrac12\log(n)+\int_1^n\frac{\{x\}-\frac12}x\,\mathrm{d}x\tag{2d}\\ &=\log\left(\sqrt{n}\,\frac{n^n}{e^n}\right)+1+\sum_{k=1}^{n-1}\int_0^1\frac{x-\frac12}{k+x}\,\mathrm{d}x\tag{2e}\\ &=\log\left(\sqrt{n}\,\frac{n^n}{e^n}\right)+1-\sum_{k=1}^{n-1}\int_0^1\frac{\left(x-\frac12\right)^2}{\left(k+\frac12\right)\left(k+x\vphantom{\frac12}\right)}\,\mathrm{d}x\tag{2f}\\ &=\log\left(\sqrt{n}\,\frac{n^n}{e^n}\right)+1-\sum_{k=1}^{n-1}\int_0^1\frac{\left(x-\frac12\right)^2}{k^2+k+x(1-x)}\,\mathrm{d}x\tag{2g}\\ \end{align} $$ Explanation:
$\text{(2a)}$: write the log of a product as the sum of logs
$\text{(2b)}$: apply the Riemann-Stieltjes integral
$\text{(2c)}$: $\lfloor x\rfloor=x-\{x\}$
$\text{(2d)}$: evaluate the left integral and integrate by parts on the right
$\text{(2e)}$: collect terms and write the integral with $\{x\}$ as a sum of integrals on $[0,1]$
$\text{(2f)}$: subtract $\int_0^1\frac{x-1/2}{k+1/2}\,\mathrm{d}x=0$ from each term in the sum
$\text{(2g)}$: substitute $x\mapsto1-x$ and average


Theorem $\bf{3}$: (Stirling's Asymptotic Approximation) $$ \lim_{n\to\infty}\frac{n!\,e^n}{\sqrt{2\pi n}\,n^n}=1\tag3 $$

Proof: Define $a_n=\frac{n!\,e^n}{\sqrt{n}\,n^n}$. $(2)$ says that $a_n$ is decreasing. Because $\frac1{k^2+k+x(1-x)}\le\frac1{k(k+1)}$ and $\int_0^1\left(x-\frac12\right)^2\,\mathrm{d}x=\frac1{12}$, $(2)$ also says that $$ \begin{align} \log(a_n) &\ge1-\frac1{12}\sum_{k=1}^\infty\frac1{k(k+1)}\\ &=\frac{11}{12}\tag{3a} \end{align} $$ Therefore, $\lim\limits_{n\to\infty}a_n$ exists and is positive. Finally, $$ \begin{align} 1 &=\lim_{n\to\infty}\frac{\sqrt{\pi n}}{4^n}\binom{2n}{n}\tag{3b}\\[3pt] &=\lim_{n\to\infty}\frac{\sqrt{\pi n}}{4^n}\frac{(2n)!}{n!^2}\tag{3c}\\ &=\lim_{n\to\infty}\frac{\sqrt{\pi n}}{4^n}\frac{a_{2n}\sqrt{2n}\frac{(2n)^{2n}}{e^{2n}}}{\left(a_n\sqrt{n}\frac{n^n}{e^n}\right)^2}\tag{3d}\\ &=\sqrt{2\pi}\lim_{n\to\infty}\frac{a_{2n}}{a_n^2}\tag{3e}\\[6pt] &=\sqrt{2\pi}\big/\lim_{n\to\infty}a_n\tag{3f} \end{align} $$ Explanation:
$\text{(3b)}$: Theorem $1$
$\text{(3c)}$: write binomial coefficients using factorials
$\text{(3d)}$: apply the definition of $a_n$
$\text{(3e)}$: cancel equal terms
$\text{(3f)}$: since $\lim\limits_{n\to\infty}a_n$ exists and is not $0$, it is the reciprocal of $\lim\limits_{n\to\infty}\frac{a_{2n}}{a_n^2}$

$(3)$ is the reciprocal of $\text{(3f)}$.


Theorem $\bf{4}$: (Bounds on Stirling's Formula) $$ 1+\frac1{12n+\frac2{5n}}\le\frac{n!\,e^n}{\sqrt{2\pi n}\,n^n}\le1+\frac1{12n-1}\tag4 $$

Proof: Combining Theorem $2$ and Theorem $3$ yields $$ \log\left(\frac{n!\,e^n}{\sqrt{2\pi n}\,n^n}\right) =\sum_{k=n}^\infty\int_0^1\frac{\left(x-\frac12\right)^2}{k^2+k+x(1-x)}\,\mathrm{d}x\tag{4a} $$ Since $\int_0^1\left(x-\frac12\right)^2\,\mathrm{d}x=\frac1{12}$, for $k\ge1$, $$ \frac1{12}\frac1{k^2+k+\frac1{10}}\le\int_0^1\frac{\left(x-\frac12\right)^2}{k^2+k+x(1-x)}\,\mathrm{d}x\le\frac1{12}\frac1{k(k+1)}\tag{4b} $$ The left-hand inequality follows from Jensen's inequality using the convex function $\frac1x$ and unit measure $12\left(x-\frac12\right)^2\mathrm{d}x$ on $[0,1]$. The right-hand inequality follows from $x(1-x)\ge0$ on $[0,1]$.

Furthermore, $$ \begin{align} \hspace{-12pt}\frac1{k+\frac1{30k}}-\frac1{k+1+\frac1{30(k+1)}} &=\frac{1-\frac1{30k(k+1)}}{\left(k+\frac1{30k\vphantom{()}}\right)\left(k+1+\frac1{30(k+1)}\right)}\tag{4c}\\ &\le\frac1{k(k+1)\left(1+\frac1{30k^2}\right)\left(1+\frac1{30k(k+1)}\right)\left(1+\frac1{30(k+1)^2}\right)}\tag{4d}\\ &\le\frac1{k^2+k+\frac1{10}}\tag{4e} \end{align} $$ Explanation:
$\text{(4c)}$: subtraction
$\text{(4d)}$: $1-a\le\frac1{1+a}$
$\text{(4e)}$: $\left(1+a^2\right)\left(1+ab\right)\left(1+b^2\right)\ge1+a^2+ab+b^2\ge1+3ab$

Combine $\text{(4b)}$-$\text{(4e)}$ to get $$ {\textstyle\frac1{12}\left(\frac1{k+\frac1{30k}}-\frac1{k+1+\frac1{30(k+1)}}\right)}\le\int_0^1\frac{\left(x-\frac12\right)^2}{k^2+k+x(1-x)}\,\mathrm{d}x\le{\textstyle\frac1{12}\left(\frac1k-\frac1{k+1}\right)}\tag{4f} $$ Sum $\text{(4f)}$ and plug it into $\text{(4a)}$ to get $$ \frac1{12n+\frac2{5n}}\le\log\left(\frac{n!\,e^n}{\sqrt{2\pi n}\,n^n}\right)\le\frac1{12n}\tag{4g} $$ Noting that $1+x\le e^x\le\frac1{1-x}$ leads to $(4)$.
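The two-sided bound in $(4)$ is easy to verify numerically (my own sketch):

```python
import math

def stirling_ratio(n):
    # n! e^n / (sqrt(2 pi n) n^n), computed in log space to avoid overflow
    return math.exp(math.lgamma(n + 1) + n
                    - 0.5 * math.log(2 * math.pi * n) - n * math.log(n))

for n in (1, 2, 5, 50, 1000):
    r = stirling_ratio(n)
    lower = 1 + 1 / (12 * n + 2 / (5 * n))
    upper = 1 + 1 / (12 * n - 1)
    print(n, lower <= r <= upper)
```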


  • I'm having trouble seeing how you get (2b) by applying the Riemann-Stieltjes integral. – Samantha Wyler Feb 14 '21 at 18:00
  • Do you know what $\lfloor x\rfloor$ (the [floor function](https://en.wikipedia.org/wiki/Floor_and_ceiling_functions)) is? If so, then look at the [Riemann-Stieltjes Integral](https://en.wikipedia.org/wiki/Riemann%E2%80%93Stieltjes_integral#Riemann_integral). That article does not explicitly state that $\int_{1^-}^{n^+}f(x)\,\mathrm{d}\lfloor x\rfloor=\sum\limits_{k=1}^nf(k)$, but you can approximate $\lfloor x\rfloor$ and its derivative using piecewise linear functions, and the derivative tends to Dirac deltas at the integers. – robjohn Feb 14 '21 at 19:14
    I know that $[x]$ is the floor function, but I'm not sure how to approximate its derivative, or what "the derivative tends to Dirac deltas at the integers" means, and I don't see any equations where a summation equals an integral in the article. – Samantha Wyler Feb 15 '21 at 15:45
  • approximate $$\lfloor x\rfloor_n=\left\{\begin{array}{}k&\text{if }k+\frac1n\le x\lt k+1-\frac1n\\k-\frac12+\frac n2(x-k)&\text{if }k-\frac1n\le x\lt k+\frac1n\end{array}\right.$$ where $$\lfloor x\rfloor_n'=\left\{\begin{array}{}0&\text{if }k+\frac1n\le x\lt k+1-\frac1n\\\frac n2&\text{if }k-\frac1n\le x\lt k+\frac1n\end{array}\right.$$ – robjohn Feb 16 '21 at 00:06

The short and intuitive proof of Stirling's formula I was taught long ago relies on a steepest-descent/saddle-point evaluation of the Gamma integral. Let us expand $f(x)=a\log x-x$ in a Taylor series around its maximum $x_m$: $0=f'(x)=a/x-1\Rightarrow x_m=a$, and $f''(x)=-a/x^2$; hence for $a\rightarrow\infty$, $\Gamma(a+1)=\int_0^\infty e^{f(x)}dx\rightarrow e^{a\log{a}-a}\int_{-\infty}^{+\infty}e^{-(x-x_m)^2/(2a)}dx = e^{a\log{a}-a}\sqrt{2\pi a}$. The rigor of the proof depends on showing that contributions to the integral away from $x_m$ become insignificant. For me it suffices that $\sigma/x_m\equiv\sqrt{a}/a\rightarrow 0$ as $a\rightarrow\infty$, but a more elaborate discussion is possible. This relies on the Gauss integral, proven e.g. as $\left[\int_{-\infty}^{+\infty}e^{-x^2}dx\right]^2=\int_{-\infty}^{+\infty}e^{-x^2-y^2}dx\,dy=\int_0^\infty e^{-r^2}2\pi r\,dr=\pi$.
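A quick numerical illustration (my own sketch): the relative error of this saddle-point approximation indeed decays roughly like $1/(12a)$, the first correction in Stirling's series:

```python
import math

def laplace_rel_error(a):
    # relative error of Gamma(a+1) ≈ exp(a ln a - a) sqrt(2 pi a);
    # it should shrink roughly like 1/(12a)
    log_exact = math.lgamma(a + 1)
    log_approx = a * math.log(a) - a + 0.5 * math.log(2 * math.pi * a)
    return math.exp(log_exact - log_approx) - 1

for a in (10, 100, 1000):
    print(a, laplace_rel_error(a), 1 / (12 * a))
```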


Using the Leibniz rule $\frac{d}{dt}\Bigg[\int_{a(t)}^{b(t)}f(x,t)dx\Bigg]=\int_{a(t)}^{b(t)}\frac{\partial}{\partial t}f(x,t)dx+\bigg[f(b(t),t)\frac{d}{dt}b(t)-f(a(t),t)\frac{d}{dt}a(t)\bigg]$

$$ \int_0^\infty e^{-x}dx=1\\ \text{Let }x=tu\implies dx=t.du,t>0\\ \int_0^\infty te^{-tu}du=1\implies \int_0^\infty e^{-tu}du=\frac{1}{t}\\ \frac{d}{dt}\int_0^\infty e^{-tu}du=\int_0^\infty \frac{\partial}{\partial t}e^{-tu}du=\frac{-1}{t^2}\\ \int_0^\infty ue^{-tu}du=\frac{1}{t^2}\\ \int_0^\infty u^2e^{-tu}du=\frac{2}{t^3}\\ \int_0^\infty u^3e^{-tu}du=\frac{6}{t^4}\\ \int_0^\infty u^4e^{-tu}du=\frac{24}{t^5}\\ \int_0^\infty u^ne^{-tu}du=\frac{n!}{t^{n+1}}\\ \implies \int_0^\infty \bigg[\frac{x}{t}\bigg]^ne^{-x}\frac{dx}{t}=\frac{n!}{t^{n+1}}\\ \implies \boxed{\int_0^\infty x^ne^{-x}dx=n!=\Gamma(n+1)} $$ Dividing by $\bigg[\frac{n}{e}\bigg]^n$ $$ \bigg[\frac{e}{n}\bigg]^n\Gamma(n+1)=\int_0^\infty \left[\frac{x}{n}\right]^ne^{-(x-n)}dx\\ \text{Let } s=x-n,\\ \bigg[\frac{e}{n}\bigg]^n\Gamma(n+1)=\int_{-n}^\infty \bigg[1+\frac{s}{n}\bigg]^ne^{-s}ds=\int_{-n}^\infty f(s)ds $$ where $f(s)=\bigg[1+\frac{s}{n}\bigg]^ne^{-s}$ $$ \ln(f(s))=n\ln\bigg[1+\frac{s}{n}\bigg]-s=n\bigg[\frac{s}{n}-\frac{1}{2}\bigg(\frac{s}{n}\bigg)^2+\frac{1}{3}\bigg(\frac{s}{n}\bigg)^3-\cdots\bigg]-s\\ =\frac{-s^2}{2n}+\frac{s^3}{3n^2}-\cdots $$ For large $n$, $$ \ln(f(s))\approx\frac{-s^2}{2n} $$ $$ \bigg(\frac{e}{n}\bigg)^n\Gamma(n+1)=\int_{-\infty}^\infty e^{-s^2/2n}ds=\sqrt{2\pi n}, \text{ since }\int_{-\infty}^\infty e^{-at^2}dt=\sqrt{\frac{\pi}{a}}\\ \Gamma(n+1)=\bigg(\frac{n}{e}\bigg)^n\sqrt{2\pi n}\\ \implies \boxed{n!=\Gamma(n+1)\approx \bigg(\frac{n}{e}\bigg)^n\sqrt{2\pi n}} $$
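The boxed identity $\int_0^\infty x^n e^{-x}\,dx = n!$ can be confirmed by direct numerical quadrature (a minimal sketch of mine using Simpson's rule; for moderate $n$ the tail beyond the truncation point is negligible):

```python
import math

def gamma_integral(n, upper=200.0, steps=100000):
    # Simpson's rule for Integral_0^upper x^n e^{-x} dx;
    # `steps` must be even; the tail beyond `upper` is negligible here
    h = upper / steps
    s = 0.0
    for k in range(steps + 1):
        x = k * h
        w = 1.0 if k in (0, steps) else (4.0 if k % 2 else 2.0)
        s += w * x ** n * math.exp(-x)
    return s * h / 3

print(gamma_integral(6), math.factorial(6))  # both ≈ 720
```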

Sooraj S

Stirling's approximation: Can we make it obvious?

By induction on $n$, using integration-by-parts for the inductive step, it is easily shown that $$n!=\int_0^\infty\!x^n e^{-x} dx$$ for $\,n=0,1,2,$ etc.  Changing the variable of integration to $$\color{blue}{t=\ln(x/n)}$$ gives $$n!\,=\,n^{n+1\,}e^{-n}\int_{-\infty}^\infty\!e^{nf(t)} e^t dt$$ where $$f(t)=1+t-e^t.$$ Now $f(t)$ has a negative second derivative everywhere, goes to $-\infty$ as $t\to\pm\infty$,  and has the maximum value $0$ at $\,t=0$.  At points sufficiently close to this maximum,  $f(t)$ can be replaced by a single term in $t^2$ with a matching second derivative, i.e. by $-\frac{1}{2}t^2$,  and the factor $e^t$ can be replaced by $1$.  Moreover, as $n\to\infty$,  the values of $f(t)$ and $e^t$ that contribute significantly to the integral are confined to an ever narrower range about the maximum of $f(t)$.  So we can make the indicated replacements, obtaining $$n! \,\sim\, n^{n+1\,}e^{-n}\int_{-\infty}^\infty\!e^{-nt^{2\!}/2\,} dt\,.\label{estar}\tag{*}$$ The remaining integral has the form $$I=\int_{-\infty}^\infty\!e^{-at^2} dt\,,$$ whence $$I^2=\int_{-\infty}^\infty\!e^{-ax^2}dx \int_{-\infty}^\infty\!e^{-ay^2}dy \,= \int_{-\infty}^\infty\int_{-\infty}^\infty\!e^{-a(x^2+y^2)} dx\,dy\,.$$ This integral w.r.t. area over the $xy$ plane is easily re-expressed in polar coordinates and evaluated as $\pi/a$,  so that $$I=\sqrt{\pi/a}\,.$$ The integral in equation$\,(\ref{estar})$ has $a=n/2$ and therefore comes to $\sqrt{2\pi/n}$,  which gives $$n! \,\sim\, \sqrt{2\pi n}\,\left(\frac{n}{e}\right)^n.$$
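The intermediate exact identity $n!=n^{n+1}e^{-n}\int_{-\infty}^\infty e^{nf(t)}e^t\,dt$ with $f(t)=1+t-e^t$ can be verified numerically before any Gaussian replacement is made (a small sketch of mine, using Simpson's rule on a truncated range where the integrand is not negligible):

```python
import math

def factorial_via_t_integral(n, lo=-10.0, hi=5.0, steps=100000):
    # Simpson's rule for the exact identity obtained above:
    #   n! = n^{n+1} e^{-n} * Integral exp(n(1 + t - e^t)) e^t dt
    # for moderate n the integrand is negligible outside [lo, hi]
    h = (hi - lo) / steps
    s = 0.0
    for k in range(steps + 1):
        t = lo + k * h
        w = 1.0 if k in (0, steps) else (4.0 if k % 2 else 2.0)
        s += w * math.exp(n * (1 + t - math.exp(t)) + t)
    return n ** (n + 1) * math.exp(-n) * s * h / 3

print(factorial_via_t_integral(5), math.factorial(5))  # both ≈ 120
```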