Just curious, how do you calculate an irrational number? Take $\pi$ for example. Computers have calculated $\pi$ to the millionth digit and beyond. What formula/method do they use to figure this out? How does it compare to other irrational numbers such as $\varphi$ or $e$?

Just keep evaluating the expression which defines the irrational number? If you're into programming, check out y-cruncher [here](http://www.numberworld.org/ycruncher/) – Inquest May 20 '12 at 09:05

Also check out this paper, which mentions one of the methods and its implementation details. http://www.ams.org/journals/mcom/1962-16-077/S0025-5718-1962-0136051-9/S0025-5718-1962-0136051-9.pdf – Inquest May 20 '12 at 09:11

Minor aside: your question is not "how do you calculate an irrational number", but "how do you calculate the decimal expansion of an irrational number". – May 20 '12 at 12:20

This is a really good question, because it's simple to ask but has no simple answer. It depends a lot on which number you have in mind. For an interesting counterpoint, consider [Euler's constant $\gamma$](http://enwp.org/Euler's_constant)$\approx 0.57721\ldots$. Methods are known for calculating $\gamma$ with great precision, but it is not known whether it is irrational or not! – MJD May 20 '12 at 13:12

You may be interested in this question: http://math.stackexchange.com/questions/129777/what-is-the-fastest-most-efficient-algorithm-for-estimating-eulers-constant-g – Argon May 20 '12 at 15:02

Relevant: [this](http://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula) and [this](http://en.wikipedia.org/wiki/Approximations_of_%CF%80#Development_of_efficient_formulae). – May 20 '12 at 15:42

At the other extreme, there are irrational numbers that are not computable at all, such as Chaitin's constant. See http://en.wikipedia.org/wiki/Chaitin%27s_constant – Robert Israel May 20 '12 at 19:36

[This](http://math.stackexchange.com/questions/135827/computing-decimal-digits-of-irrational-numbers/138114#138114) might help for non-square $n$ and $\sqrt{n}$. – Pedro May 29 '12 at 01:54

Looking at everyone's responses has sparked another question. How are we really sure of what $\pi$ or $\varphi$ is? Many of these formulas seem different. How do we verify whether a formula is correct or not? – Sean Jun 03 '12 at 08:58

@Sean All these formulas have been proven equivalent, otherwise they wouldn't be called "formulas for $\pi$". In general, showing that two formulas are equivalent is very hard, and requires a great deal of mathematics – Alex Becker Jul 06 '12 at 02:50

But if they are equivalent, then how is it possible for one to be more accurate than another? – Sean Oct 18 '12 at 06:09

And how do we create these formulas? – Sean Oct 18 '12 at 06:09
5 Answers
$\pi$
For computing $\pi$, many rapidly convergent methods are known. Historically, popular methods include estimating $\arctan$ with its Taylor series expansion and calculating $\pi/4$ using a Machin-like formula. A basic one would be
$$\frac{\pi}{4} = 4 \arctan\frac{1}{5} - \arctan\frac{1}{239}$$
The reason these formulas are used instead of estimating $\arctan 1 =\frac{\pi}{4}$ directly is that the series for $\arctan x$ converges faster for $x \approx 0$. Thus, small values of $x$ are better for estimating $\pi/4$, even if one is required to compute $\arctan$ more times. A good example of this is Hwang Chien-Lih's formula:
$$ \begin{align} \frac{\pi}{4} =& 183\arctan\frac{1}{239} + 32\arctan\frac{1}{1023} - 68\arctan\frac{1}{5832} + 12\arctan\frac{1}{113021}\\ & - 100\arctan\frac{1}{6826318} - 12\arctan\frac{1}{33366019650} + 12\arctan\frac{1}{43599522992503626068}\\ \end{align} $$ Though $\arctan$ needs to be computed 7 times to a desired accuracy, computing this formula interestingly requires less computational effort than computing $\arctan 1$ to the same accuracy.
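As a quick sketch (not from the original answer; the function names are my own), the basic Machin formula above can be evaluated with Python's standard `decimal` module, summing the $\arctan$ Taylor series with a few guard digits:

```python
from decimal import Decimal, getcontext

def arctan_recip(x: int) -> Decimal:
    """arctan(1/x) via its Taylor series 1/x - 1/(3x^3) + 1/(5x^5) - ..."""
    eps = Decimal(10) ** -getcontext().prec
    power = Decimal(1) / x          # (1/x)^(2k+1), starting at k = 0
    total = power
    x2 = x * x
    k = 1
    while power > eps:
        power /= x2                  # next odd power of 1/x
        term = power / (2 * k + 1)
        total += -term if k % 2 else term
        k += 1
    return total

def machin_pi(digits: int) -> Decimal:
    """pi = 4 * (4*arctan(1/5) - arctan(1/239)), with guard digits."""
    getcontext().prec = digits + 10
    pi = 4 * (4 * arctan_recip(5) - arctan_recip(239))
    getcontext().prec = digits
    return +pi                       # unary plus rounds to the final precision
```

Note how the series for $\arctan\frac{1}{239}$ needs far fewer terms than the one for $\arctan\frac{1}{5}$, which is exactly the point of using small arguments.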
Iterative algorithms, such as Borwein's algorithm or the Gauss–Legendre algorithm, converge to $\pi$ extremely fast (the Gauss–Legendre algorithm finds 45 million correct digits in 25 iterations), but require much computational effort per iteration. Because of this, the linear convergence of Ramanujan's series or the Chudnovsky algorithm is often preferred (these methods are mentioned in other posts here as well). These methods produce 6–8 digits and 14 digits respectively per term added. It is interesting to mention that the Bailey–Borwein–Plouffe formula can calculate the $n^{th}$ binary digit of $\pi$ without needing to know the $(n-1)^{th}$ digit (such algorithms are known as "spigot algorithms"). Bellard's formula is similar but 43% faster.
The first few terms from the Chudnovsky algorithm are (note the accuracy increases by about 14 decimal places):
n    Approx. sum    Approx. error (pi - sum)
0    3.141592653    5.90 x 10^-14
1    3.141592653    3.07 x 10^-28
2    3.141592653    1.72 x 10^-42
3    3.141592653    1.00 x 10^-56
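The table above is easy to reproduce. Here is a rough sketch of the Chudnovsky series with the standard-library `decimal` module (my own naming; it uses naive factorials rather than binary splitting, so it is only suitable for modest precision):

```python
import math
from decimal import Decimal, getcontext

def chudnovsky_pi(digits: int) -> Decimal:
    """Sum the Chudnovsky series; each term contributes about 14 digits."""
    getcontext().prec = digits + 10
    total = Decimal(0)
    for k in range(digits // 14 + 2):
        num = math.factorial(6 * k) * (13591409 + 545140134 * k)
        den = (math.factorial(3 * k) * math.factorial(k) ** 3
               * (-262537412640768000) ** k)   # (-640320^3)^k
        total += Decimal(num) / Decimal(den)
    pi = 426880 * Decimal(10005).sqrt() / total
    getcontext().prec = digits
    return +pi
```

Serious implementations (y-cruncher, for instance) use binary splitting on the same series to avoid recomputing the huge factorials.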
See these two questions as well.
$e$
The most popular method for computing $e$ is its Taylor series expansion, because it requires little computational effort and converges very quickly (the terms shrink factorially, so the convergence keeps accelerating). $$e=\sum_{n=0}^\infty \frac{1}{n!}$$ The first partial sums of this series are as follows:
n    Approx. sum    Approx. error (e - sum)
0    1              1.718281828...
1    2              0.718281828
2    2.5            0.218281828
3    2.666666666    0.051615161
...
10   2.718281801    2.73 x 10^-8
...
20   2.718281828    2.05 x 10^-20
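The partial sums are trivial to compute exactly with Python's `fractions` module (a sketch of my own; `e_partial_sum(k)` adds the first `k` summands):

```python
from fractions import Fraction

def e_partial_sum(terms: int) -> Fraction:
    """Exact partial sum of e = sum_{n>=0} 1/n!."""
    total = Fraction(0)
    term = Fraction(1)          # 1/0!
    for n in range(terms):
        total += term
        term /= n + 1           # turn 1/n! into 1/(n+1)!
    return total
```

Updating `term` incrementally avoids recomputing each factorial from scratch.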
One should also note that the limit definition of $e$ and the series may be used in conjunction. The canonical limit for $e$ is
$$e=\lim_{n \to \infty}\left(1+\frac{1}{n}\right)^n$$
Noting that $1+\frac{1}{n}$ is just the first two terms of the Taylor series expansion of $\exp(\frac{1}{n})$, it is clear that $\exp(\frac{1}{n})$ can be computed to higher accuracy in fewer terms than $e^1$, because even two terms give a better and better estimate as $n \to \infty$. This means that if we add another few terms of the expansion of $\exp(\frac{1}{n})$, we can find the $n^{th}$ root of $e$ to high accuracy (higher than the limit or the series alone give) and then simply multiply the answer by itself $n$ times (easy, if $n$ is an integer).
As a formula, we have, if $m$ and $a$ are large:
$$e \approx \left(\sum_{n=0}^m \frac{1}{n!a^n}\right)^a$$
If we use the series to find the $100^{th}$ root (i.e. using the above formula, $a=100$) of $e$, this is what results (note the fast rate of convergence):
n    Approx. sum    Approx. sum^100    Approx. error (e - sum^100)
0    1              1                  1.718281828...
1    1.01           2.704813829        0.013467999
2    1.01005        2.718236862        0.000044965
3    1.010050166    2.718281716        1.12 x 10^-7
...
10   1.010050167    2.7182818284       6.74 x 10^-28
...
20   1.010050167    2.7182818284       4.08 x 10^-51
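A minimal sketch of the root trick (my own code, with a fixed working precision of 50 digits for illustration):

```python
from decimal import Decimal, getcontext

def e_via_root(m: int, a: int) -> Decimal:
    """Approximate e as (sum_{n=0}^{m} 1/(n! * a^n)) ** a."""
    getcontext().prec = 50
    total = Decimal(0)
    term = Decimal(1)                # 1/(0! * a^0)
    for n in range(m + 1):
        total += term
        term /= a * (n + 1)          # next term of the series for exp(1/a)
    return total ** a
```

With $m=10$ and $a=100$ this reproduces the $6.74 \times 10^{-28}$ error shown in the table.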
$\varphi$
The golden ratio is $$\varphi=\frac{\sqrt{5}+1}{2}$$ so once $\sqrt{5}$ is computed to a sufficient accuracy, so is $\varphi$. To estimate $\sqrt{5}$, many methods can be used, perhaps most simply the Babylonian method. Newton's root-finding method may also be used to find $\varphi$ directly, because it and $-1/\varphi$ (the negative of its reciprocal) are the two roots of $$0=x^2-x-1$$
If $\xi$ is a root of $f(x)$, Newton's method finds $\xi$:
$$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$ $$\xi=\lim_{n \to \infty}x_n$$
We thus assign $f(x)=x^2-x-1$ and $f'(x)=2x-1$. Then $$x_{n+1}=x_n-\frac{x_n^2-x_n-1}{2x_n-1}=\frac{x_n^2+1}{2x_n-1}$$
If $x_0=1$, the first few iterations yield:
n    Value of x_n    Approx. error (phi - x_n)
1    2               0.381966011
2    1.666666666     0.048632677
3    1.619047619     0.001013630
4    1.618034447     4.59 x 10^-7
...
7    1.618033988     7.05 x 10^-54
The quadratic convergence of this method is very clear in this example.
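For completeness, here is a sketch of that iteration in Python (my own code, using the standard `decimal` module):

```python
from decimal import Decimal, getcontext

def golden_ratio(digits: int) -> Decimal:
    """Newton's method x <- (x^2 + 1)/(2x - 1) for the root of x^2 - x - 1."""
    getcontext().prec = digits + 5
    eps = Decimal(10) ** -(digits + 2)
    x, prev = Decimal(1), Decimal(0)
    while abs(x - prev) > eps:
        prev, x = x, (x * x + 1) / (2 * x - 1)
    getcontext().prec = digits
    return +x                        # round to the requested precision
```

Because the convergence is quadratic, the loop body runs only $O(\log(\text{digits}))$ times.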
$\gamma$
Unfortunately, no quadratically convergent methods are known to compute $\gamma$.
As mentioned above, some methods are discussed here: What is the fastest/most efficient algorithm for estimating Euler's Constant $\gamma$?
The algorithm from here is
$$ \gamma = 1-\log k - \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1}\, k^{r+1}}{(r-1)!\,(r+1)} + \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1}\, k^{r+1} }{(r-1)!\, (r+1)^2}+\mbox{O}(2^{-k}) $$
and this method gives the following approximation:
k    Approx. sum           Approx. error (gamma - sum)
1    0.7965995992978246    0.21938393439629178
5    0.5892082678451087    0.011992602943575847
10   0.5773243590712589    1.086941697260313 x 10^-4
15   0.5772165124955206    8.47593987773898 x 10^-7
This answer has even faster convergence.
Some other methods are also reviewed here: http://www.ams.org/journals/mcom/1980-34-149/S0025-5718-1980-0551307-4/S0025-5718-1980-0551307-4.pdf
$\zeta(3)$
A method for estimating $\zeta(3)$ is the Amdeberhan–Zeilberger formula ($O(n \log^3 n)$):
$$\zeta(3)=\frac{1}{64}\sum_{k=0}^{\infty}\frac{(-1)^k(205k^2+250k+77)(k!)^{10}}{((2k+1)!)^5}$$
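The series gains roughly three digits per term (the term ratio is about $2^{-10}$), so a direct summation is practical. A sketch with the standard `decimal` module (function name is my own):

```python
from decimal import Decimal, getcontext
from math import factorial

def zeta3_az(terms: int) -> Decimal:
    """Partial sum of the Amdeberhan-Zeilberger series for zeta(3)."""
    getcontext().prec = 40
    total = Decimal(0)
    for k in range(terms):
        num = (-1) ** k * (205 * k * k + 250 * k + 77) * factorial(k) ** 10
        den = factorial(2 * k + 1) ** 5
        total += Decimal(num) / Decimal(den)
    return total / 64
```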
$G$ (Catalan's constant)
Fee, in his article, presents a method for computing Catalan's constant based on a formula of Ramanujan:
$$G=\sum_{k=0}^\infty \frac{2^{k-1}}{(2k+1)\binom{2k}{k}}\sum_{j=0}^k \frac1{2j+1}$$
Another rapidly converging series from Ramanujan has also been used for computing Catalan's constant:
$$G=\frac{\pi}{8}\log(2+\sqrt 3)+\frac38\sum_{n=0}^\infty \frac{(n!)^2}{(2n)!(2n+1)^2}$$
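The central binomial coefficient in the denominator grows like $4^n$, so even a plain floating-point summation of the second series converges to machine precision quickly. A sketch of my own (float precision only):

```python
import math

def catalan(terms: int = 30) -> float:
    """G via the second Ramanujan series: pi/8 * log(2+sqrt(3)) + (3/8)*sum."""
    s = sum(math.factorial(n) ** 2 / (math.factorial(2 * n) * (2 * n + 1) ** 2)
            for n in range(terms))
    return math.pi / 8 * math.log(2 + math.sqrt(3)) + 3 * s / 8
```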
$\log 2$
The Taylor series for $\log$ has disappointingly poor convergence, so alternative methods are needed to compute $\log 2$ efficiently. Common ways to compute $\log 2$ include "Machin-like formulae" using the $\operatorname{arcoth}$ function, analogous to the $\arctan$ formulae for $\pi$ mentioned above:
$$\log 2=144\operatorname {arcoth}(251)+54\operatorname {arcoth}(449)-38\operatorname {arcoth}(4801)+62\operatorname {arcoth}(8749)$$
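Since $\operatorname{arcoth} x = \sum_{n\ge 0} \frac{1}{(2n+1)x^{2n+1}}$ for $|x|>1$, this is as easy to evaluate as the $\arctan$ formulae. A sketch of my own with the `decimal` module:

```python
from decimal import Decimal, getcontext

def arcoth(x: int) -> Decimal:
    """arcoth(x) = sum_{n>=0} 1/((2n+1) * x^(2n+1)) for |x| > 1."""
    eps = Decimal(10) ** -getcontext().prec
    power = Decimal(1) / x
    total = power
    x2 = x * x
    n = 1
    while power > eps:
        power /= x2
        total += power / (2 * n + 1)
        n += 1
    return total

def log2(digits: int = 30) -> Decimal:
    """Machin-like arcoth formula for log 2, with guard digits."""
    getcontext().prec = digits + 5
    v = (144 * arcoth(251) + 54 * arcoth(449)
         - 38 * arcoth(4801) + 62 * arcoth(8749))
    getcontext().prec = digits
    return +v
```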
$A$ (Glaisher–Kinkelin constant)
One usual method for computing the Glaisher–Kinkelin constant rests on the identity
$$A=\exp\left(\frac1{12}(\gamma+\log(2\pi))-\frac{\zeta'(2)}{2\pi ^2}\right)$$
where $\zeta'(s)$ is the derivative of the Riemann zeta function. Now,
$$\zeta'(2)=2\sum_{k=1}^\infty \frac{(-1)^k \log(2k)}{k^2}$$
and any number of convergence acceleration methods can be applied to sum this alternating series. Two of the more popular choices are the Euler transformation, and the CRVZ algorithm.
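As a stand-in for the Euler transformation (not the CRVZ algorithm), repeatedly averaging adjacent partial sums of an alternating series already accelerates it dramatically. A float-precision sketch of my own:

```python
import math

def zeta_prime_2(terms: int = 40, folds: int = 20) -> float:
    """zeta'(2) = 2 * sum_{k>=1} (-1)^k log(2k)/k^2, accelerated by
    repeatedly averaging adjacent partial sums of the alternating series."""
    partial, s = [], 0.0
    for k in range(1, terms + 1):
        s += (-1) ** k * math.log(2 * k) / k ** 2
        partial.append(s)
    for _ in range(folds):          # each pass roughly halves the error factor
        partial = [(a + b) / 2 for a, b in zip(partial, partial[1:])]
    return 2 * partial[-1]
```

Summed naively, 40 terms of this series would give only a couple of correct digits; with the averaging passes the result is accurate to near machine precision.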
Another interesting website that has many fast algorithms for common constants is here.

"These methods produce 6–8 digits and 14 digits respectively." That would be *per term*, no? – Gerry Myerson May 21 '12 at 00:35



In lieu of writing a different answer, I decided to add to this CW answer. I hope you don't mind. – J. M. ain't a mathematician Jul 07 '12 at 03:52

Different irrationals yield to different techniques. $\phi=(1+\sqrt5)/2$ just involves calculating $\sqrt5$, which can be done easily by Newton's method from introductory calculus. The infinite series $$e=1+1+1/2+1/6+1/24+\cdots$$ where the denominators are the factorials, can be used to calculate $e$. For pi, this article on the Gauss–Legendre algorithm will give you some ideas.
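Newton's method applied to $x^2-5$ is just the Babylonian (Heron) iteration. A sketch of my own, using exact rational arithmetic:

```python
from fractions import Fraction

def sqrt5_babylonian(iterations: int) -> Fraction:
    """Babylonian/Heron iteration x <- (x + 5/x)/2, i.e. Newton on x^2 - 5."""
    x = Fraction(2)                  # any positive starting guess works
    for _ in range(iterations):
        x = (x + 5 / x) / 2
    return x
```

The error roughly squares each iteration, so half a dozen steps already exceed double precision.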

Perhaps it looks better if you name the link instead of showing the link address directly in your answer. Like, "For pi, [this Wikipedia article](http://en.wikipedia.org/wiki/Gauss%E2%80%93Legendre_algorithm) [...]". – Gigili May 20 '12 at 09:48
Gerry Myerson's answer above is correct in saying that different irrational numbers lead to different techniques. In essence, though, all those techniques boil down to one idea: find some sort of method (formula, infinite series, algorithm, etc.) that, when used, yields a decimal expansion converging to the value of the irrational (or rational, for that matter!) number. Naturally, certain techniques are more useful in certain circumstances (e.g., in computing, techniques that converge very quickly but also require as few processor instructions as possible are preferred).
As an aside, my personal favorite formula for $\pi$ was given by Ramanujan:
$$ \frac{1}{\pi} = \frac{\sqrt{8}}{9801} \sum_{n=0}^{\infty}\frac{(4n)!}{(n!)^4}\frac{1103+26390n}{396^{4n}} $$
This formula converges really really quickly. The MathWorld article notes that it provides, on average, 6 to 8 decimal places per term.
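As an illustration (my own sketch, not part of the answer), inverting the partial sums of Ramanujan's series with the standard `decimal` module shows the roughly 8 digits gained per term:

```python
from decimal import Decimal, getcontext
from math import factorial

def ramanujan_pi(terms: int) -> Decimal:
    """Invert Ramanujan's 1/pi series; roughly 8 digits per term."""
    getcontext().prec = 8 * terms + 10
    s = Decimal(0)
    for n in range(terms):
        num = factorial(4 * n) * (1103 + 26390 * n)
        den = factorial(n) ** 4 * Decimal(396) ** (4 * n)
        s += num / den
    return 1 / (Decimal(8).sqrt() / 9801 * s)
```

Even a single term ($n=0$) already gives $\pi$ correct to six decimal places.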
An example not yet given.
$\zeta(3)=1.20205690315959428539973816151144999076498629234049...$ (here)
The number $$\zeta (3)=\sum_{n=1}^\infty \frac{1}{n^3} \tag{1}$$ is called Apéry's constant, because its irrationality was first proved by Roger Apéry. The following series, which converges to $\zeta (3)$ faster than $(1)$, can be used to compute it
$$\zeta (3)=\frac{5}{2}\sum_{n=1}^{\infty }\frac{\left( 1\right) ^{n1}}{n^{3}\binom{2n}{n}}.\tag{2}$$
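Series $(2)$ is straightforward to sum exactly, since the central binomial coefficient makes the terms shrink like $4^{-n}$. A sketch of my own with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def zeta3_fast(terms: int) -> Fraction:
    """Partial sum of (5/2) * sum_{n>=1} (-1)^(n-1) / (n^3 * C(2n, n))."""
    s = sum(Fraction((-1) ** (n - 1), n ** 3 * comb(2 * n, n))
            for n in range(1, terms + 1))
    return Fraction(5, 2) * s
```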
For the same purpose we can use the continued fraction expansion for $\zeta (3)$, which is
$$\zeta \left( 3\right) =\dfrac{6}{5-\dfrac{1}{117-\dfrac{64}{535-\cdots-\dfrac{n^{6}}{34n^{3}+51n^{2}+27n+5}}}}.\tag{3}$$
Another possibility is to use the following limit $$\begin{equation*} \zeta (3)=\lim_{n\rightarrow \infty }\frac{a_{n}}{b_{n}}, \end{equation*}\tag{4}$$ where
$$\begin{equation*} a_{n}=\sum_{k=0}^{n}\binom{n}{k}^{2}\binom{n+k}{k}^{2}c_{n,k}, \end{equation*}\tag{5}$$
$$\begin{equation*} b_{n}=\sum_{k=0}^{n}\binom{n}{k}^{2}\binom{n+k}{k}^{2}, \end{equation*}\tag{6}$$
and
$$\begin{equation*} c_{n,k}=\sum_{m=1}^{n}\frac{1}{m^{3}}+\sum_{m=1}^{k}\frac{\left( 1\right) ^{m1}}{2m^{3}\binom{n}{m}\binom{n+m}{m}}\quad k\leq n. \end{equation*}\tag{7}$$
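Definitions $(4)$–$(7)$ translate directly into code. A sketch of my own using exact rational arithmetic (for $n=1$ it returns $6/5$, the first convergent of the continued fraction above):

```python
from fractions import Fraction
from math import comb

def apery_convergent(n: int) -> Fraction:
    """a_n / b_n from (4)-(7), which converges rapidly to zeta(3)."""
    def c(n, k):
        s = sum(Fraction(1, m ** 3) for m in range(1, n + 1))
        s += sum(Fraction((-1) ** (m - 1),
                          2 * m ** 3 * comb(n, m) * comb(n + m, m))
                 for m in range(1, k + 1))
        return s
    w = [comb(n, k) ** 2 * comb(n + k, k) ** 2 for k in range(n + 1)]
    a = sum(wk * c(n, k) for k, wk in enumerate(w))
    b = sum(w)
    return a / b
```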

References
Apéry, Roger (1979), Irrationalité de $\zeta(2)$ et $\zeta(3)$, Astérisque 61: 11–13
Alfred van der Poorten (1979), A proof that Euler missed..., The Mathematical Intelligencer 1 (4): 195–203
For $\pi$ there is a nice formula given by John Machin: $$ \frac{\pi}{4} = 4\arctan\frac{1}{5} - \arctan\frac{1}{239}\,. $$
The power series for $\arctan \alpha$ is given by $$\arctan\alpha = \frac{\alpha}{1} - \frac{\alpha^3}{3}+\frac{\alpha^5}{5} - \frac{\alpha^7}{7} + \ldots\,. $$
Also you could use (generalized) continued fractions:
$$ \pi = \dfrac{4}{1+\cfrac{1^2}{3+\cfrac{2^2}{5+\cfrac{3^2}{7+\cdots}}}} $$
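This continued fraction is easy to evaluate from the bottom up, and its convergents gain roughly a digit per level. A sketch of my own (the truncation depth is a parameter I introduce for illustration):

```python
def pi_continued_fraction(depth: int) -> float:
    """Evaluate 4 / (1 + 1^2/(3 + 2^2/(5 + ...))) truncated at `depth` levels."""
    tail = float(2 * depth + 1)           # innermost partial denominator
    for k in range(depth, 0, -1):
        tail = (2 * k - 1) + k * k / tail
    return 4 / tail
```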
There are many other methods to compute $\pi$, including algorithms able to find any digit of $\pi$'s hexadecimal expansion independently of the others. As I remember, Wikipedia has a lot of material on methods to compute $\pi$. Moreover, as $\pi$ is a number intrinsic to mathematics, it shows up in many unexpected places, e.g. in a card game called Mafia; for details see this paper.
As for $e$, there are also power series and continued fractions, but there exist more sophisticated algorithms that can compute $e$ much faster. And for $\phi$, there is a simple recurrence relation based on Newton's method, e.g. $\phi_{n+1} = \frac{\phi_n^2+1}{2\phi_n-1}$. It is worth mentioning that the continued fraction for the golden ratio contains only ones, i.e. $[1;1,1,1,\ldots]$, and the successive approximations are ratios of consecutive Fibonacci numbers $\frac{F_{n+1}}{F_n}$.
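The Fibonacci-ratio approximation is a one-liner to check (my own sketch):

```python
def fib_ratio(n: int) -> float:
    """Ratio of consecutive Fibonacci numbers, approaching the golden ratio."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a
```

Unlike Newton's method, these convergents gain only a fixed number of digits per step (the error shrinks by a factor of about $\varphi^2 \approx 2.6$ each time), which is why the continued fraction of all ones is the slowest-converging one there is.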
To conclude, most of the methods shown here take one of two forms: computing better and better ratios (with each fraction calculated exactly), or working with approximations the whole time through a process that eventually converges to the desired number. In practice this distinction is not sharp, but the tools used in the two approaches are usually different. Useful tools: power series, continued fractions, and root-finding.