Sorry if this is a really naive question, but in my reading of a lot of textbooks and articles, there is a lot of mention of how many decimals we know of a certain number today, such as $\pi$ or $e$. An excerpt from my textbook:

In 1748, Leonhard Euler used the sum of the infinite series for $e$ (mentioned in the book in a section about Taylor series) to find the value of $e$ to 23 digits. In 2003, Shigeru Kondo, again using the series, computed $e$ to 50 billion decimal places

My question is why does it matter how many decimals we know? Isn't this just a huge waste of time? What could we ever do with so many decimal places? And, if $e$ can be represented as a sum of infinite series of $1/n!$, can't we just plug that into a computer that just loops the same equation but increasing $n$ every iteration, and find as many decimals of $e$ as we like?

(Once again, I realize this may be an ignorant/naive question, but I've always been curious about this)

    It becomes even more incredible when you realize that you only need 55 digits of pi to draw a circle with the radius of the universe to the accuracy of the radius of a hydrogen atom. – crasic Dec 01 '10 at 03:21
    Woah, where's this fact from? – Snowman Dec 01 '10 at 03:28
    @fprime: presumably an estimate of the "radius of the universe" in some model and what's known as the "Bohr radius" which is roughly $10^{-11}$ meters. Apparently in this model the "radius of the universe" is no more than $10^{44}$ meters. – Ryan Budney Dec 01 '10 at 04:04
    @Ryan Budney: @fprime: A generous estimate, as Wikipedia (Observable universe) comes up with $14\times 10^9$ parsecs $= 4.3\times 10^{26}$ meters – Ross Millikan Dec 01 '10 at 04:13
    Nice answer, but why do you say that we are not good at parallel computing? Most of Google's services are proof of excellent parallelization. – Neil G Dec 01 '10 at 07:42
  • @Neil: You probably intended that as a comment to Arturo's answer. In any event, parallel numerical computing is a relative infant; a lot of the nice numerical methods are serial, which is one problem. Another is that properly parallelizing to maintain *stability* remains difficult for some methods. I'd go into more detail, but there are people smarter than myself who have written books on this. – J. M. ain't a mathematician Dec 01 '10 at 09:25
  • @J.M. Good point, but calculation of the digits of pi and e is easily parallelizable since the calculation of later digits doesn't depend on earlier ones. – Neil G Dec 01 '10 at 09:51
  • As I said @Neil, just because it's parallelizable doesn't necessarily mean it should be parallelized; though the separate instances may be able to do the calculation stably, the problems arise in the consolidation phase. – J. M. ain't a mathematician Dec 01 '10 at 10:06
  • @J.M. I really don't see that. It's a standard map-reduce with key=decimal positions, value=decimal values. The map-reduce framework does the consolidation automatically. – Neil G Dec 01 '10 at 10:19
  • IIRC @Neil, the fastest formulae actually do all the manipulations in hex, and the supposedly cosmetic portion of transforming to decimal actually takes a nontrivial amount of time. Unfortunately, I'm going merely from memory since I'm far from my books. Maybe there's something I neglected to mention. – J. M. ain't a mathematician Dec 01 '10 at 10:40
  • @Neil G: It looks like you meant it to me. What you are talking about is more properly known as 'vectorizing', from what I understand: you break up the problem into discrete parts, and then you solve each part serially with different processors. It is not known if this is truly the best method to perform parallel computations or not; people have a much easier time thinking in terms of *serial* algorithms than of *parallel* ones (that's what I was referring to). And there is also the issue of trying to automate both the vectorizing and the parallelization, which is also at a very infant stage. – Arturo Magidin Dec 01 '10 at 16:59
  • @Arturo, My point is that the parallelization and vectorizing problem is essentially solved by the mapreduce algorithm. All one needs is a program that calculates some digits, whether in hex or in decimal. – Neil G Dec 01 '10 at 18:17
  • @Neil G: if your goal is to compute the digits; but one can use the computation of the digits to test algorithms that try to take an arbitrary computation and attempt to vectorize/parallelize it for optimized computation. Since the computation of the digits is a nice benchmark, you can see how your automated procedure behaves relative to that benchmark. Likewise for specific methods for parallelizing/vectorizing the computation: you try them to compare them against the benchmark, not necessarily because you are very keen on getting the digits. Or maybe I'm still missing your point... – Arturo Magidin Dec 01 '10 at 18:21
  • @Arturo, I think I understand, thanks. – Neil G Dec 01 '10 at 18:45

3 Answers


There are a couple of reasons why people compute $\pi$ and $e$ to so many digits.

One is simply that it is a way to test hardware. The values are known (or can be known), and you want to see if your hardware is fast and accurate with these computations.

Another is that there are actually a lot of questions about the decimal expansions of $\pi$ and $e$ (and other numbers) for which we simply don't know the answer yet. For example: is $\pi$ normal in base 10? That is, will a particular sequence of digits occur in its decimal expansion with about the frequency you expect? More precisely, given a specific sequence of $n$ digits, will $$\frac{\text{number of times that the specific sequence occurs in the first $m$ digits of $\pi$}}{m}$$ approach $1/10^n$ as $m\to\infty$ (there are $10^n$ different sequences of $n$ digits, so this is what you would expect if the digits were completely random)? While the answer cannot be settled simply by computing digits of $\pi$, knowing more and more digits helps us see whether it at least seems plausible. We can also perform other tests of randomness to see whether the digits of $\pi$ (or $e$) seem to pass them.
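As a toy illustration of this kind of frequency test, one can tally single-digit frequencies in a known prefix of $\pi$ (here the first 50 decimal digits, hard-coded; a serious normality test would of course use billions of digits, and would also look at longer blocks):

```python
from collections import Counter

# First 50 decimal digits of pi (a known, hard-coded prefix).
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

counts = Counter(PI_DIGITS)

# If pi is normal in base 10, each digit should appear with
# frequency approaching 1/10 as the number of digits grows.
for d in "0123456789":
    print(d, counts[d] / len(PI_DIGITS))
```

With only 50 digits the frequencies fluctuate wildly, which is exactly why having ever-longer expansions matters for this kind of empirical check.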

As to your question about $e$: yes, in principle we can do that. But the process gets more and more expensive as $n$ gets larger. Normally, a computer only knows how to represent numbers up to a certain size (depending on the number of bytes it uses to represent them), so you need to "explain" to your computer how to handle big numbers like $n!$ for large $n$. The storage space needed to perform the computations also grows and grows. And the idea only works if you start from a known point. So while theoretically we could compute $e$ to as many digits as we want, in practice, if we want the computations to finish sometime before the Sun runs out of hydrogen, we can't really go that far. There are other known algorithms for computing decimals of $\pi$ or $e$ that are faster, or that don't require you to know the previous digits to figure out the next one.
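As a rough sketch of the "just loop the series" idea, here is the sum $e=\sum 1/n!$ carried out with Python's arbitrary-precision `decimal` module (a toy, not how record computations are actually done; those use techniques like binary splitting):

```python
from decimal import Decimal, getcontext

def e_digits(prec):
    """Approximate e = sum(1/n!) to `prec` significant digits."""
    getcontext().prec = prec + 10          # work with guard digits
    total = Decimal(0)
    term = Decimal(1)                      # 1/0! = 1
    n = 0
    limit = Decimal(10) ** (-(prec + 5))   # stop once terms are negligible
    while term > limit:
        total += term
        n += 1
        term /= n                          # 1/n! = (1/(n-1)!) / n
    getcontext().prec = prec
    return +total                          # unary + rounds to `prec` digits

print(e_digits(50))
```

Even in this naive form you can see the costs the paragraph mentions: every iteration manipulates numbers with hundreds or thousands of digits once `prec` gets large, so both time and storage per step keep growing.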

And that leads to yet another thing that people who are computing so many digits of $\pi$ and $e$ may be doing: testing algorithms for large number computations, for floating point accuracy, or for parallel computing (we are not very good at parallel computing, and trying to figure out how to do it effectively is a Very Big Deal; coming up with ways to do computations such as "compute $\pi$ to the $n$th millionth digit" are ways to test ideas for doing parallel computing).

That leads to a third interest in the computations: coming up with mathematical ideas that can zero in on desired digits of the decimal expansion quickly; not because we are particularly interested in the digits of $\pi$, necessarily, but because we often are interested in particular digits of other numbers. $\pi$ and $e$ are just very convenient benchmarks to test things with.

Why did Euler compute the numbers? Because he wanted to show off some of the uses of Taylor series; up to that time, the methods for approximating the value of $\pi$ were much more onerous. Ludolph van Ceulen famously used the method of Archimedes (inscribing and circumscribing polygons about circles) to find the value of $\pi$ to 20 decimals (and, after many more years of effort, to 35 places); this was around the turn of the 17th century, before it was known that $\pi$ was transcendental (indeed, before the notion of transcendence was even very clear), so trying to compute $\pi$ exactly actually had a point. He was so proud of the accomplishment (given all the hard work it had entailed) that he had the value of $\pi$ he computed engraved on his tombstone. Euler showed that Taylor series could be used to obtain the same results with far less effort and with more precision.
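For a sense of how slowly the polygon method converges, here is a sketch of Archimedes' side-doubling recurrence for a regular polygon inscribed in a unit circle, written in a numerically stable form (algebraically equal to the textbook $s_{2n}=\sqrt{2-\sqrt{4-s_n^2}}$ but without its catastrophic cancellation); each doubling only roughly quadruples the accuracy:

```python
import math

# Start with an inscribed regular hexagon in the unit circle:
# 6 sides, each of length 1, so pi ~ perimeter / 2 = 3.
sides, s = 6, 1.0
for _ in range(15):
    # Side length of the 2n-gon from the n-gon.
    s = s / math.sqrt(2.0 + math.sqrt(4.0 - s * s))
    sides *= 2
    print(sides, sides * s / 2)  # half the perimeter approximates pi
```

Fifteen doublings (a 196,608-gon) yield only about ten correct digits, which makes van Ceulen's decades of hand computation, and the appeal of Euler's series methods, easy to appreciate.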

Arturo Magidin
    "I am ashamed to tell you to how many figures I carried these computations, having no other business at the time." ― Isaac Newton, after computing 15 digits of π in 1666 – J. M. ain't a mathematician Dec 01 '10 at 01:41
    @J.M. That too; I have a friend who, in High School, used to kill time by writing down a random integer and then trying to factor it into primes. By the time he was in Grad School, he was killing time by writing down a random monic polynomial with integer coefficients, and trying to compute the class number of its splitting field... – Arturo Magidin Dec 01 '10 at 01:53
    Did your friend end up inventing Macaulay2? – Timothy Wagner Dec 01 '10 at 01:56
    @Timothy Wagner: No; I *do* know the guy who is now the lead Sage developer, but this was someone else. – Arturo Magidin Dec 01 '10 at 02:14

I'd just like to give two quotes from the book *The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing* that might help explain motivation. Here is the one from their chapter on computing constants to 10,000 digits:

While such an exercise might seem frivolous, the fact is we learned a lot from the continual refinement of our algorithms to work efficiently at ultrahigh precision. The reward is a deeper understanding of the theory, and often a better algorithm for low-precision cases.

and here is something from the foreword written by David Bailey, one of the pioneers of experimental mathematics:

Some may question why anyone would care about such prodigious precision, when in the “real” physical world, hardly any quantities are known to an accuracy beyond about 12 decimal digits. For instance, a value of π correct to 20 decimal digits would suffice to calculate the circumference of a circle around the sun at the orbit of the earth to within the width of an atom. So why should anyone care about finding any answers to 10,000 digit accuracy?

In fact, recent work in experimental mathematics has provided an important venue where numerical results are needed to very high numerical precision, in some cases to thousands of decimal digit accuracy. In particular, precision of this scale is often required when applying integer relation algorithms to discover new mathematical identities. An integer relation algorithm is an algorithm that, given $n$ real numbers ($x_i,\quad 1\leq i\leq n$), in the form of high-precision floating-point numerical values, produces $n$ integers, not all zero, such that $a_1x_1+a_2x_2+\cdots+a_n x_n=0$.
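(PSLQ itself is intricate, but the search problem Bailey describes can be illustrated with a naive brute-force scan over small coefficient vectors; this is hopelessly slow compared with PSLQ and only reliable at low precision, but it shows what an integer relation *is*. The example vector below is my own, chosen so a relation obviously exists:)

```python
import itertools
import math

def find_relation(xs, bound=3, tol=1e-9):
    """Brute-force search for small integers a, not all zero, with
    a_1*x_1 + ... + a_n*x_n ~ 0. A toy stand-in for PSLQ."""
    for a in itertools.product(range(-bound, bound + 1), repeat=len(xs)):
        if any(a) and abs(sum(c * x for c, x in zip(a, xs))) < tol:
            return a
    return None

# Example: 1*(1) - 1*sqrt(2) + 1*(sqrt(2) - 1) = 0, so a relation exists.
rel = find_relation([1.0, math.sqrt(2), math.sqrt(2) - 1])
print(rel)
```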

The best known example of this sort is the discovery in 1995 of a new formula for π:

$$\pi=\sum_{k=0}^{\infty}\frac{1}{16^k}\left(\frac{4}{8k+1}-\frac{2}{8k+4}-\frac{1}{8k+5}-\frac{1}{8k+6}\right)$$
This formula was found by a computer program implementing the PSLQ integer relation algorithm, using (in this case) a numerical precision of approximately 200 digits. This computation also required, as an input real vector, more than 25 mathematical constants, each computed to 200-digit accuracy. The mathematical significance of this particular formula is that it permits one to directly calculate binary or hexadecimal digits of π beginning at any arbitrary position, using an algorithm that is very simple, requires almost no memory, and does not require multiple-precision arithmetic.
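(The 1995 formula in question is the Bailey–Borwein–Plouffe formula. Since the series gains a factor of 16 per term, its correctness is easy to check numerically in double precision; a quick sketch:)

```python
import math

# Partial sum of the BBP series for pi; each term contributes
# roughly 1.2 more correct decimal digits (a factor of 16).
total = 0.0
for k in range(12):
    total += (1 / 16**k) * (4 / (8*k + 1) - 2 / (8*k + 4)
                            - 1 / (8*k + 5) - 1 / (8*k + 6))

print(total, abs(total - math.pi))
```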

J. M. ain't a mathematician

The answer to your last question is no, for a certain value of "no." The problem with your idea is that as $n$ gets bigger it gets harder to calculate $n!$, so it gets harder to tell what the digits of $\frac{1}{n!}$ are. In other words, if you actually tried to carry out your plan, you would quickly find that it is computationally infeasible.
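To see how quickly the terms become unwieldy, note that $n!$ has roughly $n\log_{10}(n/e)$ decimal digits (by Stirling's formula), which one can confirm directly with exact integer arithmetic:

```python
import math

# Number of decimal digits of n! for a few n, computed exactly,
# next to the Stirling-based estimate
# log10(n!) ~ n*log10(n/e) + 0.5*log10(2*pi*n).
for n in (10, 100, 1000):
    digits = len(str(math.factorial(n)))
    estimate = n * math.log10(n / math.e) + 0.5 * math.log10(2 * math.pi * n)
    print(n, digits, round(estimate))
```

So by $n=1000$ each term $1/n!$ already involves a number thousands of digits long, and the sizes keep growing superlinearly from there.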

So instead one has to resort to a smarter algorithm. Thus being able to compute constants to high precision is a measure of how smart our algorithms are. If your algorithm is smarter than mine, the most concrete way to prove it is to use it to compute, in a reasonable amount of time, more digits than I can. So while it's probably true that nobody is actually going to use these digits for anything (except possibly to verify certain conjectures), the fact that we know them is a measure of our knowledge, both about algorithms and about $e$. (It is also a measure of how good our hardware is, but I'm trying to emphasize the math here.)

Also, here is an announcement from Kondo about a more recent version of this computation (for $\pi$) and here are, among other things, his reasons for doing it.

Qiaochu Yuan