If we have some approximation $x$ for $\pi$, it is possible to improve that approximation by calculating $\sin(x) + x$ if $x$ is sufficiently close to $\pi$. The reason why this works is that for $x \approx \pi$, $\sin(x) \approx \pi - x$ (note that $\sin'(\pi) = -1$), so $x + \sin(x) \approx x + \pi - x = \pi$.

I am interested in the number of good digits when approximating $\pi$ by iteratively applying this technique, starting with the number $3$. In other words, I am interested in the following sequences:

$$ a_0=3; a_{n+1}=\sin(a_n)+a_n\\ b_n=\text{The number of digits of accuracy of }a_n $$

The first few elements of $b$ are $\{0, 3, 10, 32, 99, 300, 902, 2702\}$. I did not find this sequence in OEIS. Interestingly, the number of correct digits seems to almost triple with every step.

Why does this method of approximating $\pi$ triple the number of accurate digits? If this approximation or sequence has been studied before, any pointers are welcome as well.
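In ordinary double precision the first few counts $b_n$ are easy to reproduce. Here is a minimal Python sketch (the helper `correct_digits`, defined as $\lfloor-\log_{10}|x-\pi|\rfloor$, is my own convention, not part of the question):

```python
import math

def correct_digits(x):
    """Decimal digits of accuracy of x as an approximation of pi."""
    return math.floor(-math.log10(abs(x - math.pi)))

a = 3.0
digits = []
for n in range(3):
    digits.append(correct_digits(a))
    a = math.sin(a) + a  # the iteration a_{n+1} = sin(a_n) + a_n

print(digits)  # [0, 3, 10] -- matches b_0, b_1, b_2
```

Later iterates cannot be checked this way, since $b_3$ already calls for about $32$ digits, beyond what IEEE doubles hold.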

    Just a side-comment (good answers are below): this of course works if you have a usable black-box that computes the sine function. Of course, it's usually the other way around: you need approximate numerical methods to compute trig functions to begin with. This is why most well known formulas for $\pi$ don't use trigonometric functions, only elementary operations (which can be computed without iteration). Otherwise, you could just say $\pi = 2 \arcsin 1$ and be done with it ;) – orion Sep 19 '16 at 11:29
  • @orion I agree (in fact I put """ around the word approximation in my first phrasing of the question because of this). The original motivation of this question was that when you have a computer, your processor might have a native instruction for computing a sine (e.g. fsin on Intel processors) that is as accurate as your number format (e.g. IEEE 754) supports. –  Sep 19 '16 at 12:24
    @MarkusHimmel On the other hand, up to IEEE double precision you can just hardcode $\pi$. I think the more interesting question is whether you can use this method to make an arbitrary precision method for computing $\pi$...for which you can't depend on a native double precision method like fsin. I think you'll find this method is not very good for this purpose, because computing $\sin(x)$ to a precision of $\epsilon$ of numbers when $x$ is near $\pi$ takes a significant number of terms--getting within $\epsilon$ requires something like $k/2$ Taylor terms where $\frac{(\pi e)^k}{k^k}<\epsilon$. – Ian Sep 19 '16 at 12:41
  • @Ian Interestingly enough, this method even semi-applies when using IEEE double precision: doubles have 15–17 decimal digits of floating point precision. If I now take the fsin of a hard-coded value of $\pi$ (with 15–17 digits of precision), I get the error of that hardcoded value of $\pi$ – again with 15–17 digits of precision. The sum of these two values cannot be accurately expressed using double precision, but if I were to add the two values by hand, I'd get $\pi$ with ca. 30 significant decimal digits, despite both fsin and the hardcoded value of $\pi$ being only exact to 15–17 digits. –  Sep 19 '16 at 13:00
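This effect can be checked directly: in Python, `Decimal(math.pi)` is exactly the double nearest $\pi$, and `math.sin(math.pi)` is (up to rounding in the platform's libm sine) the residual $\pi - {}$`math.pi`, so adding the two exactly recovers roughly twice as many digits. A sketch using only the standard library; the 50-digit $\pi$ literal is just a reference value:

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 60
# pi to 50 decimal places, used only as a reference value
PI = Decimal("3.14159265358979323846264338327950288419716939937510")

a = Decimal(math.pi)            # exactly the IEEE double nearest to pi
r = Decimal(math.sin(math.pi))  # sine of that double: the ~16-digit residual

print(abs(a - PI))      # error of the double alone: ~1.2e-16
print(abs(a + r - PI))  # error of the exact sum: ~1e-32
```

The second error is at the rounding level of the residual itself, which is the "ca. 30 significant decimal digits" observed in the comment (this is essentially the double-double, or compensated, representation of $\pi$).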

7 Answers


The Taylor Series for $\sin(x)$ for $x$ near $\pi$ says $$ \sin(x)=\sin(\pi-x)=(\pi-x)-\frac{(\pi-x)^3}6+O\!\left((\pi-x)^5\right) $$ Thus $$ x+\sin(x)-\pi=\frac{(x-\pi)^3}6+O\!\left((\pi-x)^5\right) $$ That is, $$ x_{n+1}-\pi\sim\frac{(x_n-\pi)^3}6 $$ which means the number of correct digits more than triples with each iteration ($d_n=3d_{n-1}+\log_{10}6\approx3d_{n-1}+0.778$).
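One can verify this digit recurrence past double precision with arbitrary-precision decimal arithmetic. The sketch below rolls its own Taylor-series sine in Python's `decimal` module; `dsin`, `common_digits`, and the 120-digit $\pi$ literal are my own scaffolding, not from the answer:

```python
from decimal import Decimal, getcontext

getcontext().prec = 130  # work with ~130 significant decimal digits

# pi to 120 decimal places, used only as a reference value
PI = Decimal(
    "3."
    "141592653589793238462643383279"
    "502884197169399375105820974944"
    "592307816406286208998628034825"
    "342117067982148086513282306647"
)

def dsin(x):
    """sin(x) summed directly from its Taylor series (-1)^k x^(2k+1)/(2k+1)!."""
    term, total, k = x, x, 0
    while abs(term) > Decimal(10) ** -125:
        k += 1
        term *= -x * x / ((2 * k) * (2 * k + 1))
        total += term
    return total

def common_digits(x):
    """Count decimal digits of x (beyond the leading 3) that agree with PI."""
    n = 0
    for cx, cp in zip(str(x), str(PI)):
        if cx != cp:
            break
        if cx.isdigit():
            n += 1
    return n - 1  # do not count the integer digit "3"

a, b = Decimal(3), []
for n in range(5):
    b.append(common_digits(a))
    a += dsin(a)

print(b)  # [0, 3, 10, 32, 99] -- the counts from the question
```

Counting *matching decimal digits* (as the question does) can come out one below $\lfloor-\log_{10}|a_n-\pi|\rfloor$ when the error straddles a digit boundary, which is why $b_3=32$ rather than $33$.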


The Taylor expansion at $x=\pi$ is $$\sin(x)= \pi-x + \frac{1}{6}(x-\pi)^3+ O((x-\pi)^5)$$ $$\sin(x) +x = \pi + \frac{1}{6}(x-\pi)^3+ O((x-\pi)^5)$$ Therefore $$a_{n+1}-\pi = \sin(a_n)+a_n-\pi = \frac{1}{6}(a_n-\pi)^3+ O((a_n-\pi)^5)$$

This means that the number of correct digits triples with each step, once convergence has set in.


Note that $$|a_{n+1}-\pi|=|a_{n}+\sin(a_n)-\pi|=|(a_{n}-\pi)-\sin(a_n-\pi)|\leq C\cdot\frac{|a_{n}-\pi|^3}{6}$$ because $\sin(a_n)=\sin\bigl(\pi+(a_n-\pi)\bigr)=-\sin(a_n-\pi)$ and $\sin(t)=t-\frac{t^3}{6}+O(t^5)$ as $t\to 0$. So the order of convergence is $3$, and the decimal expansion of $a_{n+1}-\pi$ should have about three times as many leading zeros as that of $a_n-\pi$.

Robert Z

Denote $c_n = a_n - \pi$; then the question is how fast this sequence approaches zero. Now we have

$$\begin{align}c_{n+1} &= \sin(\pi + c_n) + c_n\\ &= c_n - \sin(c_n)\\ &= \frac{c_n^3}{6} +o(c_n^3)\end{align}$$

So the decimal expansion of $c_{n+1}$ should have roughly three times more zeros than $c_n$, which explains the tripling of accurate digits.

Joel Cohen

$\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$

$\ds{\mbox{Newton-Raphson:}\ \left\{\begin{array}{rcl} \ds{x_{0}} & \ds{=} & \ds{\color{#f00}{3}} \\ \ds{x_{n}} & \ds{=} & \ds{x_{n - 1} - {\sin\pars{x_{n - 1}} \over \cos\pars{x_{n - 1}}} = x_{n - 1} - \tan\pars{x_{n - 1}}\,,\qquad n \geq 1} \end{array}\right.}$


Clear[n, x];
Module[{n = 0, x = N[3, 50]},
  While[n++ < 20, x -= N[Tan[x], 50]];
  N[x, 50]]


All the digits are 'correct'.

Felix Marin

Your method is an improvement over Newton's method, which would be to look at the sequence $u_n$ defined by $u_0=3$ and $$u_{n+1}=u_n-\dfrac{\sin(u_n)}{\sin'(u_n)}=u_n-\tan(u_n)$$ You're using some a priori knowledge about the root, hence getting quicker convergence. Newton's method is known for doubling the number of accurate digits in each iteration.

Olivier Moschetta
  • While Newton's method *generally* doubles the precision with each step, in this particular case, it actually triples it. The Taylor series of $x-\tan(x)$ near $\pi$ is $\pi -\frac{1}{3} (x-\pi )^3+O((x-\pi)^5)$. Thus Newton's method is actually quite comparable. – Mark McClure Sep 20 '16 at 14:42
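A quick double-precision check of this comment (a sketch of my own, using only Python's standard library): starting from $x_0=3$, the Newton errors do shrink cubically, hitting the limits of IEEE doubles after three steps.

```python
import math

# Newton iteration x -> x - tan(x) for the root of sin at pi, from x0 = 3
x, errs = 3.0, []
for n in range(4):
    errs.append(abs(x - math.pi))
    x -= math.tan(x)

print(errs)  # ~[1.4e-1, 9.5e-4, 2.9e-10, ...]: exponents triple each step
```

The error exponents go roughly $-1 \to -3 \to -10$, the same cubic pattern as the $x+\sin(x)$ iteration, consistent with the $-\frac{1}{3}(x-\pi)^3$ leading term noted above.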

This iterative approach for $\pi$ was considered by Daniel Shanks in a 1-page note: "Improving an approximation for pi." Amer. Math. Monthly 99 (1992), no. 3, 263. He does not spell out the cubic convergence.