I started learning about the Taylor Series in my calculus class, and although I understand the material well enough, I'm not really sure what actual applications there are for the series.

Question: What are the practical applications of the Taylor Series? Whether it's in a mathematical context, or in real world examples.

    Related: [Motivating Infinite Series](http://math.stackexchange.com/questions/9524/motivating-infinite-series). – Mike Spivey Dec 19 '12 at 17:40

15 Answers


One reason is that we can approximate solutions to differential equations this way. For example, suppose we have an equation such as

$$y''-xy=0.$$

Solving this for $y$ explicitly would be difficult, if possible at all. But by representing $y$ as a Taylor series $\sum a_nx^n$, we can shuffle things around and determine the coefficients of the series, allowing us to approximate the solution around a desired point.
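For instance, here is a minimal Python sketch of the coefficient-matching idea, using the equation $y''=xy$ as a representative example (the helper names are my own):

```python
from math import isclose

def series_coeffs(a0, a1, n_terms):
    """Taylor coefficients a_n of a power-series solution of y'' = x*y.

    Writing y = sum a_n x^n and matching the coefficient of x^n on both
    sides gives (n+2)(n+1) a_{n+2} = a_{n-1} for n >= 1, with a_2 = 0.
    """
    a = [0.0] * n_terms
    a[0], a[1] = a0, a1          # initial conditions y(0), y'(0)
    for n in range(1, n_terms - 2):
        a[n + 2] = a[n - 1] / ((n + 2) * (n + 1))
    return a

def eval_series(a, x):
    return sum(c * x**k for k, c in enumerate(a))

coeffs = series_coeffs(1.0, 0.0, 20)   # solution with y(0)=1, y'(0)=0
y_half = eval_series(coeffs, 0.5)      # approximate y(0.5)
```

A dozen or so terms already pin down the solution to many digits near $x=0$.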

It's also useful for determining various infinite sums. For example:

$$\frac 1 {1-x}=\sum_{n=0}^\infty x^n$$ $$\frac 1 {1+x}=\sum_{n=0}^\infty (-1)^nx^n$$ Integrate: $$\ln(1+x)=\sum_{n=0}^\infty \frac{(-1)^nx^{n+1}}{n+1}$$ Substituting $x=1$ gives

$$\ln 2=1-\frac12+\frac13-\frac14+\frac15-\frac16\cdots$$
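You can watch this convergence happen numerically; a quick Python sketch (the alternating-series error bound guarantees the partial sum is within $1/(n+1)$ of $\ln 2$):

```python
from math import log

def ln2_partial(n):
    """Partial sum of the alternating series 1 - 1/2 + 1/3 - ..."""
    return sum((-1) ** k / (k + 1) for k in range(n))

approx = ln2_partial(100_000)   # converges, though quite slowly
```

The convergence is slow (this series is famously a bad way to *compute* $\ln 2$), but the identity itself is exactly what the Taylor series delivers.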

There are also applications in physics. If a system under a conservative force (one with an energy function associated with it, like gravity or the electrostatic force) is at a stable equilibrium point $x_0$, then there are no net forces and the energy function is concave upwards (the energy being higher on either side is essentially what makes it stable). In terms of Taylor series, the energy function $U$ centred around this point is of the form

$$U(x)=U_0+k_1(x-x_0)^2+k_2(x-x_0)^3+\cdots$$

where $U_0$ is the energy at the minimum $x=x_0$; there is no linear term precisely because $x_0$ is an equilibrium. For small displacements the higher-order terms will be very small and can be ignored, so we can approximate the energy by the first two terms:

$$U(x)\approx U_0+k_1(x-x_0)^2$$

Now force is the negative derivative of energy (forces push you from high to low energy, in proportion to how steeply the energy drops). Applying this, we get

$$F=-\frac{dU}{dx}\approx-2k_1(x-x_0)$$
Rephrasing in terms of the displacement $y=x-x_0$:

$$F=m\frac{d^2y}{dt^2}=-2k_1y$$
This is the equation for a simple harmonic oscillator. Basically, for small displacements around any stable equilibrium the system behaves approximately like an oscillating spring, with sinusoidal behaviour. So under certain conditions you can replace a potentially complicated system by one that is very well understood and well studied. You can see this in a pendulum, for example.

As a final point, they're also useful in determining limits:

$$\lim_{x\to0}\frac{\sin x-x}{x^3}$$ $$\lim_{x\to0}\frac{x-\frac16x^3+\frac 1{120}x^5\cdots-x}{x^3}$$ $$\lim_{x\to0}-\frac16+\frac 1{120}x^2\cdots$$ $$-\frac16$$

which would otherwise be relatively difficult to determine. Because polynomials behave so much more nicely than other functions, we can use Taylor series to extract useful information that would be very difficult, if possible at all, to obtain directly.
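A quick numerical sanity check of that limit, in Python:

```python
from math import sin

# As x -> 0, (sin x - x)/x^3 should approach -1/6,
# the value read off from the Taylor expansion.
x = 1e-3
ratio = (sin(x) - x) / x**3
```

For $x=10^{-3}$ the ratio already agrees with $-\frac16$ to many decimal places, since the next term in the expansion contributes only about $x^2/120$.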

EDIT: I almost forgot to mention the granddaddy:

$$e^x=1+x+\frac12x^2+\frac16x^3+\frac1{24}x^4+\cdots$$ $$e^{ix}=1+ix-\frac12x^2-\frac{i}{6}x^3+\frac1{24}x^4+\cdots$$ $$=\left(1-\frac12x^2+\frac1{24}x^4-\cdots\right)+i\left(x-\frac16x^3+\frac1{120}x^5-\cdots\right)$$ $$e^{ix}=\cos x+i\sin x$$

Which is probably the most important equation in complex analysis. This one alone should be motivation enough, the others are really just icing on the cake.

Robert Mastragostino
  • Granddaddy is used to approximate $\exp$, $\sin$ and $\cos$, right? – krlmlr Oct 22 '12 at 11:37
    @user946850 without the imaginary part, yes. Taylor series can also be used to approximate these functions in computers to pretty high accuracy. $\sin x\approx x-\frac16x^3$ has an error of at most $8\%$. Adding the next term reduces that to less than $0.5\%$ and using this in conjunction with $\sin(\pi/2-x)=\cos x$ and $\cos x\approx1-\frac12 x^2+\frac1{24}x^4$ can lower this even further. There are many other approximations that exist as well (the CORDIC algorithm, Chebyshev approximation, etc.) but this is sometimes used in practice. – Robert Mastragostino Oct 22 '12 at 15:43
    @RobertMastragostino what $x$ range did you use to determine that the error is at most $8\%$? For $x\in\mathbb{R}$ the error is unbounded. – Ruslan Feb 12 '14 at 13:10
    @Ruslan you can use the symmetries and periodicity of $\sin(x)$ to restrict your calculation to $[0,\pi/2]$. It's this range that has the maximum error of $8\%$. – Robert Mastragostino Feb 12 '14 at 14:41
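The error figure discussed in these comments is easy to check numerically; a small Python sketch scanning $[0,\pi/2]$:

```python
import math

# Scan [0, pi/2] for the worst relative error of sin x ~ x - x^3/6.
worst = 0.0
for i in range(1, 10_001):
    x = (math.pi / 2) * i / 10_000
    approx = x - x**3 / 6
    worst = max(worst, abs(approx - math.sin(x)) / math.sin(x))
```

The maximum relative error on this range comes out at roughly $7.5\%$, consistent with the "at most $8\%$" figure above.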

In the calculator era, we often don't realize how deeply nontrivial it is to get an arbitrarily good approximation for a number like $e$, or better yet, $e^{\sin(\sqrt{2})}$. It turns out that in the grand scheme of things, $e^x$ is not a very nasty function at all. Since it's analytic, i.e. has a Taylor series, if we want to compute its values we just compute the first few terms of its Taylor expansion at some point.

This makes plenty of sense for computing, say, $e^{1/2}$: the series $1+\frac12+\frac{1}{2!}\left(\frac12\right)^2+\frac{1}{3!}\left(\frac12\right)^3+\cdots$ is obviously going to converge very quickly: $\frac{1}{4!\,2^4}<\frac{1}{100}$ and $\frac{1}{5!\,2^5}<\frac{1}{1000}$, so we know, for instance, that we can get $e^{1/2}$ to $2$ decimal places by summing the first $5$ terms of the Taylor expansion.

But why should this work for computing something like $e^{100}$? Now the expansion looks like $1+100+100^2/2+100^3/3!+...$, and initially it blows up incredibly fast. This is where analytic functions really show how special they are: the denominators $n!$ grow so fast that it doesn't matter what $x^n$ we have in the numerators, before too long the series will converge. That's the essence of the Taylor approximation: analytic functions are those that are unreasonably close to polynomials.

There are much faster methods in theory for getting approximations like the one for $\sqrt{e}$: applying Newton's method to $x^2-e=0$ gives an approximation to $\sqrt{e}$ whose number of correct digits roughly doubles with each iteration. But how do we apply Newton's method here? The iteration is $$x_1=x_0-\frac{x_0^2-e}{2x_0}$$ So, if we want a decimal expansion of $\sqrt{e}$, we'd better be able to get one of $x_0^2-e$. And how are we going to get that? The Taylor series.
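Putting the two together in Python: the Taylor series supplies $e$, and Newton's method then squeezes out $\sqrt{e}$ in a handful of iterations (a sketch, not a production algorithm):

```python
import math

# Taylor series supplies e itself; Newton's method on f(x) = x^2 - e
# then converges to sqrt(e), roughly doubling the correct digits per step.
e_approx = sum(1 / math.factorial(n) for n in range(20))

x = 1.5                                   # initial guess
for _ in range(6):
    x = x - (x * x - e_approx) / (2 * x)  # Newton step for x^2 - e = 0
```

Six Newton steps from a crude starting guess already reach machine precision.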

Kevin Arlin
Great answer here. Could you please provide [your *analogical* insights into my query there?](http://math.stackexchange.com/questions/1009116/analogy-to-the-purpose-of-taylor-series) – bonCodigo Nov 07 '14 at 04:21
    Also, Newton-Raphson itself uses Taylor series approximation (first order) to find the root. – Anindya Mahajan Jun 08 '20 at 21:39

Taylor series are studied because polynomial functions are easy to work with: if one can represent a complicated function as a series (an infinite polynomial), one can easily study the properties of an otherwise difficult function.

  1. Evaluating definite integrals: some functions have no antiderivative that can be expressed in terms of familiar functions. This makes evaluating their definite integrals difficult, because the Fundamental Theorem of Calculus cannot be used. If we have a polynomial representation of a function, we can often use it to evaluate a definite integral.

  2. Understanding asymptotic behaviour: Sometimes, a Taylor series can tell us useful information about how a function behaves in an important part of its domain.

  3. Understanding the growth of functions

  4. Solving differential equations

This list is certainly not exhaustive; with a little research you can find many more applications.


The main application of Taylor series is to approximate ugly functions by nice ones (polynomials)!

Example: take $f(x) = \sin(x^2) + e^{x^4}$. This is not a nice function, but it can be approximated by a polynomial using Taylor series.


A good example of Taylor series and, in particular, the Maclaurin series, is in special relativity, where the Maclaurin series is used to approximate the Lorentz factor $\gamma$. Taking the first two terms of the series gives a very good approximation for low speeds. You can actually show that at low speeds, special relativity reduces to classical (Newtonian) physics. For example, in special relativity the momentum is given by $p = \gamma mv$, and at low speeds $\gamma \approx 1$, so $p \approx mv$, which is the (linear) momentum in classical mechanics.

Also, the most famous equation in physics, $E = m{c^2}$, is actually an approximation for low velocities, which, again, can be derived using Taylor series.

By the way, $$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$ where $v$ is the velocity and $c$ is the speed of light.
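A small Python sketch comparing $\gamma$ to its low-speed Taylor approximation $1+\frac{v^2}{2c^2}+\frac{3v^4}{8c^4}$ (function names are mine):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma_exact(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def gamma_taylor(v):
    b2 = (v / C) ** 2
    return 1.0 + b2 / 2 + 3 * b2 ** 2 / 8   # first terms of the expansion

v = 3.0e7   # about 10% of the speed of light
```

Even at a tenth of the speed of light, the two-term correction matches the exact factor to better than one part in a million.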

Another example is again from physics: When we first study pendulum motion, we often begin with an assumption $\sin \theta \approx \theta $, which also comes from Taylor series because $$\sin \theta = \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \frac{\theta^7}{7!} + \cdots$$

Moreover, any software that graphs various functions actually uses very good Taylor approximations.

    "Moreover, any software that graphs various functions actually uses very good Taylor approximations". I don't think that's always true, do you have a source for that claim? – R R Dec 14 '13 at 21:05
  • I am pretty sure there should be an article about this. Do you honestly think a computer can plot $e^x$, $\sin x$, $\cos x$, etc. with infinite precision? Almost all real numbers are transcendental (because algebraic numbers are countable), and therefore irrational, but all numbers in a computer are rational. – glebovg Dec 16 '13 at 07:15
    I'm not claiming that at all and I'm not sure how you got that impression. You're claiming that computers always use taylor approximations, I'm thinking taylor approximations are not always efficient or accurate and that there are probably other computational tricks for computing tricky functions. One example off the top of my head is the CORDIC system. – R R Dec 16 '13 at 17:43
    Can you expand on your comment that $ E = mc^2$ is an approximation for low velocities? – littleO Feb 03 '15 at 20:42
  • @littleO Have a look at Einstein's famous energy-momentum relation, which is not very difficult to derive. For a particle at rest, the momentum $p$ is zero, so we get $E = mc^2$. If $p \approx 0$, then $E \approx mc^2$. – glebovg Feb 23 '15 at 21:50

Taylor series provide the basic method for computing transcendental functions such as $e^x$, $\sin x$, and $\cos x$.

Matt E

No one's mentioned the combinatorial side of things, so I'll be the first to say it: generating functions. We use generating functions to pass hard discrete counting problems to the continuous, where things are easy. Generating functions are a central tool in combinatorics (counting, graph theory, etc.) and probability (where we have moment generating functions). Taylor series is the fundamental idea behind all of these. Read: http://en.wikipedia.org/wiki/Generating_function for details, and take a combinatorics or mathematical probability class to learn more.
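As a concrete illustration of generating functions (my own example, not from the linked article): the number of ways to make $n$ cents from pennies, nickels, dimes, and quarters is the coefficient of $x^n$ in $\prod_c \frac{1}{1-x^c}$, which we can extract by multiplying truncated power series:

```python
def series_product(a, b, n):
    """First n coefficients of the product of two power series."""
    out = [0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            out[i + j] += ai * bj
    return out

# Each coin c contributes the factor 1/(1 - x^c) = 1 + x^c + x^(2c) + ...
N = 101
coeffs = [1] + [0] * (N - 1)
for c in (1, 5, 10, 25):
    geom = [1 if k % c == 0 else 0 for k in range(N)]
    coeffs = series_product(coeffs, geom, N)
```

Reading off `coeffs[100]` gives the classic count of 242 ways to make change for a dollar.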

Gyu Eun Lee

In physics you often approximate a complicated function by taking the first few terms in the Taylor series (the Taylor polynomial). For small values of the independent variable, you often assume linearity, which can allow you to get a closed form solution. For example, if you take an introductory physics class then you usually study the motion of the pendulum by approximating $\sin(\theta)$ by $\theta$ for small angles.


Someone already mentioned the usefulness of Taylor series in relativity; I would like to spend a few words exploring this point further, because relativity is a good arena for testing the very important role of Taylor series in solving practical problems in physics. Consider the relativistic kinetic energy formula \begin{equation} E_K= mc^2 \left( \frac{1}{\sqrt{1-\frac{v^2}{c^2}}} -1 \right). \end{equation} Taylor's theorem says that for $v \ll c$ the kinetic energy is about \begin{equation} E_K \approx \frac{mv^2}{2}+\frac{3m v^4}{8 c^2}, \end{equation} which lets you evaluate the relativistic value when you are in the classical regime, and thus get an idea of how far relativistic corrections are from our daily experience.

At first glance you might say: "Who cares about $E_K \approx$ blah blah? We are in the XXI century; I don't need approximate formulae to simplify pen-and-paper calculations. I can simply take my computer, insert the numbers, and see what happens." Well, things are not so simple. Consider this exercise, which I took from a book by Taha Sochi: evaluate the kinetic energy of a 1 kg body moving at 100 m/s. Classical mechanics says 5000 J, but what is the relativistic answer? The book's answer is completely wrong, and it is very instructive to see what happened. I think the author used $3.33\cdot10^{-7}$ in place of $\frac{v}{c}$, entered the value into his computer, and found an absurd 4996 J: a relativistic energy *lower* than the classical one!

You could think this is just the very bad habit of rounding heavily in intermediate steps, and say: "I can easily correct this naive mistake: let's use more digits in the value of $\frac{v}{c}$!" The idea of using more and more digits until the result stabilizes seems reasonable. If you do the calculation with a spreadsheet, WolframAlpha, or simply the Google search box, you will probably find (try!) 5009 J (or 5016 J if you use the approximate value $3\cdot 10^8$ m/s for $c$, as the author did). You may feel satisfied that the result is right; after all, it is just a bit greater than the classical one. But wait a minute: is it plausible that for a 1 kg ball moving at the ridiculously low speed of a fast car or a slow airplane, the relativistic correction amounts to several joules? That would be decidedly huge: this second answer too is completely wrong.

The problem is that computers usually work with a very limited number of digits, and from something like $1.0000000(\text{...small})-1$ you can get zero or other strange results. The only way I know to solve this problem is to use the Taylor formula (unless you can force the computer to use more digits, which is possible in some programming languages, but that would probably be a more complicated and less reliable route). Using the Taylor formula written above (adding more terms changes the result negligibly), you simply get the correct relativistic kinetic energy of our moving ball: about $5000.000000000417$ J ($\frac{3mv^4}{8c^2} \approx 4.17 \cdot 10^{-10}$ J). So in this case the classical and relativistic results differ by about $10^{-11}\%$. All this shows that Taylor series are not only illuminating and useful, but sometimes practically indispensable.
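The cancellation problem described here is easy to reproduce in ordinary double-precision arithmetic; a small Python sketch:

```python
import math

m, v, c = 1.0, 100.0, 299_792_458.0

# Direct formula: gamma differs from 1 only around the 14th digit,
# so gamma - 1 loses almost all of its significant digits to cancellation.
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
ek_direct = m * c ** 2 * (gamma - 1.0)

# Taylor formula: every term is computed at full precision.
correction = 3 * m * v ** 4 / (8 * c ** 2)
ek_taylor = m * v ** 2 / 2 + correction
```

The Taylor version cleanly produces $5000$ J plus a correction of about $4.17\cdot10^{-10}$ J, while the trailing digits of the direct version are dominated by rounding noise.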

Fausto Vezzaro

All of computational science is built on Taylor's theorem.

The biggest hammer by far is Newton's method, which is fragile in its raw form but serves as the basis of many efficient and practical algorithms for solving equations $$f(x) = 0$$ for horribly nonlinear functions $f$. It's hard to get more general than that!

Let me mention one other specific application: simulating physics, using Newton's laws. Suppose you have some object with position $x(t)$ that is being acted upon by several possibly complicated nonlinear forces. The second law says $$F = ma$$ or $$F = m\frac{d^2x}{dt^2}.$$

Typically $F$ is a function of $x$: for instance, the gravitational force of one object acting on another obeys an inverse square law, which depends on $x$. This gives us the second-order ODE $$F(x) = m\frac{d^2x}{dt^2}.$$ If $F$ is complicated enough, there is no hope of solving this equation analytically. But suppose we know the initial position $x(0)=x_0$ and initial velocity $\frac{dx}{dt}(0)=v_0$, and we want to know what the position and velocity will be at time $h$. We can Taylor-expand $x(t)$: $$x(h) = x(0) + h\frac{dx}{dt}(0) + \frac{h^2}{2}\frac{d^2x}{dt^2}(0) + O(h^3).$$

If $h$ is small, we can ignore the higher-order terms and plug the expansion into Newton's law to get $$F(x(0)) \approx 2m\frac{x(h)-x(0)-h\frac{dx}{dt}(0)}{h^2}$$ or $$x(h)\approx x_0 + hv_0 + \frac{h^2}{2m}F(x_0),$$ and similarly $$v(h) \approx v_0 + \frac{h}{m}F(x_0).$$

Once you have the position and velocity at time $h$, you can predict them at time $2h$, by using the above calculations and replacing $x_0$ with $x_h$ and $v_0$ with $v_h$. Repeating this process, you can get a good approximation for $x(t)$ for all $t$!

The error of each step above depends on the error you incurred by truncating the Taylor series, which depends on $h$: the truncation error per step is $O(h^2)$ for the velocity update, and since the number of steps grows like $1/h$, the accumulated error scales roughly like $h$, so halving $h$ halves the error. Even more accurate methods can be developed along this vein: by taking into account more terms of the Taylor series, you get less error at every step, at the cost of more expensive computation per step.
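This stepping scheme can be sketched in a few lines of Python; here it is applied to a unit-mass spring, $F=-kx$, whose exact solution $x(t)=\cos t$ we can check against (function names are mine):

```python
import math

def simulate(F, m, x0, v0, h, steps):
    """March forward using the truncated-Taylor updates for x and v."""
    x, v = x0, v0
    for _ in range(steps):
        x, v = (x + h * v + h * h / (2 * m) * F(x),
                v + h / m * F(x))
    return x, v

# Spring force F = -k x with k = m = 1, x(0) = 1, v(0) = 0:
# the exact solution is x(t) = cos(t), v(t) = -sin(t).
x1, v1 = simulate(lambda x: -x, 1.0, 1.0, 0.0, 1e-4, 10_000)  # up to t = 1
```

With $h=10^{-4}$ the simulated state at $t=1$ matches the exact solution to a few digits, and halving $h$ halves the remaining error, as described above.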


We can also use Taylor series to approximate integrals that are intractable with the usual integration techniques.

A classic example is $\int\sin(x^2)\,\mathrm{d}x$.

This integrand has no elementary antiderivative, but using the Taylor series for $\sin(x)$ we can substitute $x^2$ for $x$ in each term and then integrate term by term, writing the integral as a new series.
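Concretely, $\sin(x^2)=\sum_n \frac{(-1)^n x^{4n+2}}{(2n+1)!}$, so $\int_0^t\sin(x^2)\,\mathrm{d}x=\sum_n \frac{(-1)^n t^{4n+3}}{(2n+1)!\,(4n+3)}$. A short Python sketch, checked against a crude midpoint-rule quadrature (helper names are mine):

```python
import math

def integral_sin_x2(t, n_terms=10):
    """Integral of sin(x^2) from 0 to t, via term-by-term integration."""
    return sum((-1) ** n * t ** (4 * n + 3)
               / (math.factorial(2 * n + 1) * (4 * n + 3))
               for n in range(n_terms))

def midpoint_rule(t, steps=200_000):
    """Brute-force numerical check of the same integral."""
    dx = t / steps
    return sum(math.sin(((i + 0.5) * dx) ** 2) for i in range(steps)) * dx

series_val = integral_sin_x2(1.0)
```

For $t=1$, ten terms of the series agree with the quadrature to well beyond eight decimal places.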


The Taylor Series is used in power flow analysis of electrical power systems (Newton-Raphson method).



Multivariate Taylor series are used in various optimization techniques; that is, you approximate your function by a sequence of linear or quadratic models and then successively iterate on them to find the optimal value.


Say you were navigating or orienteering and had plenty of time: one could use the law of sines, $\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$, together with the Taylor series to evaluate the side lengths of triangles on maps, so distances could be computed with incredible accuracy. Three terms of the series would be plenty; it lets you calculate a sine without a calculator. This is obviously a little ridiculous, but ever so slightly useful. If you didn't have paper you could compute in the sand!


One of the main tools in the statistical sciences (found in almost every piece of research in the social sciences, economics, and medicine) is regression analysis. One justification for the validity of such analysis is that linear regression can be viewed as a linear approximation to some unknown function $f(x)$. Namely, you have a data set $\{(x_i, y_i)\}_{i=1}^n$ and you assume that your data come from some process $$ Y_i=f(X_i)+\epsilon_i, $$
where $$ \mathbb{E}[Y|X=x] = f(x) = f(0)+f'(0)x+R_1(x) = \beta_0 + \beta_1x+R_1(x), $$ namely, you can approximate the data-generating process by estimating $$ y_i=\beta_0 + \beta_1x_i + \epsilon_i. $$ In such a case you can use some pretty simple methods to estimate the parameters; in non-linear models one can use the Newton-Raphson method, which itself uses a linear approximation (a first-order Taylor expansion) to estimate the parameters. The same logic holds for multiple regression models, where linear regression is just a first-order Taylor expansion (models with interactions and quadratic terms can be viewed as second-order Taylor expansions).
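A small simulated illustration of this point (the choice $f(x)=e^x$ and the noise level are my own): fitting ordinary least squares to noisy samples of a nonlinear $f$ near $0$ recovers roughly $\beta_0\approx f(0)$ and $\beta_1\approx f'(0)$, the first-order Taylor coefficients.

```python
import math
import random

random.seed(0)

# Data from a nonlinear process y = f(x) + noise, with f(x) = e^x,
# sampled near x = 0 where e^x ~ 1 + x.
xs = [random.uniform(-0.1, 0.1) for _ in range(2000)]
ys = [math.exp(x) + random.gauss(0, 0.01) for x in xs]

# Ordinary least squares for y = b0 + b1 x.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
      / sum((x - mx) ** 2 for x in xs))
b0 = my - b1 * mx
```

The fitted intercept and slope land close to $f(0)=1$ and $f'(0)=1$, exactly as the Taylor-expansion view of linear regression predicts.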

Another useful application is the result called the Delta method, in the context of statistical inference and parameter estimation. Let $\theta$ be a parameter of interest and $X_n$ its estimator. If $$ \sqrt{n}(X_n - \theta) \xrightarrow{D} N(0, \sigma^2) $$ and $g$ is a function such that $g'(\theta)$ exists and is non-zero, then $$ \sqrt{n}(g(X_n) - g(\theta)) \xrightarrow{D} N(0, g'(\theta)^2\sigma^2). $$ This result is a straightforward consequence of the Taylor expansion of $g(X_n)$ around $\theta$. This result (and its multivariate analog) allows us to compute asymptotically correct confidence intervals for various parameters, including the aforementioned regression parameters.

Using the same basic logic, Taylor expansion allows us to approximate the variance of complicated configurations (functions of estimators) where the explicit variance is too complicated for exact analytical calculation.

The bottom line is that in the computational sciences, where the basic tools are models and the main goal is approximation of (unknown) functions, the Taylor series is perhaps one of the most fundamental tools to start with.

V. Vancak