I think I have some understanding of what an analytic function is — it is a function that can be approximated by a Taylor power series. But why is the notion of "analytic function" so important?

I guess being analytic entails more interesting consequences than merely being approximable by a Taylor power series, right?

Or, maybe I don't understand (underestimate) how a Taylor power series is important? Is it more than just a means of approximation?

Rodrigo de Azevedo
Code Complete
    Taylor series approximate a function locally. – Claude Leibovici Jun 14 '18 at 10:36
  • They are a really interesting class of functions but let me try to nudge this a bit and ask: Who says they are so important? In what context do they say it's important? Basically, why are you under the impression that it's of agreed-upon "importance"? – T_M Jun 14 '18 at 17:12
  • @T_M Assuming the OP is new to higher math, I'd say the most common response is because they appear in any standard calculus course. So the question is, why do so many authors believe they are important? – Jacopo Stifani Jun 20 '18 at 00:51

11 Answers


Analytic functions have several nice properties, including but not limited to:

  1. They are $C^\infty$ functions.
  2. If, near $x_0$, we have$$f(x)=a_0+a_1(x-x_0)+a_2(x-x_0)^2+a_3(x-x_0)^3+\cdots,$$then$$f'(x)=a_1+2a_2(x-x_0)+3a_3(x-x_0)^2+4a_4(x-x_0)^3+\cdots$$and you can start all over again. That is, you can differentiate them as if they were polynomials.
  3. The fact that you can express them locally as sums of power series allows you to compute fast approximate values of the function.
  4. When the domain is connected, the whole function $f$ becomes determined by its behaviour in a very small region. For instance, if $f\colon\mathbb{R}\longrightarrow\mathbb R$ is analytic and you know the sequence $\left(f\left(\frac1n\right)\right)_{n\in\mathbb N}$, then this knowledge completely determines the whole function (the identity theorem).
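Property 3 is easy to see in action. Here is a minimal Python sketch (an illustration, not part of the original answer) that evaluates $e^x$ through partial sums of its Taylor series, using nothing but arithmetic:

```python
import math

def exp_taylor(x, n_terms=20):
    """Partial sum of the Taylor series of exp at 0: sum of x^k / k!."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term          # add x^k / k!
        term *= x / (k + 1)    # next term: x^(k+1) / (k+1)!
    return total

print(exp_taylor(1.0))  # very close to math.exp(1.0)
```

Twenty terms already push the truncation error below double precision for moderate $x$, since the tail is bounded by $|x|^{20}/20!$.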
José Carlos Santos
  • This might just be an English language thing but I feel like your phrasing introduced an ambiguity, so I wanted to point out to others that the condition about differentiating the power series is not a "more precise" statement than smoothness; it's an additional property altogether. I'm sorry to be that person. – T_M Jun 14 '18 at 21:11
  • @T_M What do you think now? – José Carlos Santos Jun 15 '18 at 08:13
  • I think it's a little unclear what *"you can differentiate them as if they were polynomials"* means. I feel like differentiation is a pretty deterministic, mechanical process... how would you differentiate something *not*-as-if-it-were-a-polynomial? – user541686 Jun 15 '18 at 09:57
  • @Mehrdad It is not trivial that if you differentiate the power series $a_0+a_1x+a_2x^2+\cdots$ then what you get is what you would get if it were just a finite sum (that is, a polynomial): $a_1+2a_2x+3a_3x^2+\cdots$ – José Carlos Santos Jun 15 '18 at 10:08
  • @JoséCarlosSantos: It doesn't follow immediately from the linearity of differentiation? – user541686 Jun 15 '18 at 10:27
  • @Mehrdad No. Linearity means that differentiability is preserved under *finite* sums. – José Carlos Santos Jun 15 '18 at 10:28
  • @JoséCarlosSantos: I mean linearity says $f(ax_1+bx_2) = a f(x_1) + b f(x_2)$... so you could just pick off every term one-by-one ($x_1 = \text{\{first term of power series\}}$ and $x_2 = \text{\{rest of power series\}}$)? Presumably you already have a sane definition of the sum of an infinite series, which would allow you to subtract the first term to obtain the rest... – user541686 Jun 15 '18 at 10:39
  • @Mehrdad And how do you differentiate the rest of the power series unless you already know *how* to differentiate a power series? – José Carlos Santos Jun 15 '18 at 10:41
  • @JoséCarlosSantos: The exact way I just mentioned? You know differentiation is linear, so you know the derivative of the whole thing is the derivative of the first term plus the derivative of the rest. The first you know how to do. The rest is a subproblem you recurse on. – user541686 Jun 15 '18 at 10:42
  • @Mehrdad That is circular reasoning. You can't deal with the rest just like that without knowing *first* that power series are always differentiable. – José Carlos Santos Jun 15 '18 at 10:45
  • @JoséCarlosSantos: What? If you claimed the power series is equal to the function for all inputs and the function is differentiable for all inputs then the power series is also differentiable for all inputs by your definition. Otherwise they wouldn't be equal!! I really don't see anything circular here... – user541686 Jun 15 '18 at 10:47
  • @Mehrdad I know that the function is differentiable **because** the sum of a power series is always differentiable. – José Carlos Santos Jun 15 '18 at 10:48
  • @JoséCarlosSantos: I mean, it sounds to me like you're choosing to start backwards, then complaining that reasoning forwards results in circular logic. I guess it does, but that's only because you intentionally chose a different starting point... the way they [differentiate $\sin$](https://www.wyzant.com/resources/lessons/math/calculus/derivative_proofs/sinx) for example doesn't involve the power series at all. – user541686 Jun 15 '18 at 10:58
  • @Mehrdad All I did was to use the *definition* of analytic function: it's a function which can be expressed locally as the sum of a power series. – José Carlos Santos Jun 15 '18 at 11:01
  • @JoséCarlosSantos: You're totally losing me... you seem to be going around in circles. I don't really know how to break out of your loop but I guess in any case this is my feedback for you that, if whatever you're saying is correct, it's also unclear, so you may want to clarify the answer... – user541686 Jun 15 '18 at 11:05
  • @Mehrdad I think the reason we can use induction without our reasoning being circular is that the recursed-on term is provably smaller each time. For any finite size you can use the 'recipe' to produce a finite proof by just repeatedly inlining the recursion. In this case, however, the recursed-on term is not obviously smaller. In fact, I don't think it's smaller at all, since both the starting polynomial and the 'smaller' one have countably many terms. – Luka Horvat Jun 15 '18 at 12:24
  • @LukaHorvat: Hm, could you give a simple concrete example? – user541686 Jun 15 '18 at 12:43
  • @Mehrdad Sure. Since I'm saying that your argument 'proves too much', let's prove something that's too much. For example, we can prove that every infinite sum is convergent. Just take the first term (obviously finite) and the rest of the sum. Since a sum of two finite values is finite and, by your reasoning, we can assume that the 'smaller' rest of the sum is convergent/finite, the whole sum must also converge. – Luka Horvat Jun 15 '18 at 15:04
  • @LukaHorvat: That's not a concrete example of what you just said... and doesn't help me understand what you were saying at all... – user541686 Jun 15 '18 at 19:01
  • @Mehrdad the way I understand what you are writing is that you think that, by linearity of the derivative, the derivative of $\sum_{n\in\Bbb N}f_n$ is $\sum_{n\in\Bbb N}f_n'$ so long as both sums converge (pointwise) and the $f_n$ are all differentiable. This result is not actually true. I can't pull an example out of my head right now, but somebody else or a search is likely to be forthcoming. – s.harp Jun 15 '18 at 22:18
  • @s.harp: I'm at least also assuming the derivative of $\sum_{n\in\Bbb N}f_n$ is known to exist (which need not be circular... [see my comment above](/questions/2819345/why-is-the-notion-of-analytic-function-so-important/2819352?noredirect=1#comment5815702_2819352)). – user541686 Jun 15 '18 at 22:37
  • Yes, it will still be wrong. The statement is equivalent to: $g_n\to g$ pointwise with $g$ differentiable and $g_n'\to h$ pointwise $\implies$ $h=g'$. This is not true, you need locally uniform convergence of the sequences. To see the equivalence let $f_n=g_n-g_{n-1}$. – s.harp Jun 15 '18 at 22:43
  • @s.harp: I'm actually having a hard time telling if this is exactly equivalent or not, but in any case you do make a good point, thanks! Some allusion to potential caveats like this would really help clarify the answer... – user541686 Jun 15 '18 at 22:56
  • Let $g_{-1}=0$, then $\sum_{n=0}^N g_n-g_{n-1} = g_N-g_{-1}=g_N$, this is called a telescoping sum. The pointwise limit $\sum_n f_n$ is then $g$, the pointwise limit of $\sum_n f_n'$ is the pointwise limit of $g_n'$, which is $h$. Now recover the equivalence. – s.harp Jun 15 '18 at 23:11
  • @s.harp: No... I meant I'm having a hard time seeing if your assumptions are equivalent to mine. I gave you one example of where they differ (which you kindly replied to) but I have to think about it for a while to see if that's the only difference. For example, another thing that comes to my mind right now is that the $f_n$ here aren't arbitrary; they're actually polynomials. You didn't assume that. But again, my only point is that the answer is worth clarifying, so I don't want to keep this going forever... it's worth its own Q/A and needs more time investment than I can put in right now. – user541686 Jun 15 '18 at 23:36
  • @Mehrdad The [Weierstrass function](https://en.wikipedia.org/wiki/Weierstrass_function) is an infinite sum of differentiable terms. But the function itself is not differentiable at *any* point. – Tavian Barnes Jun 16 '18 at 19:53
  • @TavianBarnes Though Mehrdad is also assuming that the derivative of $\sum f_n$ is known to exist – Richard Jun 16 '18 at 20:44
  • @TavianBarnes: I'm well aware of that example but it seems you missed my comment that I was assuming the function was differentiable. – user541686 Jun 16 '18 at 20:44
  • @Mehrdad I think you should ask for an example as a question – Richard Jun 16 '18 at 20:59
  • @Mehrdad Taylor series are infinite sums. They may **diverge**, **converge absolutely** or [**converge conditionally**](https://en.wikipedia.org/wiki/Conditional_convergence). A conditionally-convergent series can converge to any limit, or $\pm \infty$, given [creative-enough rearrangements](https://math.stackexchange.com/questions/1168043#comment2380630_1168045). A Taylor series has a [radius of convergence](https://en.wikipedia.org/wiki/Radius_of_convergence) within which its convergence is absolute. But you must still prove the convergence of the series once it's made up of derivatives. – Iwillnotexist Idonotexist Jun 17 '18 at 19:18
  • @Mehrdad, I think this works: On $\Bbb R \setminus \{0\}$, $f_n(x) := \frac{1}{n}\sin(n/x)$ converges uniformly to $f(x) = 0$. The derivatives don't: e.g. $f'_n(1/\pi) = (-1)^{n+1}\pi^2$ for all $n$. Turn that sequence into a sum by telescoping. – Torsten Schoeneberg Jun 17 '18 at 20:18
  • @TorstenSchoeneberg: Awesome example, thanks!! So it seems the intuition for it is that you can effectively keep the max |derivative| (= Lipschitz constant) the same if you rescale horizontally as you do vertically, despite the function clearly changing... so you just make the function vertically converge to zero while keeping the max derivatives the same. Very nice!! And P.S. I think you can get rid of the $\setminus \{0\}$ by just hitting it with $e^{-1/x^2}$! (Or some variation of [this](/a/328876/4890) to get rid of the error in finite time.) – user541686 Jun 18 '18 at 21:58

A serious issue when dealing with functions is the ability to evaluate them. The basic tools we have at our disposal for function evaluation are the four arithmetic operations.

Hence polynomials (and to a lesser extent rational functions) are of utmost importance. Taylor expansion bridges functions to polynomials and their generalization, power series. In addition, analytic functions enjoy numerous important properties, such as continuity, differentiability, smoothness... and are amenable to analytic processing.

  • Huh, evaluation? I'd say the basic tools we have at our disposal for function evaluation are [$\alpha$- and $\beta$-reduction](https://en.wikipedia.org/wiki/Lambda_calculus#Reduction). Arithmetic operations, and certainly limits, are much more specific, and don't apply to most kinds of functions you could define. – leftaroundabout Jun 15 '18 at 13:10
  • @leftaroundabout: we're just trying to get a point across here to someone who could well be a high schooler, not lay the foundations for an alternative to axiomatic set theory or something... – user541686 Jun 15 '18 at 20:22
  • I have never before heard the 'ability to evaluate' a function being a serious issue. Do you mean actually, numerically evaluating functions? If so, then this answer needs more context. – T_M Jun 15 '18 at 22:25
  • @T_M: Yeah, evaluating functions is hard. I can give two examples: (1) Every known algorithm for evaluating $y^x$ to within half an ulp ("unit in the last place") requires unbounded memory (which no machine has); it's called the [table-maker's dilemma](https://en.wikipedia.org/wiki/Rounding#Table-maker's_dilemma). (2) More practically, sometimes functions are given implicitly (e.g. via a differential equation) and so merely "evaluating" them would amount to a numerical solution of the differential equation, which can be more or less arbitrarily difficult... – user541686 Jun 16 '18 at 23:19

Excellent question! I'm glad you asked!

There are lots of reasons, but I would say the most fundamental are the following:

1. Because Taylor series approximate using ONLY basic arithmetic

I wish someone told me this back in school. It's why we study polynomials and Taylor series.

The fundamental mathematical functions we really understand deeply are $+$, $-$, $\times$, $\div$... to me, it's fair to say the study of polynomials is really the study of "what can we do with basic arithmetic?"

So when you prove that a function can be approximated by a Taylor series, what you're really saying is that you can evaluate that function to a desired precision via basic arithmetic.

If this doesn't sound impressive, it's probably because someone else has already done the work for you so you don't have to. ;) To elaborate:

You probably type sin(sqrt(2)) into a calculator and take it for granted that it gives you back an answer (and notice it's an approximate one!) without ever knowing how it actually does this. Well, there isn't a magic sin and sqrt circuit in your calculator. Everything is done via a sequence of $+$, $-$, $\times$, $\div$ operations, because those are the only things it knows how to do.

So how does it know which exact sequence of basic arithmetic operations to use? Well, frequently, someone has used Taylor series to derive the steps needed to approximate the function you want (see e.g. Newton's method). You might not have to do this if all you're doing is punching things into a calculator, because someone else has already done it for you.
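As a simplified sketch of what such a derivation might look like (real math libraries use more refined schemes), here is sin(sqrt(2)) computed with only $+$, $-$, $\times$, $\div$: Newton's method for the square root and a Taylor partial sum for the sine:

```python
import math

def sqrt_newton(a, iterations=8):
    """Newton's method for x^2 = a: repeat x <- (x + a/x) / 2."""
    x = a
    for _ in range(iterations):
        x = (x + a / x) / 2
    return x

def sin_taylor(x, n_terms=12):
    """Partial sum of sin's Taylor series: x - x^3/3! + x^5/5! - ..."""
    total, term = 0.0, x
    for k in range(n_terms):
        total += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))  # next odd-degree term
    return total

approx = sin_taylor(sqrt_newton(2.0))
print(approx, math.sin(math.sqrt(2)))  # the two agree closely
```

Every line above bottoms out in the four arithmetic operations, which is exactly the point.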

In other words: Taylor series are the basic building blocks of fundamental functions.

But that's not all. There's also another important aspect to this:

2. Taylor series allow function composition using ONLY basic arithmetic

To understand this part, consider that the Taylor series for $f(x) = g(h(x))$ is pretty easy to evaluate: you just differentiate via the chain rule ($f'(x) = g'(h(x)) h'(x)$, etc.) and now you have obtained the Taylor series for $f$ from the derivatives of $g$ and $h$ using ONLY basic arithmetic.

In other words, when $f$ is analytic and you've "solved" your problem for $g$ and $h$, you've "solved" it for $f$ too! (You can think of "solving" here to mean that we can evaluate something in terms of its individual building blocks that we already know how to evaluate.)

If composability seems like a trivial thing, well, it is most definitely not!! There are lots of other approximations for which composition only makes your life harder! Fourier series are one example. If you try to compose them arbitrarily (say, $\sin e^x$) you'll quickly run into a brick wall.

So, in other words, Taylor series also provide a "glue" for these building blocks.
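To make the composability point concrete, here is a small illustrative sketch (my own, using exact rational arithmetic) that computes the Taylor coefficients of $g(h(x))$ from the coefficients of $g$ and $h$ alone, for $g = \exp$ and $h = \sin$:

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation order: keep coefficients of x^0 .. x^(N-1)

def mul(p, q):
    """Product of two truncated power series given as coefficient lists."""
    r = [Fraction(0)] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                r[i + j] += a * b
    return r

def compose(g, h):
    """Coefficients of g(h(x)), valid when h has zero constant term."""
    assert h[0] == 0
    result = [Fraction(0)] * N
    power = [Fraction(1)] + [Fraction(0)] * (N - 1)  # h(x)^0 = 1
    for coeff in g:
        result = [r + coeff * p for r, p in zip(result, power)]
        power = mul(power, h)
    return result

exp_series = [Fraction(1, factorial(k)) for k in range(N)]           # e^x
sin_series = [Fraction(0) if k % 2 == 0 else                         # sin x
              Fraction((-1) ** (k // 2), factorial(k)) for k in range(N)]

print(compose(exp_series, sin_series)[:5])
# 1 + x + x^2/2 + 0*x^3 - x^4/8 + ..., the Taylor series of exp(sin x)
```

Only additions and multiplications of coefficients are involved: the building blocks glue together with the same basic arithmetic used to evaluate them.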

That's a pretty good deal!!


Being analytic, and especially being complex-analytic, is a really useful property to have, because

  1. It's very restrictive. Complex-analytic functions integrate to zero around closed contours, are constant if bounded and analytic throughout $\mathbb{C}$ (or if their absolute value has a local maximum inside a domain), preserve angles locally (they are conformal), and have isolated zeros. Analyticity is also preserved in uniform limits.
  2. Most of the functions we obtain from basic algebraic operations, as well as the elementary transcendental functions (and, indeed, solutions to linear differential equations), are analytic at almost every point of their domain, so the surprising restrictiveness of being analytic does not stop the class of analytic functions from containing many interesting and useful examples. Proving something about analytic functions tells you something about all of these functions.

Being real-analytic is rather less exciting (in particular, there is no notion of conformality and its related phenomena). Most properties of real-analytic functions can be deduced by restricting local properties of complex-analytic ones anyway, due to this characterisation. So we still have isolation of zeros, and various other properties, but nowhere near as many (and uniform limits are no longer analytic).
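These rigidity claims are easy to probe numerically. The sketch below (an illustration of mine, not part of the answer) approximates contour integrals around the unit circle: an entire function integrates to essentially zero, while $1/z$, which has a pole inside, gives $2\pi i$:

```python
import cmath

def contour_integral(f, n=20000):
    """Riemann-sum approximation of the integral of f around the unit circle."""
    total = 0.0
    for k in range(n):
        t = 2 * cmath.pi * k / n
        z = cmath.exp(1j * t)
        dz = 1j * z * (2 * cmath.pi / n)  # dz = i e^{it} dt
        total += f(z) * dz
    return total

print(abs(contour_integral(cmath.exp)))   # essentially 0: exp is entire
print(contour_integral(lambda z: 1 / z))  # essentially 2*pi*i: pole at 0
```

The uniform grid on a periodic integrand makes this simple sum extremely accurate here.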

  • Yes. But I always say, "It's very restrictive" is a _caveat_, because evidently you're throwing out a huge class of functions that could potentially be very useful too, albeit not as convenient to handle. Mostly this is a reason to avoid complex differentiability, because it's an innocuous-looking but stupidly powerful property. – leftaroundabout Jun 15 '18 at 13:13
  • True. It depends what you want to do, of course. Classical function theory, differential equations and so on like the rigidity, while PDEs, calculus of variations, Banach spaces and differential geometry like partitions of unity and smooth bump functions. – Chappers Jun 15 '18 at 13:44
  • I agree that the rigidity is one of the most important things to understand about analyticity. – T_M Jun 15 '18 at 22:26
  • I think it is particularly impressive that a "harmless" condition (can nicely be approximated by linear functions locally) that is therefore applicable to many functions (or makes other functions not-nice) leads to extreme restrictions (identity theorem) while still allowing enough flexibility for things like Riemann's mapping theorem. – Hagen von Eitzen Jun 16 '18 at 09:38

We like functions that can be expressed by Taylor series, because they are really well behaved. Doing analysis with analytic functions is simply easier than with more general functions.

It might be interesting to consider the state of affairs two centuries ago: the following quote by Niels Henrik Abel is one of my favorite

... There are very few theorems in advanced analysis which have been demonstrated in a logically tenable manner. Everywhere one finds this miserable way of concluding from the special to the general, and it is extremely peculiar that such a procedure has led to so few of the so-called paradoxes. It is really interesting to seek the cause.

To my mind, it lies in the fact that in analysis, one is largely occupied with functions which can be expressed by powers. As soon as other functions enter — this, however, is not often the case — then it does not work any more and a number of connected, incorrect theorems arise from the false conclusions. ...

(as quoted in *Niels Henrik Abel: Mathematician Extraordinary*)

In short, mathematicians of the 18th and early 19th century proved things about analytic functions because:

  • those are the functions that come up when doing analysis
  • those are the functions they knew how to prove things about

And, in fact, it wasn't even really recognized that the analytic functions were special.


The notion of analytic functions is so important because they have a lot of interesting properties and a lot of interesting examples popping up in various fields of mathematics.

Let's start with basic examples: most importantly, polynomials and rational functions are analytic on their domains; the exponential function, and hence all trigonometric functions, as well as the complex logarithm on a suitable subdomain of $\mathbf C$, are also important examples of analytic functions. Some of the most important functions in number theory are analytic as well: Riemann's $\zeta$ (zeta) function and Euler's $\Gamma$ (Gamma) function, to quote the most immediate examples.

In some sense, analytic functions are only a slight generalisation of polynomials and they retain a lot of their nice properties:

  • We can locally encode the function by a formal object (the Taylor series of that function) that allows us to perform formal computations, and this is an effective way to prove theorems. (For instance the Lagrange inversion formula.)

  • If $w \in \mathbf C$ is a value of an analytic function $f$, then all complex numbers near $w$ are also values of $f$. (The open image theorem.) This has important consequences for where the modulus $|f|$ of $f$ can reach its maximum.

  • We understand pretty well the connection between the position of zeroes of an analytic function and the structure of the function. (In short, it is possible to factor out the zeroes as we can do for polynomials.)

Now I would like to go into a bit more detail on the examples, so that we can see in which fields analytic functions occur naturally. There are even more occurrences, but I will stick to these three because they are the easiest to talk about. How readily analytic functions show up across mathematical theories is probably partly explained by the various viewpoints we can take on them: as power series, as differentiable functions (see below), or as Cauchy integrals.

Complex Analysis

Just like mathematicians need to understand the asymptotic behaviour of real functions, they need to do so for complex functions. The main theorem here is

THEOREM. If a complex function of a complex variable, defined on an open subdomain of $\mathbf C$, is everywhere differentiable, then it is analytic.

What a strong result! Nothing near this is true for real functions! Actually, even $\mathcal C^\infty$ real functions are only mildly tame (Borel's theorem). So the study of analytic functions is "just" the study of differentiable complex functions. Stepping further in this direction leads to theorems like Dirichlet's theorem for subharmonic functions.

Algebraic Geometry

Algebraic geometers study geometric objects that can be described by polynomial equations – usually with many dimensions and variables. A very important class of such objects are spaces encoding geometric configurations: for instance, if we think of the situation "one ellipse in the plane and a line tangent to this ellipse", we can build an "algebraic-geometric space" whose points correspond to concrete geometric situations. If we do not get bogged down in details, this is fairly easy to do: an ellipse is described by five coefficients and a line by three coefficients, so our configuration space is the subset of points in an 8-dimensional space that satisfy the equations for "the line is tangent to the ellipse". Since all equations in problems of this kind are polynomial, this is, I hope, a good motivation for algebraic geometry: we have a lot of problems we are interested in, and they are encoded by polynomials – the "easy" functions. Now, when we learn group theory and differential calculus, we discover that polynomials are not always enough to study our problems, and that it is very nice to have a reserve class of easy functions containing the polynomials, the exponential and the logarithm. Analytic functions are a good fit here.

Number theory

The first thorough study of numerical series was made by Euler, and he made the following fascinating observation:

$$ \sum_{n = 1}^\infty \frac1{n^s} = \prod_{p\in\mathcal P} \frac1{1 - p^{-s}} $$

where $\mathcal P$ is the set of prime numbers and $s$ is a complex number whose real part is larger than $1$. This is a very exciting observation because the left-hand side is easily seen to be an analytic function, while the right-hand side says something about the set of prime numbers, which is one of the primary objects of study in number theory!
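Euler's identity is easy to check numerically for $s=2$ (a quick illustrative sketch of mine; both sides approach $\zeta(2)=\pi^2/6$):

```python
import math

def is_prime(n):
    """Trial-division primality test, fine for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

s = 2.0
zeta_sum = sum(1 / n ** s for n in range(1, 200000))        # left-hand side
euler_product = 1.0
for p in (n for n in range(2, 1000) if is_prime(n)):        # right-hand side
    euler_product *= 1 / (1 - p ** -s)

print(zeta_sum, euler_product, math.pi ** 2 / 6)  # all close to 1.6449...
```

The truncated sum and the truncated product agree to a few decimal places, and both tend to $\pi^2/6$ as more terms and primes are included.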

Besides this entry point, analytic functions (in connection with elliptic integrals, modular forms, $L$-functions, the $\Theta$ functions, Klein's $j$ function…) play a cardinal role in number theory.


Consider Cauchy's kernel: $$C_y(x) := \frac{i}{2\pi}\,\frac{1}{x+iy}.$$ Take a compactly supported distribution on $\mathbb{R}$, say $f$. Then the function $$F\colon \mathbb{C}\setminus\mathbb{R}\longrightarrow\mathbb{C},\qquad z=x+iy\longmapsto (f*C_y)(x)$$ is well defined. This function is holomorphic and has the property that $$x\mapsto F(x+iy)-F(x-iy)$$ converges to $f$ in the sense of distributions as $y\longrightarrow 0$.

This observation leads to the study of hyperfunctions (basically holomorphic functions defined on $\mathbb{C}\setminus\mathbb{R}$, up to an equivalence relation), in which the whole realm of distributions can be embedded. So, in particular, you can do all of real analysis in the realm of holomorphic functions, and are now able to use powerful complex-analytic techniques to study even pretty bad functions.


Complex analytic functions are the ones such that the derivative and multiplication by $i$ commute, in the sense described below. This seems at least formally like an interesting property to have.

Thinking of $\mathbb{C}$ as being a plane, multiplication by $i$ is rotation about $0$ by $90^{\circ}$ counterclockwise. Let $J:\mathbb{C}\to\mathbb{C}$ be this linear operator $J(z)=iz$. Parameterize $\mathbb{C}$ by $a+bi$ for $(a,b)\in\mathbb{R}^2$, and hence we may also think of $J$ as being a $2\times 2$ matrix.

Let's have $D_zf$ represent the Jacobian matrix for a function $f:\mathbb{C}\to\mathbb{C}$ at a point $z\in\mathbb{C}$, in the usual multivariable calculus sense, and we'll write $Df$ to leave the $z$ implicit. (Let's assume $f$ is differentiable, which implies that the four partial derivatives comprising $Df$ exist.) Now, consider the condition $$(Df)\circ J=J\circ (Df),$$ where composition is matrix multiplication (or composition of linear operators). This is the condition that the derivative and multiplication by $i$ commute.

When evaluated on the vector $(1,0)$ corresponding to $1\in\mathbb{C}$, this equation gives the equation $(Df)(0,1)=J(Df)(1,0)$, and when the Jacobian matrix is expanded it becomes \begin{align*} \begin{bmatrix}\frac{\partial f_1}{\partial b}\\\frac{\partial f_2}{\partial b}\end{bmatrix} &=J\begin{bmatrix}\frac{\partial f_1}{\partial a}\\\frac{\partial f_2}{\partial a}\end{bmatrix} \end{align*} which in turn is the Cauchy-Riemann equations: \begin{align*} \frac{\partial f_1}{\partial b} &=-\frac{\partial f_2}{\partial a}\\ \frac{\partial f_2}{\partial b} &=\frac{\partial f_1}{\partial a} \end{align*} Differentiable complex functions that satisfy these equations are analytic, and vice-versa.

While $J$ is a $2\times 2$ matrix representing multiplication by $i$, the matrix correspondence goes much deeper if we also represent $1$ as the $2\times 2$ identity matrix $I$. Then, the collection of matrices $aI+bJ=\begin{pmatrix}a&-b\\b&a\end{pmatrix}$, for all $a,b\in\mathbb{R}$, is isomorphic to $\mathbb{C}$, with addition of matrices and multiplication of matrices being addition and multiplication of the corresponding elements of $\mathbb{C}$.
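This correspondence can be spot-checked directly (a small sketch of mine, not part of the answer): multiplying the matrices $aI+bJ$ reproduces complex multiplication.

```python
def as_matrix(z):
    """Represent a+bi as the 2x2 matrix [[a, -b], [b, a]] = aI + bJ."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def mat_mul(m, n):
    """Ordinary 2x2 matrix multiplication."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 2 + 3j, -1 + 4j
print(mat_mul(as_matrix(z), as_matrix(w)))
print(z * w)  # (-14+5j): the same numbers, in matrix form
```

The product matrix is again of the form $aI+bJ$, with $a$ and $b$ the real and imaginary parts of $zw$.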

Under this correspondence, the Cauchy-Riemann equations say that $Df$ is a matrix of this exact form, and in particular $Df$ is $\frac{\partial f_1}{\partial a}I+\frac{\partial f_2}{\partial a}J$. The complex derivative is $$\frac{df}{dz}=\frac{\partial f_1}{\partial a}+\frac{\partial f_2}{\partial a}i.$$

A quick application of the theory of analytic functions is that $\ln x$ cannot be well-approximated by a polynomial outside of a bounded interval. By that I mean, if you have a Taylor series for $\ln x$ centered at $a>0$, then because of the singularity at $x=0$ the radius of convergence must be at most $a$. In fact, even at $x=2a$ the partial sums of the Taylor series misbehave. This is useful if you are in the 1800s and need to compile a logarithm table.
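The misbehaviour is easy to see numerically. This sketch (my own illustration) sums the series for $\ln x$ centered at $a=1$, inside the radius of convergence at $x=1.5$ and outside it at $x=2.5$:

```python
import math

def log_taylor_partial(x, n_terms):
    """Partial sum of the Taylor series of ln(x) centered at a = 1:
    sum of (-1)^(n+1) (x-1)^n / n for n = 1..n_terms."""
    u = x - 1
    return sum((-1) ** (n + 1) * u ** n / n for n in range(1, n_terms + 1))

for n in (10, 20, 40):
    # inside the radius (x = 1.5) the sums settle near ln(1.5);
    # outside it (x = 2.5) they blow up
    print(n, log_taylor_partial(1.5, n), log_taylor_partial(2.5, n))
```

At $x=1.5$ the partial sums converge rapidly to $\ln 1.5$; at $x=2.5$ the terms grow like $1.5^n/n$ and the partial sums oscillate with exploding magnitude.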

Kyle Miller

As Chappers says, analyticity is a very useful property for functions defined on the complex plane, and it turns out that all the usual functions are analytic.

Those functions have very interesting properties, such as having a complex derivative, zero integral along closed paths, and the residue formula. You can also use a lot of real-analysis results (Leibniz's formula, the chain rule) in the study of analytic functions.

For instance, without complex-analytic tools, it would be impossible to prove major theorems like the Prime Number Theorem. As a matter of fact, it is quite astonishing that properties of complex functions need to be used to prove a result about arithmetic.


Taking a more "applied" approach to this question, I would say that analytic functions are so important because they come up in practical problems. Newton's laws of motion are differential equations, and analytic functions play well as solutions to differential equations. There are tons of other problems in physics and other sciences that use differential equations and benefit from analytic functions.
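As a tiny sketch of why analytic functions "play well" with differential equations (my own illustration): substituting a power series into $y'=y$ turns the equation into a simple recurrence for the coefficients.

```python
from fractions import Fraction

# Substitute y = sum a_n x^n into y' = y: matching the coefficient of x^n
# gives (n+1) a_{n+1} = a_n, i.e. a_{n+1} = a_n / (n+1). With y(0) = 1 the
# coefficients are 1/n!, recovering y = e^x.
a = [Fraction(1)]
for n in range(15):
    a.append(a[n] / (n + 1))

approx_e = sum(float(c) for c in a)  # the series evaluated at x = 1
print(approx_e)  # close to e = 2.71828...
```

The same coefficient-matching idea underlies the classical power-series (Frobenius-type) methods for linear ODEs.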

Analytic functions are well-behaved and easy to work with. Studying an analytic function is typically much easier than studying a non-analytic one.

Also, I would say even just approximation by itself is important. It might seem tautological, but $\pi^2 - \mathrm{erf}(3)$ doesn't mean anything, until you convert that expression into a number, if you want to figure out how much concrete you need to build a bridge.

Of course, analytic functions are not the solution to every problem. One example is the non-analytic smooth functions that can be constructed with Fourier series, a tool commonly used in signal processing (and more...).


When a function is analytic on $\mathbb{R}$, its values at all real numbers are completely determined by its derivatives at a single real number. For any function analytic on $\mathbb{R}$ and any real number $r$, the Taylor series centered at $r$ has a nonzero radius of convergence, and within that radius it equals the function exactly. Even when that radius is finite, the values of the function at all real numbers are still determined by its derivatives at $r$ (by analytic continuation). Likewise, for a function analytic on an open interval of $\mathbb{R}$ and any $r$ in that interval, the values of the function throughout the interval are determined by its derivatives at $r$ together with the fact that it is analytic on that interval.
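A quick sketch of this determinacy (illustrative; the "unknown" function is secretly $\cos$): handed only the list of derivatives at $r=0$, we can recover values far from $0$.

```python
import math

# Derivatives of cos at 0 cycle through 1, 0, -1, 0, ...; the k-th one
# is cos(k*pi/2). Pretend this list is all we know about the function.
derivs_at_0 = [math.cos(k * math.pi / 2) for k in range(60)]

def reconstruct(x):
    """Value at x recovered purely from the derivatives at 0."""
    return sum(d * x ** k / math.factorial(k)
               for k, d in enumerate(derivs_at_0))

print(reconstruct(2.0), math.cos(2.0))  # the two agree closely
```

For $\cos$, the radius of convergence is infinite, so this works at any $x$; for a finite radius one would reconstruct nearby values first and continue step by step.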
