Why, when we perform Fourier transforms/decompositions, do we use sine/cosine waves (or more generally complex exponentials) and not other periodic functions? I understand that they form a complete basis of functions (although I don't understand rigorously why), but surely other periodic functions do too?

Is it purely because sine/cosine/complex exponentials are convenient to deal with, or is there a deeper reason? I get the relationship these have to circles and progressing around them at a constant rate, and how that is nice and conceptually pleasing, but does it have deeper significance?

Rodrigo de Azevedo
    Fourier transforms are useful when working with differential equations because the basis functions behave nicely under differentiation; in particular, the complex exponentials are eigenfunctions of the differentiation operator. –  Apr 12 '18 at 08:47
    Up to scaling, FT uses $e^{ikx}$ as **kernel**; Fourier sine transform uses $\sin kx$; Hankel transform uses Bessel functions. See more [**here**](https://en.wikipedia.org/wiki/Integral_transform#Table_of_transforms) – Ng Chung Tak Apr 12 '18 at 09:49
  • The two-word answer is Pontryagin duality, but a variety of useful transforms exist: the Mellin transform, the Riesz transform, etc. – anomaly Apr 13 '18 at 03:23
    None of the answers seem to answer the question. The real answer is "Because Jean Baptiste Joseph Fourier investigated the case with sines and cosines and the transform he discovered came to bear his name." – mathreadler Apr 13 '18 at 19:39
  • Why reinvent the wheel when the wheel works just fine for every intended purpose? Thanks, Fourier. Sine and cosine are smooth, analytic, and orthogonal, which is really nice, so we have no reason to look for 'alternatives.' Not every periodic function satisfies all of these nice properties. – user29418 Jul 07 '20 at 08:01
  • @user29418 I think you've kinda missed the intent of my (2 year old) question. The question isn't one of practicality, it's more about trying to gain an understanding of what makes the Fourier transform interesting and deep, about what the deep idea of it is, not merely what nice and useful properties this particular arbitrary mix of symbols can be proven to have. – user6873235 Jul 08 '20 at 10:16

7 Answers


The Fourier basis functions $e^{i \omega x}$ are eigenfunctions of the shift operator $S_h$ that maps a function $f(x)$ to the function $f(x - h)$: $$ e^{i \omega (x-h)} = e^{-i\omega h} e^{i \omega x} $$ for all $x \in \mathbb R$.

All of the incarnations of the Fourier transform (such as Fourier series and the discrete Fourier transform) can be understood as changing basis to a basis of eigenvectors for a shift operator.

It is possible to consider other operators, which have different eigenfunctions leading to different transforms. But this shift operator is so simple and fundamental that it's not surprising the Fourier transform turns out to be particularly useful.
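The eigenfunction property above can be checked numerically for the discrete Fourier transform, where a cyclic shift by $h$ samples multiplies the $k$-th DFT coefficient by the phase $e^{-2\pi i k h/n}$. A minimal sketch (my illustration, not part of the answer; assumes numpy):

```python
import numpy as np

n, h = 64, 5
rng = np.random.default_rng(0)
f = rng.standard_normal(n)               # an arbitrary "periodic" signal

g = np.roll(f, h)                        # g[x] = f[x - h], a cyclic shift
F, G = np.fft.fft(f), np.fft.fft(g)

k = np.arange(n)
phase = np.exp(-2j * np.pi * k * h / n)  # eigenvalue of the shift for mode k

# Shifting in x only multiplies each Fourier coefficient by a pure phase:
assert np.allclose(G, phase * F)
```

The shift never mixes different frequencies, which is exactly what "eigenvector basis for the shift operator" means.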

    Good answer. I like Terry Tao's [notes on harmonic analysis](http://www.math.ucla.edu/~tao/247b.1.07w/) (Note 9) on this subject. – Giuseppe Negro Apr 12 '18 at 09:27
    An important corollary of this is that convolution in real space becomes multiplication in Fourier space, which is perhaps the most concise way to wrap up the properties of FT. – leftaroundabout Apr 12 '18 at 12:28
    I like this answer. It feels right, I can see how naturally this ties into periodicity, and you elaborate on how it generalises. Thanks. – user6873235 Apr 12 '18 at 12:35
  • When the shift operator is the operator that maps $f(x)\to f(x+h)$, isn't then **any** $h$-periodic (or anti-periodic) function an eigenfunction? – Raphael J.F. Berger Apr 12 '18 at 16:38
    @R_Berger Yes, but if you only use $h$-periodic functions for a fixed $h$, you cannot express all functions. And it's important that we have eigenfunctions for any shift, not just a single fixed one. – Joonas Ilmavirta Apr 12 '18 at 17:01
  • This does not answer the question. – mathreadler Apr 13 '18 at 19:42

There is no direct mathematical reason to use sine/cosine/exponential. In fact you can define a similar decomposition using any orthogonal basis of the square-integrable functions. For example, you could decompose a function on an interval using the Legendre polynomials, or in a more general sense take any sufficiently nice basis and do what is called a wavelet transform. Most of the properties of the Fourier transform, such as isometry, will still hold, with essentially identical proofs.

There are however a lot of indirect reasons to use sine/cosine/exponential, namely that they have a lot of nice and useful properties, mostly related to differentiation. Just to name a few off the top of my head:

  • They are eigenfunctions of the differential operator. That is, they tend to reproduce under differentiation. $\frac{d}{dt} e^{ikt}$ again is a multiple of $e^{ikt}$. We can use this to solve differential equations by turning them into simple linear equations by taking the transform.
  • They are the solutions of the simple harmonic oscillator $\ddot{f} = -kf$. This equation (or variations of it) turns up extremely often in physics, so it is no wonder that the Fourier series or transform is useful when dealing with such problems. (And indeed, for other equations you will need a different transform.)
  • They are analytic and periodic. I know that you can turn any function on an interval into a periodic one, however since sine/cosine/exponential correspond to their own power series, they are kind of "naturally periodic".
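The first bullet, differentiation becoming multiplication, is easy to see numerically: under the Fourier transform, a constant-coefficient ODE on a periodic domain collapses to one algebraic equation per mode. A small sketch (my illustration, not the answerer's; assumes numpy):

```python
import numpy as np

# Solve u'' - u = g on [0, 2*pi) with periodic boundary conditions.
# In Fourier space (d/dx -> i*k) this becomes (-k**2 - 1) * u_hat = g_hat,
# one simple algebraic equation per mode.
n = 128
x = 2 * np.pi * np.arange(n) / n
g = -10.0 * np.sin(3 * x)         # chosen so the exact solution is sin(3x)

k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers 0, 1, ..., -1
u_hat = np.fft.fft(g) / (-k**2 - 1)
u = np.fft.ifft(u_hat).real

assert np.allclose(u, np.sin(3 * x))
```

No linear system ever has to be assembled; the differential operator is diagonal in this basis.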
    From a physics perspective, what makes sine/cosine/complex exp particularly important is that they are eigenfunctions of the Hamiltonian in free space (which is basically a second derivative). Kind of a special case of your points 1 and 2. In case you would care to highlight that explicitly :) – David Z Apr 12 '18 at 22:50
  • To add to your point about physics, when you write down (first-order) approximate differential equations for a vibrating string, the connection with the real trigonometric functions and Fourier series becomes clear. Ah I see *DisintegratingByParts* has mentioned this and more. – user21820 Apr 14 '18 at 05:34
  • I'm a little out of my comfort zone here, but if I recall correctly all the eigenfunctions of a Sturm-Liouville operator (https://en.wikipedia.org/wiki/Sturm%E2%80%93Liouville_theory) will yield orthogonal eigenfunctions (wrt different inner products). In this regard sine and cosine are on an equal footing with Legendre or Hermite polynomials, among others. These are fundamental in quantum mechanics as well. I remember fondly the book "Fourier series and orthogonal functions" by H.F Davis, where at least part of this story is detailed. – user347489 Apr 14 '18 at 10:12

There is some deeper significance from the point of view of representation theory.

For the Fourier transform on the circle, functions of the form $e^{ikx}$ (depending on the period/normalization) are precisely the characters, irreducible complex representations of the group $\mathbb T$ (which you can think of as ${\mathbf R}/{\mathbf Z}$, ${\mathbf R}/{\mathbf 2\pi \mathbf Z}$, or as the complex numbers of norm $1$, or another renormalization you prefer).

Functions $\sin(kx)$ and $\cos(kx)$ are the matrix coefficients of the irreducible real representations of the group.

Similarly, for the real line, $e^{i\xi x}$ are the irreducible complex unitary representations of the group $(\mathbf R,+)$, while $\sin(\xi x)$, $\cos(\xi x)$ are the matrix coefficients of the irreducible orthogonal real representations.

Representation theory gives a precise sense to the Fourier transform for any locally compact group (and probably more, but I'm no specialist), and in the abelian case, we have Pontryagin duality, which is responsible for the inverse Fourier transform.
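For the finite cyclic group $\mathbf Z/n$ this picture is completely concrete: the characters $\chi_k(a) = e^{2\pi i k a/n}$ are exactly the entries of the DFT matrix, each one is a homomorphism into the unit circle, and distinct characters are orthogonal. A quick numerical check (my addition, not part of the answer; assumes numpy):

```python
import numpy as np

# Characters of the cyclic group Z/n: chi_k(a) = exp(2*pi*i*k*a/n).
# Each chi_k is a homomorphism Z/n -> C*, and distinct characters are
# orthogonal -- the two facts behind the DFT and its inversion formula.
n = 12
chi = lambda k, a: np.exp(2j * np.pi * k * a / n)

a = np.arange(n)
for k in range(n):
    # homomorphism property: chi_k(a + b) = chi_k(a) * chi_k(b)
    for b in range(n):
        assert np.allclose(chi(k, (a + b) % n), chi(k, a) * chi(k, b))
    # orthogonality: <chi_k, chi_l> = n if k == l, else 0
    for l in range(n):
        inner = np.sum(chi(k, a) * np.conj(chi(l, a)))
        assert np.isclose(inner, n if k == l else 0)
```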

  • That is an important aspect, but I don't think it has much to do with why technical applications use FT so much. In the 2-sphere case, the irreducible representation is given by the spherical harmonics, and though these are indeed used a lot in atom physics and geophysics they are nowhere as widely known&used as the 2D Fourier transform (which most people wouldn't associate with the 2-torus). – leftaroundabout Apr 12 '18 at 16:10
    @leftaroundabout: The question did not really ask about technical applications. For that, I guess one of the important reasons is the existence of FFT algorithms. But OP specifically asked about deeper reasons than just convenience. One can probably argue that those connections are the reason why they are convenient to use (but I don't feel qualified for that). And likewise, just because people do not appreciate the connection does not mean it's not there, or that it is not important. – tomasz Apr 12 '18 at 17:54

Your question is partly about History. And the path by which Mathematicians were led to consider orthogonal expansions in trigonometric functions is not a natural one. In fact, Fourier's conjecture that arbitrary mechanical functions could be expanded in a trigonometric series was not believed by other Mathematicians at the time; the controversy over this issue kept Fourier's original work from publication for more than a decade.

The idea behind trigonometric expansions grew out of looking at the wave equation for the displacements of a string: $$ \frac{\partial^2 u}{\partial t^2}=c^2\frac{\partial^2 u}{\partial x^2}. $$ In 1715, B. Taylor concluded that for any integer $n\ge 1$, the function $$ u_n(x,t)=\sin\frac{n\pi x}{a}\cos\frac{n\pi c}{a} (t-\beta_n) $$ represented a standing wave solution. $n=1$ corresponded to the "fundamental" tone, and for $n=2,3,\cdots$, the other solutions were its harmonics (which is where the term Harmonic Analysis first arose in Mathematics). It was a natural question at the time to ask if a general solution could be constructed from a combination of the fundamental mode and the harmonics. If such a general solution were to exist in the form $$ u(x,t) = \sum_{n=1}^{\infty}A_n u_n(x,t), $$ where $A_n$ are constants, then it would be necessary to be able to expand the initial displacement function as $$ u(x,0) = \frac{a_0}{2}+a_1\cos\frac{\pi x}{a}+b_1\sin\frac{\pi x}{a}+\cdots. $$ The consensus at the time was that an arbitrary initial mechanical function could not be expanded in this way because the function on the right would be analytic, while $u(x,0)$ might not be. (This reasoning was not correct, but Mathematics was not very rigorous during that era.) The orthogonality relations used to isolate the coefficients were not discovered until some time after that, by Clairaut and Euler.

Fourier decided that such an expansion could be done, and he set out to prove it. Fourier's work was banned from publication for over a decade, which tells us that the idea of expanding in a Fourier series was not a natural one.

Fourier did not come up with the Fourier series, and he did not discover the orthogonality conditions which allowed him to isolate the coefficients in such an expansion. He did, however, come up with the Dirichlet integral for the truncated series, and he did essentially give the Dirichlet integral proof for the convergence of the Fourier Series, though it was falsely credited to Dirichlet. Fourier's work on this expansion became a central focus in Mathematics. And trying to study the convergence of the trigonometric series forced Mathematics to become rigorous.

What Fourier did that was original is to abstract the discrete Fourier series to the Fourier transform and its inverse by arguing that the Fourier transform was the limit of the Fourier series as the period of the fundamental mode tended to infinity. He used this to solve the heat equation on infinite and semi-infinite intervals. Fourier's argument to do this was flawed, but his result was correct. He derived the Fourier cosine transform and its inverse, as well as the sine transform and its inverse, with the correct normalization constants: \begin{align} f & \sim \frac{2}{\pi}\int_{0}^{\infty}\sin(st)\left(\int_{0}^{\infty}\sin(st')f(t')dt'\right)ds \\ f & \sim \frac{2}{\pi}\int_{0}^{\infty}\cos(st)\left(\int_{0}^{\infty}\cos(st')f(t')dt'\right)ds. \end{align} He used these to solve PDEs on semi-infinite domains. The sines and cosines were eigenfunctions of $\frac{d^2}{dx^2}$ that were obtained using Fourier's separation of variables technique. The term "eigenvalue" grew out of this technique as a way of understanding Fourier's separation parameters.
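As a numerical sanity check of the inner integral in the sine-transform pair above (my addition, not part of the answer), take $f(t)=e^{-t}$, whose sine transform is known in closed form to be $s/(1+s^2)$; this is easy to confirm by quadrature (assumes numpy):

```python
import numpy as np

# Check: integral_0^inf sin(s*t) * exp(-t) dt = s / (1 + s^2).
# Truncate at t = 50, where exp(-50) makes the tail negligible.
t = np.linspace(0.0, 50.0, 200_001)
h = t[1] - t[0]

def sine_transform(s):
    y = np.sin(s * t) * np.exp(-t)
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))  # trapezoid rule

for s in [0.5, 1.0, 2.0, 5.0]:
    assert abs(sine_transform(s) - s / (1 + s**2)) < 1e-5
```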

Based on this story, I would say that it was not a natural idea to expand a function in trigonometric functions. Fourier's work led to the notions of linear operators, eigenvalues, self-adjoint and symmetric operators, and general orthogonal expansions in eigenfunctions of a differential operator, but it took over a century for this work to look "natural."

Disintegrating By Parts

I will respectfully disagree with littleO, elaborate on the answer of mlk, and argue that the even more fundamental reason for the choice of the trig functions as basis functions is that they are the eigenfunctions of the Laplacian.

The smoothness (in the geometric, and not analytic, sense) of a function on $S^1$ can be measured by calculating its Dirichlet energy $$E(f) = \int \langle \nabla f, \nabla f\rangle,$$ where after applying integration by parts, $$E(f) = -\int f\Delta f = -\langle f, \Delta f\rangle.$$

The Laplacian is self-adjoint and negative, and its eigenfunctions are the usual sines and cosines. Let us sort them in ascending order of their eigenvalue magnitudes to get basis functions $b_i(\theta)$ with eigenvalues $\lambda_i$. Of course, $b_0$ is the DC component $b_0(\theta)=\frac{1}{\sqrt{2\pi}}$ with eigenvalue 0, etc.

If we now expand a function $f$ in this basis, $f(\theta) = \sum_i \alpha_i b_i(\theta)$, we have $$E(f) = \sum_i \alpha_i^2 (-\lambda_i).$$

In other words, the first few entries in the expansion of $f$, $\sum_{i=0}^N \alpha_i b_i$, contain the "smooth parts" of $f$, the parts that contribute least to $f$'s Dirichlet energy. The more terms you add, the more high-frequency details you recover. In the (very common) case that you must approximate a function using only a limited amount of information, and the coarse, smooth behavior of $f$ is most important to preserve, the Fourier basis thus gives you a natural representation for doing so.

The above picture generalizes directly to other settings, such as on the sphere (where the spherical harmonics play the role of the sines and cosines) or other manifolds.
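The link between frequency and Dirichlet energy can be verified via Parseval's identity: with $f=\sum_k c_k e^{ik\theta}$ on the circle, $E(f) = \int |f'|^2\,d\theta = 2\pi \sum_k k^2 |c_k|^2$, so low modes are exactly the low-energy ones. A small sketch (my illustration, not the answerer's; assumes numpy):

```python
import numpy as np

# Parseval for the Dirichlet energy on the circle:
#   E(f) = integral |f'|^2 d(theta) = 2*pi * sum_k k^2 * |c_k|^2.
n = 256
x = 2 * np.pi * np.arange(n) / n
f = np.sin(x) + 0.5 * np.cos(4 * x) + 0.1 * np.sin(9 * x)

c = np.fft.fft(f) / n                 # Fourier coefficients c_k
k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers

energy_spectral = 2 * np.pi * np.sum(k**2 * np.abs(c) ** 2)

# Direct energy from the exact derivative of f, integrated with the
# rectangle rule (exact on a periodic grid for band-limited functions):
fp = np.cos(x) - 2 * np.sin(4 * x) + 0.9 * np.cos(9 * x)
energy_direct = np.sum(fp**2) * (2 * np.pi / n)

assert np.isclose(energy_spectral, energy_direct)
```

The $k^2$ weight makes explicit why truncating to the first few modes keeps the smooth, low-energy part of $f$.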

  • Is there a way to reconcile this view with littleO's? It seems closely related, as being an eigenfunction of the shift operation seems intimately related to having nice properties under derivation. Is there a similarly simple and deep interpretation of the Laplacian? – user6873235 Apr 13 '18 at 11:36
  • @user6873235 Yes, I do wonder if they are equivalent, or if they are separate notions that happen to coincidentally give the same basis functions in the case of $S^1$... I'm not sure what the shift operator looks like on $S^2$ for instance (though there we do have that the spherical harmonic basis functions relate to the rotation group in a similar way.) – user7530 Apr 13 '18 at 18:42

Sine and cosine functions are eigenfunctions of the second derivative operator with negative eigenvalues, $\frac{d^2}{dx^2} \sin(\omega x) = -\omega^2 \sin(\omega x)$, while exponential (and hyperbolic sine/cosine) functions are eigenfunctions with positive eigenvalues. This makes Laplace transforms (for positive eigenvalues) and Fourier transforms (for negative eigenvalues) very useful for second-order differential equations. Force is proportional to acceleration, the second derivative of position, so sinusoidal functions are the eigenfunctions in a variety of physical situations: harmonic motion, waves, etc.

Sinusoidal functions are in some sense the "simplest" periodic functions.
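Both eigenvalue claims are easy to check with finite differences (a rough sketch of mine, not part of the answer; assumes numpy):

```python
import numpy as np

w, h = 3.0, 1e-4
x = np.linspace(0.1, 1.0, 10)

def second_deriv(f, x):
    # central finite difference for d^2 f / dx^2
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# sin(w x): eigenfunction with negative eigenvalue -w^2
assert np.allclose(second_deriv(lambda t: np.sin(w * t), x),
                   -w**2 * np.sin(w * x), atol=1e-4)
# cosh(w x): eigenfunction with positive eigenvalue +w^2
assert np.allclose(second_deriv(lambda t: np.cosh(w * t), x),
                   w**2 * np.cosh(w * x), atol=1e-4)
```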


Complex exponentials are the eigenfunctions of linear shift-invariant systems, and a shitload of physical and mathematical problems are, at least to first approximation, linear shift/time-invariant systems. As such, their system behavior is equally well described by the impulse response and by its Fourier transform, the frequency response.

Convolution with the impulse response corresponds to multiplication of the respective functions in the Fourier transform domain. Convolution also is a fundamentally important operation for probability distributions, with their Fourier transforms being characteristic functions.
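The convolution property mentioned here can be demonstrated directly for the DFT: circular convolution of two sequences equals the inverse DFT of the pointwise product of their DFTs. A minimal sketch (my addition, not the answerer's; assumes numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Circular convolution computed directly from the definition ...
conv_direct = np.array([sum(f[m] * g[(k - m) % n] for m in range(n))
                        for k in range(n)])

# ... equals pointwise multiplication in the Fourier domain:
conv_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
assert np.allclose(conv_direct, conv_fft)
```

This is also why FFT-based convolution is the standard fast way to apply an impulse response.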