We can define all signals as a sum of sinusoids by taking the Fourier transform of the signal. That's OK. My question is: why sinusoids? Can there be another transform like Fourier somewhere in the universe that can explain all signals as a sum of rectangular or triangular waves (or any other periodic shape)?

Yunus Yurtturk

5 Answers


This question has a very general answer, given by the Stone–Weierstrass theorem. In the setting to which Fourier series apply, it says:

Let $C[a,b]$ be the space of continuous functions on the interval $[a,b]$ and let $A \subset C[a,b]$ be a set, closed under addition, multiplication, and scaling, having the two properties:

  • For every $x \in [a,b]$, there is some $f \in A$ such that $f(x) \neq 0$, and
  • For every $x, y \in [a,b]$, there is some $f \in A$ such that $f(x) \neq f(y)$.

Then every function in $C[a,b]$ is the uniform limit of elements of $A$.

The connection with Fourier series is that we can take $A$ to be the set of functions generated by $\sin(x)$ and $\cos(x)$ on (in this case) $[-\pi, \pi]$ by addition, multiplication (including the $0$'th power), and scaling. Since you can check that $\sin(x)$, $\cos(x)$, and the constant function $1$ can never simultaneously either vanish or be equal, $A$ satisfies the conditions of the theorem, showing that every continuous function on $[-\pi, \pi]$ is the uniform limit of "trigonometric polynomials": literally, polynomials in $\sin(x)$ and $\cos(x)$, which using trig identities (or complex exponentials, which is the same but easier) you can show are the same as expressions of the form $$\sum_{n = 0}^N (a_n \cos(nx) + b_n \sin(nx)).$$
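As a concrete numerical sketch (my own illustration, not part of the theorem), one can compute the partial sums above for a continuous function such as $f(x) = |x|$ on $[-\pi, \pi]$ and watch the sup-norm error shrink as $N$ grows:

```python
import numpy as np

# Uniform grid over one period; endpoint=False makes the Riemann sums
# below behave like integrals over the full interval.
x = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
dx = x[1] - x[0]
f = np.abs(x)

def partial_sum(N):
    """Sum_{n=0}^{N} (a_n cos(nx) + b_n sin(nx)), with each coefficient
    computed independently by a numerical inner-product integral."""
    s = np.full_like(x, np.sum(f) * dx / (2 * np.pi))  # the a_0/2 term
    for n in range(1, N + 1):
        a_n = np.sum(f * np.cos(n * x)) * dx / np.pi
        b_n = np.sum(f * np.sin(n * x)) * dx / np.pi
        s = s + a_n * np.cos(n * x) + b_n * np.sin(n * x)
    return s

for N in (1, 5, 25):
    print(N, np.max(np.abs(f - partial_sum(N))))  # sup-norm error shrinks
```

Note that raising $N$ only appends new terms; the coefficients already computed never change, which is exactly the point made below about orthogonality.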

Using other basic functions, the theorem gives a simple criterion for checking whether their "polynomials" can be used to approximate other functions. The advantage of Fourier series here is that the approximating polynomials are partial sums of a single infinite series whose coefficients can be computed by an inner product (an integral). That is, for example, one can use polynomials such as the Bernstein polynomials to approximate continuous functions as well, but as the degree of the polynomials approximating any one function increase, the coefficients all change, not just the highest ones. This is simply because these polynomials do not form an orthonormal set with respect to an inner product. Other such "orthogonal polynomials" exist in plenty, such as Legendre polynomials, and they do not have this problem.

In short, the scheme that allows Fourier series to work can be generalized partially by replacing the trigonometric functions with an arbitrary orthonormal basis for some inner product on $C[a,b]$, and generalized even farther by just using any subalgebra of $C[a,b]$ that "separates points and vanishes nowhere", but then the approximants are not as well-behaved individually.

Ryan Reich

That's a good question, and the answer can be deep and open up large areas of math. The Fourier transform (in its various incarnations) takes a function and writes it as a linear combination of basis vectors -- and the basis it uses is a special basis, the Fourier basis, which is a basis of eigenvectors for the shift operator. It's possible to use different bases (perhaps a basis of eigenvectors for a different operator) and this will give you different transforms. This is related to the spectral theorem in linear algebra and in functional analysis.
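A small numerical sketch of that claim (the size $N = 8$ is an arbitrary choice of mine): in the discrete setting, every discrete Fourier vector is an eigenvector of the cyclic shift operator.

```python
import numpy as np

# Cyclic shift matrix: (S x)[n] = x[n-1 mod N].
N = 8
S = np.roll(np.eye(N), 1, axis=0)

for k in range(N):
    v = np.exp(2j * np.pi * k * np.arange(N) / N)  # k-th DFT basis vector
    lam = np.exp(-2j * np.pi * k / N)              # its eigenvalue under S
    assert np.allclose(S @ v, lam * v)             # S merely rescales v
print("every DFT basis vector is an eigenvector of the shift")
```

Replacing the shift with a different operator and taking its eigenvectors as the basis is precisely how one arrives at other transforms.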

In applied math, the discrete Fourier transform isn't always the one we want. For example, to compress a signal we may prefer to use a wavelet transform or some other transform.


Can there be another transform like Fourier somewhere in the universe that can explain all signals as a sum of rectangular or triangular waves (or any other periodic shape)?

Yes, there are several popular transforms similar to the Fourier transform, except that they use basis functions other than sines and cosines.

Every day people watch digitized video or listen to digitized audio that uses several such transforms, usually without even realizing it.

For example,

  • The Fourier transform decomposes a signal into a sum of both infinitely-long sine waves and infinitely-long cosine waves.
  • The discrete cosine transform (DCT) is used in many image compression and video compression systems, including JPEG, MPEG, and Theora. The DCT decomposes a signal into a sum of cosine waves.
  • The modified discrete cosine transform (MDCT) is used in a few more recent image and video compression systems and many audio compression systems, such as MP3, AAC, and Vorbis, since it reduces block-boundary artifacts typical of systems based on DCT. The MDCT also decomposes a signal into a sum of cosine waves.
  • The discrete sine transform decomposes a signal into a sum of sine waves. I am unaware of any practical use.
  • The Hartley transform and the discrete Hartley transform (DHT) decompose a signal into a sum of cas waves, where cas(t) = cos(t) + sin(t). The Hartley transform seems to have several advantages over the Fourier transform, but I am unaware of any practical use.
  • The Walsh transform -- aka Hadamard transform -- decomposes a signal into a sum of infinitely-long even square waves and odd square waves. It is used in JPEG XR and H.264 and also seems to be useful for quantum computing.
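To make the square-wave case concrete, here is a small sketch of a Walsh/Hadamard decomposition in plain NumPy (my own toy illustration, not code from any of the codecs above):

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of size 2^n; its rows are +/-1 'square waves'."""
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])  # Sylvester construction
    return H

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
H = hadamard(3)            # 8x8; rows are mutually orthogonal, H @ H = 8 I
coeffs = H @ x / 8         # analysis: project the signal onto Walsh rows
x_back = H @ coeffs        # synthesis: sum the square waves back up
assert np.allclose(x, x_back)  # perfect reconstruction
```

The orthogonality of the rows is what makes the analysis step a simple matrix product, just as with the DFT.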

Time-frequency transforms are useful for making waterfall plots and spectrograms which are used in a variety of fields -- speech training; studies of animal sounds; radar; seismology; ham radio operators use them to discover what frequencies and protocols people are using and to decode very slow Morse code "by eye"; etc.

  • The short-time Fourier transform is a time-frequency transformation that decomposes a signal into a sum of short pieces of sine and cosine waves, each piece localized in time.
  • Various wavelet transforms form an even more popular family of time-frequency transformations. The simplest is the Haar transform, which uses a mother wavelet composed of 2 square pulses.
  • The chirplet transform is another time-frequency transform.
  • The fractional Fourier transform (FRFT) is yet another; I find it quite unexpected and surprising that such a thing is even possible, but I am unaware of any practical use.
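The Haar case mentioned above is particularly easy to sketch: one level of the transform is just pairwise averages (coarse signal) and pairwise differences (detail localized in time). This toy version (my own, not from any standard library) is perfectly invertible:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar transform: scaled pairwise sums and differences."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarse approximation
    diff = (x[0::2] - x[1::2]) / np.sqrt(2)  # localized detail
    return avg, diff

def haar_inverse(avg, diff):
    """Invert one Haar level; works because the transform is orthogonal."""
    x = np.empty(2 * len(avg))
    x[0::2] = (avg + diff) / np.sqrt(2)
    x[1::2] = (avg - diff) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0])
avg, diff = haar_step(x)
assert np.allclose(haar_inverse(avg, diff), x)  # perfect reconstruction
```

Compression schemes exploit the fact that for smooth signals most of the `diff` values are near zero and can be discarded cheaply.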
David Cary
    I know at least that the FrFT has applications in optics, specifically, [lens flare ghosts](http://www.angelsghosts.com/images/lens_flare_ghosts_photographs_112008z.jpg) can be described by the FrFT as a kind of continuum between wave diffraction and ray optics. This has applications in CG, movie rendering, and computer image synthesis in general, I guess. – Thomas Jan 04 '14 at 10:19
  • Because of the nature of my question, the answers are likely to be explanatory, so it is hard to call just one of them the answer; all the answers so far could be accepted, but I can choose only one among them. Thanks for your replies, all – Yunus Yurtturk Jan 04 '14 at 22:13

You can use different functions, as long as they form a complete set, which means they can expand any function. It is nice if they are independent, so the expansion is unique. It is also nice if they are orthogonal, so you can compute one expansion coefficient without worrying about the others. Sine waves are all of these. So are Walsh functions, which are basically square waves. There are many others.
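A quick numerical sketch of what orthogonality buys (a toy example with frequencies chosen arbitrarily by me): since $\langle \sin(mx), \sin(nx) \rangle = 0$ on $[-\pi, \pi]$ for $m \neq n$, each coefficient of a sine expansion can be recovered by a single inner product, without touching the others.

```python
import numpy as np

# Uniform samples over one full period.
x = np.linspace(-np.pi, np.pi, 20000, endpoint=False)
dx = x[1] - x[0]

# A signal built from two sine components with known coefficients.
signal = 2.0 * np.sin(3 * x) - 0.5 * np.sin(7 * x)

for n in (3, 5, 7):
    # <signal, sin(nx)> / pi isolates the coefficient of sin(nx).
    c_n = np.sum(signal * np.sin(n * x)) * dx / np.pi
    print(n, round(c_n, 6))  # recovers 2.0, 0.0, -0.5 respectively
```

Each inner product picks out exactly one coefficient; the other components integrate to zero.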

In much physical work, the advantage of sine waves is that the properties of a medium or electric circuit depend on the frequency. You can expand your input into sine waves, calculate how each component is altered (delay, amplification, attenuation), and add them back up to get the output. If your environment is different, that is strong motivation for another set of functions: you would like your basis functions to interact nicely with the environment.
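That expand-alter-recombine recipe can be sketched in a few lines (a toy ideal low-pass "environment"; the sample rate, test frequencies, and cutoff are my own arbitrary choices):

```python
import numpy as np

fs = 1000                                    # assumed sample rate (Hz)
t = np.arange(fs) / fs                       # one second of samples
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.rfft(x)                           # expand into sinusoids
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
H = (freqs < 50).astype(float)               # per-frequency gain: low-pass
y = np.fft.irfft(H * X, n=len(x))            # alter each component, add back up

# The 120 Hz component is removed; the 5 Hz component passes untouched.
assert np.allclose(y, np.sin(2 * np.pi * 5 * t), atol=1e-8)
```

Because each sine wave comes back out of a linear time-invariant system as the same sine wave with a changed amplitude and phase, this per-component bookkeeping is all that is needed.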

Ross Millikan

In almost every field of physics that deals with oscillation (quantum mechanics, AC circuits, and spring mechanics, to mention a few), sinusoidal oscillation is the most basic oscillation. It is described by one of the simplest second-order differential equations (and appears in the solutions of many more complicated ones), and since many physical models are based on second-order differential equations, sinusoids are of great importance.
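Concretely, the equation in question is that of the simple harmonic oscillator,

$$\ddot{x}(t) = -\omega^2 x(t),$$

whose general solution is precisely a sinusoid, $x(t) = A\cos(\omega t) + B\sin(\omega t)$, with the constants $A$ and $B$ fixed by the initial position and velocity.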

In many cases sinusoidal motion can be thought of as the projection of a hypothetical constant-speed rotational motion in the complex plane, which is probably the simplest (non-trivial) oscillation there is. Thus, of all oscillatory decompositions, the sinusoidal one is often of greatest interest.

Sinusoidal functions are also quite simple to work with mathematically: they are analytic and well studied. In contrast, triangular waves are not everywhere differentiable, and rectangular waves are not even continuous.
