92

I haven't taken a complex analysis course yet, but now I have a question that relates to it.

Let's have a look at a very simple example. Suppose $x,y$ and $z$ are Cartesian coordinates and we have a function $z=f(x,y)=\cos(x)+\sin(y)$. Now, however, I replace the $x,y$ plane $\mathbb{R}^2$ with the complex plane and make a new function, $z=\cos(t)+i\sin(t)$.

So, can anyone tell me some famous and fundamental differences between the complex plane and $\mathbb{R}^2$ via this example, such as features $\mathbb{R}^2$ has but the complex plane doesn't, or the other way around? (Actually, I am trying to understand why electrical engineers always want to put a signal into the complex numbers rather than $\mathbb{R}^2$ when a signal is affected by two components.)

Thanks for helping me out!

Zev Chonoles
  • 124,702
  • 19
  • 297
  • 503
Cancan
  • 2,507
  • 5
  • 21
  • 28
  • 12
    In electrical engineering, we use complex numbers instead of points in the 2D plane for one basic reason: you can multiply and divide complex numbers, which you can't do with points. – rurouniwallace Jul 15 '13 at 21:02
  • But, can the multiplication and division still make sense in the context after you do so? – Cancan Jul 15 '13 at 21:06
  • Absolutely. Basically it has to do with magnitude and phase. I'll explain a bit more in my answer. – rurouniwallace Jul 15 '13 at 21:09
  • 6
    it seems to me that complex numbers are simply a handy way of writing this electrical engineering stuff; that's all. No electrical engineer understands anything about metric spaces, topological spaces, fields, etc. That's all they want to do, after all. –  Jul 16 '13 at 00:59
  • 17
    @Heinz: Re: "no electrical engineer understands anything about metric, topological spaces or fields etc.": Would you also say that no mathematician understands anything about electricity? – ruakh Jul 16 '13 at 04:48
  • 4
    @Cancan it might be more appropriate to say that a "useful" and natural canonical product is defined in the complex plane, whereas in R^2 such a useful product appears contrived, arbitrary, and unnatural. – Justin L. Jul 16 '13 at 07:25
  • 1
    I guess the poster is mainly thinking of $\Bbb R^2$ as its role as a *representation* of $\Bbb C$ in terms of real numbers. Another one is as real matrices $\begin{bmatrix}a&b\\-b&a\end{bmatrix}$. We could also ask "what's the difference between $\Bbb C$ and the real matrices of the form $\begin{bmatrix}a&b\\-b&a\end{bmatrix}$? Algebraically and topologically all of these models are equivalent. It's just that working with one or the other can be more useful in certain situations. – rschwieb Jan 13 '14 at 15:18
  • Note that the transformations you describe are incompatible except in some trivial cases. In the first function, $x,y,z$ (in particular $z$) represent real numbers. However, in the second, only $t$ is presumably real (in particular, $z$ now represents a complex quantity). In fact you've not made the relationship between them clear. Your question suggests you're thinking of a transformation between them. If so, you've not defined it. But the main point is your confusing the two $z$'s. If we take them as stated and thus identify the two expressions, we end up with complete nonsense. – Allawonder Sep 02 '19 at 03:18

14 Answers

83

$\mathbb{R}^2$ and $\mathbb{C}$ have the same cardinality, so there are (lots of) bijective maps from one to the other. In fact, there is one (or perhaps a few) that you might call the "obvious" or "natural" bijection, e.g. $(a,b) \mapsto a+bi$. This is more than just a bijection:

  • $\mathbb{R}^2$ and $\mathbb{C}$ are also metric spaces (under the 'obvious' metrics), and this bijection is an isometry, so these spaces "look the same".
  • $\mathbb{R}^2$ and $\mathbb{C}$ are also groups under addition, and this bijection is a group homomorphism, so these spaces "have the same addition".
  • $\mathbb{R}$ is a subfield of $\mathbb{C}$ in a natural way, so we can consider $\mathbb{C}$ as an $\mathbb{R}$-vector space, where it becomes isomorphic to $\mathbb{R}^2$ (this is more or less the same statement as above).

Here are some differences:

  • Viewing $\mathbb{R}$ as a ring, $\mathbb{R}^2$ is actually a direct (Cartesian) product of $\mathbb{R}$ with itself. Direct products of rings in general come with a natural "product" multiplication, $(u,v)\cdot (x,y) = (ux, vy)$, and it is not usually the case that $(u,v)\cdot (x,y) = (ux-vy, uy+vx)$ makes sense or is interesting in general direct products of rings. The fact that it makes $\mathbb{R}^2$ look like $\mathbb{C}$ (in a way that preserves addition and the metric) is in some sense an accident. (Compare $\mathbb{Z}[\sqrt{3}]$ and $\mathbb{Z}^2$ in the same way.)
  • Differentiable functions $\mathbb{C}\to \mathbb{C}$ are not the same as differentiable functions $\mathbb{R}^2\to\mathbb{R}^2$. (The meaning of "differentiable" changes in a meaningful way with the base field. See complex analysis.) The same is true of linear functions. (The map $(a,b)\mapsto (a,-b)$, or $z\mapsto \overline{z}$, is $\mathbb{R}$-linear but not $\mathbb{C}$-linear.) Both points are illustrated in the sketch below.
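Here is that sketch in plain Python (the helper name `c_mult` is invented for illustration, not from any library):

```python
def c_mult(p, q):
    """Multiplication on R^2 pulled back from C via the bijection (a, b) <-> a + bi."""
    (u, v), (x, y) = p, q
    return (u * x - v * y, u * y + v * x)

# (1 + 2i)(3 + 4i) = -5 + 10i, and the pulled-back product agrees.
assert complex(1, 2) * complex(3, 4) == complex(-5, 10)
assert c_mult((1, 2), (3, 4)) == (-5, 10)

# Conjugation z -> z-bar is R-linear ...
conj = lambda z: z.conjugate()
z, r = complex(1, 2), 3.0
assert conj(r * z) == r * conj(z)

# ... but not C-linear: scaling by i does not commute with it.
assert conj(1j * z) != 1j * conj(z)
```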
Billy
  • 4,177
  • 12
  • 19
  • Billy Thanks! I got everything I wanted to know so far from you, theoretically :P You are great! – Cancan Jul 15 '13 at 21:39
  • 18
    Saying that $\mathbb{R}^2$ and $\mathbb{C}$ have the same cardinality isn't saying much; $[0,\,1]$ and $\mathbb{R}^{\mathbb{N}}$ have the same cardinality. – fhyve Jul 15 '13 at 23:15
  • @Cancan No problem! Sorry if this was too theoretical - I got a little carried away. ;) – Billy Jul 16 '13 at 00:41
  • 5
    @fhyve Right - I guess I was just trying to hammer home the point that any structure that can be imposed on one can be imposed on the other (for obvious but somehow stupid reasons), *but* that that new structure isn't necessarily interesting. I can give [0,1] a few topologies and algebraic operations at random, but (it seems to me that) this isn't interesting *because* the structures don't interact. Imposing the structure of $\mathbb{C}$ on $\mathbb{R}^2$ somehow isn't interesting in the same sort of way. You get two multiplications that never talk to each other, for example. – Billy Jul 16 '13 at 00:43
  • $\mathbf{R}^2=\mathbf{C}$ as vector spaces over $\mathbf{R}$, also as metric spaces. Btw your last point is lame –  Jul 16 '13 at 01:03
  • 12
    @Heinz Almost all complex analysis derives from the fact that differentiable functions in $\mathbb{C}$ are different. Not really sure how that's "lame". – Emily Jul 16 '13 at 04:15
  • Because it's obvious; after all, the very definition of differentiability is different –  Jul 16 '13 at 09:43
  • 9
    Why is it obvious that two things which are defined differently are in fact different? – jwg Jul 16 '13 at 10:06
  • 3
    @fhyve: true, cardinality isn't worth much. But they're not only isomorphic but homeomorphic, which is quite a bit more already. – leftaroundabout Jul 16 '13 at 15:58
  • But why do you need to make the bijective map $(a,b) \mapsto a+ib$? Why not just remark that $(a,b)=a+ib$? –  Jan 12 '14 at 14:03
  • 1
    @laovultai: Because they're not "equal". They're *isomorphic*, but in order to prove that, I have to choose an isomorphism. I chose the "obvious" one, $(a,b) \mapsto a+ib$. (There are lots more, e.g. $(a,b) \mapsto 2a + b - 5ia$, or $(a,b) \mapsto ia - b$.) – Billy Jan 17 '14 at 03:49
  • @Billy: I'm interested. Where can I find this approach, where you make them isomorphic? Do you mean by "it" the last operation on $a,b\in \mathbb{R}^2$? Do you mean that it works "in a way that preserves addition and the metric", and if yes, how does this previous operation preserve addition and the metric? –  Jan 22 '14 at 15:34
  • @laovultai: Let $f$ be the map $\mathbb{R}^2\to \mathbb{C}$, $(a,b) \mapsto a+ib$, and let $g$ be *any* element of $\mathrm{GL}_2(\mathbb{R})$. Then $f\circ g$ is an isomorphism $\mathbb{R}^2\to \mathbb{C}$ as $\mathbb{R}$-vector spaces. Now simply "pull back" the multiplication from $\mathbb{C}$ to $\mathbb{R}^2$ along the map $f\circ g$ (e.g. when $g$ is the identity map, the multiplication inherited is $(u,v)\cdot (x,y)=(ux−vy,uy+vx)$), and you get an isomorphism of rings (or, equivalently, $\mathbb{C}$-vector spaces). Does that answer your question? – Billy Jan 25 '14 at 14:49
  • (You might also find it helpful to note that the "natural" multiplication on $\mathbb{R}^2$, namely $(u,v)\cdot (x,y) = (ux, vy)$, does not agree with the natural multiplication on $\mathbb{C}$ for *any* choice of $g\in \mathrm{GL}_2(\mathbb{R})$.) – Billy Jan 25 '14 at 14:54
  • @Billy: I found only GL((something),(something)). Is this GL an Abelian group? Secondly, is "it" the same as $(u,v)\cdot(x,y)=(ux−vy,uy+vx)$ (1) in your answer? Also, in your answer I'm not sure whether you mean that this (1) (in brackets) preserves addition and the metric. If yes, then I'm not sure how this operation preserves addition and the metric (what metric? Do you mean the modulus of a complex number $z$?) –  Jan 27 '14 at 19:38
40

The big difference between $\mathbb{R}^2$ and $\mathbb{C}$: differentiability.

In general, a function from $\mathbb{R}^n$ to itself is differentiable if there is a linear transformation $J$ such that

$$\lim_{h \to 0} \frac{\mathbf{f}(\mathbf{x}+\mathbf{h})-\mathbf{f}(\mathbf{x})-\mathbf{J}\mathbf{h}}{\|\mathbf{h}\|} = 0$$

where $\mathbf{f}, \mathbf{x}, $ and $\mathbf{h}$ are vector quantities.

In $\mathbb{C}$, we have a stronger notion of differentiability given by the Cauchy-Riemann equations:

$$\begin{align*} f(x+iy) &\stackrel{\textrm{def}}{=} u(x,y)+iv(x,y) \\ u_x &= v_y, \\ u_y &= -v_x. \end{align*} $$

These equations, if satisfied, certainly give rise to such a linear transformation as required; however, the definition of complex multiplication and division requires that these equations hold in order for the limit

$$\lim_{h\ \to\ 0} \frac{f(z+h)-f(z)-Jh}{h} = 0$$

to exist. Note the difference here: we divide by $h$, not by its modulus.


In essence, multiplication between elements of $\mathbb{R}^2$ is not generally defined (although we could define one if we wanted to), nor is division (which we could also attempt, given how we define multiplication). Not having these things means that differentiability in $\mathbb{R}^2$ is a little more "topological" -- we're not overly concerned with where $\mathbf{h}$ is, just that it gets small, and that a non-singular linear transformation exists at the point of differentiation. This all stems from the generalization of the inverse function theorem, which can be approached essentially topologically.

In $\mathbb{C}$, since we can divide by $h$ -- because we have a rigorous notion of multiplication and division -- we want to ensure that the derivative exists independent of the path $h$ takes. If there is some trickery due to the path $h$ is taking, we can't wash it away with topology quite so easily.

In $\mathbb{R}^2$, the question of path independence is less obvious and less severe. Complex-differentiable functions are analytic, and in the reals we can have differentiable functions that are not analytic; in $\mathbb{C}$, differentiability implies analyticity.


Example:

Consider $f(x+iy) = x^2-y^2+2ixy$. We have $u(x,y) = x^2-y^2$, and $v(x,y) = 2xy$. It is trivial to show that $$u_x = 2x = v_y, \\ u_y = -2y = -v_x,$$ so this function is analytic. If we take this over the reals, we have $f_1 = x^2-y^2$ and $f_2 = 2xy$, then $$J = \begin{pmatrix} 2x & -2y \\ 2y & 2x \end{pmatrix}.$$ Taking the determinant, we find $\det J = 4x^2+4y^2$, which is non-zero except at the origin.

By contrast, consider $f(x+iy) = x^2+y^2-2ixy$. Then,

$$u_x = 2x \neq -2x = v_y, \\ u_y = -2y \neq 2y = -v_x,$$

so the function is not complex-differentiable.

However, $$J = \begin{pmatrix} 2x & 2y \\ -2y & -2x \end{pmatrix}$$ which is not everywhere singular, so we can certainly obtain a real-valued derivative of the function in $\mathbb{R}^2$.
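To see the direction-dependence concretely, here is a rough numerical sketch comparing the two examples above (plain Python; the function names are invented):

```python
# The complex difference quotient of f(z) = z^2 is direction-independent,
# while that of f(x+iy) = x^2 + y^2 - 2ixy depends on the direction of h.

def f_analytic(z):          # z^2: u = x^2 - y^2, v = 2xy
    return z * z

def f_not_analytic(z):      # u = x^2 + y^2, v = -2xy
    x, y = z.real, z.imag
    return complex(x * x + y * y, -2 * x * y)

def quotient(f, z, h):
    return (f(z + h) - f(z)) / h

z = complex(1, 1)
for f in (f_analytic, f_not_analytic):
    along_real = quotient(f, z, 1e-6)     # h -> 0 along the real axis
    along_imag = quotient(f, z, 1e-6j)    # h -> 0 along the imaginary axis
    print(f.__name__, along_real, along_imag)

# f_analytic: both quotients approach 2z = 2 + 2i.
# f_not_analytic: roughly 2 - 2i along the real axis but -2 - 2i along the
# imaginary axis, so the limit as h -> 0 cannot exist.
```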

Emily
  • 34,328
  • 6
  • 89
  • 135
  • 1
    Could you please give me a concrete example to demonstrate this? – Cancan Jul 15 '13 at 20:45
  • Sure, let me think one up. – Emily Jul 15 '13 at 20:53
  • 2
    This is kind of an apples and oranges thing. The analogues of complex differentiable functions in real spaces are those with vanishing divergence and curl. Trying to use the limit definition confuses things with directional derivatives, which are quite different. – Muphrid Jul 15 '13 at 20:54
  • The example was derived from Bak & Newman – Emily Jul 15 '13 at 21:07
  • 2
    @Muphrid Isn't it the whole point to show that these two things are different fruits? There are plenty of analogues between reals and complex numbers. The limit definition for complex numbers is effectively structurally the same as in single-variable reals. The notion of analyticity is effectively the same. The concept of a Taylor series is the same. Topology is extremely similar, what with the complex number concept that the modulus is also a norm. The one fundamental thing that is different is differentiation. – Emily Jul 15 '13 at 21:10
  • @Arkamis For all the theoretical explanations here, you are just great! – Cancan Jul 15 '13 at 21:13
  • I agree with Arkamis. The limit definition may or may not be confusing, but the point is that, by applying the 'same' definition to $\mathbb{R}^2$ and $\mathbb{C}$, you get different things. – Billy Jul 15 '13 at 21:14
  • Isn't it a little artificial to say that the big difference is differentiability when it's really that $\mathbb{C}$ is a division ring that accounts for the different notion of differentiability? – Cameron Williams Jul 15 '13 at 22:19
  • No more artificial than creating the notion of a division ring and noticing that $\mathbb{C}$ happens to be one. Note that $\langle \mathbb{R}^2,+,\cdot\rangle$ is also a division ring where $a\cdot b = (a_1b_1-a_2b_2,a_1b_2+a_2b_1)$ is so defined. – Emily Jul 15 '13 at 22:35
14

I'll explain this more from an electrical engineer's perspective (which I am) than a mathematician's perspective (which I'm not).

The complex plane has several useful properties which arise due to Euler's identity:

$$Ae^{i\theta}=A(\cos(\theta)+i\sin(\theta))$$

Unlike points in the real plane $\mathbb{R}^2$, complex numbers can be added, subtracted, multiplied, and divided. Multiplication and division have a useful meaning that comes about due to Euler's identity:

$$Ae^{i\theta_1}\cdot{Be^{i\theta_2}}=ABe^{i(\theta_1+\theta_2)}$$

$$Ae^{i\theta_1}/{Be^{i\theta_2}}=\frac{A}{B}e^{i(\theta_1-\theta_2)}$$

In other words, multiplying two numbers in the complex plane does two things: multiplies their absolute values, and adds together the angle that they make with the real number line. This makes calculating with phasors a simple matter of arithmetic.
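For instance, here is a minimal sketch using Python's standard `cmath` module (the magnitudes and angles are made up):

```python
import cmath, math

# Two made-up phasors: 2 at 30 degrees and 3 at 45 degrees.
p1 = cmath.rect(2, math.radians(30))
p2 = cmath.rect(3, math.radians(45))

mag, phase = cmath.polar(p1 * p2)
print(mag, math.degrees(phase))   # ~6.0 at ~75.0: magnitudes multiply, angles add

mag, phase = cmath.polar(p1 / p2)
print(mag, math.degrees(phase))   # ~0.667 at ~-15.0: magnitudes divide, angles subtract
```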

As others have stated, addition, subtraction, multiplication, and division can simply be defined likewise on $\mathbb{R}^2$, but it makes more sense to use the complex plane, because this is a property that comes about naturally from the definition of imaginary numbers: $i^2=-1$.

rurouniwallace
  • 5,965
  • 3
  • 26
  • 49
7

The difference is that in the complex plane, you've got a multiplication $\mathbb C\times\mathbb C\to\mathbb C$ defined, which makes $\mathbb C$ into a field (which basically means that all the usual rules of arithmetics hold.)

celtschk
  • 41,315
  • 8
  • 68
  • 125
  • I'm not sure this answers this question. If the OP knows enough math he'll just say the same is true for $\Bbb R^2$. – Git Gud Jul 15 '13 at 20:23
  • Right, that's also what I want to ask. Isn't the same for $\mathbb{R}^2$ – Cancan Jul 15 '13 at 20:24
  • 1
    @GitGud: What would that multiplication in $\mathbb R^2$ be? – celtschk Jul 15 '13 at 20:24
  • We can't say what that multiplication is until we define dot product or cross product or whatever product. – Cancan Jul 15 '13 at 20:26
  • @celtschk $(a,b)\cdot (x,y)=(ax-by, ay+bx)$. – Git Gud Jul 15 '13 at 20:26
  • 1
    Multiplication is different: $(a,b)\cdot (c,d) = ac+bd$ in $\mathbb{R}^2$ (dot product of 2 vectors); in $\mathbb{C}$ it is $(a+ib) \cdot (c+id) = (ac-bd) + i(ad+bc)$. – Oleg567 Jul 15 '13 at 20:26
  • 5
    @GitGud: As soon as you *add(!)* that multiplication to $\mathbb R^2$, you *have* $\mathbb C$. – celtschk Jul 15 '13 at 20:28
  • @celtschk Exactly, hence the OP's question. – Git Gud Jul 15 '13 at 20:28
  • 4
    @Oleg567 I'm not sure why you're talking about the dot product. Just because the word *product* is in the name of the operation it doesn't make it particularly relevant. Also the dot product isn't a function from $\Bbb R^2$ to $\Bbb R^2$. – Git Gud Jul 15 '13 at 20:29
  • 1
    @GitGud: Then in which way does my answer *not* answer his question? He asked for the difference between $\mathbb R^2$ and $\mathbb C$, and the difference is that you've got the multiplication available in $\mathbb C$. – celtschk Jul 15 '13 at 20:30
  • @celtschk And so you do in $\Bbb R^2\ldots$ – Git Gud Jul 15 '13 at 20:31
  • 2
    @GitGud: No. You can *go on and define it yourself* (and then arrive at $\mathbb C$), but you don't have it *available.* – celtschk Jul 15 '13 at 20:33
  • 1
    @celtschk I don't know what it means to have it available. – Git Gud Jul 15 '13 at 20:34
  • Guys, calm down, thanks for your help, but please don't fight for nothing :P And back to my question: for the multiplication part, I guess I can understand this theoretically, but I am still wondering why people always put something like a signal into the complex plane. Is it because of some special property the complex plane has that $\mathbb{R}^2$ doesn't? That's my main concern. – Cancan Jul 15 '13 at 20:35
  • 3
    @GitGud: It means it is part of the definition of the object under consideration. – celtschk Jul 15 '13 at 20:35
  • @Cancan Ahaha, no one's fightning. We're discussing. Don't worry. – Git Gud Jul 15 '13 at 20:35
  • It's all about context and convention, really, but it also makes it a lot easier to set it up so that $\Bbb R \subseteq \Bbb C$ and to make sense of things like vector spaces over $\Bbb C$. – dfeuer Jul 15 '13 at 20:35
  • Cancan, I think it tends to be done because these pairs of quantities tend to transform together in some sense. Look up "impedance", for example. – dfeuer Jul 15 '13 at 20:37
  • @celtschk We're implicity talking about fields here. If we're gonna be rigorous about it (definining $\Bbb C$ and the operations), we're gonna get that the fields $\Bbb R$ and $\Bbb C$ are either equal or, if not equal, isomorphic. But that's looking at the fields and not at the sets. – Git Gud Jul 15 '13 at 20:39
  • @dfeuer But in most cases, how can we know if those correlations are legitimate enough for us to use complex plane? I actually don't wanna make a decision of using complex plane because convention says so :P – Cancan Jul 15 '13 at 20:40
  • @GitGud: If you look only at the sets, all you can say is that both have the same cardinality. Indeed, you cannot even distinguish between $\mathbb R$ and $\mathbb R^2$ at the level of sets, because they also have the same cardinality. (You may claim that you can distinguish because $\mathbb R^2$ consists of pairs, but then, you can (a) define $\mathbb R$ as pairs, too (original Dedekind cuts!), and (b) the real numbers are not identical with a single model of the real numbers.) – celtschk Jul 15 '13 at 20:42
  • @celtschk I didn't mean to talk about cardinality. I meant to compare them simply as they are. (Everything is a set). This means getting back to the definitions and that's too much wandering for level of detail one would want in this answer. For the record, I'm not the downvoter. – Git Gud Jul 15 '13 at 20:45
  • 1
    Cancan, it's a practical matter, not a philosophical one. It *does* work out like that. – dfeuer Jul 15 '13 at 20:45
  • 2
    @GitGud: OK, then *which* definition of the real numbers do you consider the "correct" one? The one using Dedekind cuts? (And in this case, the original or the modern one?) Or the one using Cauchy sequences? My point is that it is not the *set* which makes the real numbers real numbers (although you certainly need that, too), but the *operations* which are defined on that set. This is sometimes obscured by the sloppy way we write things (namely using the same symbol for the complete structure and the underlying set), but it is really the structure that counts. – celtschk Jul 15 '13 at 20:51
  • @celtschk Then what is the natural structure in $\Bbb R^2$? If it is the one which yields $\Bbb C$, then they are the same. If it isn't, then (probably) they aren't the same. And this would lead to a philosophical discussion about what's the natural structure to consider over $\Bbb R^2$. – Git Gud Jul 15 '13 at 20:54
  • 3
    If we understand, as usual, $\mathbb R^2$ as the $\mathbb R$-vector space of pairs of real number under element-wise addition and element-wise scalar multiplication, then the structure of $\mathbb R^2$ (I don't use "natural" here because in the end, it's all part of the definition) consists of: The set $\mathbb R^2$, the field $\mathbb R$ of real numbers (with all of *its* structure), the operation $+: \mathbb R^2\times\mathbb R^2\to\mathbb R^2$, and the operation $\cdot: \mathbb R\times\mathbb R^2\to\mathbb R^2$. – celtschk Jul 15 '13 at 21:08
  • 1
    @GitGud Why does there have to be just one? It has a natural structure as a topological product space, as a vector space over $\Bbb R$, and as the complex field. And that's before you get to the quotient that produces the plane in polar coordinates. – dfeuer Jul 15 '13 at 21:14
6

If $X = \mathbb C$ (a one-dimensional vector space over the scalar field $\mathbb C$), [its] balanced sets are $\mathbb C$, the empty set $\emptyset$, and every circular disc (open or closed) centered at $0$. If $X = \mathbb R^2$ (a two-dimensional vector space over the scalar field $\mathbb R$), there are many more balanced sets; any line segment with midpoint at $(0,0)$ will do. The point is that, in spite of the well-known and obvious identification of $\mathbb C$ with $\mathbb R^2$, these two are entirely different as far as their vector space structure is concerned.

-W. Rudin (1973)
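Rudin's example can be spot-checked numerically. A small sketch (plain Python; the membership test `in_segment` is an invented helper, and checking sample points is of course not a proof):

```python
# A set S is balanced when c*S is contained in S for every scalar with |c| <= 1.
# Model points of the plane as complex numbers; take the segment {t + 0i : |t| <= 1}.

def in_segment(z, tol=1e-12):
    return abs(z.imag) < tol and abs(z.real) <= 1 + tol

samples = [complex(t / 10, 0) for t in range(-10, 11)]   # points of the segment

# Over R: real scalars c with |c| <= 1 keep every sample inside the segment.
assert all(in_segment(c / 10 * z) for c in range(-10, 11) for z in samples)

# Over C: the scalar i has |i| = 1, yet i * (1, 0) = (0, 1) leaves the segment,
# so the segment is not balanced when the scalar field is C.
assert not in_segment(1j * complex(1, 0))
```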

Umberto P.
  • 49,544
  • 4
  • 42
  • 90
  • But can I say $\mathbb{C}$ is a vector with one direction but $\mathbb{R}^2$ is a vector with 2 directions? – Cancan Jul 15 '13 at 20:32
  • I'm not sure I agree with this. In my opinion the proper comparation would be comparing $\Bbb C/ \Bbb R$ with $\Bbb R^2/\Bbb R$. – Git Gud Jul 15 '13 at 20:32
  • @Cancan what do you mean when you say that $\mathbb{R}^2$ or $\mathbb{C}$ are "vectors". – Squirtle Jul 16 '13 at 02:43
  • @Cancan: Neither $\mathbb{C}$ nor $\mathbb{R}^2$ are vectors. But you _can_ say that $\mathbb{C}$ is a $1$-dimensional $\mathbb{C}$-vector space, and that $\mathbb{R}^2$ is a $2$-dimensional $\mathbb{R}$-vector space. – Jesse Madnick Jul 16 '13 at 03:39
6

The relationship between $\mathbb C$ and $\mathbb R^2$ becomes clearer using Clifford algebra.

Clifford algebra admits a "geometric product" of vectors (and more than just two vectors). The so-called complex plane can instead be seen as the algebra of geometric products of two vectors.

These objects--geometric products of two vectors--have special geometric significance, both in 2d and beyond. Each product of two vectors describes a pair of reflections, which in turn describes a rotation, specifying not only the unique plane of rotation but also the angle of rotation. This is at the heart of why complex numbers are so useful for rotations; the generalization of this property to 3d generates quaternions. For this reason, these objects are sometimes called spinors.

On the 2d plane, for every vector $a$, there is an associated spinor $a e_1$, formed using the geometric product. It is this explicit correspondence that is used to convert vector algebra and calculus on the 2d plane to the algebra and calculus of spinors--of "complex numbers"--instead. Hence, much of the calculus that one associates with complex numbers is instead intrinsic to the structure of the 2d plane.
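Here is a minimal sketch of that correspondence (a hand-rolled geometric product for the plane algebra $\mathrm{Cl}_{2,0}$; the tuple encoding and the helper `gp` are invented for illustration):

```python
# Multivectors of the plane algebra encoded as tuples (s, x, y, b), standing for
# s + x*e1 + y*e2 + b*e12, where e1^2 = e2^2 = 1 and e12 = e1 e2.

def gp(A, B):
    """Geometric product, expanded from the multiplication table of e1 and e2."""
    s1, x1, y1, b1 = A
    s2, x2, y2, b2 = B
    return (s1*s2 + x1*x2 + y1*y2 - b1*b2,
            s1*x2 + x1*s2 - y1*b2 + b1*y2,
            s1*y2 + y1*s2 + x1*b2 - b1*x2,
            s1*b2 + b1*s2 + x1*y2 - y1*x2)

e12 = (0, 0, 0, 1)

# The bivector e12 squares to -1, playing the role of i.
assert gp(e12, e12) == (-1, 0, 0, 0)

# The product of two vectors is (dot product) + (wedge product) * e12: a spinor.
a = (0, 2, 3, 0)                   # the vector 2*e1 + 3*e2
b = (0, 5, 7, 0)                   # the vector 5*e1 + 7*e2
assert gp(a, b) == (2*5 + 3*7, 0, 0, 2*7 - 3*5)

# Spinors s + b*e12 multiply exactly like the complex numbers s + bi.
u, v = (1, 0, 0, 2), (3, 0, 0, 4)
assert gp(u, v) == (1*3 - 2*4, 0, 0, 1*4 + 2*3)   # matches (1+2i)(3+4i) = -5+10i
```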

For example, the residue theorem tells us about meromorphic functions' integrals; there is an equivalent vector analysis that tells us about integrals of vector functions whose divergences are delta functions. This involves using Stokes' theorem. There is a very tight relationship between holomorphic functions and vector fields with vanishing divergence and curl.

For this reason, I regard much of the impulse to complexify problems on real vector spaces as inherently misguided. Often, but not always, there is simply no reason to do so. Many results of "complex analysis" have real equivalents, and glossing over them deprives students of powerful theorems that would be useful outside of 2d.

Muphrid
  • 18,790
  • 1
  • 23
  • 56
4

Since everyone is defining the space, I figured I could give an example of why we use it (relating to your "Electrical Engineering" reference). The $i$ itself is what makes using complex numbers/variables ideal for numerous applications. For one, note that:

\begin{align*} i^1 &= \sqrt{-1}\\ i^2 &= -1\\ i^3 &= -i\\ i^4 &= 1. \end{align*} In the complex (real-imaginary) plane, this corresponds to a rotation, which is easier to visualize and manipulate mathematically. These four powers "repeat" themselves, so for geometrical applications (versus real number manipulation), the math is more explicit.
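A tiny sketch of the four-cycle and the rotation it encodes (plain Python):

```python
# Powers of i cycle with period four: i, -1, -i, 1, i, -1, ...
print([1j ** n for n in range(1, 9)])

# Multiplying by i rotates a point 90 degrees counterclockwise: (x, y) -> (-y, x).
z = complex(3, 1)
print(z * 1j)   # (-1+3j), i.e. the point (3, 1) rotated to (-1, 3)
```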

One of the immediate applications in Electrical Engineering relates to Signal Analysis and Processing. For example, Euler's formula: $$ re^{i\theta}=r\cos\theta +ir\sin\theta $$ relates complex exponentials to trigonometric formulas. Many times, in audio applications, a signal needs to be decomposed into a series of sinusoidal functions because you need to know their individual amplitudes ($r$) and phase angles ($\theta$), maybe for filtering a specific frequency:

[Figure: the Fourier transform ("FT") decomposing a time-domain signal into a sum of sinusoids]

This means the signal is being moved from the time domain, where (time, amplitude) = $(t,y)$, to the frequency domain, where (sinusoid magnitude, phase) = $(r,\theta)$. The Fourier Transform (denoted "FT" in the picture) does this, and uses Euler's formula to express the original signal as a sum of sinusoids of varying magnitude and phase angle. Doing further signal analysis in the $\mathbb{R}^2$ domain isn't nearly as "clean" computationally.
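As a rough sketch of that workflow (using `numpy`; the signal, sample rate, and frequencies are invented for the example):

```python
import numpy as np

fs, n = 128, 128                        # 1 second of samples; bins fall on integer Hz
t = np.arange(n) / fs

# A made-up signal: amplitude 3 at 5 Hz with phase pi/4, amplitude 1.5 at 12 Hz.
x = (3 * np.cos(2 * np.pi * 5 * t + np.pi / 4)
     + 1.5 * np.cos(2 * np.pi * 12 * t - np.pi / 3))

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, d=1 / fs)

for k in (5, 12):
    r = 2 * np.abs(X[k]) / n            # recover the sinusoid magnitude r
    theta = np.angle(X[k])              # recover the phase angle theta
    print(freqs[k], round(r, 3), round(theta, 3))
# prints 5.0 Hz with r = 3.0, theta = 0.785 (pi/4),
# and 12.0 Hz with r = 1.5, theta = -1.047 (-pi/3)
```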

Glorfindel
  • 3,929
  • 10
  • 24
  • 36
Kendra Lynne
  • 368
  • 1
  • 5
3

My thought is this: $\mathbb{C}$ is not $\mathbb{R}^2$. However, $\mathbb{R}^2$ paired with the operation $(a,b) \star (c,d) = (ac-bd, ad+bc)$ provides a model of the complex numbers. However, there are others. For example, a colleague of mine insists that complex numbers are $2 \times 2$ matrices of the form: $$ \left[ \begin{array}{cc} a & -b \\ b & a \end{array} \right] $$ but another insists, no, complex numbers have the form $$ \left[ \begin{array}{cc} a & b \\ -b & a \end{array} \right] $$ but they both agree that complex multiplication and addition are mere matrix multiplication rules for a specific type of matrix.

Another friend says, no, that's nonsense: you can't teach matrices to undergraduates, they'll never understand it. Maybe they'll calculate it, but they won't really understand. Students get algebra. We should model the complex numbers as the quotient of the polynomial ring $\mathbb{R}[x]$ by the ideal generated by $x^2+1$; in fact, $$ \mathbb{C} = \mathbb{R}[x]/\langle x^2+1\rangle.$$

So, why is it that $\mathbb{C} = \mathbb{R}^2$ paired with the operation $\star$? Because it is easily implemented by the rule $i^2=-1$, proceeding normally otherwise. In other words, if you know how to do real algebra, then the rule $i^2=-1$ paired with those real algebra rules gets you fairly far -- at least until you face the dangers of exponents. For example, $$ -1 = \sqrt{-1} \sqrt{-1} = \sqrt{(-1)(-1)} = \sqrt{1} = 1 $$ oops. Of course, this is easily remedied either by choosing a branch of the square root or working with sets of values as opposed to single-valued functions.
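As a quick check that the matrix model really multiplies like $\mathbb{C}$, here is a sketch using `numpy` (the helper `M` is made up for illustration):

```python
import numpy as np

def M(a, b):
    """The 2x2 matrix model [[a, -b], [b, a]] of the complex number a + bi."""
    return np.array([[a, -b], [b, a]])

a, b, c, d = 1.0, 2.0, 3.0, 4.0

# The matrix product of two models is the model of the complex product ...
w = complex(a, b) * complex(c, d)
assert np.allclose(M(a, b) @ M(c, d), M(w.real, w.imag))

# ... and matrix addition matches complex addition, so the matrices reproduce
# the whole field structure. (The colleague's transposed form works equally well.)
s = complex(a, b) + complex(c, d)
assert np.allclose(M(a, b) + M(c, d), M(s.real, s.imag))
```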

All of this said, I like Rudin's answer for your question.

Pedro
  • 116,339
  • 16
  • 202
  • 362
James S. Cook
  • 16,540
  • 3
  • 42
  • 100
  • May I understand it this way: the main difference between $\mathbb{R}^2$ and $\mathbb{C}$, from the definitional point of view, is that the multiplications defined on them are different. Because of this, a lot of differences are generated, such as the difference in differentiability, etc. – Cancan Jul 16 '13 at 08:23
  • 2
    @Cancan yes, the choice of definition is merely that, a choice. However, the difference between real and complex differentiability is multifaceted and highly nontrivial. There really is no particular multiplication defined on $\mathbb{R}^2$; in fact, you could define several other multiplications on that point set. For example, $j^2=1$ leads to the hyperbolic numbers, or $\epsilon^2=0$ gives the null-numbers. There are all sorts of other things to do with $\mathbb{R}^2$. – James S. Cook Jul 16 '13 at 17:01
2

There are plenty of differences between the $\mathbb{R}^2$ plane and the $\mathbb{C}$ plane. Here I give you two interesting ones.

First, about branch points and branch lines. Suppose that we are given the function $w=z^{1/2}$. Suppose further that we allow $z$ to make a complete circuit around the origin counterclockwise, starting from a point $A$ different from the origin. If $z=re^{i\theta}$, then $w=\sqrt re^{i\theta/2}$.

At point $A$, $\theta =\theta_1$, so $w=\sqrt re^{i\theta_1/2}$.

While after completing the circuit, back to point $A$,
$\theta =\theta_1+2\pi$, so $w=\sqrt re^{i(\theta_1+2\pi)/2}=-\sqrt re^{i\theta_1/2}$.

The problem is, if we consider $w$ as a function, we cannot get the same value at the same point. To fix this, we introduce Riemann surfaces. Imagine the whole $\mathbb{C}$ plane as two sheets superimposed on each other. On the sheets there is a line which indicates the real axis. Cut the two sheets simultaneously along the POSITIVE real axis. Imagine the lower edge of the bottom sheet joined to the upper edge of the top sheet.

We call the origin a branch point and the positive real axis the branch line in this case.

Now the surface is complete: travelling the circuit, you start on the top sheet and, after one complete circuit, you pass to the bottom sheet. Travelling again, you return to the top sheet. In this way $\theta_1$ and $\theta_1+2\pi$ become two different points (on the top and the bottom sheet respectively), and give the two different values.
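Numerically, the sign flip looks like this (a small sketch in plain Python; the radius and starting angle are arbitrary):

```python
import cmath

r, theta1 = 4.0, 0.3                       # an arbitrary starting point A

def w(theta):
    """The branch w = sqrt(r) * e^{i theta / 2}, followed along the circuit."""
    return cmath.sqrt(r) * cmath.exp(1j * theta / 2)

before = w(theta1)                  # value of w at A
after = w(theta1 + 2 * cmath.pi)    # value at A after one full counterclockwise circuit

print(before, after)                # the same point z, but opposite values of w
print(abs(after + before))          # ~0, confirming after = -before
```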

The other thing is: in the $\mathbb{R}^2$ case, the existence of $f'(x)$ does not imply the existence of $f''(x)$. Try thinking about $f(x)=x^2$ if $x\ge0$ and $f(x)=-x^2$ when $x<0$. But in the $\mathbb{C}$ plane, if $f'(z)$ exists (we say $f$ is analytic), this guarantees that $f''(z)$, and thus every $f^{(n)}(z)$, exists. This comes from Cauchy's integral formula.

I am not going to give you the proof, but if you are interested, you should know the Cauchy-Riemann equations first: $w=f(z)=f(x+yi)=u(x,y)+iv(x,y)$ is analytic iff it satisfies both $\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}$ and $\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$. The proof comes simply from the definition of differentiation. Thus, once you have $u(x,y)$, you can find $v(x,y)$ from the above equations, making $f(z)$ analytic.

Y.H. Chan
  • 2,327
  • 16
  • 36
2

To augment Kendra Lynne's answer, what does it mean to say that signal analysis in $\mathbb{R}^2$ isn't as 'clean' as in $\mathbb{C}$?

Fourier series are the decomposition of periodic functions into an infinite sum of 'modes' or single-frequency signals. If a function defined on $\mathbb{R}$ is periodic, say (to make the trigonometry easier) with period $2\pi$, we might as well just consider the piece whose domain is $(-\pi, \pi]$.

If the function is real-valued, we can decompose it in two ways: as a sum of sines and cosines (and a constant): $$ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(nx) + \sum_{n=1}^{\infty} b_n \sin(nx)$$ There is a formula for the $a_k$ and the $b_k$. There is an asymmetry in that $k$ starts at $0$ for $a_k$ and at $1$ for $b_k$. There is a formula in terms of $\int_{-\pi}^{\pi} f(x) \cos(kx)dx$ for the $a_k$ and a similar formula for the $b_k$. We can write a formula for $a_0$ which has the same integral but with $\cos(0x) = 1$; unfortunately, we then have to divide by 2 to make it consistent with the other formulae. $b_0$ would always be $0$ if it existed, and doesn't tell us anything about the function.

Although we wanted to decompose our function into modes, we actually have two terms for each frequency (except the constant frequency). If we wanted to, say, differentiate the series term-by-term, we would have to use different rules to differentiate each term, depending on whether it's a sine or a cosine term, and the derivative of each term would be a different type of term, since sine goes to cosine and vice versa.

We can also express the Fourier series as a single series of shifted cosine waves, by transforming $$ a_k \cos(kx) + b_k \sin(kx) = r_k \cos(kx + \theta_k) .$$ However, we have now lost the fact of expressing all functions as a sum of the same components. If we want to add two functions expressed like this, we have to separate the $r$ and $\theta$ back into $a$ and $b$, add, and transform back. We also still have a slight asymmetry -- $r_0$ has a meaning but $\theta_0$ is always $0$.

The same Fourier series using complex numbers is the following: $$ \sum_{n=-\infty}^{\infty} a_n e^{inx} .$$ This expresses a function $(-\pi, \pi] \rightarrow \mathbb{C}$. We can add two functions by adding their coefficients, and we can even work out the energy of a signal as a simple calculation (each component $e^{ikx}$ has the same energy). Differentiating or integrating term-by-term is easy, since we are within a constant of differentiating $e^x$. A real-valued function has $a_{-n} = \overline{a_n}$ for all $n$ (which is easy to check). $a_n$ all being real, $a_{2n}$ being zero for all $n$, or $a_n$ being zero for all $n < 0$ all express important and simple classes of periodic functions.
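Here is a sketch of these claims for a concrete real-valued signal (coefficients computed by a simple Riemann sum with `numpy`; the test function is invented):

```python
import numpy as np

N = 256
x = -np.pi + 2 * np.pi * np.arange(N) / N        # uniform grid on [-pi, pi)
f = np.cos(2 * x) + 0.5 * np.sin(3 * x)          # a real-valued test function

def coeff(n):
    """a_n = (1/2pi) * integral of f(x) e^{-inx} dx, approximated by a Riemann sum."""
    return np.mean(f * np.exp(-1j * n * x))

# Real-valued f gives conjugate-symmetric coefficients: a_{-n} = conj(a_n).
for n in (1, 2, 3):
    assert np.isclose(coeff(-n), np.conj(coeff(n)))

print(coeff(2))   # ~0.5     (cos(2x) = (e^{2ix} + e^{-2ix}) / 2)
print(coeff(3))   # ~-0.25j  (sin(3x) = (e^{3ix} - e^{-3ix}) / (2i))
```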

We can also define $z = e^{ix}$ and now the Fourier series is actually a Laurent series: $$ \sum_{n=-\infty}^{\infty} a_n z^{n} .$$

The Fourier series with $a_n = 0$ for all $n < 0$ is a Taylor series, and the one with $a_n$ all real is a Laurent series for a function $\mathbb{R} \rightarrow \mathbb{R}$. We are drawing a deep connection between the behavior of a complex function on the unit circle and its behavior on the real line - either of these is enough to specify the function uniquely, given a couple of quite general conditions.

jwg
  • 2,666
  • 19
  • 28
0

For the sake of easy communication, it is common to identify $\ \mathbb C\ $ and $\ \mathbb R^2\ $ via the algebraic identification of $\ \mathbb C\ $ with the field $\mathbb R[i]/(i^2+1).\ $ However, there are many other equivalent ways to define $\ \mathbb C,\ $ e.g. as $\mathbb R[\epsilon]/(\epsilon^2+\epsilon+1).\ $ Thus, in principle, an axiomatic way would be cleaner -- for instance, as an algebraically closed field with an automorphism called conjugation, etc.


Complex analysis feels very different from real analysis. Formally, the vector spaces are different in an essential way; e.g. there is always an eigenvalue and an eigenvector over $\ \mathbb C\ $ but not always over $\ \mathbb R.\ $ The complex field is much more algebraic and geometric. The real smooth (infinitely differentiable) functions on manifolds are very flexible (see partitions of unity!); they remind you of the real-valued continuous functions on normal and paracompact topological spaces. On the other hand, complex-differentiable functions are right away infinitely differentiable (analytic); they are quite rigid, and they feel almost like polynomials. To Riemann, analytic functions were global creatures rather than local. Euler already looked at analytic functions as infinite-degree polynomials, and that is how he was able to find/compute $\ \sum_{n=1}^\infty\, \frac 1{n^2}\ =\ \pi^2/6.$
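Euler's value is easy to check numerically; a throwaway sketch:

```python
import math

print(sum(1 / n**2 for n in range(1, 100_001)))  # 1.64492...
print(math.pi**2 / 6)                            # 1.64493...
```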

And this goes on and on.

Wlod AA
  • 1,922
  • 5
  • 7
0

The basic difference between $\mathbb C$ and $\mathbb R^2$ that makes electrical engineers prefer working with complex quantities is that $\mathbb C$ is not usually thought of as just a set (yes, it's an abuse of notation, but it's common -- it's almost impossible to imagine a set without thinking of at least some structure on it). It has an algebra over it very similar to the usual algebra with real numbers, so we can manipulate these vectors almost as effortlessly as real numbers -- perhaps sometimes even more effortlessly.

They come into their own when we start doing analysis -- that is, dealing with functions. Functions of a complex variable have remarkable analytic properties which make them easier to work with in many cases. Also, such functions are just an elegant way to model many natural phenomena we may want to analyse. Electrical engineering in particular is interested in oscillations, and these find a very natural interpretation in terms of complex variables, since complex exponentials can be thought of as oscillations too. Couple this with their algebraic properties and you have a powerful system of tools to literally calculate with oscillations (or whatever other object you're dealing with).

Allawonder
  • 12,848
  • 1
  • 15
  • 26
0

One might consider $\mathbb C$ and $\mathbb R^2$ to be isomorphic in some sense, since $f(a+bi)=(a,b)$ defines a bijection. If we define addition in $\mathbb R^2$ coordinate-wise, then this (group-)isomorphism holds as $$f((a+bi)+(c+di))=(a,b)+(c,d)=(a+c,b+d)=f((a+c)+(b+d)i)$$

One way you might define multiplication in $\mathbb R^2$ is also coordinate-wise. But no bijection could form a (ring-)isomorphism in that case: a ring isomorphism must send the multiplicative identity to the multiplicative identity, so $f(1)=(1,1)$, and hence $f(-1)=-f(1)=(-1,-1)$. Then, writing $f(i)=(a,b)$, $$f(i^2)=f(-1)=(-1,-1)=f(ii)=f(i)f(i)=(a^2,b^2)$$ demonstrates the contradiction $a^2=-1$ for real-valued $a$.

However, as is stated elsewhere, defining multiplication by $(a,b)(c,d)=(ac-bd,\ bc+ad)$ does in fact allow $f$ to be a ring isomorphism.

(N.B. if you have a bijection, you can always coerce it to be a ring isomorphism... provided you're flexible in how you define addition and multiplication for one of the sets in question!)
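Here is a sketch contrasting the two candidate multiplications (plain Python; the helper names are made up):

```python
# f maps a + bi to the pair (a, b).
f = lambda z: (z.real, z.imag)

def star(p, q):              # (a,b)(c,d) = (ac - bd, bc + ad)
    (a, b), (c, d) = p, q
    return (a * c - b * d, b * c + a * d)

def coordinatewise(p, q):    # (a,b)(c,d) = (ac, bd)
    (a, b), (c, d) = p, q
    return (a * c, b * d)

z, w = complex(1, 2), complex(3, 4)

# With star, f respects multiplication ...
assert f(z * w) == star(f(z), f(w))

# ... but coordinate-wise multiplication cannot work: f(i) squared stays (0, 1),
# while f(i^2) = f(-1) must be (-1, 0).
assert coordinatewise(f(1j), f(1j)) == (0, 1)
assert f(1j * 1j) == (-1, 0)
```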

StevenClontz
  • 874
  • 6
  • 14
0

Electrical engineers are deeply entangled with the complex number field simply because a fundamental circuit block like the RC network "works perfectly" with complex numbers: its impedance is "naturally" complex.
Linear analog circuits then lead naturally to Fourier and Laplace transforms, transfer functions, Bode diagrams, and so on.
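For instance, here is a sketch with made-up component values, using nothing beyond Python's built-in complex arithmetic:

```python
import cmath, math

R = 1_000.0          # ohms (made-up value)
C = 1e-6             # farads (made-up value)

def H(omega):
    """Transfer function of an RC low-pass divider: Z_C / (R + Z_C)."""
    Zc = 1 / (1j * omega * C)          # the "naturally complex" impedance of C
    return Zc / (R + Zc)

omega_c = 1 / (R * C)                  # cutoff: 1000 rad/s for these values

h = H(omega_c)
print(abs(h))                          # ~0.707, i.e. the -3 dB point
print(math.degrees(cmath.phase(h)))    # ~-45 degrees of phase lag
```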

G Cab
  • 33,333
  • 3
  • 19
  • 60