In the answers to the question *Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio?* it was explained that $\frac{dy}{dx}$ cannot be seen as a quotient, even though it looks like a fraction. My question is: does $dx\,dy$ in the double integral represent a multiplication of differentials? The problem can then be generalized to a multiple integral.

For «$dx\,dy$» to be a product, first $dx$ and $dy$ have to be things which can be multiplied: numbers, or number-valued functions. And they are not: they are just notational devices to indicate with respect to which variables integration is being done, exactly in the same way that $dy/dx$ is not a quotient. (And yes, when you move along you will learn about differential forms and what not, but that doesn't change absolutely anything.) – Mariano Suárez-Álvarez Mar 07 '14 at 17:10

@Emin, since you included the nonstandard analysis tag I thought you were looking for an answer in this framework. If you wish an answer in a traditional framework, you should specify it. The problem then would be to explain the meaning of your term "differential", which only has a kind of a tautological meaning in the traditional framework. Basically what is going on is that one is partitioning the domain of integration into tiny squares, whose area is indeed the product of the sides. This basic idea can be formalized in various ways, but you have to be clearer about what you are looking for. – Mikhail Katz Mar 09 '14 at 17:53

Something is usually defined as multiplication when it distributes with addition, not just because it is an extension of the common number system. You could, for example, multiply two polynomials and get another polynomial, but a polynomial isn't a number. As to the original question, it would be better phrased as "can we assume multiplication properties for $dx\,dy$" rather than "is $dx\,dy$ multiplication". – DanielV Mar 15 '14 at 20:16

@MarianoSuárezAlvarez : I disagree. See my answer below. – Michael Hardy Jul 29 '14 at 17:07

I have seldom seen anyone miss a point more thoroughly than Mariano Suárez-Álvarez in comments further below. I wrote that intuitive ideas can be made rigorous, but the intuitions exist independently of the ways of making them rigorous. He seems to have construed my comment as "Intuitive ideas can be made rigorous, so let's do that and then explain the rigorous ideas to students." That is the opposite of my point. My point was that they exist independently of ways of making them rigorous and so can be presented in the classroom to students who can't understand rigorous arguments. – Michael Hardy Jul 29 '14 at 19:40

Here is something I wrote on a closely related question: http://math.stackexchange.com/questions/200393/what-is-dx-in-integration/200403#200403 – Michael Hardy Jul 29 '14 at 19:46
9 Answers
In a double integral, you are actually integrating a differential two-form:
$$\int_R \mathrm{f}(x,y) \ \mathrm{d}x \wedge \mathrm{d}y$$
Here, $\mathrm{d}x$ and $\mathrm{d}y$ are the basis differential one-forms and $\mathrm{d}x \wedge \mathrm{d}y$ is their exterior product.

No, they're not forms, they're $k$vectors. This should be obvious if you consider a line integral in 2d: parameterizing the line naturally generates objects of the form $d\ell/dt$, which is obviously a tangent vector. The reason you've confused them for forms is that you've considered volume forms each time, for which the action of the Jacobian is confused for its transpose (because they both give the same answer: the determinant). – Muphrid Mar 07 '14 at 23:03


I'm well aware of differential forms. Nevertheless, what you've written is more of a convenient abbreviation. If $e_x, e_y$ are basis vectors, then what you're really integrating is $\int_R f(x,y) (\mathrm dx \wedge \mathrm dy)(e_x \wedge e_y \, dx \, dy)$. The question pertains to the latter $dx, dy$, not to the basis one-forms $\mathrm dx, \mathrm dy$. – Muphrid Mar 08 '14 at 19:37

@Muphrid I didn't mean to insult your intelligence. I suggested that you look at the article because it deals with the integration of differential forms. – Fly by Night Mar 08 '14 at 19:50

Then I'm not sure what you want me to glean from the article. It's true that $(\mathrm dx \wedge \mathrm dy)(e_x \wedge e_y \, dx \, dy) = 1 \, dx \, dy$, but that doesn't change that the differential form is *part of the integrand* and not intrinsically related to $dx$ or $dy$. – Muphrid Mar 08 '14 at 20:05

@Muphrid Your point of view disagrees with the Wikipedia article, and I have never seen the notation that you are using in the setting of integral calculus. Can you please supply a reliable third-party reference for your claims? – Fly by Night Mar 08 '14 at 21:42

That integration on a $k$dimensional manifold involves the tangent $k$vector of the manifold is common knowledge in geometric calculus. See http://en.wikipedia.org/wiki/Geometric_calculus#Integration for example, and also the subsequent section "Relation to differential forms." – Muphrid Mar 08 '14 at 22:32

@Muphrid The section that you link to displays an equality between my idea of integrating a differential form and your notation. I suspect that you and I are viewing the same coin from different sides. – Fly by Night Mar 08 '14 at 22:44

I have no doubt that the methods are mathematically equivalent, but I still think it is worth emphasizing that the tangent $k$vector is involved in integration. The apparent relationship between basis forms and differentials cannot exist otherwise. Moreover, sometimes you may wish to integrate something other than a form. Maybe you want to integrate a $k$vector. Knowing that integration inherently involves the tangent $k$vector of the manifold, you can properly identify the factors of the metric that should be there. Differentials will be in this integral, even though there are no forms. – Muphrid Mar 08 '14 at 23:01

@Muphrid: You don't have to talk about tangent vectors *at all* in integration. Rather than pushing forward the standard basis vectors from the real plane to the target manifold, you can pull the differential form back from the target manifold to the plane, and all you need is to assert the equivalence between the product of the standard basis forms and the usual area form: i.e. $\int f dx \wedge dy = \iint f dx dy$. In my opinion, this is a *far* cleaner approach. – Mar 09 '14 at 19:05

@Hurkyl That may be the case, so that the work of the tangent vectors is trivial when the integrand is pulled back, but that doesn't change that the basis forms are part of the *integrand* while the differentials $dx, dy$ come with the tangent vectors, even when those tangent vectors will be trivially annihilated by the form being integrated. – Muphrid Mar 09 '14 at 19:24

@Muphrid: I'm rather confused by your terminology: $f \, dx \wedge dy$ *is* the integrand of $\int f \, dx \wedge dy$. – Mar 09 '14 at 19:54

@Hurkyl I'm distinguishing between the basis form $\mathrm dx$ and the differential $dx$. – Muphrid Mar 09 '14 at 20:38

@Muphrid: What do you think a differential *is*, if not a differential form? I could understand if you were referring to a measure, but then the integrand would still be $f d\mu$ and not merely $f$. – Mar 09 '14 at 21:30

@Hurkyl: My position is that the differential is merely notational, that it only tells us what variables to integrate over, that integrals could just as easily be written without differentials. – Muphrid Mar 09 '14 at 21:33

@Muphrid: At the cost of becoming more cumbersome and error prone. Don't overlook the fact that if you have notation that is "merely notational", but you have rules for manipulating the notation and doing arithmetic with it consistently, you have in fact defined an abstract mathematical object. And differential 1-forms happen to be a model of that abstract mathematical object. – Mar 09 '14 at 21:53

@Hurkyl: But differential forms are just covector fields. They do not intrinsically have anything to do with integrals. An integral is just a map that takes a function (which could be a covector field, sure) and an oriented open set and returns a number. In practice, we represent the function (the integrand) in terms of some dummy variables, and the differentials tell us which variables are dummies. The basis 1-forms you use to express a differential form need not correspond to the variables of integration. – Muphrid Mar 09 '14 at 23:32

@Muphrid: Elementary calculus is also often done with algebraically dependent variables, rather than with functions. e.g. many often speak of variables $x,y$ related by the equation $y = x^2$, rather than about a function $f$ defined by $f(t) = t^2$. Although this kind of *notation* isn't often made precise until you start talking about scalar fields and differential forms, the notation itself is rather insensitive to the low level details of just what you think is going on. – Mar 10 '14 at 00:05

And path integrals and such can be defined directly, without even referencing the ordinary integral. e.g. the path integral could be defined directly as a suitable Riemann sum, or instead via Stokes' theorem. Integration, particularly path integrals, surface integrals, and the like, is very easy to see as a natural operation that gives a number when you combine a suitable geometric region with a differential form. – Mar 10 '14 at 00:14

@Hurkyl: But that approach seems needlessly cumbersome. How do you get a scalar from a $k$form evaluated at a given point? Feed it a $k$vector in the tangent space at that point. That scalar can then be integrated using the usual techniques, and this is exactly the way of thinking I've described. It accurately captures the way integrals of forms are done, so I see no compelling reason why you're going to such great lengths to avoid it. – Muphrid Mar 10 '14 at 14:07

@Muphrid: But my goal *isn't* to get a scalar from a $k$form: My goal is to convert a $k$form on a manifold to a $k$form on $\mathbf{R}^k$, and then integrate the $k$form with the usual techniques! And I tend to prefer the algebraic side of things; e.g. I find it far more natural to note that, along the unit circle in the plane, $dy = \cos \theta d \theta$, which follows directly from the equation $y = \sin \theta$ without going through tangent vectors as an intermediary. (and, on a related note, I greatly prefer working with differentials than partial differentiation) – Mar 10 '14 at 19:51

(of course, sometimes I don't want to pull back to Euclidean space at all: e.g. I may have some other tools to compute the integral or relate it to interesting quantities, such as Stokes' theorem, or maybe I'm taking a limit of integrals and the end result can be determined by the geometry on the manifold or some other thing: I really and truly *want* to think of the integral as something I'm doing on my manifold, rather than something I'm doing on Euclidean space: pulling back to Euclidean space is just one computation technique) – Mar 10 '14 at 19:56

@Muphrid Please could you continue your conversation with Hurkyl elsewhere. The comments section is to ask the person making the post for more information and/or to suggest improvements. It is not to host a forum. Thank you. – Fly by Night Mar 10 '14 at 21:41

@Hurkyl Please could you continue your conversation with Muphrid elsewhere. The comments section is to ask the person making the post for more information and/or to suggest improvements. It is not to host a forum. Thank you. – Fly by Night Mar 10 '14 at 21:41
Just as one can think of the derivative in Robinson's framework as a true ratio $\frac{\Delta y}{\Delta x}$ modulo an infinitesimal error (eliminated by applying the shadow), so also one can think of a single-variable integral as an infinite sum of infinitesimal terms of type $dx$ (again up to applying shadow). Double integrals can naturally be viewed as double (infinite) sums, where $dx\,dy$ is most decidedly an ordinary product. And of course this generalizes to multiple integrals as the OP suggested. If one is in Euclidean space, talking about differential forms is an unnecessary obfuscation.
Edit 1: For finite Riemann sums approximating the double integral, it is obvious that the term $\Delta x \, \Delta y$ is a product; it seems nobody in his right mind would deny this. The difference is that one cannot deduce the value of the integral from a finite Riemann sum. On the other hand, with an infinite Riemann sum when $\Delta x$ is replaced by $dx$, etc., the value of the integral is deduced from the value of the Riemann sum by taking the shadow (see above). That's the advantage of having the richer syntax of the hyperreal approach.
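To make the finite-sum side of this concrete, here is a minimal numerical sketch (the function and numbers are purely illustrative, and a finite sum only approximates the integral, as the edit above stresses): in a finite double Riemann sum, each term $f(x,y)\,\Delta x\,\Delta y$ is a literal product of real numbers.

```python
# Finite double Riemann sum for the double integral of f(x,y) = x*y
# over [0,1] x [0,1], whose exact value is 1/4.
# Each term is f(x, y) * dx * dy -- an ordinary product of real numbers.

def double_riemann_sum(f, a, b, c, d, n):
    dx = (b - a) / n          # width of each small cell
    dy = (d - c) / n          # height of each small cell
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = a + (i + 0.5) * dx   # midpoint of the cell in x
            y = c + (j + 0.5) * dy   # midpoint of the cell in y
            total += f(x, y) * dx * dy   # literal multiplication
    return total

approx = double_riemann_sum(lambda x, y: x * y, 0, 1, 0, 1, 200)
print(approx)  # close to 0.25
```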
Edit 2: the OP's question is in fact equivalent to a question about single-variable integrals, namely: does $f(x)\,dx$ denote multiplication of $f(x)$ by $dx$? Perhaps the right answer is that it denotes a memory of multiplication. Namely, the multiplication is still there at the level of the hyperfinite Riemann sum. To pass from this to the integral one applies the standard part function, after which we have only a "memory" left. Similarly, one can form the differential quotient $\Delta y/\Delta x$, which is still a ratio, but one doesn't get the derivative until one applies the standard part function. Here also there is only a memory of a division left. The advantage of the hyperreal framework is that one has a direct procedure for passing from the ratio to the derivative, which isn't the case in the traditional real-based framework where one must appeal to an indirect notion of an epsilon-delta limit.
A survey of various approaches to Robinson's framework is due to appear in Real Analysis Exchange.


@Emin, the multiplication is still there at the level of the hyperfinite Riemann sum. To pass from this to the integral one applies the standard part function, after which we have only a "memory" left. Similarly, one can form the differential quotient $\Delta y/\Delta x$ which is still a ratio, but one doesn't get the derivative until one applies the standard part function. Here also there is only a memory of a division left. The advantage of the hyperreal framework is that one has a *direct* procedure for passing from the ratio to the derivative... – Mikhail Katz Jul 28 '14 at 14:24

... which isn't the case in the traditional real-based framework where one must appeal to an indirect notion of an epsilon-delta limit. – Mikhail Katz Jul 28 '14 at 14:25

I think that comment would be the real answer to my question! You may include it in your answer so I can accept it. – Emo Jul 29 '14 at 13:18

As a physicist, I find that _infinitesimals just_ __are__ _ordinary real numbers_. And they don't even have to be small. With certain astronomical applications, they can be quite large, maybe the size of a galaxy. I've been working with a [Fluid Tube Continuum](http://www.alternatievewiskunde.nl/article/SUNA38.txt) where infinitesimals are the distance between two tubes in a tube bundle. A physicist works _with different scales_. Quantities that are small on one scale can be large on another scale. A distinction which apparently tends to disappear when going to exact, by "taking the limit". – Han de Bruijn Sep 19 '14 at 20:26

In nonstandard analysis, if $y$ and $x$ are related by $y = f(x)$, then $dy/dx$ is *exactly* equal to $f'(x)$: it is *not* the difference quotient of an infinitesimal difference. – Sep 19 '14 at 20:51

@MikhailKatz Just want to let you know that the DOI mentioned on the arXiv page seems to be incorrect, it should be [10.14321/realanalexch.42.2.0193](https://doi.org/10.14321/realanalexch.42.2.0193)? – May 02 '20 at 06:17
As others have said, $dx\, dy$ does not represent a product of differentials. But it does represent a product of measures. We have the "natural" Lebesgue measure $\lambda$ on the $x$-axis, and integration with respect to this measure is signalled by writing ${\rm d}x$ as the right parenthesis of the integral. Similarly we have the "natural" Lebesgue measure $\lambda$ on the $y$-axis, and integration with respect to this measure is signalled by writing ${\rm d}y$ as the right parenthesis of an integral involving the variable $y$. The individual measures $\lambda$ on the $x$-axis ${\mathbb R}$ and the $y$-axis ${\mathbb R}$ define a product measure $\lambda\otimes\lambda$ on the cartesian product ${\mathbb R}^2$, again called Lebesgue measure on ${\mathbb R}^2$. Integration with respect to this product measure is signalled by writing ${\rm d}(x,y)$, ${\rm d}x\otimes {\rm d}y$, or simply $dx\,dy$, as the right parenthesis of an integral over some subset $A\subset{\mathbb R}^2$. Fubini's theorem then tells us that $$\int\nolimits_A f(x,y)\>{\rm d}(x,y)=\int\nolimits_{A'}\left(\int\nolimits_{A_x} f(x,y)\> {\rm d}y\right)\ {\rm d}x\ ,$$ where $A'$ denotes the projection of $A$ onto the $x$-axis and $A_x:=\{y\mid (x,y)\in A\}$ collects the $y$-values to be weighted in for given $x\in A'$.
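A quick numerical illustration of the Fubini statement above (the integrand and the rectangle are hypothetical choices, not from the answer): integrating over a rectangle as an iterated integral in either order gives the same number.

```python
# Fubini numerically: integrating f(x,y) = exp(-x*y) over the
# rectangle [0,1] x [0,2] as an iterated integral, in both orders.
from math import exp

def integrate(g, a, b, n=400):
    # simple midpoint rule for a single-variable integral
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

f = lambda x, y: exp(-x * y)

# inner integral in y first, then x ...
I_xy = integrate(lambda x: integrate(lambda y: f(x, y), 0, 2), 0, 1)
# ... and in the opposite order
I_yx = integrate(lambda y: integrate(lambda x: f(x, y), 0, 1), 0, 2)

print(I_xy, I_yx)  # the two orders agree to several decimal places
```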

Of all the answers, this one I find the most convincing, but I still see some gaps. From what I understood, you're saying that $dx\,dy$ is just a notation that tells us that we are integrating with respect to the $\lambda\otimes\lambda$ measure, but I'm not convinced that it's just a notation. – Emo Mar 15 '14 at 18:23

[Can someone provide an example of Lebesgue integration on ordinary functions?](http://math.stackexchange.com/questions/884803/) – Han de Bruijn Sep 19 '14 at 19:40

Improper integrals exist for functions that are not Lebesgue integrable: see [Wikipedia](http://en.wikipedia.org/wiki/Lebesgue_integration#Limitations_of_Lebesgue_integral) – Han de Bruijn Sep 19 '14 at 19:50
Let's say $f(x,y)$ is in $\dfrac{\mathbf{kg}\cdot \mathbf{m}}{\mathbf{sec}\cdot\mathbf{dollar}}$, $x$ is in $\mathbf{sec}$, and $y$ is in $\mathbf{dollar}$. Then $f(x,y)\,dx\,dy$ is in $\mathbf{kg}\cdot \mathbf{m}$, just as if we are multiplying.
If an infinitely small rectangle has length and width respectively $dx$ and $dy$, then its area is $dx\,dy$; if $f(x,y)$ is the density of something (mass, probability, energy$\ldots$) with respect to area, then $f(x,y)\,dx\,dy$ is measured in the same units as that "something". Why does $dx\,dy$ become $r\,dr\,d\theta$? Sometimes people say "because you multiply by a Jacobian". That's OK as far as it goes, maybe. I regard it as $(dr)(r\,d\theta)$. If $r$ is in meters and $\theta$ is dimensionless and in radians, then $r\,d\theta$ is in meters. The length of an arc of a circle is the radius times the radian measure of the arc. The radius is $r$; the radian measure of the arc is $d\theta$, so $r\,d\theta$ is the length, and $dr$ is distance in a direction orthogonal to that, so $(dr)(r\,d\theta)$ is the area of that rectangle. It's a rectangle because an infinitely small arc is a straight line.
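The $(dr)(r\,d\theta)$ picture can be checked numerically: summing the areas of the small near-rectangles with sides $dr$ and $r\,d\theta$ recovers the area of a disk sector. This is a minimal sketch with an arbitrary radius, not anything from the answer itself.

```python
# Area of a quarter disk of radius R by summing cells of area
# (dr) * (r * dtheta): side dr times arc length r*dtheta,
# treating each small cell as a rectangle.
from math import pi

R, n = 2.0, 500
dr = R / n
dtheta = (pi / 2) / n
area = 0.0
for i in range(n):
    r = (i + 0.5) * dr               # midpoint radius of the cell
    for j in range(n):
        area += dr * (r * dtheta)    # side dr times arc length r*dtheta

print(area, pi * R**2 / 4)  # both ≈ 3.14159...
```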
If you say that this is not rigorous, I agree.
If you conclude that it cannot be made rigorous, I disagree.
Intuitive ideas can be made rigorous [comment inspired by comments below: The following should be obvious, but apparently there was one person to whom it wasn't, so maybe there are others. I do not condone making infinitesimals rigorous in most first-year calculus courses.]. What is the right way to do that may be subject to philosophical disagreements. But one should not mistake the intuition that is to be made rigorous for the rigorous end product. Just which way of making something rigorous is the right one depends on the context. Some other way of making something rigorous that will be discovered 100 years from now may have its place. But the idea that is to be made rigorous exists independently of the ways of making it rigorous.

That something *can* be made rigorous (and of course infinitesimals and things like «infinitely small rectangle» can) does not imply that it *is* made rigorous. These infinitely small quantities you speak of are usually *not* made rigorous in most courses, which instead focus on *another* way of setting things up. In *that* setting, $dx\,dy$ is certainly nothing more than a notational device. – Mariano Suárez-Álvarez Jul 29 '14 at 18:21

@MarianoSuárezAlvarez : I think that is nonsense. Yes, there are lots of lousy calculus courses that don't explain the intuition. But a good introductory calculus course does that and largely eschews rigor. They shouldn't be just used as a notational device. I don't think we should tell people who ask questions here that they shouldn't want to know the answers simply because the answers won't be explained in most courses. – Michael Hardy Jul 29 '14 at 19:17

Something in your comment is very strange: you seem to think that only a course that makes intuitive ideas rigorous should speak of them at all. Rigor is a bad thing in most introductory calculus courses. Intuitive explanation of infinitesimals is a good thing in such courses. – Michael Hardy Jul 29 '14 at 19:18

@MarianoSuárezAlvarez : You _really_ know how to miss a point and drop context. Looking at my answer, I see that I wrote "Intuitive ideas can be made rigorous", and from your comments it looks as if you construed that as meaning one should make them rigorous so that one can say that $dx\,dy$ is an instance of multiplication. If you read the WHOLE paragraph, you will see that my point was nearly the opposite of that. I wrote that those intuitions exist INDEPENDENTLY of the ways of making them rigorous. – Michael Hardy Jul 29 '14 at 19:28

I find it rather insulting that you assume I did not read the whole paragraph, so I will just ignore the whole thing. – Mariano Suárez-Álvarez Jul 29 '14 at 21:37

@MarianoSuárezAlvarez : I find your distortion of what I wrote insulting. – Michael Hardy Jul 29 '14 at 21:51

@MichaelHardy: You certainly took a firm stand in your answer and you have my full support on it. Quite unfortunately (+1) is all I can do. – Han de Bruijn Sep 15 '14 at 15:45

The bare existence of infinitesimals has led to endless controversy in mathematics. And it still does, as the clash between two titans here clearly demonstrates. – Han de Bruijn Sep 19 '14 at 19:51


Archimedes used infinitesimals brilliantly but explicitly denied that they exist. Through one of two points $A$ and $B$ on a parabola draw a tangent line and through the other draw a line parallel to the axis and through both draw the secant line. Archimedes showed that that triangle has three times the area of the part of it that is inside the curve. One of his methods of doing that used infinitesimals. Another way he did it is not valid if there are infinitesimals, since it only narrows the range of possible areas down to a set of numbers differing infinitesimally from each other. – Michael Hardy Sep 19 '14 at 21:05
$dx$ and $dy$ aren't real numbers; they are things called differential forms. Thus, you can't use the real number multiplication operation to multiply them.
However, $dx \, dy$ is a thing, and it is not terribly unreasonable to define "the multiplication of $dx$ and $dy$" to be $dx \, dy$. The trick is that you have to make the inferences in the opposite direction from what you're used to: it's not that you understand $dx \, dy$ in terms of multiplication; rather, you use your understanding of $dx \, dy$ to figure out what 'multiplication' means.
There are some subtleties in what $dx \, dy$ means that I'm not up to fully explaining at the moment: e.g. it needs to talk about the orientation of a region, so that the fact that $dx \, dy = -dy \, dx$ can be properly explained. (you don't notice this fact when you do ordinary iterated integrals, since you flip the orientation of your region whenever you swap the order of $x$ and $y$, which cancels out the sign change)
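The sign flip under swapping $dx$ and $dy$ can be made concrete with a toy model of a 2-form acting on a pair of vectors (this is only an illustrative sketch, not the full exterior-algebra machinery): $(a \wedge b)(u, v) = a(u)b(v) - a(v)b(u)$.

```python
# A toy model of dx ∧ dy acting on a pair of vectors in R²:
# (a ∧ b)(u, v) = a(u)*b(v) - a(v)*b(u).
# This makes the antisymmetry dx ∧ dy = -(dy ∧ dx) concrete.

def wedge(a, b):
    # a, b: 1-forms, i.e. functions from a vector (tuple) to a number
    return lambda u, v: a(u) * b(v) - a(v) * b(u)

dx = lambda v: v[0]   # the coordinate 1-form picking out the x-component
dy = lambda v: v[1]   # ... and the y-component

e_x, e_y = (1, 0), (0, 1)
print(wedge(dx, dy)(e_x, e_y))   # 1
print(wedge(dy, dx)(e_x, e_y))   # -1 : orientation reversed
```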

To consider: why is it that $\int \mathrm dx \wedge \mathrm dy = \int dx \, dy$ but $\int \mathrm dy \wedge \mathrm dx = -\int dx \, dy$? How can this be reconciled with the notion that order of integration doesn't usually matter? Because what's really going on is the 2-form is eating the 2-vector, and changing the sign of the 2-form changes the sign of the result. This also answers why $\int \mathrm dx \wedge \mathrm dy = \int dx \, dy$: it's conventional. People usually use the 2-vector $e_x \wedge e_y$ as the orientation, but if you use $e_y \wedge e_x$, you get the opposite. – Muphrid Mar 09 '14 at 20:49

@Muphrid: That's not what's "really going on"; it's merely one way to understand the integral. And it is easy enough to grok orientation once you see the initial idea and, e.g. see how if $R = [a,b]\times[c,d]$, then $\int_a^b \int_c^d dy dx$ should be $\int_{R} dy \wedge dx$ since you're tracing the rectangle out in the opposite orientation than standard. – Mar 09 '14 at 21:43

@Muphrid: And it's worth noting that, even when interpreting $dx$ and $dy$ as differential forms and all integrals as path integrals, that $$\int_I \left( \int_J dx \right) \, dy = \int_J \left( \int_I dy \right) \, dx$$ corresponding to two different ways to write a surface as a curve of curves. – Mar 09 '14 at 22:00

*dx and dy aren't real numbers; they are things called differential forms.* Well, one way of interpreting them is as forms. But the question explicitly points out that this is not the only way of interpreting them; in the context of NSA, they're infinitesimals. – May 19 '14 at 22:18

@Ben: In the singlevariable case, we can indeed interpret differential forms as infinitesimals. But they're still differential forms, and we can make semantic errors if we forget that. Note, however, that here we're in the multivariable case. – May 19 '14 at 22:48

$dx$ and $dy$ at least __have been real numbers__. See my [comment](http://math.stackexchange.com/questions/703212/is-dx-dy-really-a-multiplication-of-dx-and-dy/705363#comment1935249_705363) in response to [user72694](http://math.stackexchange.com/users/72694/user72694) – Han de Bruijn Sep 19 '14 at 20:38
Let's see what happens to $dx dy$ under a change of variables $x = f(z,w)$, $ y = g(z,w)$.
$$ dx = \frac{df}{dz}dz + \frac{df}{dw}dw $$ $$ dy = \frac{dg}{dz}dz + \frac{dg}{dw}dw $$
So locally this change of variables is a linear transformation and the dilation factor is the determinant:
$$ \left| \begin{array}{cc} \frac{df}{dz} & \frac{df}{dw} \\ \frac{dg}{dz} & \frac{dg}{dw} \end{array} \right| =\frac{df}{dz}\frac{dg}{dw} - \frac{df}{dw} \frac{dg}{dz} $$
We can derive this using exterior algebra and the wedge product.
$$ dx \wedge dy = \left(\frac{df}{dz}dz + \frac{df}{dw}dw \right) \wedge \left(\frac{dg}{dz}dz + \frac{dg}{dw}dw \right) $$
Using the identities $dz \wedge dz = dw \wedge dw = 0$ and $dz \wedge dw = -dw \wedge dz$, we recover the Jacobian formula.
In our case, $dx \wedge dy $ behaves like $dx dy$ since they are in a sense perpendicular, "$dx \perp dy$".
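As a sanity check on the formula above, here is a minimal numerical sketch (stdlib Python with finite-difference derivatives; the sample point is arbitrary): for the polar map $x = r\cos\theta$, $y = r\sin\theta$, the dilation factor $\frac{df}{dz}\frac{dg}{dw} - \frac{df}{dw}\frac{dg}{dz}$ comes out to $r$.

```python
# Numerical check of the dilation factor for polar coordinates
# x = r cos θ, y = r sin θ: the determinant
# (∂x/∂r)(∂y/∂θ) - (∂x/∂θ)(∂y/∂r) should equal r.
from math import cos, sin

def partial(F, point, i, h=1e-6):
    # central finite-difference approximation to ∂F/∂(point[i])
    p_hi = list(point); p_hi[i] += h
    p_lo = list(point); p_lo[i] -= h
    return (F(*p_hi) - F(*p_lo)) / (2 * h)

x = lambda r, t: r * cos(t)
y = lambda r, t: r * sin(t)

r0, t0 = 1.7, 0.9   # arbitrary sample point
jac = (partial(x, (r0, t0), 0) * partial(y, (r0, t0), 1)
       - partial(x, (r0, t0), 1) * partial(y, (r0, t0), 0))
print(jac, r0)  # both ≈ 1.7
```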
In Cartesian linear coordinates, just as we sum up separate rectangle areas as $\int y \, dx$, we sum up prismatic volumes as $\iint z \, dx \, dy$. Here we consider an infinitesimal (differential) area as a product of two infinitesimal (differential) lengths.
If we are considering curvilinear coordinates, we need to multiply the Cartesian differential area by their Jacobian in order to relate the old small area to the new distorted small area. The way the product is looked upon as an area in the small is exactly the same as in the Cartesian case, before it leads to the integrated situation in the large. While changing from Cartesian to polar,
$$ dx \, dy = r \, dr \, d\theta, $$ where $r$ is the Jacobian $J$ in the general case: $$J = r = \partial(x,y)/\partial(r,\theta).$$
No, because the differentials that appear in an integral are just notation; they only signify what variables are to be integrated over.
Consider a sum:
$$\sum_{i=1}^n i^2$$
The $i$ in the bottom of the summation symbol tells you that $i$ is the dummy variable here.
You could, in principle (though no one does this), notate integrals the same way:
$$\int_{x=a}^b x^2$$
And you could do it for a double integral:
$$\int_{y=c}^d \int_{x=a}^b e^{x^2  y^2}$$
The differentials being used in integrals are just empty notation.
Edit: Fly by Night suggests that differentials mean something because they transform. There is a transformation involved when changing coordinates, but it is absolutely incorrect to say this has to do with the differential.
This can be seen by considering the integral abstractly as a map, with its internals being the limit of a Riemann sum. Let $\mathcal I(f, \ell, d)$ be the integral of a function $f$ over a region described by the function $\ell$, with an interval $d$ describing the parameter region of $\ell$. That is, if $d = [a,b)$, then we integrate $f$ from $\ell(a)$ to $\ell(b)$.
The integral can then be written as a limit of a Riemann sum:
$$\mathcal I(f, \ell, d) = \lim_{N \to \infty} \sum_{n=0}^{N-1} [f \circ \ell]\left (a + n \frac{b-a}{N} \right) \ell'\left(a + n \frac{b-a}{N}\right) \frac{b-a}{N} $$
Consider the case $\ell(x) = x$. Then $\ell'(x) = 1$, and you get the usual, familiar form of a Riemann integral.
Now instead, consider a reparameterization. Let $x = g(u) = u^2$, as Fly by Night suggested. Then there is a new parameterization function $m$:
$$m(u) = (\ell \circ g)(u) = u^2$$
There is also a transformed interval:
$$e = [g^{-1}(a), g^{-1}(b)) = [p, q)$$
Now then, since $m([p, q)) = \ell([a, b))$, the integral of $f$ should be the same for both. That is, $\mathcal I(f, \ldots)$ only really cares about the region $f$ is actually being integrated over, not how that region is parameterized. So we should conclude that
$$\mathcal I(f, \ell, d) = \mathcal I(f, m, e)$$
But we know that $m'(u) = {\color{red}{2u}}$, and we get
$$\mathcal I(f, m, e) = \lim_{N \to \infty} \sum_{n=0}^{N-1} [f \circ m]\left( p + n \frac{q - p}{N} \right) \color{red}{2\left (p + n \frac{q-p}{N} \right)} \frac{q-p}{N}$$
What does this look like in traditional integral notation? We started with
$$\int_a^b [f \circ \ell](t) \ell'(t) \, dt = \int_a^b f(t) (1) \, dt$$
And then we changed to
$$\int_{p = \sqrt{a}}^{q = \sqrt{b}} [f \circ m](t) m'(t) \, dt = \int_p^q [f \circ m](t) 2t \, dt$$
All of this follows from the transformation of the tangent vector along the parameterized curve. Writing it as a change of differentials is nothing more than misleading voodoo. It is a fundamental mistake to say that the differentials are 1-forms. They are not; they are totally vacuous and without meaning, and thus my answer to the question is that it's meaningless to talk about multiplying differentials because they are themselves meaningless. All of the supposed properties of differentials actually come from other, better defined geometric principles and concepts.
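The reparameterization argument above can be checked numerically. The sketch below implements the answer's $\mathcal I(f, \ell, d)$ as a finite Riemann sum (rather than the limit) for an arbitrary test function, and verifies that the substitution $x = u^2$ leaves the value essentially unchanged.

```python
# The answer's map 𝓘(f, ℓ, d) as a finite Riemann sum, checking that
# reparameterizing by x = u² leaves the value unchanged.
from math import sqrt, sin

def I(f, ell, ell_prime, a, b, N=20000):
    # finite version of the Riemann sum in the answer
    h = (b - a) / N
    return sum(f(ell(a + n * h)) * ell_prime(a + n * h) * h
               for n in range(N))

f = sin            # arbitrary test integrand
a, b = 1.0, 4.0    # arbitrary parameter interval

# identity parameterization: ℓ(x) = x, ℓ'(x) = 1
direct = I(f, lambda x: x, lambda x: 1.0, a, b)

# reparameterized: m(u) = u², m'(u) = 2u, over [√a, √b)
reparam = I(f, lambda u: u**2, lambda u: 2 * u, sqrt(a), sqrt(b))

print(direct, reparam)  # agree to within the discretization error
```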

I disagree with you! In the definition of the integral we have those $\delta x$ and $\delta y$ which become $dx$ and $dy$ when we act with the limit. This tells us that they are part of the integral, and not just a notation! – Emo Mar 07 '14 at 17:08

Just because a $\delta x$ would be there when you write the integral as the limit of a Riemann sum doesn't mean anything. The $dx$ by itself only signifies what variable is being integrated over; this can be done in other ways. You never use the $dx$ in an integral for anything other than this purpose. – Muphrid Mar 07 '14 at 17:11

@Muphrid I think that $\mathrm{d}x$ is a little more than simply notation: it's a differential one-form. If we change coordinates on the $x$-axis then $\mathrm{d}x$ transforms. For example if $x=u^2$ then $\mathrm{d}x = 2u\, \mathrm{d}u$. – Fly by Night Mar 07 '14 at 17:33

@FlybyNight: No, this socalled property of differentials comes from the transformation of the tangent vector. Differentials themselves have no innate properties, and they do not obey the transformation laws of forms. – Muphrid Mar 15 '14 at 20:51

@Muphrid A contravariant transformation of a tangent vector corresponds to a covariant transformation of a differential one-form. – Fly by Night Mar 16 '14 at 00:55

@FlybyNight Then I would be very interested if you can come up with a Riemann sum involving a oneform, such that the limit of that sum is a Riemannian integral. – Muphrid Mar 16 '14 at 01:28

There are multiple good reasons why we don't write $\int x^2 dx$ as $\int x^2$. (1) The units come out wrong. (2) It doesn't specify what we're integrating with respect to. – May 19 '14 at 22:17

I never suggested that we do such a thing without some means of notating what variable to integrate over, so I don't see how your (2) applies. As for (1), nobody has a problem denoting the derivative of a function $f$ as $f'$. You remember that the act of differentiation changes the units of the quantity by a factor of length. The same could easily be done with integration, instead of explicitly tying it to a $dx$. – Muphrid May 19 '14 at 22:33
I find it very helpful in this case, and with multidimensional integrals in general, to write the $dx$ and $dy$ after the $\int$ sign. I.e.
$$ \int dy \int dx\, f(x,y) \;\;\;\text{ rather than }\;\;\; \int\int f(x,y)\, dx\,dy $$
This makes it clear that the $dx$ and $dy$, far from being infinitesimally small $\Delta x$'s and $\Delta y$'s, are part of the integral operators. Mathematical physicists often follow this convention, not just because it makes multidimensional integrals easier to understand, but also because it emphasizes integration as an operator, and a linear operator at that.
$\int dy$ and $\int dx$ each operate on $f(x,y)$ as "atomic units" that can't be split up. $\int dy$ and $\int dx$ apply to $y$ and $x$ respectively just as $\sum_y$ and $\sum_x$ are units that apply to $y$ and $x$. It still makes perfect sense to see that when $f(x,y)$ factorizes to $f(x)f(y)$ you can rearrange $$\int dy \int dx\, f(x)f(y) \;\;\;\text{ to }\;\;\; \int dy \,f(y) \int dx\, f(x)$$ just as you can $$\sum_y \sum_x\, f(x)f(y) \;\;\;\text{ to }\;\;\; \sum_y \,f(y) \sum_x f(x)$$ However if you write $dxdy$ following the integrand it appears as though you are also factorizing $dxdy$ into $dx\times dy$ when in fact the $dx$ and $dy$ are part of the $\int$ operators.
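The factorization claim in the paragraph above is easy to verify numerically (the particular $f$ and $g$ below are hypothetical choices for illustration): for a separable integrand the double integral equals the product of the two single integrals.

```python
# Checking the factorization numerically: for a separable integrand
# f(x)*g(y), the double integral over [0,1]x[0,1] equals the product
# of the two single integrals.

def integrate(h, a, b, n=1000):
    # midpoint rule for a single-variable integral
    step = (b - a) / n
    return sum(h(a + (k + 0.5) * step) for k in range(n)) * step

f = lambda x: x**2        # x-factor (illustrative)
g = lambda y: 2 * y       # y-factor (illustrative)

double = integrate(lambda y: integrate(lambda x: f(x) * g(y), 0, 1), 0, 1)
product = integrate(f, 0, 1) * integrate(g, 0, 1)

print(double, product)  # both ≈ (1/3)·(1) ≈ 0.3333
```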
One reason why $dx$ is often treated as if it is a standalone object to be manipulated separately is the convenience of doing just that when substituting variables. For example $\int \sin^2 x \cos x \, dx = \int u^2 \,du$, substituting $u=\sin x$ and $dx = du / \cos x$. This is very convenient and everyone does it but strictly speaking there is a rule being applied which transforms the integration operation from one to another: $\int dx\, \frac{du}{dx} = \int du$. Similarly in differentiation it is very convenient to think about cancelling the numerator of one derivative with the denominator of another, e.g. when applying the chain rule, but this is not strictly speaking correct.
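The substitution in the paragraph above can likewise be verified numerically: over $[0, \pi/2]$, where $u = \sin x$ runs over $[0, 1]$, the two integrals agree (the interval is an illustrative choice, not from the answer).

```python
# Numerical check of the substitution u = sin x:
# ∫₀^{π/2} sin²x cos x dx should equal ∫₀¹ u² du = 1/3.
from math import sin, cos, pi

def integrate(h, a, b, n=100000):
    # midpoint rule for a single-variable integral
    step = (b - a) / n
    return sum(h(a + (k + 0.5) * step) for k in range(n)) * step

lhs = integrate(lambda x: sin(x)**2 * cos(x), 0, pi / 2)
rhs = integrate(lambda u: u**2, 0, 1)

print(lhs, rhs)  # both ≈ 0.33333
```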

So if I understood correctly, we shouldn't ask whether $dx\,dy$ is a multiplication of $dx$ and $dy$, but whether $\iint dx\,dy=\int dx \int dy$ as operators, and the answer would be yes! Right? Then it means that $dx$, $dy$ and $dx\,dy$ are not differential forms, but just a notation (as many others said). Am I right? – Emo Mar 16 '14 at 10:14

Yes, essentially it's about notation as at least one of the earlier answers said, but the notation I've given here makes it particularly clear what's going on. So you can rewrite $\iint dx\,dy$ as $\int dy\int dx$ and then treat them as atomic operators. Also $\iint dA$ is common for integrating w.r.t. area. – TooTone Mar 16 '14 at 10:21



What you are talking about are iterated integrals rather than double integrals. Modulo suitable hypotheses on the function, these are equal, but one cannot separate $dx$ from $dy$ if one is talking about the *double* integral. The claim that $dx$ and $dy$ are "far from being infinitesimal" is merely persisting in denial and refusal to face reality. – Mikhail Katz Mar 21 '14 at 08:35