I apologize in advance because this question might be a bit philosophical, but I do think it is a genuine question with non-vacuous content.

We know as a fact that differential forms have a much richer structure than vector fields. To name a few constructions that are built on forms but not on vector fields, we have:

(1) Exterior derivative, and hence Stokes' theorem, de Rham cohomology, etc.


(3) Functoriality, i.e. we can always pull back a differential form, but we cannot always push forward a vector field.

However, I feel this is somewhat paradoxical considering the fact that differential forms are defined to be the duals of vector fields, which gives me the intuition that they should be almost "symmetric". Clearly this intuition is in fact far off the mark. But why? I mean, is there at least a heuristic argument to show, just by looking at the definitions from scratch, that differential forms and vector fields must be very "asymmetric"?

Jia Yiyang
  • @MikeMiller: you are right, I should've been more accurate. As for (3) functoriality, I still would like to take it as an example of the richness of the structure of diff forms, rather than the reason for it, because I don't see a way to "see" the functoriality from the definition of diff forms. I in fact don't know what kind of answer will satisfy me, but I certainly hope what I said doesn't make an answer impossible :) – Jia Yiyang Jun 11 '14 at 03:08
  • Um, @Mike, what is a covector field? – Ted Shifrin Jun 11 '14 at 03:08
  • @TedShifrin: differential 1-form I suppose. – Jia Yiyang Jun 11 '14 at 03:09
  • Uh huh, which is a $1$-form. :) – Ted Shifrin Jun 11 '14 at 03:12

6 Answers


The overwhelming preference to work with $k$-covector fields ("differential forms") stems from a few basic facts:

First, you might know of $\nabla$ from vector calculus. It is related to the exterior derivative $d$ in the sense that applying $\nabla \wedge$ to a covector field is equivalent to applying $d$. $\nabla$ itself transforms as a covector does, and so it takes 1-covectors to 2-covectors, and $k$-covectors to $(k+1)$-covectors (and these are all fields, of course). So there is a very convenient element of closure under the operation.

Second, integration on a manifold naturally involves the tangent $k$-vector of the manifold. This is something traditional differential forms notation tends to gloss over. When you see, for example, something like this:

$$\int f \, \mathrm dx^1 \wedge \mathrm dx^2$$

It really means this:

$$\int f \, (\mathrm dx^1 \wedge \mathrm dx^2)(e_1 \wedge e_2) \, dx^1 \, dx^2$$

For this reason, the basis covectors $\mathrm dx^i$ should not be confused with the differentials $dx^i$. Further, that we use $e_1 \wedge e_2$ here, and not $e_2 \wedge e_1$, reflects an implicit choice of orientation, which is usually picked by convention from the ordering of the basis, but this need not always be the case. The tangent $k$-vector, and especially its orientation, must necessarily be considered in these integrals.
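The evaluation $(\mathrm dx^1 \wedge \mathrm dx^2)(e_1 \wedge e_2)$ above is just a $2 \times 2$ determinant of pairings between basis covectors and basis vectors. A minimal numerical sketch (the names `wedge_eval`, `dx1`, `dx2` are mine, for illustration only):

```python
import numpy as np

# Basis covectors dx1, dx2 as coordinate-extraction functionals on R^2
dx1 = lambda v: v[0]
dx2 = lambda v: v[1]

def wedge_eval(alpha, beta, u, w):
    """(alpha ^ beta)(u ^ w) = det [[alpha(u), alpha(w)], [beta(u), beta(w)]]."""
    return alpha(u) * beta(w) - alpha(w) * beta(u)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(wedge_eval(dx1, dx2, e1, e2))   # 1.0
print(wedge_eval(dx1, dx2, e2, e1))   # -1.0: reversing the orientation flips the sign
```

The sign flip under $e_2 \wedge e_1$ is exactly the orientation dependence described above.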

So why does this make $k$-covector fields preferred? Because the action of these fields on the manifold's tangent $k$-vectors is inherently nonmetrical. So differential forms allow you to do a lot of calculus without imposing a metric.

This point, however, is somewhat obfuscated when you introduce the Hodge star operator and interior differentials, for these are metrical. Then you get a big problem with differential forms: by working exclusively with $k$-covector fields, and expunging all reference to $k$-vector fields, the treatment when we do have a metric is extremely ham-fisted. Yes, you can do everything with wedges, exterior derivatives, and Hodge stars. But it makes much more sense to use corresponding grade-lowering operations and derivatives instead. Geometric calculus does this, but let me get to that in a moment.

Regarding the pushforward vs. pullback, I must confess a lack of understanding. I do not see why we would want to pull covectors back from a target manifold while insisting we must push vectors forward. I'm very familiar with the mathematics: that under a smooth map, the adjoint Jacobian transforms covectors from the target cotangent space to the original, and the inverse Jacobian does the same for vectors. Perhaps it has to do with defining the pushforward as the inverse of this inverse.
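The transformation laws just described can be checked concretely: pushing a vector forward with the Jacobian $J$ and pulling a covector back with the adjoint $J^T$ preserves the natural pairing, $\langle f^*\alpha, v\rangle = \langle \alpha, f_* v\rangle$. A small numerical sketch, using an arbitrarily chosen map $f(u,v) = (uv,\ u+v)$:

```python
import numpy as np

# Jacobian of the (hypothetical, illustrative) map f(u, v) = (u*v, u + v)
def jacobian(u, v):
    return np.array([[v, u],
                     [1.0, 1.0]])

J = jacobian(2.0, 3.0)

v_src = np.array([1.0, -1.0])      # tangent vector at the source point
alpha_tgt = np.array([0.5, 2.0])   # covector at the image point

push_v = J @ v_src                 # pushforward: source vector -> target vector
pull_a = J.T @ alpha_tgt           # pullback: target covector -> source covector

# The natural pairing is preserved: <f* alpha, v> = <alpha, f_* v>
print(np.isclose(pull_a @ v_src, alpha_tgt @ push_v))   # True
```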

Now, do all these remarks put together mean that $k$-vector fields are inherently disadvantaged, or less rich, than $k$-covector fields? I would say no. I mentioned geometric calculus earlier: it is the originator of the $\nabla \wedge$ notation that I used earlier, and it handles $k$-vector fields just fine. Geometric calculus is the calculus that goes with Clifford algebra, and you may find it illuminating. Many of the theorems and results of differential forms translate to geometric calculus and to $k$-vector fields. Stokes' theorem? Used extensively. de Rham cohomology? Most of the same results apply.

My point above about differential forms integrals using tangent $k$-vectors implicitly? That comes from geometric calculus, too, where the tangent $k$-vector is not digested "trivially" and you have to look at all the metrical ways in which it might interact with the vector field you're integrating.

A grade-lowering derivative is natural to use with $k$-vector fields. In geometric calculus, this is notated as $\nabla \cdot$. You can see that successive chains of $\nabla \cdot$ continually lower the grade of a field, just as successive exterior derivatives raise it.
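In coordinates, the simplest instance of this grade lowering is the ordinary divergence, which takes a grade-1 (vector) field to a grade-0 (scalar) field. A quick symbolic check, with the field chosen arbitrarily for illustration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# A grade-1 (vector) field; nabla . F lowers the grade to a scalar (grade 0)
F = [x*y, y*z, z*x]
div_F = sum(sp.diff(F[i], c) for i, c in enumerate((x, y, z)))
print(div_F)   # x + y + z
```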

My ultimate point is that, when you do have a metric, it's quite nonsensical to treat everything as a differential form instead of using $k$-vector fields when appropriate. I feel the tendency to do this in physics divorces students from a lot of the vector calculus they had learned, unnecessarily so. I can't speak to mathematics courses, but I imagine some of that criticism applies, too.

Now, there are some properties of covector fields and exterior derivatives that are nicer than working with vector fields. For instance, under a map $f(x) = x'$ with adjoint Jacobian $\overline f$, it's true that $\overline f(\nabla ' \wedge A') = \nabla \wedge A$ for some covector field $A$. That's a very convenient result, and there's no correspondingly nice identity for vector fields.
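This naturality, the exterior derivative commuting with pullback, can be verified symbolically for a concrete case. A sketch with an arbitrarily chosen map $(u,v) \mapsto (x,y) = (u+v,\ uv)$ and 1-form $A = xy\,dx + x^2\,dy$ (all choices are mine, for illustration):

```python
import sympy as sp

u, v = sp.symbols('u v')

# Map f(u, v) = (x, y), and a 1-form A = P dx + Q dy with P = x*y, Q = x^2
x, y = u + v, u * v
P = x * y        # P and Q written here already composed with the map
Q = x**2

# Pullback of A: substitute, and use dx = x_u du + x_v dv, dy = y_u du + y_v dv
A_du = P * sp.diff(x, u) + Q * sp.diff(y, u)
A_dv = P * sp.diff(x, v) + Q * sp.diff(y, v)

# d of the pullback: coefficient of du ^ dv is d/du(A_dv) - d/dv(A_du)
d_pullback = sp.simplify(sp.diff(A_dv, u) - sp.diff(A_du, v))

# Pullback of dA: dA = (Q_x - P_y) dx ^ dy, then multiply by det(Jacobian)
x_s, y_s = sp.symbols('x y')
dA_coeff = sp.diff(x_s**2, x_s) - sp.diff(x_s * y_s, y_s)   # Q_x - P_y = x
detJ = sp.diff(x, u) * sp.diff(y, v) - sp.diff(x, v) * sp.diff(y, u)
pullback_dA = sp.simplify(dA_coeff.subs({x_s: x, y_s: y}) * detJ)

print(sp.simplify(d_pullback - pullback_dA))   # 0: d commutes with pullback
```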

Muphrid
  • Nicely put. A rich answer. – Emily Jun 11 '14 at 04:00
  • Thanks for the informative answer, +1. To summarize your point: diff forms and multi-vector fields aren't really that "asymmetric" despite the conventional preference for diff forms; geometric calculus (whatever it means) is an alternative. So would you say the preference for diff forms is a historical accident, i.e. that it is only because the language of diff forms got popular earlier than geometric calculus? – Jia Yiyang Jun 11 '14 at 10:42
  • Yes, that's my position. And to be fair, vanilla tensor calculus handles vector fields on the same footing, too, but that gives up some of the cleanliness of the notation in forms or GC parlance. – Muphrid Jun 11 '14 at 13:18
  • Can you elaborate a bit on the distinction between differentials and covectors? And I don't really understand what $$\int f \, (\mathrm dx^1 \wedge \mathrm dx^2)(e_1 \wedge e_2) \, dx^1 \, dx^2$$ means... – goblin GONE May 13 '17 at 17:52
  • To illustrate my confusion. When we're young, we ask people "why not write $\int_{x=a}^b f(x)$ instead of the more mystical and mystifying $\int_{a}^bf(x) dx$," and they say "because $dx$ can be thought of as an infinitesimal element." Then we think about non-standard analysis just a little bit and it becomes pretty clear this is a dumb justification. Then we get a bit older, and we ask more people about the $dx$, and eventually we get the answer: "because $f(x) dx$ is a differential form!" You seem to be saying that, actually, it's not. Is that right? – goblin GONE May 13 '17 at 17:56
  • @goblin -- The integral means that the 2-form / 2-covector $(dx^1\wedge dx^2)$ is being applied to the 2-vector $(e_1\wedge e_2)$, and the resulting scalar (which should be simply $1$) is being multiplied by $f$, and integrated with respect to the coordinates $x^1,x^2$. Your question about the meaning of $dx$ (infinitesimal, or 1-form, or notation indicating the varying coordinate) is discussed elsewhere: https://math.stackexchange.com/questions/200393/what-is-dx-in-integration – mr_e_man Nov 03 '18 at 01:50
  • I'm aware this is old; but this table is helpful: $$ \begin{array}{r|c|c} & \textbf{No (non-metric) product} & \textbf{Wedge product, $\wedge$} \\\hline \text{Vector space, $V$} & \color{#a00}{\text{Plain vectors, } u \in V} & \text{$k$-vectors, }A \in \bigwedge(V) \\ \text{Dual space, $V^*$} & \text{Covectors, } \phi : V \xrightarrow{\text{linear}} \mathbb R & \color{#070}{\text{$k$-forms, }\phi : \bigwedge(V)\xrightarrow{\text{linear}} \mathbb R} \end{array} $$ – Jollywatt Jul 22 '21 at 05:00
  • Being in the dual space enables natural integration; the wedge product leads to useful higher-dimensional objects. But it's perfectly sensible to define the wedge product on plain vectors, as in geometric algebra. – Jollywatt Jul 22 '21 at 05:00

You can think of vector fields as the "Lie algebra of the diffeomorphism group"; that is, you can think of a vector field as an infinitesimal diffeomorphism. This in particular explains why you shouldn't expect vector fields to be functorial, because diffeomorphism groups are not functorial, but why you should expect vector fields to act on various other objects attached to a manifold via Lie derivative (this is some kind of "infinitesimal functoriality" for these objects).

You can think of $1$-forms as the "universal derivatives" of functions; in fact you can think of differential forms on a smooth manifold $X$ as being like the free commutative differential graded algebra generated by $C^{\infty}(X)$ (although I think this is slightly wrong as stated), which in particular explains morally why taking differential forms ought to be functorial. Because vector fields act by derivations on $C^{\infty}(X)$ this explains in some sense why vector fields pair with $1$-forms, but not why this pairing is perfect. I suspect that this is a special fact about smooth manifolds and is false in greater generality, suitably interpreted.

Qiaochu Yuan
  • Thanks and +1. This looks like good stuff but the terminologies in the 2nd paragraph are beyond my mathematical background. – Jia Yiyang Jun 11 '14 at 10:44
  • @Jia: for a simpler thing than what I said in the second paragraph see http://en.wikipedia.org/wiki/K%C3%A4hler_differential for starters (although it is slightly false that $\Omega^1(X)$ is the Kahler differentials of $C^{\infty}(X)$; one needs a notion of "smooth Kahler differential" instead). – Qiaochu Yuan Jun 11 '14 at 16:48
  • There's also something to say about Hochschild homology vs. cohomology but I've probably already gone too far. – Qiaochu Yuan Jun 11 '14 at 16:49
  • Well, certainly over my head for now but thanks for the extra information! – Jia Yiyang Jun 12 '14 at 02:09

Short answer: things that look like functions are very convenient, because we can do algebra, calculus, and such with them. Things that look like geometry are less convenient to work with.

Tangent vectors on a manifold $M$ closely relate to differentiable curves $\mathbf{R} \to M$. Infinitesimal segments of such curves give one of the visualizations of the meaning of a tangent vector; more rigorously, any tangent vector on $M$ can be identified with the image of the standard tangent vector $\partial/\partial x$ on $\mathbf{R}$ at the origin.

Cotangent vectors -- i.e. differential $1$-forms -- on a manifold $M$ closely relate to differentiable functions $M \to \mathbf{R}$. All of the things I said about tangent vectors apply in dual form to cotangent vectors.

However, we can already see one glaring asymmetry: we have a structure entirely internal to $M$ that is closely related to differentiable functions $M \to \mathbf{R}$: specifically, the notion of differentiable scalar fields.

This asymmetry was already seen in elementary calculus: functions are interesting, intervals less so. In higher dimensions, curves can be interesting, but they are studied primarily through the functions defining them and how they relate to functions.

(although see things like topology, homotopy, and homology for ways in which curves in a space can be made into a more primary object of study)

While tangent vectors seem a pleasing idea from an external point of view and are important to the relationship between various manifolds, they play a much less significant role internally to a manifold, predominantly serving to act as the dual space to cotangent vectors.


If you have a metric, then $k$-vector fields are isomorphic to $k$-forms, so in that setting forms do not have richer structure. The reason we prefer forms is that we can do a lot of operations on them without a metric.

The simplest example is integration on a line segment. Visualize this line segment as a curved piece of elastic band in space. Suppose we have a function that assigns a real value to each point on the band. Does it make sense to talk about an integral of this function that is invariant under diffeomorphisms? It does not, because stretching the band in a particular area increases the contribution of that area, thus changing the value of the integral.

Suppose instead we put a discrete set of dots on the band. No matter how we stretch it, the total number of dots between two points on the band does not change. So if we put a dot density on the band, then we have a quantity whose integral is invariant under diffeomorphisms. Such a density can be encoded by a function $f(x_1, x_2)$ giving the number of dots between $x_1$ and $x_2$. Letting $x_2$ approach $x_1$, we define $$g(x)(v) = \lim_{\epsilon \to 0} \frac{f(x, x+ \epsilon v)}{\epsilon}.$$ This is a 1-form giving the number of dots in a small interval $(x, x+dx)$ as $g(x)(dx)$; the total number of dots between $x_1$ and $x_2$ is recovered as the integral of $g(x)(dx)$. This is why forms are the right quantity to integrate. In higher dimensions, $g(x, y)(dx, dy)$ gives the number of dots in a small square at the point $(x, y)$.
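This invariance is easy to check numerically: integrating a density $g(x)\,dx$ over $[0,1]$ directly, or after pulling back through a stretching map $\phi(t) = t^2$, gives the same total. Both the density $g$ and the map $\phi$ below are arbitrary illustrative choices:

```python
import numpy as np

g = lambda x: 3 * x**2            # a dot density on [0, 1]; total dots = 1

n = 200000
t = (np.arange(n) + 0.5) / n      # midpoint rule on [0, 1]
dt = 1.0 / n

direct = np.sum(g(t)) * dt                 # integral of g(x) dx
pulled = np.sum(g(t**2) * 2 * t) * dt      # substitute x = phi(t) = t^2, dx = 2t dt

print(direct, pulled)   # both ~ 1.0: the dot count is diffeomorphism-invariant
```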

The reason we can always pull back forms but not always push forward vectors is not due to an asymmetry between vectors and forms, but due to an asymmetry in functions: each $x$ has a unique $y$, but not every $y$ has a unique $x$. When $f$ is invertible, pulling a vector back amounts to pushing it forward along $f^{-1}$; in particular, you can push forward and pull back along diffeomorphisms.

Alex Provost

I will add that smooth $k$-forms on a smooth manifold $M$ carry the additional structure of a graded algebra: the product is the wedge product, and both the exterior derivative and interior multiplication ("contraction") are anti-derivations, of degree $1$ and $-1$ respectively, which square to zero. The (Lie algebra of) smooth vector fields on $M$, by contrast, carries no nontrivial grading. There is still a product structure on vector fields, given by the Lie bracket, turning them into a non-associative algebra, but no grading a priori.

Locally trivial

I was thinking about this very topic on a walk a few weeks ago.

There is a hierarchy of possible structures you can put on a manifold, each allowing you to define a richer set of natural covariant/coordinate-free operations: you can start with only a differential structure; to this you can add a Riemannian metric; and finally you can add a connection.

While tensor algebra in its full form requires all three, the "alternating" part, exterior calculus, does not: a differential structure and a metric are enough to define the differential, codifferential, and Hodge star, which give you the de Rham cohomology, Stokes's theorem, etc. (And as Muphrid has pointed out in his answer, if you don't even have a metric, you still have the differential, though I would argue that almost all applications people have in mind for differential forms, including those listed in your question, also require the Hodge star.)

Consider for instance the second derivative of functions. The alternating part of the second derivative, $d^2f=0$, is automatically covariant without needing any notion of connection. The symmetric part, the Hessian, $\nabla \nabla f$, can only be defined in a coordinate-free way by means of a connection.
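The identity $d^2 f = 0$ reduces, in coordinates, to the equality of mixed partial derivatives: the coefficient of $dx \wedge dy$ in $d(df)$ is $f_{yx} - f_{xy}$. A one-line symbolic check (the function $f$ below is an arbitrary example):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x) * sp.sin(y)        # any smooth function works here

# df = f_x dx + f_y dy; the coefficient of dx ^ dy in d(df) is f_yx - f_xy
coeff = sp.diff(sp.diff(f, y), x) - sp.diff(sp.diff(f, x), y)
print(sp.simplify(coeff))   # 0: d^2 f = 0 by equality of mixed partials
```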

So to answer your question, I wouldn't say that differential forms have richer structure than vector fields; after all they are equivalent via the musical isomorphisms. But I would say that exterior calculus, being the part of tensor calculus that is automatically coordinate-free without additional structure in the form of a connection, is more fundamental, or at least more basic, than the full tensor calculus.
