As a physics student, I've come across mathematical objects called tensors in several different contexts. Perhaps confusingly, I've also been given both the mathematician's and physicist's definition, which I believe are slightly different.

I currently think of them in the following ways, but have a tough time reconciling the different views:

  • An extension/abstraction of scalars, vectors, and matrices in mathematics.
  • A multi-dimensional array of elements.
  • A mapping between vector spaces that represents a co-ordinate independent transformation.

In fact, I'm not even sure how correct these three definitions are. Is there a particularly relevant (rigorous, even) definition of tensors and their uses, that might be suitable for a mathematical physicist?

Direct answers/explanations, as well as links to good introductory articles, would be much appreciated.

  • There is a perfectly rigorous definition and a lot of exposition at the Wikipedia article. – Qiaochu Yuan Nov 14 '10 at 19:51
  • No. I have checked out the Wikipedia article, and it is not very informative. I do not trust its definition overly much anyway. As is commonly said, Wikipedia for mathematics is *only useful* once you understand the subject. It is not a good way to learn it. – Noldorin Nov 14 '10 at 20:17
  • What kind of mathematical physics are you doing where you see a need for tensors? Depending on the application you have in mind, the level of sophistication of the answer will have to be chosen appropriately. If you're doing continuum mechanics or general relativity, the simplest definition of tensors (multilinear functions out of a product of copies of a vector space and its dual to the scalar field) would suffice. If you're interested in more sophisticated applications, perhaps you'll need the general-nonsense approach that Zach describes below. – Ryan Budney Nov 14 '10 at 20:33
  • See: http://en.wikipedia.org/wiki/Multilinear_map – Ryan Budney Nov 14 '10 at 20:35
  • Just a small remark: when physicists talk about tensors, they often mean tensor *fields* on manifolds. This might lead to some confusion when comparing definitions if one is not aware of it. – Hans Lundmark Nov 14 '10 at 20:44
  • If your interest is in continuum mechanics and general relativity, that kind of thing, then Schutz's "A first course in general relativity" has a very nice (and substantial) section on tensors and tensor fields that would be appropriate. – Ryan Budney Nov 14 '10 at 20:56
  • @Ryan: Moment of inertia tensors would be a good start. I will get onto Riemann tensors and whatnot in general relativity soon, but MoI tensors are a good place to start, I think. Cheers for the recommendation! – Noldorin Nov 14 '10 at 21:02
  • @Hans: That is very true, and perhaps why I have been a bit confused. I guess this definition is most apparent in general relativity? – Noldorin Nov 14 '10 at 21:02
  • Okay, then sections 3 and 4 of Schutz's book are close to exactly what you're looking for. – Ryan Budney Nov 14 '10 at 21:07
  • I recommend that you take a look at the book "Tensor Analysis on Manifolds", by Bishop & Goldberg. – Ronaldo Nov 15 '10 at 02:14
  • There's a nice discussion of tensors with lots of examples in Shafarevich's "Basic notions of algebra". In fact, that whole book is nothing but examples in algebra, though Shafarevich's idea of what "basic" means may differ from your own. – Gunnar Þór Magnússon Nov 15 '10 at 13:50
  • Why is the definition of a tensor different for mathematicians and physicists? – Mar 17 '13 at 11:19
  • @RyanBudney https://en.wikipedia.org/wiki/Einstein_tensor – Déjà vu Jan 30 '16 at 15:54
  • Related: https://physics.stackexchange.com/q/32011/6336 – 0x90 Nov 20 '17 at 05:26

6 Answers


At least to me, it is helpful to think in terms of bases. (I'll only be talking about tensor products of finite-dimensional vector spaces here.) This makes the universal mapping property that Zach Conn talks about a bit less abstract (in fact, almost trivial).

First recall that if $L: V \to U$ is a linear map, then $L$ is completely determined by what it does to a basis $\{ e_i \}$ for $V$: $$L(x)=L\left( \sum_i x_i e_i \right) = \sum_i x_i L(e_i).$$ (The coefficients of $L(e_i)$ in a basis for $U$ give the $i$th column in the matrix for $L$ with respect to the given bases.)
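Concretely, knowing only the images $L(e_i)$ is the same as knowing the matrix of $L$, column by column. A minimal NumPy sketch (the map and numbers here are made up for illustration):

```python
import numpy as np

# An arbitrary linear map L : R^3 -> R^2, specified only by its values
# on the basis vectors e_1, e_2, e_3.
L_e = [np.array([1.0, 2.0]),   # L(e_1)
       np.array([0.0, 1.0]),   # L(e_2)
       np.array([3.0, 1.0])]   # L(e_3)

# The matrix of L has L(e_i) as its i-th column.
M = np.column_stack(L_e)

x = np.array([2.0, -1.0, 0.5])
# L(x) = sum_i x_i L(e_i) agrees with the matrix-vector product M x.
lhs = sum(x[i] * L_e[i] for i in range(3))
assert np.allclose(lhs, M @ x)
```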

Tensors come into the picture when one studies multilinear maps. If $B: V \times W \to U$ is a bilinear map, then $B$ is completely determined by the values $B(e_i,f_j)$ where $\{ e_i \}$ is a basis for $V$ and $\{ f_j \}$ is a basis for $W$: $$B(x,y) = B\left( \sum_i x_i e_i,\sum_j y_j f_j \right) = \sum_i \sum_j x_i y_j B(e_i,f_j).$$ For simplicity, consider the particular case when $U=\mathbf{R}$; then the values $B(e_i,f_j)$ make up a set of $N=mn$ real numbers (where $m$ and $n$ are the dimensions of $V$ and $W$), and these numbers are all that we need to keep track of in order to know everything about the bilinear map $B:V \times W \to \mathbf{R}$.

Notice that in order to compute $B(x,y)$ we don't really need to know the individual vectors $x$ and $y$, but rather the $N=mn$ numbers $\{ x_i y_j \}$. Another pair of vectors $v$ and $w$ with $v_i w_j = x_i y_j$ for all $i$ and $j$ will satisfy $B(v,w)=B(x,y)$.

This leads to the idea of splitting the computation of $B(x,y)$ into two stages. Take an $N$-dimensional vector space $T$ (they're all isomorphic so it doesn't matter which one we take) with a basis $(g_1,\dots,g_N)$. Given $x=\sum x_i e_i$ and $y=\sum y_j f_j$, first form the vector in $T$ whose coordinates with respect to the basis $\{ g_k \}$ are given by the column vector $$(x_1 y_1,\dots,x_1 y_n,x_2 y_1,\dots,x_2 y_n,\dots,x_m y_1,\dots,x_m y_n)^T.$$ Then run this vector through the linear map $\tilde{B}:T\to\mathbf{R}$ whose matrix is the row vector $$(B_{11},\dots,B_{1n},B_{21},\dots,B_{2n},\dots,B_{m1},\dots,B_{mn}),$$ where $B_{ij}=B(e_i,f_j)$. This gives, by construction, $\sum\sum B_{ij} x_i y_j=B(x,y)$.

We'll call the space $T$ the tensor product of the vector spaces $V$ and $W$ and denote it by $T=V \otimes W$; it is “uniquely defined up to isomorphism”, and its elements are called tensors. The vector in $T$ that we formed from $x\in V$ and $y\in W$ in the first stage above will be denoted $x \otimes y$; it's a “bilinear mixture” of $x$ and $y$ which doesn't allow us to reconstruct $x$ and $y$ individually, but still contains exactly all the information needed in order to compute $B(x,y)$ for any bilinear map $B$; we have $B(x,y)=\tilde{B}(x \otimes y)$. This is the “universal property”; any bilinear map $B$ from $V \times W$ can be computed by taking a “detour” through $T$, and this detour is unique, since the map $\tilde{B}$ is constructed uniquely from the values $B(e_i,f_j)$.
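If it helps to see the two-stage computation numerically, here is a minimal sketch (NumPy, with made-up numbers): stage one forms the coordinates $x_i y_j$ of $x \otimes y$, stage two applies the linear map $\tilde{B}$, and the result agrees with evaluating $B$ directly.

```python
import numpy as np

# B_ij = B(e_i, f_j), with dim V = 2 and dim W = 3 (made-up values).
B = np.array([[1.0, 0.0, 2.0],
              [3.0, 1.0, 1.0]])

x = np.array([1.0, 2.0])
y = np.array([0.5, 1.0, -1.0])

# Stage 1: the tensor x ⊗ y, whose coordinates are the products x_i y_j.
xy = np.outer(x, y)               # shape (2, 3), flattened into T = V ⊗ W below

# Stage 2: the linear map B~ on T, here just a dot product of the
# flattened coefficient array with the flattened tensor.
via_tensor = B.flatten() @ xy.flatten()

# Direct evaluation of the bilinear map: sum_ij B_ij x_i y_j.
direct = x @ B @ y
assert np.isclose(via_tensor, direct)
```

Note that $\tilde{B}$ sees only $x \otimes y$, not $x$ and $y$ individually, exactly as in the discussion above.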

To tidy this up, one would like to make sure that the definition is basis-independent. One way is to check that everything transforms properly under changes of bases. Another way is to do the construction by forming a much bigger space and taking a quotient with respect to suitable relations (without ever mentioning bases). Then, by untangling definitions, one can for example show that a bilinear map $B:V \times W \to \mathbf{R}$ can be canonically identified with an element of the space $V^* \otimes W^*$, and dually an element of $V \otimes W$ can be identified with a bilinear map $V^* \times W^* \to \mathbf{R}$. Yet other authors find this a convenient starting point, so that they instead define $V \otimes W$ to be the space of bilinear maps $V^* \times W^* \to \mathbf{R}$. So it's no wonder that one can become a little confused when trying to compare different definitions...

Hans Lundmark
  • Now, both the linear map and the elements of $T$ have $mn$ components. So what is the tensor? The elements of $T$ or the linear map? Or both? How are these different from an abstract vector? – Isomorphic Mar 24 '14 at 05:12
  • @Iota: Not quite sure I understand your question... But a tensor is an element of a tensor product space, so for example $x \otimes y + v \otimes u$ is a tensor in the space $V \otimes U$ if $x,v \in V$ and $y,u \in U$. The tensor product space $V \otimes U$ is a vector space, so in that respect it's no different from other abstract vector spaces, but it does have the special feature of being built up from underlying vector spaces $V$ and $U$; for example, if you make a change of basis in $V$ and/or $U$, the induced coordinates in $V \otimes U$ change in a particular way. – Hans Lundmark Mar 24 '14 at 07:38
  • I have edited the answer to swap the roles of $U$ and $W$. So now the tensor product is $V \otimes W$ instead of the previous (somewhat backwards-looking) $V \otimes U$. – Hans Lundmark Feb 19 '15 at 11:24
  • @Isomorphic: The tensor is the map itself. Its components are sometimes called coordinates; they are the ones that transform and the ones you calculate with, so they are usually what gets called the "tensor". – Ziad H. Muhammad Jul 22 '21 at 20:42

In mathematics, tensors are one of the first objects encountered which cannot be fully understood without their accompanying universal mapping property.

Before talking about tensors, one needs to talk about the tensor product of vector spaces. You are probably already familiar with the direct sum of vector spaces. This is an addition operation on spaces. The tensor product provides a multiplication operation on vector spaces.

The key feature of the tensor product is that it replaces bilinear maps on a cartesian product of vector spaces with linear maps on the tensor product of the two spaces. In essence, if $V,W$ are vector spaces, there is a bijective correspondence between the set of bilinear maps on $V\times W$ (to any target space) and the set of linear maps on $V\otimes W$ (the tensor product of $V$ and $W$).

This can be phrased in terms of a universal mapping property. Given vector spaces $V,W$, a tensor product $V\otimes W$ of $V$ and $W$ is a space together with a map $\otimes : V\times W \rightarrow V\otimes W$ such that for any vector space $X$ and any bilinear map $f : V\times W \rightarrow X$ there exists a unique linear map $\tilde{f} : V\otimes W \rightarrow X$ such that $f = \tilde{f}\circ \otimes$. In other words, every bilinear map on the cartesian product factors uniquely through the tensor product.

It can be shown using a basic argument that the tensor product is unique up to isomorphism, so you can speak of "the" tensor product of two spaces rather than "a" tensor product, as I did in the previous paragraph.

A tensor is just an element of a tensor product.

One must show that such a tensor product exists. The standard construction is to take the free vector space over $V\times W$ and introduce various bilinearity relations. See my link at the bottom for an article that does this explicitly. In my experience, however, the key is to be able to use the above mapping property; the particular construction doesn't matter much in the long run. The map $\otimes : V\times W \rightarrow V\otimes W$ sends the pair $(v,w) \in V\times W$ to $v\otimes w \in V\otimes W$. The image of $\otimes$ is the space of so-called elementary tensors, but a general element of $V\otimes W$ is not an elementary tensor but rather a linear combination of elementary tensors. (In fact, due to bilinearity, it is enough to say that a general tensor is a sum of elementary tensors with the coefficients all being 1.)

The most generic reason why tensors are useful is that the tensor product is a machine for replacing bilinear maps with linear ones. In much of mathematics and physics, one seeks to find linear approximations to things; tensors can be seen as one tool for this, although exactly how they accomplish it is less clear than many other tools in the same vein. Here are some more specific reasons why they are useful.

For finite-dimensional spaces $V,W$, the tensor product $V^*\otimes W$ is isomorphic to the space of homomorphisms $\text{Hom}(V,W)$. So in other words every linear map $V \rightarrow W$ has a tensor expansion, i.e., a representation as a tensor in $V^* \otimes W$. For instance, if $\{v_i\}$ is a basis of $V$ and $\{x_i\}$ is the dual basis of $V^*$, then $\sum x_i \otimes v_i \in V^* \otimes V$ is a tensor representation of the identity map on $V$.
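Here is a small numerical sketch of that isomorphism (NumPy, with the standard basis of $\mathbf{R}^3$ and a made-up matrix): elementary tensors $f \otimes w$ become rank-one matrices, the identity map is $\sum_i e_i^* \otimes e_i$, and any matrix is a sum of such terms.

```python
import numpy as np

d = 3
basis = np.eye(d)                  # standard basis e_i of R^d (dual basis: same rows)

# An elementary tensor f ⊗ w in V* ⊗ W acts on v as f(v) w,
# so as a linear map its matrix is the rank-one outer product w f^T.
def as_matrix(f, w):
    return np.outer(w, f)

# The identity map on V is the tensor  sum_i e_i^* ⊗ e_i.
identity = sum(as_matrix(basis[i], basis[i]) for i in range(d))
assert np.allclose(identity, np.eye(d))

# More generally, any linear map has a tensor expansion: here we expand
# an arbitrary (made-up) matrix A as  sum_ij A_ij  e_j^* ⊗ e_i.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 4.0],
              [3.0, 0.0, 1.0]])
expansion = sum(A[i, j] * as_matrix(basis[j], basis[i])
                for i in range(d) for j in range(d))
assert np.allclose(expansion, A)
```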

Tensor products tend to appear in a lot of unexpected places. For instance, in analyzing the linear representations of a finite group, once the irreducible representations are known it can be of benefit to construct also a "tensor product table" which decomposes the tensor products of all pairs of irreducible representations as direct sums of irreducible representations.

In physics, one often talks about a rank $n$ tensor being an assembly of numbers which transform in a certain way under change of coordinates. What one is really describing here is all the different coordinate representations of an abstract tensor in a tensor power $V^{\otimes n}$.

If one takes the direct sum of all tensor powers of a vector space $V$, one obtains the tensor algebra over $V$. In other words, the tensor algebra is the construction $k\oplus V\oplus (V\otimes V) \oplus (V\otimes V\otimes V) \oplus \dots$, where $k$ is the base field. The tensor algebra is naturally graded, and it admits several extremely useful quotient algebras, including the well-known exterior algebra of $V$. The exterior algebra provides the natural machinery for differential forms in differential geometry.
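A tiny numerical illustration of the exterior algebra (NumPy, with made-up vectors): realize $x \wedge y$ inside $V \otimes V$ as the antisymmetrized tensor; in $\mathbf{R}^3$ its three independent components recover the classical cross product.

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0])
y = np.array([0.0, 1.0, 3.0])

# The wedge x ∧ y, realized inside V ⊗ V as the antisymmetrized tensor
# with components x_i y_j - x_j y_i.
wedge = np.outer(x, y) - np.outer(y, x)

# It is antisymmetric, so only the strictly upper triangle carries information:
# dim(Λ² R^3) = 3, not 9.
assert np.allclose(wedge, -wedge.T)

# In R^3 the three independent components are exactly the cross product.
components = np.array([wedge[1, 2], wedge[2, 0], wedge[0, 1]])
assert np.allclose(components, np.cross(x, y))
```

This is one way to see why the cross product is special to three dimensions: only there does $\bigwedge^2 V$ have the same dimension as $V$.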

Here's an example of the exterior algebra in practice. Suppose one wishes to classify all nonabelian two-dimensional Lie algebras $\mathfrak{g}$. The Lie bracket $[\cdot,\cdot]$ is antisymmetric and bilinear, so the machinery of tensor products turns it into a linear map $\bigwedge^2 V \rightarrow V$, where $V$ is the underlying vector space of the algebra. Now $\bigwedge^2 V$ is one-dimensional and since the algebra is nonabelian the Lie bracket is not everywhere zero; hence as a linear map the Lie bracket has a one-dimensional image. Then one can choose a basis $\{X,Y\}$ of $V$ such that $[X,Y] = X$, and we conclude that there is essentially only one nonabelian Lie algebra structure on a two-dimensional vector space.

A fantastic reference on tensor products of modules was written by Keith Conrad: http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/tensorprod.pdf

Zach Conn
  • This looks like a very thorough and pretty well justified introduction to tensors - many thanks! Will give this a proper read-over tomorrow hopefully. – Noldorin Nov 14 '10 at 21:05
  • I think Qiaochu Yuan's answer is probably closer to what you need for mathematical physics. I still think it's worthwhile reading over at least the first few sections of Keith Conrad's notes. They are actually the nicest source I know of describing general tensor products. – Zach Conn Nov 14 '10 at 22:04
  • Roman's Advanced Linear Algebra, in Chapter 14, also has a very nice explanation of the tensor product that closely aligns with Zach's answer. – ItsNotObvious Jun 08 '11 at 22:04
  • There's also a Part II of Keith Conrad's paper: https://kconrad.math.uconn.edu/blurbs/linmultialg/tensorprod2.pdf – Yakov Shklarov Jul 10 '19 at 05:27
  • Is there a simple example showing the benefit of converting a multi-linear mapping to a linear mapping? – bruin Sep 26 '19 at 07:22

Once you understand what a tensor product is and what a dual space is, then a tensor of type $(n, m)$ is an element of $V^{\ast \otimes m} \otimes V^{\otimes n}$ where $V$ is some vector space. This is the same thing as a multilinear map $V^m \to V^{\otimes n}$ or, if you don't like the asymmetry, a multilinear map $V^{\ast n} \times V^{m} \to F$ (where $F$ is the underlying field). Examples:

  • A tensor of type $(0, 0)$ is a scalar.
  • A tensor of type $(1, 0)$ is a vector.
  • A tensor of type $(0, 1)$ is a covector.
  • A tensor of type $(1, 1)$ is a linear transformation.
  • A tensor of type $(0, 2)$ is a bilinear form, for example an inner product.

When you pick a basis of $V$, you can write tensors in terms of the natural basis on $V^{\ast \otimes m} \otimes V^{\otimes n}$ coming from taking products of the basis on $V$ with the corresponding dual basis on $V^{\ast}$. This is where the "multidimensional array" definition of a tensor comes from, since this is the natural generalization of writing a matrix as a square array (which is equivalent to writing an element of $V^{\ast} \otimes V$ in terms of the basis $e_i^{\ast} \otimes e_j$ where $\{ e_i \}$ is a basis).
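One payoff of tracking the type, not just the array shape: a $(1,1)$-tensor and a $(0,2)$-tensor are both stored as square arrays, but under a change of basis $P$ the former transforms by similarity $P^{-1}AP$ while the latter transforms by congruence $P^{T}gP$. A sketch in NumPy, with random made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 3))              # change of basis (columns = new basis vectors)
A = rng.normal(size=(3, 3))              # a (1,1)-tensor: a linear map
g = rng.normal(size=(3, 3)); g = g + g.T # a symmetric (0,2)-tensor: a bilinear form

v = rng.normal(size=3)
w = rng.normal(size=3)
v_new = np.linalg.solve(P, v)            # coordinates of v in the new basis
w_new = np.linalg.solve(P, w)

# (1,1): similarity transform. The map itself is unchanged:
# A v in old coordinates equals P (A' v_new).
A_new = np.linalg.solve(P, A @ P)
assert np.allclose(A @ v, P @ (A_new @ v_new))

# (0,2): congruence transform. The scalar g(v, w) is coordinate-independent.
g_new = P.T @ g @ P
assert np.allclose(v @ g @ w, v_new @ g_new @ w_new)
```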

When a physicist says "tensor," sometimes they mean a tensor field. This is a "globalization" of the above definition: it is a compatible set of choices, for each tangent space $V = T_p(M)$ of a smooth manifold $M$, of a tensor of type $(n, m)$ as defined above. Note that $V^{\ast}$ is the cotangent space. Examples:

  • A tensor field of type $(0, 0)$ is a smooth function.
  • A tensor field of type $(1, 0)$ is a vector field.
  • A tensor field of type $(0, 1)$ is a differential $1$-form.
  • A tensor field of type $(1, 1)$ is a morphism of vector fields.
  • A tensor field of type $(0, 2)$ which is symmetric and nondegenerate is a metric tensor. If it is also positive-definite, it is a Riemannian metric. If it has signature $(1, n-1)$, it is a Lorentzian metric.
Qiaochu Yuan
  • I am not 100% sure if your $(m,n)$ convention is standard. I seem to recall physics papers using $(0,2)$ type for the metric tensor... – Willie Wong Nov 14 '10 at 21:44
  • @Willie: I have no idea. I'm going off the convention from the Wiki article. – Qiaochu Yuan Nov 14 '10 at 21:48
  • Okay. I am just curious if there is a convention preferred by the algebraic community. At least some geometers do it the other way (like I described) (Barrett O'Neill being one whose book I have lying around). I am sort of curious whether the choice of the $(m,n)$ ordering reflects some sort of innate preference for acting on objects from the left versus from the right... – Willie Wong Nov 14 '10 at 22:13
  • @Willie: I don't know. I think it is slightly more sensible to place duals on the left because the composition map $\mathrm{Hom}(X,Y) \times \mathrm{Hom}(Y,Z) \to \mathrm{Hom}(X,Z)$ looks more natural written as $X^* \times Y \times Y^* \times Z \to X^* \times Z$ (since one can contract indices on the inside) than the other way around. – Qiaochu Yuan Nov 14 '10 at 22:43
  • @Qiaochu: Wikipedia agrees with Willie (and with all other sources I've seen) as far as I can tell: an $(m,n)$ tensor has $m$ upper indices and $n$ lower indices, and is an element of $V \otimes \dots \otimes V \otimes V^* \otimes \dots \otimes V^*$. http://en.wikipedia.org/wiki/Tensor_product#Tensor_product_of_two_tensors, http://en.wikipedia.org/wiki/Classical_treatment_of_tensors#..._as_multilinear_maps – Hans Lundmark Nov 15 '10 at 05:38
  • @Hans: wiki uses the opposite convention at http://en.wikipedia.org/wiki/Tensor#..._as_multilinear_maps . I don't think it really matters. – Qiaochu Yuan Nov 15 '10 at 09:19
  • @Qiaochu: Of course it's not a big deal, but I still think you're reading it backwards. If you specialize to $n=0$ and $m=1$, the statement on that Wiki page is that a $(0,1)$ tensor is a map from $V$ to $\mathbf{R}$; in other words, it's a *co*vector, not a vector. – Hans Lundmark Nov 15 '10 at 11:14
  • @Hans: ooh. You're right. Will edit. – Qiaochu Yuan Nov 15 '10 at 11:22
  • @QiaochuYuan, do you major in physics or maths? – Pacerier Aug 28 '17 at 21:14

Mathematicians and physicists use very different languages when they talk about tensors. Fortunately, they are talking about the same thing, but unfortunately, this is not obvious at all. Let me explain.

For simplicity, I'm going to focus on covariant 2-tensors, since this case already contains the main intuition. Also, I'm not going to talk about the distinction between covariant and contravariant, but I'll get all the indices right for future study.

Physicist's definition

Definition: A covariant 2-tensor is a set of numbers $t_{ij}$ with two indices that transforms in a particular way under a change of coordinates… Wait, wait, coordinates in what space? Physicists usually don't mention it, but they mean coordinates in a given vector space $V$.

More precisely, let $\{\vec e_i\}$ be a basis of the vector space $V$. Then, every vector $\vec v$ can be expressed in terms of its coordinates $v^i$ as follows:

$$\vec v = \sum_i v^i \vec e_i .$$

So, there are two objects: the vector $\vec v$ which I think of as "solid" or "fundamental", and its coordinates $v^i$, which are "ephemeral", since I have to choose a basis $\vec e_i$ before I can talk about them at all.

Furthermore, in a different basis $\{\vec e'_i\}$ of our vector space, the coordinates of one and the same vector $\vec v$ are very different numbers:

$$ \vec v = \sum_i v^i \vec e_i = \sum_i v'^i \vec e'_i ,$$

but $v^i \neq v'^i$ in general. So, the vector is the fundamental thing. Its coordinates are useful for calculations, but they are ephemeral and heavily depend on the choice of basis.

Now, when defining a covariant 2-tensor, physicists do something very mysterious: they define a fundamental object (= the 2-tensor) not by describing it directly, but only by specifying what its ephemeral coordinates look like and how they change when switching to a different basis. Namely, a change of basis

$$ \vec e'_i = \sum_a R_i^a \vec e_a $$

will change the coordinates $t_{ij}$ of the tensor via

$$ t'_{ij} = \sum_{ab} R^a_i R^b_j t_{ab} .$$

If that is not completely unintuitive, I don't know what is.

Mathematician's definition

Mathematicians define tensors differently. Namely, they give a direct, fundamental description of what a 2-tensor is, and only then ponder what it looks like in different coordinate systems.

Here is the definition: a covariant 2-tensor $t$ is a bilinear map $t : V\times V \to \mathbb{R}$. That's it. (Bilinear = linear in both arguments).

In other words, a covariant 2-tensor $t$ is a thing that eats two vectors $\vec v$, $\vec w$ and returns a number $t(\vec v, \vec w) \in\mathbb{R}$.

Now, what does this thing look like in coordinates? Choosing a basis $\lbrace \vec e_i \rbrace$, bilinearity allows us to write

$$ t(\vec v, \vec w) = t(\sum_i v^i \vec e_i, \sum_j w^j \vec e_j) = \sum_{ij} v^iw^j t(\vec e_i,\vec e_j) .$$

Now, we simply call the numbers $t_{ij} = t(\vec e_i, \vec e_j)$ the coordinates of the tensor $t$ in the basis $\vec e_i$. You can calculate that these numbers will behave just like the physicists tell us when you change the basis to $\vec e_i'$. So, the physicist's tensor and the mathematician's tensor are one and the same thing.
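That last claim is easy to check numerically. A quick sketch (NumPy, random made-up data): define $t$ by a coefficient matrix in the old basis, evaluate it directly on the new basis vectors, and compare with the physicist's transformation rule.

```python
import numpy as np

rng = np.random.default_rng(1)
t_mat = rng.normal(size=(3, 3))    # t_ab = t(e_a, e_b): coordinates in the old basis

def t(v, w):                       # the bilinear map itself (old coordinates)
    return v @ t_mat @ w

R = rng.normal(size=(3, 3))        # change of basis: e'_i = sum_a R_i^a e_a
e_new = R                          # row i holds the old coordinates of e'_i

# Mathematician: just evaluate the map on the new basis vectors.
t_new = np.array([[t(e_new[i], e_new[j]) for j in range(3)] for i in range(3)])

# Physicist: the transformation rule t'_ij = sum_ab R_i^a R_j^b t_ab.
rule = np.einsum('ia,jb,ab->ij', R, R, t_mat)
assert np.allclose(t_new, rule)
```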

Tensor product

Actually, mathematicians do something more advanced, they define a so called tensor product of vector spaces. The previous definition as a bilinear map is still correct, but mathematicians like to write this as "$t\in V^*\otimes V^*$" instead of "$t: V\times V \to \mathbb{R}$ and $t$ bilinear".

However, for a first understanding of the physicist's vs the mathematician's definition, it is not necessary to understand the mathematical tensor product.

Greg Graviton
  • This is one of the few topics where I find that mathematicians take the more intuitive approach than the physicists, and you explain the contrast well. +1 – Raskolnikov Nov 29 '10 at 16:05
  • Einstein used discussions with Grossman to come up with general relativity. Arguably Grossman could have shared the Nobel prize for that work... except Einstein never got a Nobel for that topic. – DWin Nov 25 '19 at 01:59
  • Unfortunately, the motive for defining covariant tensors this way has been lost down the decades, ending up a parrot mathematical definition sending some physics students mad. Covariant vectors were defined by their product with the corresponding contravariant vector being an invariant. For example, energy change as the product of a force and a displacement is an invariant in every Galilean coordinate system. Einstein gave a great introduction to tensors in his 1916 GR paper. – user10389 Sep 19 '20 at 03:38
  • @user10389: The multilinear-map definition (not different from, but less confusing to starters than, what you are describing) didn't lose the geometric idea behind it. If you see it work with tensors on the vector space of "classical" vectors (line segments with orientation), you will see it's a coordinate-free map, which means that just as vectors are coordinate-free, tensors are geometric objects. – Ziad H. Muhammad Jul 22 '21 at 20:26

The most general notion I know of is the tensor product of modules. You can read about it here: http://en.wikipedia.org/wiki/Tensor_product_of_modules

Since vector spaces are modules, this definition specializes to vector spaces. The tensor product of elements in these vector spaces that one usually sees in engineering and physics texts (frequently matrices) is basically an element in the tensor product of the corresponding vector spaces.

Timothy Wagner
  • Thanks for the answer. Since as a physicist, I probably don't need to know about modules, can I essentially read it as if they are vector spaces? – Noldorin Nov 14 '10 at 20:17
  • @Noldorin: Yes, modules are a sort of generalization of vector spaces where the "scalars" live in a ring (generalization of a field) just like they live in fields when we talk about vector spaces. As far as tensor products go you will lose nothing in reading the abstract definition replacing module by a vector space and a ring by a field. – Timothy Wagner Nov 14 '10 at 20:21
  • @Noldorin contd: Two other references that do a good job of this are: Chapter 2 in Atiyah Macdonald and Stephen Roman's book on Advanced linear algebra (where the case of vector spaces is treated directly). – Timothy Wagner Nov 14 '10 at 20:23
  • Excellent. I'll try to get a hold of one of those books and have a read. Thanks again. – Noldorin Nov 14 '10 at 21:03
  • @Noldorin Don't be so sure. Cohomology typically builds on a strong base in module theory. Modules of group algebras are important for doing harmonic analysis with noncommutative groups (which comes up in quantum physics) and representation theory. Lie algebra cohomology and Hochschild cohomology, on which cyclic cohomology and thus noncommutative topology is at least initially based, is defined using modules of algebras in a way analogous to how group cohomology can be defined in terms of modules of the group algebra. C*-algebras and group algebras use about the same semantics. – Loki Clock Feb 19 '15 at 11:44
  • @Noldorin And for practical purposes, yes, reading "vector space" wherever you see "module" will do you good, but I actually found I was much more comfortable with the tensor product after I worked out examples of tensor products over $\mathbb{Z}$ of small clocks. Tensor products and dual spaces of small-dimensional vector spaces over the 3-hour clock are also very effective ways to connect the abstractions to concrete algebra. – Loki Clock Feb 19 '15 at 11:58

Thanks for the nicely framed question!

In this answer, I'll not try to reconcile the three views the questioner mentioned, but as they seem to be interested in understanding tensors, I want to present an aspect of tensors that may help in understanding them intuitively. For a formal definition and other explanations, please do look at the other answers.

Tensors in physics and mathematics have two different but related interpretations - as physical entities and as transformation mapping.

From a physical-entity point of view, a tensor can be interpreted as something that brings different components of the same entity together without adding them in a scalar or vector sense of addition. E.g.

  1. If I have 2gm of Calcium and 3gm of Calcium together, I immediately have 5gm of Calcium - this is scalar addition, and we can perceive the resulting substance.
  2. If I am moving at 5i m/s and 6j m/s at the same time, I'm moving at (5i+6j) m/s. This is vector addition, and once again, we can make sense of the resulting entity.
  3. If I have monochromatic pixels embedded in a cube that emit light at different angles, we can define pixels per unit area ($\chi$) in the cube as $\begin{bmatrix} \chi_x&\chi_y&\chi_z \end{bmatrix}$ where $\chi_x$ is the number of pixels emitting light perpendicular to the area in yz plane, and so on.
    This entity, $\chi$, has three components, and by writing $\chi$, we are writing the three components together. Apart from that, the three components cannot be added like a scalar or vector, and we cannot visualize $\chi$ as a single entity.

$\chi$ above is an example of a tensor. Though we may not be able to see $\chi$ as a single perceivable thing, it can be used to fetch or understand perfectly comprehensible entities, e.g. for a given area $\vec{s}$, we can get the total number of pixels emitting light perpendicular to it by the product: $$ \begin{bmatrix}\chi_x&\chi_y&\chi_z \end{bmatrix}\cdot \begin{bmatrix}s_x\\s_y\\s_z \end{bmatrix}$$

Change the monochromatic pixels in this example to RGB pixels, and we get something very similar to the stress tensor (a tensor of rank 2), from which we can get the traction vector (force per unit area for a given unit normal $\textbf{n}$) by the equation:

$$\textbf{T}^{(\textbf{n})} = \begin{bmatrix} T_x\\T_y\\T_z \end{bmatrix}^{(n)} = \textbf{n} \cdot \boldsymbol{\sigma} = \begin{bmatrix}\sigma_{xx}&\sigma_{xy}&\sigma_{xz}\\ \sigma_{yx}&\sigma_{yy}&\sigma_{yz}\\ \sigma_{zx}&\sigma_{zy}&\sigma_{zz}\\ \end{bmatrix} \begin{bmatrix}n_x\\n_y\\n_z \end{bmatrix} $$

Though it's difficult to visualize the stress tensor in totality, each of its components tells us something very concrete, e.g. $\sigma_{xx}$ tells us how much force in the x-direction is experienced by a unit surface area perpendicular to the x-direction (at a given point in a solid). The complete stress tensor $\sigma$ tells us the force that a unit-area surface facing any direction will experience. Once we fix the direction, we get the traction vector from the stress tensor; the stress tensor, so to speak (though not literally), collapses to the traction vector.
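The traction computation above is just a matrix-vector product. A minimal sketch (NumPy, with a made-up symmetric stress tensor):

```python
import numpy as np

# A made-up symmetric stress tensor (components in units of pressure),
# for illustration only.
sigma = np.array([[10.0,  2.0,  0.0],
                  [ 2.0,  5.0,  1.0],
                  [ 0.0,  1.0,  8.0]])

# Unit normal of the surface we probe, here facing the x-direction.
n = np.array([1.0, 0.0, 0.0])

# Traction vector: the force per unit area on a surface with normal n.
T = sigma @ n
# Its x-component is sigma_xx, as described above.
assert np.isclose(T[0], sigma[0, 0])
```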

Note that it is sometimes possible to interpret a tensor as a single physical entity, or as something that makes sense visually. E.g., vectors are tensors, and we can visualize most of them (e.g. velocity, the electromagnetic field).

I needed to keep it succinct here, but more explanation on similar lines can be found here.
