Suppose (for contradiction) that there is a division algebra (associative and/or normed, if necessary) over $\mathbb{R}^3$. Is there a simple way to use this to construct a nonvanishing continuous tangent vector field on $\mathbb{S}^2$, and thus contradict the hairy ball theorem?

Sure: assume there exists a division algebra over $\mathbb R^3$. It is also true that a division algebra does not exist. Therefore, "False" is a true statement, and since False $\implies A$ is true for any statement $A$, just take "the hairy ball theorem is false" as your statement $A$. – 5xum Apr 08 '14 at 12:42

And there is no such thing as a division algebra over $\Bbb R^3$. Any such algebra would contain a copy of $\Bbb R^3$, which has zero divisors. – rschwieb Apr 08 '14 at 12:43

@rschwieb I think the question is to construct a counterexample to the hairy ball theorem given a $3$-dimensional division algebra over $\mathbb R$. – Dustan Levenstein Apr 08 '14 at 12:44

Why are you guys playing dumb? I assume the OP would like a proof of the equivalence "there is a division algebra structure on $\mathbb{R}^3$ iff there is a nowhere vanishing vector field on $S^2$" without assuming either theorem. It's a common question on Hopf invariant type problems, even though we now know exactly for which $n$ there is a Hopf invariant one map... – Najib Idrissi Apr 08 '14 at 12:45

@DustanLevenstein I thought of that too, but even if it is corrected to that, isn't it true that the only $\Bbb R$ division algebras are of dimension $1,2,4,8$? To escape that theorem, one would have to talk about division algebras that aren't even alternative. – rschwieb Apr 08 '14 at 12:46

@rschwieb That's the point. – Dustan Levenstein Apr 08 '14 at 12:47

Since there's no division algebra of dimension $3$, one might wonder whether a counterexample can be used to construct a nowhere vanishing vector field on $S^2$, perhaps in a manner less silly than @5xum's answer. – Dustan Levenstein Apr 08 '14 at 12:50

Dear @NajibIdrissi: It's easy to ask a lot of questions when a question statement is as terse and unclear as this one, and it's easy to guess the intended meaning if you are very familiar with the material. Please keep this in mind before making judgmental comments about your fellow posters! – rschwieb Apr 08 '14 at 12:57

@DustanLevenstein That is exactly what I mean. I will try to make the question clearer. – Student G Apr 08 '14 at 12:58

If $\mathbb{R}^n$ had the structure of a division algebra over $\mathbb{R}$ then $\mathbb{R}^n \setminus \{0\}$ would be a Lie group under multiplication. Moreover, we have a copy of $\mathbb{R}^*$ inside the center of this group acting by scalar multiplication. If we quotient by this subgroup we get a Lie group structure on $S^{n-1}$, so in particular $S^{n-1}$ must be parallelizable and therefore have lots of nonvanishing vector fields (that's where the hairy ball theorem could come in). – Nate Apr 08 '14 at 12:59
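For $n = 4$ this is made concrete by the quaternions, where the unit sphere $S^3$ is itself closed under multiplication. The following is a small numerical sketch of that closure (the quaternion product formula is standard; the random check is my own illustration, not part of the comment):

```python
import numpy as np

def qmul(p, q):
    """Quaternion product of p = (a, b, c, d) = a + bi + cj + dk and q."""
    a, b, c, d = p
    w, x, y, z = q
    return np.array([
        a*w - b*x - c*y - d*z,   # real part
        a*x + b*w + c*z - d*y,   # i part
        a*y - b*z + c*w + d*x,   # j part
        a*z + b*y - c*x + d*w,   # k part
    ])

# The quaternion norm is multiplicative, so unit quaternions are closed
# under multiplication: S^3 is a group (the quotient of H^* by R^*).
rng = np.random.default_rng(0)
p = rng.normal(size=4); p /= np.linalg.norm(p)
q = rng.normal(size=4); q /= np.linalg.norm(q)
r = qmul(p, q)
print(np.linalg.norm(r))  # |pq| = |p| |q| = 1, up to rounding
```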

So the question is about one direction of the equivalence of the existence of the algebra and the counterexample to the theorem, not about a disproof of the theorem. That's more understandable. – rschwieb Apr 08 '14 at 13:02

@Nate: I've seen variants of this argument before, but something has always been a little suspect to me. Is there any reason the multiplication would need to be continuous/smooth? If not, how does this necessarily define a Lie structure on $S^{n-1}$? – Jason DeVito Apr 08 '14 at 13:14

Please tell me if the question is still unclear. Thank you. – Student G Apr 08 '14 at 13:15

@JasonDeVito Correct me if I'm missing something, but multiplication, at least, must be a bilinear map, which is always a smooth function, given by an element of the $9$-dimensional vector space $(\mathbb R^3 \otimes \mathbb R^3)^*$. I'm not sure if there's an obvious reason why the inverse map should be smooth. – Dustan Levenstein Apr 08 '14 at 13:46

Dustan: That makes sense to me; I figured it was easy! Once multiplication is smooth, I believe you can use the implicit function theorem to prove inversion is smooth near the identity, and then use the group multiplication to prove it's smooth everywhere. Thanks. – Jason DeVito Apr 08 '14 at 14:07

@JasonDeVito Oh, I guess that makes sense? Does that mean the usual axioms for a Lie group are redundant? Anyway, what I said was slightly incorrect, and you can indeed defer to linear algebra entirely to prove that both multiplication and inversion are smooth: the multiplication map can be described as an element $\phi \in \operatorname{Hom}(X, \operatorname{Hom}(X, X))$, where $X = \mathbb R^3$ in this case, and the notation is $x \cdot y = (\phi(x))(y)$, and the inversion map is $x \mapsto (\phi(x))^{-1}(1)$. Inversion is smooth for invertible linear maps. – Dustan Levenstein Apr 08 '14 at 14:25
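The formula $x \mapsto (\phi(x))^{-1}(1)$ can be seen in action in a tiny numerical sketch (my own illustration, modeling $\mathbb C$ as a bilinear multiplication on $\mathbb R^2$; the names `phi` and `inv` are mine):

```python
import numpy as np

def phi(x):
    """Left-multiplication matrix of x = (a, b), modeling a + bi in C."""
    a, b = x
    return np.array([[a, -b],
                     [b,  a]])

one = np.array([1.0, 0.0])  # the multiplicative identity

def inv(x):
    # Inversion as x |-> phi(x)^{-1}(1): pure linear algebra, hence smooth
    # wherever phi(x) is invertible (by Cramer's rule, the entries of a
    # matrix inverse are rational functions of the entries).
    return np.linalg.solve(phi(x), one)

z = np.array([3.0, 4.0])
w = inv(z)
print(phi(z) @ w)  # z * w recovers the identity (1, 0)
```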

What is a division algebra over $\mathbb R^3$? The definitions I can find state that a division algebra is over a field, and $\mathbb R^3$ isn't a field. – Jack M May 04 '14 at 13:10
1 Answer
If I'm not mistaken, you can fairly explicitly construct nowhere vanishing continuous tangent vector fields on $S^{n1}$ from sufficiently nice multiplications on $\mathbb{R}^n$.
Theorem. Let $n \geq 2$ be a positive integer, and suppose that $*$ is a bilinear map on $\mathbb{R}^n$ for which there is a two-dimensional subspace $W$ of $\mathbb{R}^n$ with the property that for all nonzero $y \in W$, the map $\mathbb{R}^n \to \mathbb{R}^n$ given by $x \mapsto y*x$ is invertible. Then there is a nowhere vanishing continuous tangent vector field on $S^{n-1}$.
(Note that the hypothesis on $*$ is far weaker than the assumption that $*$ turns $\mathbb{R}^n$ into a division algebra.)
Some preliminaries before the proof. Regard $\mathbb{R}^n$ as a vector space in the usual way, and let $\langle \cdot,\cdot\rangle$ denote the usual inner product on $\mathbb{R}^n$. Identify $S^{n-1}$ with the subset $\{x \in \mathbb{R}^n: \langle x, x\rangle = 1\}$ of $\mathbb{R}^n$. With this identification, for any $y \in S^{n-1}$ we can identify the tangent space to $S^{n-1}$ at $y$ with a subspace of $\mathbb{R}^n$; under this identification, the tangent space to $S^{n-1}$ at $y$ is precisely the subset $\{w \in \mathbb{R}^n: \langle w,y\rangle = 0\}$ of $\mathbb{R}^n$. (This might be clearest to see when $n=3$: the set of all vectors tangent to the $2$-sphere at $y \in S^2$ is precisely the plane consisting of all vectors orthogonal to $y$.)
Proof of theorem. Choose any basis $\{e_1, e_2\}$ for $W$ and for $j = 1,2$ let $L_j$ denote the map $\mathbb{R}^n \to \mathbb{R}^n$ given by $x \mapsto e_j * x$. Note that by our hypotheses on $*$ the maps $L_j$ are linear bijections.
Fix $y \in S^{n-1}$ and define $X(y)$ in $\mathbb{R}^n$ as follows: $$ X(y) = L_2(L_1^{-1}(y)) - \frac{\langle L_2(L_1^{-1}(y)), y\rangle}{\langle y,y\rangle} y. $$ (This does define an element of $\mathbb{R}^n$: we identify $S^{n-1}$ with a subset of $\mathbb{R}^n$, and $y \in S^{n-1}$ is nonzero, so the denominator $\langle y, y\rangle$ is nonzero.)
I claim that $X(y)$ is tangent to $S^{n-1}$ at $y$. As observed before the proof, it suffices to show that $X(y)$ is orthogonal to $y$ (in the usual sense of the inner product on $\mathbb{R}^n$). But it clearly is; just do a calculation with the definition of $X(y)$ and use the bilinearity of $\langle \cdot,\cdot\rangle$. (Note: $X(y)$ is the second vector in the two-element list that results from applying the usual Gram-Schmidt process, without normalization, to the two-element list $y, L_2(L_1^{-1}(y))$, so of course it is going to be orthogonal to $y$.)
I claim that $X(y)$ is nonzero. In fact, any vector of the form $L_2(L_1^{-1}(y)) - \lambda y$ (for some scalar $\lambda$ and some nonzero $y \in \mathbb{R}^n$) will be nonzero. To see this, note that since $y$ is nonzero and $L_1$ is a bijective linear map, there is a nonzero $z \in \mathbb{R}^n$ with $y = L_1(z)$. Since $\{e_1, e_2\}$ is linearly independent, $e_2 - \lambda e_1$ is a nonzero element of $W$, and so by our hypothesis on $*$, the linear map $L: \mathbb{R}^n \to \mathbb{R}^n$ given by $x \mapsto (e_2 - \lambda e_1) * x$ is invertible. And from the bilinearity of $*$ we have that $$ X(y) = L_2(L_1^{-1}(y)) - \lambda y = L_2(z) - \lambda L_1(z) = (e_2 * z) - (\lambda e_1) * z = (e_2 - \lambda e_1) * z = L(z) $$ is the result of applying the invertible linear map $L$ to the nonzero vector $z$. Thus $X(y)$ is indeed nonzero.
The formula for $X$ therefore defines a nowhere-vanishing tangent vector field on $S^{n-1}$. It is clear from the definition (since $L_1^{-1}$ and $L_2$ are linear maps, and the vector space operations on $\mathbb{R}^n$ and the inner product on $\mathbb{R}^n$ are continuous) that this vector field is continuous. End of proof.
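As a sanity check, the construction can be run numerically. The following sketch (my own illustration, not part of the argument above) takes $n = 4$ with quaternion multiplication for $*$ and $W$ spanned by $e_1 = 1$ and $e_2 = i$, and verifies that $X(y)$ is a nonzero tangent vector at randomly sampled points of $S^3$:

```python
import numpy as np

def left_mult(q):
    """Matrix of x -> q * x for the quaternion q = (a, b, c, d) = a + bi + cj + dk."""
    a, b, c, d = q
    return np.array([
        [a, -b, -c, -d],
        [b,  a, -d,  c],
        [c,  d,  a, -b],
        [d, -c,  b,  a],
    ])

# W is spanned by e1 = 1 and e2 = i; left multiplication by any nonzero
# quaternion is invertible, so the theorem's hypothesis holds.
L1 = left_mult(np.array([1.0, 0.0, 0.0, 0.0]))
L2 = left_mult(np.array([0.0, 1.0, 0.0, 0.0]))

def X(y):
    """The field from the theorem: project L2(L1^{-1}(y)) orthogonally away from y."""
    v = L2 @ np.linalg.solve(L1, y)
    return v - ((v @ y) / (y @ y)) * y

rng = np.random.default_rng(0)
for _ in range(1000):
    y = rng.normal(size=4)
    y /= np.linalg.norm(y)           # a point of S^3
    x = X(y)
    assert abs(x @ y) < 1e-9         # tangent to S^3 at y
    assert np.linalg.norm(x) > 1e-9  # nowhere vanishing
print("X is a nonzero tangent vector at all sampled points of S^3")
```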
If I haven't lost my mind, with a similar idea you can explicitly show that if $*$ is bilinear on $\mathbb{R}^n$ with the property that the map $\mathbb{R}^n \to \mathbb{R}^n$ given by $x \mapsto y*x$ is invertible for all nonzero $y$ in a $k$-dimensional subspace of $\mathbb{R}^n$, then there is a set of $k-1$ nowhere vanishing vector fields on $S^{n-1}$ that is linearly independent at every point. (Define $L_1, \dots, L_k$ appropriately, and consider the last $k-1$ vectors resulting from doing Gram-Schmidt on the list $y, L_2(L_1^{-1}(y)), \dots, L_k(L_1^{-1}(y))$.) This would of course establish the well-known fact that if $\mathbb{R}^n$ is a division algebra, then the tangent bundle $TS^{n-1}$ not only has a nowhere vanishing section, but is in fact trivial.
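For $n = k = 4$ (quaternions again, with $W = \mathbb{R}^4$ itself) this Gram-Schmidt recipe can also be carried out numerically. The sketch below (my illustration; the name `frame` is mine) produces three vector fields forming an orthonormal basis of the tangent space at each point of $S^3$, i.e. a trivialization of $TS^3$:

```python
import numpy as np

def left_mult(q):
    """Matrix of x -> q * x for the quaternion q = a + bi + cj + dk."""
    a, b, c, d = q
    return np.array([
        [a, -b, -c, -d],
        [b,  a, -d,  c],
        [c,  d,  a, -b],
        [d, -c,  b,  a],
    ])

# L_1, ..., L_4: left multiplication by the basis 1, i, j, k of W = R^4.
Ls = [left_mult(e) for e in np.eye(4)]

def frame(y):
    """Gram-Schmidt on [y, L2(L1^{-1} y), L3(L1^{-1} y), L4(L1^{-1} y)];
    the last three vectors form an orthonormal basis of the tangent space at y."""
    vecs = [y] + [L @ np.linalg.solve(Ls[0], y) for L in Ls[1:]]
    ortho = []
    for v in vecs:
        w = v - sum((v @ u) * u for u in ortho)
        ortho.append(w / np.linalg.norm(w))
    return ortho[1:]

rng = np.random.default_rng(0)
y = rng.normal(size=4)
y /= np.linalg.norm(y)
fs = frame(y)
for f in fs:
    assert abs(f @ y) < 1e-9                 # each field is tangent at y
for a in range(3):
    for b in range(a + 1, 3):
        assert abs(fs[a] @ fs[b]) < 1e-9     # and they are mutually orthogonal
print("frame(y) is an orthonormal basis of the tangent space at y")
```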