I am helping my brother with Linear Algebra. I am not able to motivate him to understand what the double dual space is. Is there a nice way of explaining the concept? Thanks for your advice, examples, and theories.


3 Answers


If $V$ is a finite dimensional vector space over, say, $\mathbb{R}$, the dual of $V$ is the set of linear maps from $V$ to $\mathbb{R}$. This is a vector space because it makes sense to add functions, $(\phi + \psi)(v) = \phi(v) + \psi(v)$, and multiply them by scalars, $(\lambda\phi)(v) = \lambda(\phi(v))$, and these two operations satisfy all the usual axioms.

If $V$ has dimension $n$, then the dual of $V$, which is often written $V^\vee$ or $V^*$, also has dimension $n$. Proof: pick a basis for $V$, say $e_1, \ldots, e_n$. Then for each $i$ there is a unique linear function $\phi_i$ such that $\phi_i(e_i) = 1$ and $\phi_i(e_j) = 0$ whenever $i \neq j$. It's a good exercise to see that these maps $\phi_i$ are linearly independent and span $V^*$.
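To make the dual-basis construction concrete, here is a small numerical sketch (my addition, using NumPy). It represents a functional on $\mathbb{R}^3$ by the row vector of its values on the standard basis; with that convention, the dual basis functionals $\phi_i$ are exactly the rows of $B^{-1}$, where $B$ has the $e_i$ as its columns:

```python
import numpy as np

# A (non-standard) basis of R^3, written as the columns of B.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# The dual basis functionals phi_1, ..., phi_n are the rows of B^{-1}:
# row i of B^{-1} applied to column j of B is the (i, j) entry of B^{-1} B = I.
Phi = np.linalg.inv(B)

# Check phi_i(e_j) = 1 if i == j and 0 otherwise.
for i in range(3):
    for j in range(3):
        value = Phi[i] @ B[:, j]              # apply functional phi_i to e_j
        assert np.isclose(value, 1.0 if i == j else 0.0)
```

The identity $\phi_i(e_j) = \delta_{ij}$ is then just the statement $B^{-1}B = I$ read entry by entry.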

So given a basis for $V$ we have a way to get a basis for $V^*$. It's true that $V$ and $V^*$ are isomorphic, but the isomorphism depends on the choice of basis (check this by seeing what happens if you change the basis).
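One quick way to see the basis dependence is to compute the isomorphism $e_i \mapsto \phi_i$ for two different bases of $\mathbb{R}^2$ and watch it send the same vector to different functionals. In this sketch (my conventions, not the answer's: NumPy, functionals stored as coordinate arrays in the standard dual basis), a vector with $B$-coordinates $c = B^{-1}v$ is sent to the functional $c^\top B^{-1}$:

```python
import numpy as np

def dual_iso(B):
    """Matrix of the isomorphism V -> V* sending the i-th column of B
    to the i-th dual basis functional (the i-th row of B^{-1})."""
    # v has B-coordinates c = B^{-1} v, and is sent to the functional
    # c^T B^{-1} = v^T (B^{-1})^T B^{-1}; as a column, that is M v below.
    Binv = np.linalg.inv(B)
    return Binv.T @ Binv

B1 = np.eye(2)                   # standard basis
B2 = np.array([[2.0, 0.0],
               [0.0, 1.0]])      # same basis with the first vector rescaled

v = np.array([1.0, 0.0])
f1 = dual_iso(B1) @ v            # functional assigned to v using basis B1
f2 = dual_iso(B2) @ v            # functional assigned to v using basis B2
assert not np.allclose(f1, f2)   # the two "isomorphisms" disagree on v
```

Merely rescaling one basis vector already changes which functional $v$ is matched with, which is exactly the failure of naturality.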

Now let's talk about the double dual, $V^{**}$. First, what does it mean? Well, it means what it says. After all, $V^*$ is a vector space, so it makes sense to take its dual. An element of $V^{**}$ is a function that eats elements of $V^*$, i.e. a function that eats functions that eat elements of $V$. This can be a little hard to grasp the first few times you see it. I will use capital Greek letters for elements of $V^{**}$.

Now, here is the trippy thing. Let $v \in V$. I am going to build an element $\Phi_v$ of $V^{**}$. An element of $V^{**}$ should be a function that eats functions that eat vectors in $V$ and returns a number. So we are going to set $$ \Phi_v(f) = f(v). $$

You should check that the association $v \mapsto \Phi_v$ is linear (so $\Phi_{\lambda v} = \lambda\Phi_v$ and $\Phi_{v + w} = \Phi_v + \Phi_w$) and is an isomorphism (one-to-one and onto)! This isomorphism didn't depend on choosing a basis, so there's a sense in which $V$ and $V^{**}$ have more in common than $V$ and $V^*$ do.
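A minimal sketch of the evaluation map (again my own NumPy convention of functionals as 1-D arrays acting by dot product): $\Phi_v$ is literally the closure `lambda f: f @ v`, and the linearity of $v \mapsto \Phi_v$ can be checked numerically:

```python
import numpy as np

# V = R^3; vectors and functionals both stored as 1-D arrays,
# a functional f acting on a vector x via the dot product f @ x.
v = np.array([2.0, -1.0, 3.0])
w = np.array([0.0, 5.0, 1.0])
f = np.array([1.0, 0.0, 4.0])        # the functional f(x) = x_0 + 4*x_2

def Phi(v):
    """The element Phi_v of V**: it eats a functional f and returns f(v)."""
    return lambda f: f @ v

assert np.isclose(Phi(v)(f), f @ v)                      # Phi_v(f) = f(v)

# Linearity of v -> Phi_v, checked pointwise on f:
assert np.isclose(Phi(v + w)(f), Phi(v)(f) + Phi(w)(f))  # additive
assert np.isclose(Phi(3.0 * v)(f), 3.0 * Phi(v)(f))      # homogeneous
```

Note that nothing in `Phi` mentions a basis: it only uses the ability to apply a functional to a vector, which is why the isomorphism is natural.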

In fancier language, $V$ and $V^*$ are isomorphic, but not naturally isomorphic (you have to make a choice of basis); $V$ and $V^{**}$ are naturally isomorphic.

Final remark: someone will surely have already said this by the time I've edited and submitted this post, but when $V$ is infinite dimensional, it's not always true anymore that $V = V^{**}$. The map $v \mapsto \Phi_v$ is injective, but not necessarily surjective, in this case.

  • Thanks! I have wanted to know about that for some time, and your explanation was very clear. – MJD Jul 13 '12 at 21:42
  • @countinghaus: So what belongs to the double dual space? They are also functions, right? I.e., these functions take the functionals of the dual space, which take a vector of the vector space and spit out a field element, right? – Theorem Jul 13 '12 at 21:44
  • @countinghaus : So what kind of functions live in the double dual? – Theorem Jul 13 '12 at 21:47
  • @Theorem: The double dual space contains linear functions mapping linear functions from the dual space to scalars (members of the field the vector space is defined over). – celtschk Jul 14 '12 at 10:34
  • Very concise explanation. – IAmNoOne Aug 05 '17 at 20:50
  • I know that in an infinite-dimensional space we have linear maps on V** that are not continuous, but I fail to see how this would violate injectivity of the isomorphism to V. Could you expand on this? – MSIS Sep 01 '20 at 23:36

Actually it's quite simple: If you have a vector space, any vector space, you can define linear functions on that space. The set of all those functions is the dual space of the vector space. The important point here is that it doesn't matter what this original vector space is. You have a vector space $V$, you have a corresponding dual $V^*$.

OK, now you have linear functions. Now if you add two linear functions, you get again a linear function. Also if you multiply a linear function with a factor, you get again a linear function. Indeed, you can check that linear functions fulfill all the vector space axioms this way. Or in short, the dual space is a vector space in its own right.

But if $V^*$ is a vector space, then it comes with everything a vector space comes with. But as we have seen in the beginning, one thing every vector space comes with is a dual space, the space of all linear functions on it. Therefore also the dual space $V^*$ has a corresponding dual space, $V^{**}$, which is called double dual space (because "dual space of the dual space" is a bit long).

So we have the double dual space, but we also want to know what sort of functions are in it. Well, such a function takes a vector from $V^*$, that is, a linear function on $V$, and maps it to a scalar (that is, to a member of the field the vector space is based on). Now, if you have a linear function on $V$, you already know a way to get a scalar from it: just apply it to a vector from $V$. Indeed, it is not hard to show that if you choose an arbitrary fixed element $v\in V$, then the function $F_v\colon\phi\mapsto\phi(v)$ is a linear function on $V^*$, and thus a member of the double dual $V^{**}$.

That way we have not only identified certain members of $V^{**}$ but also found a natural mapping from $V$ to $V^{**}$, namely $F\colon v\mapsto F_v$. It is not hard to prove that this mapping is linear and injective, so the functions in $V^{**}$ corresponding to vectors in $V$ form a subspace of $V^{**}$.

Indeed, if $V$ is finite dimensional, that subspace is even all of $V^{**}$. That's easy to see if you know that $\dim(V^*)=\dim(V)$ and therefore $\dim(V^{**})=\dim(V^*)=\dim(V)$. On the other hand, since $F$ is injective, $\dim(F(V))=\dim(V)$, and for finite dimensional vector spaces, the only subspace of the same dimension as the full space is the full space itself. However, if $V$ is infinite dimensional, $V^{**}$ is larger than $V$. In other words, there are functions in $V^{**}$ which are not of the form $F_v$ with $v\in V$.

Note that since $V^{**}$ is again a vector space, it also has a dual space, which again has a dual space, and so on. So in principle you have an infinite series of duals (although only for infinite-dimensional vector spaces are they all different).

  • Can you give examples, for infinite-dimensional V, of elements of V** that don't have an "equivalent" in V? I know there are then discontinuous linear maps, which there are not in the finite-dimensional case. Are these the ones that break the injectivity between V** and V in the infinite-dimensional case? – MSIS Sep 01 '20 at 23:58
  • @MSIS: Take the vector space $V$ of real sequences with finite support. Clearly, the functions $f_n$ mapping each sequence $a$ to $a_n$ are linearly independent members of $V^*$. Therefore there exists a linear function in $V^{**}$ that maps each $f_n$ to $1$. But such a function cannot be obtained from any member of $V$, since the constant sequence $a_n=1$ does not have finite support, and therefore is no member of $V$. – celtschk Sep 25 '21 at 11:12
  • BTW, the constant $1$ sequence doesn't actually lead to a member of $V^{**}$ anyway, since there exists an element of $V^*$ that maps each element of $V$ to the sum of its elements, and that would not be well-defined on that constant sequence (or any other non-zero constant sequence). – celtschk Sep 25 '21 at 11:27

This is an old post and the existing answers are good, but I feel like I can add another perspective. It looks fairly abstract at first, but if you're dealing with double duals, you just got to accept some abstraction. (By the way, I vividly remember solving the exercise in my undergrad linear algebra class showing the natural isomorphism $V \simeq V^{**}$. It felt like I had just warped my mind to the highest level of abstraction possible for human beings. The dual of a dual, woah ...)

Let $V$ be our vector space (over a field $K$, say). Let's assume we already understand the dual $V^*$. Then we will, after a short consideration, accept that the map

$$V^* \times V \rightarrow K$$ $$(l,v) \mapsto l(v)$$

is bilinear. We call it a bilinear form because its values are just scalars.

Now imagine we have any other vector space $W$ (over the same field) with a bilinear form

$$f:W \times V \rightarrow K.$$

We want to know how far this $W$ and $f$ are from the "ideal" form $V^* \times V \rightarrow K$ above. Note that for any such $W$ and $f$ we get a map, let's call it "c" for "comparison",

$$c: W \rightarrow V^*$$ $$w \mapsto f(w, \cdot)$$

(i.e. the image of $w$ is the linear form in $V^*$ which sends a given $v$ to $f(w,v)$). So how do $W$ and $V^*$ "compare"?

  • Easy exercise: $c$ is injective $\Leftrightarrow$ for all $0\neq w \in W$, there is $v\in V$ such that $f(w,v) \neq 0$
  • Trickier exercise: $c$ is surjective $\Leftrightarrow$ for all $0\neq v \in V$, there is $w\in W$ such that $f(w,v) \neq 0$.

Note that the "trickier" part needs that $\dim(V) < \infty$.
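These two exercises become very concrete if you model $W = \mathbb{R}^m$, $V = \mathbb{R}^n$ and take the bilinear form $f(w,v) = w^\top A v$ for a matrix $A$ (my choice of model, not the answer's): then $c(w)$ is the row vector $w^\top A$, injectivity of $c$ is full row rank of $A$, surjectivity is full column rank, and a perfect pairing means $A$ is square and invertible. A sketch:

```python
import numpy as np

# A bilinear form f(w, v) = w^T A v on W x V = R^m x R^n is just a matrix A.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

# The comparison map c: W -> V* sends w to the row vector w^T A,
# i.e. the functional v -> f(w, v).
def c(w):
    return w @ A

# c injective  <=> A has full row rank (no nonzero w with w^T A = 0);
# c surjective <=> A has full column rank.  Both hold <=> A is invertible,
# i.e. <=> the pairing is perfect.
rank = np.linalg.matrix_rank(A)
assert rank == A.shape[0]   # c is injective
assert rank == A.shape[1]   # c is surjective
```

Here the symmetry of the non-degeneracy criterion is visible too: $A$ has full row rank and full column rank exactly when its transpose does, which is the "flip the components" step in the argument below.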

So the vector space $W$ and the bilinear form $f$ are "ideal" (more technical term: "perfect pairing") if and only if the bilinear form is non-degenerate (which just summarises the criteria on the RHS of the above exercises).

But now look at that non-degeneracy criterion: It is symmetric in the first and second component. In vague terms, $W$ is the dual of $V$ iff that form is non-degenerate; but if that form is non-degenerate, then so is the form that you just get from flipping the components; which means that then $V$ is also the dual of $W$. I.e. $V = V^{**}$.

This whole thinking of duality in terms of non-degenerate bilinear forms, as abstract as it seems on first encounter, becomes incredibly helpful in some applications. E.g. in representation theory it's very neat to translate all the time between self-duality of certain representations and existence of certain $G$-invariant forms on them. Or, a generalisation of this kind of thinking basically leads to adjunction of tensor products and Hom, and all that wonderful general algebra stuff, I recently used that e.g. here to motivate the definition of certain Lie algebra actions.

Torsten Schoeneberg
  • @Torsten....nice explanation. – Sedumjoy Dec 20 '20 at 19:21
  • @ Unknown downvoter: Why the downvote? – Torsten Schoeneberg Feb 02 '21 at 23:27
  • @Torsten...I gave an upvote, not a downvote. I loved your explanation. Well written and great detail. Not sure what happened if it did not display right. I also commented "nice explanation". I don't believe I have ever given a downvote to anyone on stackexchange. Hope my upvote displays OK now, I tried to upvote again – Sedumjoy Feb 04 '21 at 14:21
  • @Sedumjoy: All is good, thanks for your comment and upvote. But somebody else downvoted a few days ago, and I wonder why. – Torsten Schoeneberg Feb 04 '21 at 15:45