32

While monads are represented in Haskell using the bind and return functions, they can also be given another representation using the join function, as discussed here. I know the type of this function is M(M(X)) -> M(X), but what does it actually do?

kiamlaluno
Casebash
  • it *flattens* an (effectful computation of an (effectful computation)) into an equivalent (effectful computation). Imagine nested loops, where the inner loop is unrolled. Or like in C, where the 2D array is actually a 1D vector and a nested loop over the array's rows and columns can be turned into a simple loop along the vector with an appropriate addressing scheme, except that this works even for sub-arrays of uneven length, and when the inner loop is calculated programmatically. – Will Ness May 29 '14 at 07:18
  • ... so in imperative programming it's practically a no-op: in `for x in XS: for y in foo(x): yield (x,y)` the yield doesn't care if it's inside a *regular* or a *nested* loop. It just *yields* (prints; logs; updates global state; whatever). The key thing is that the inner *loop* was *calculated* from *the results of the previous "computation" `XS`* (as `foo(x)`). – Will Ness Aug 07 '18 at 13:34

5 Answers

46

Actually, in a way, join is where all the magic really happens--(>>=) is used mostly for convenience.

All Functor-based type classes describe additional structure using some type. With Functor this extra structure is often thought of as a "container", while with Monad it tends to be thought of as "side effects", but those are just (occasionally misleading) shorthands--it's the same thing either way and not really anything special[0].

The distinctive feature of Monad compared to other Functors is that it can embed control flow into the extra structure. The reason it can do this is that, unlike fmap which applies a single flat function over the entire structure, (>>=) inspects individual elements and builds new structure from that.

With a plain Functor, building new structure from each piece of the original structure would instead nest the Functor, with each layer representing a point of control flow. This obviously limits the utility, as the result is messy and has a type that reflects the structure of flow control used.

Monadic "side effects" are structures that have a few additional properties[1]:

  • Two side effects can be grouped into one (e.g., "do X" and "do Y" become "do X, then Y"), and the order of grouping doesn't matter so long as the order of the effects is maintained.
  • A "do nothing" side effect exists (e.g., "do X" and "do nothing" grouped is the same as just "do X").

The join function is nothing more than that grouping operation: A nested monad type like m (m a) describes two side effects and the order they occur in, and join groups them together into a single side effect.

So, as far as monadic side effects are concerned, the bind operation is a shorthand for "take a value with associated side effects and a function that introduces new side effects, then apply the function to the value while combining the side effects for each".
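That shorthand can be made concrete. As a sketch of the standard equivalence (with Maybe standing in for an arbitrary monad), bind is just fmap followed by join:

```haskell
import Control.Monad (join)

-- bind expressed as "map, then flatten": a sketch of the standard equivalence
bind :: Monad m => m a -> (a -> m b) -> m b
bind m f = join (fmap f m)

main :: IO ()
main = do
  -- fmap alone nests the structure...
  print (fmap Just (Just 3))                                -- Just (Just 3)
  -- ...and join collapses it, combining the two effects
  print (bind (Just 3) (\x -> Just (x + 1)))                -- Just 4
  print (bind (Nothing :: Maybe Int) (\x -> Just (x + 1)))  -- Nothing
```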

[0]: Except IO. IO is very special.

[1]: If you compare these properties to the rules for an instance of Monoid you'll see close parallels between the two--this is not a coincidence, and is in fact what that "just a monoid in the category of endofunctors, what's the problem?" line is referring to.

C. A. McCann
  • Put far more abstractly, we can say that join is the witness to the fact that a monad is closed under endofunctor composition (adding a new layer of m, as in m (m a), as opposed to merely m a). These two are the same "kind" of thing (a monad), and join makes the relationship between them concrete. – nomen Nov 17 '12 at 16:18
  • Regarding footnote [0]: why is the `IO` side effect special/different from any other side effect? – Jasha May 05 '19 at 22:18
  • @Jasha IIUC the IO monad is used as a bridge between the impure world *outside* and the pure world *inside*. It's actually the only non-pure part of Haskell. It's needed because the environment in which a program runs is not pure, e.g. reading a line from the console or getting the current time doesn't always return the same value. The `IO` monad is very likely handled specially by the compiler itself. – Arshia001 Nov 08 '20 at 05:52
30

What join does has been adequately described by the other answers so far, I think. If you're looking for a more intuitive understanding...if you're wondering what join "means"...then unfortunately the answer is going to vary depending on the monad in question, specifically on what M(X) "means" and what M(M(X)) "means".

If M is the List monad, then M(M(X)) is a list of lists, and join means "flatten". If M is the Maybe monad, then an element of M(M(X)) could be "Just (Just x)", "Just Nothing", or "Nothing", and join means to collapse those structures in the logical way to "Just x", "Nothing", and "Nothing" respectively (similar to camccann's answer of join as combining side effects).
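Both of those cases can be checked directly (join lives in Control.Monad):

```haskell
import Control.Monad (join)

main :: IO ()
main = do
  -- List: join means "flatten"
  print (join [[1,2],[3,4],[]])                 -- [1,2,3,4]
  -- Maybe: the three nested shapes collapse as described
  print (join (Just (Just 'x')))                -- Just 'x'
  print (join (Just (Nothing :: Maybe Char)))   -- Nothing
  print (join (Nothing :: Maybe (Maybe Char)))  -- Nothing
```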

For more complicated monads, M(M(X)) becomes a very abstract thing and deciding what M(M(X)) and join "mean" becomes more complicated. In every case it's kinda like the List monad case, in that you're collapsing two layers of Monad abstraction into one layer, but the meaning is going to vary. For the State monad, camccann's answer of combining two side effects is bang on: join essentially means to combine two successive state transitions. The Continuation monad is especially brain-breaking, but mathematically join is actually rather neat here: M(X) corresponds to the "double dual space" of X, what mathematicians might write as X** (continuations themselves, i.e. maps from X->R where R is a set of final results, correspond to the single dual space X*), and join corresponds to an extremely natural map from X**** to X**. The fact that Continuation monads satisfy the monad laws corresponds to the mathematical fact that there's generally not much point to applying the dual space operator * more than twice.
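To make the State case concrete, here is a sketch of join for a minimal hand-rolled State (for illustration only, not the mtl/transformers implementation): the joined computation runs the outer step to obtain the inner computation, then runs that inner computation on the state the outer step left behind.

```haskell
-- a minimal State, for illustration only (not the mtl/transformers version)
newtype State s a = State { runState :: s -> (a, s) }

-- join: run the outer computation, which yields an inner computation,
-- then run the inner one on the updated state
joinState :: State s (State s a) -> State s a
joinState mm = State $ \s ->
  let (inner, s') = runState mm s
  in runState inner s'

-- return the current counter and increment it
tick :: State Int Int
tick = State $ \n -> (n, n + 1)

main :: IO ()
main = do
  -- outer effect: add 10 to the state; its *result* is the computation tick
  let nested = State $ \n -> (tick, n + 10)
  print (runState (joinState nested) 0)   -- (10,11): tick saw the state after +10
```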

But I digress.

Personally I try to resist the urge to apply a single analogy to all possible types of monads; monads are just too general a concept to be pigeonholed by a single descriptive analogy. What join means is going to vary depending on which analogy you're working with at any given time.

Peter Milley
8

From the same page we recover the definition join x = x >>= id. With knowledge of how the bind and id functions work, you should be able to figure out what join does.
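Spelled out (with Control.Monad's join for comparison), the definition is a sketch like this: here id has type m a -> m a, so the "function" argument of bind just hands each inner action straight back.

```haskell
import Control.Monad (join)

-- the definition quoted above: id hands the inner action back unchanged,
-- so bind's effect-combining does all the flattening
join' :: Monad m => m (m a) -> m a
join' x = x >>= id

main :: IO ()
main = do
  print (join' [[1,2],[3]])                      -- [1,2,3]
  print (join' (Just (Just True)))               -- Just True
  print (join' [[1,2],[3]] == join [[1,2],[3]])  -- True
```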

adamse
5

What it does, conceptually, can be determined just by looking at the type: It unwraps or flattens the outer monadic container/computation and returns the monadic value(s) produced therein.

How it actually does this is determined by the kind of Monad you are dealing with. For example, for the List monad, 'join' is equivalent to concat.
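A quick check of that claim for lists:

```haskell
import Control.Monad (join)

main :: IO ()
main = do
  let xss = [[1,2],[3],[4,5,6]] :: [[Int]]
  print (join xss)                -- [1,2,3,4,5,6]
  print (concat xss)              -- [1,2,3,4,5,6]
  print (join xss == concat xss)  -- True
```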

Daniel Pratt
3

The bind operation has the shape ma -> (a -> mb) -> mb. In ma and (the first) mb, we have two ms. To my intuition, understanding bind and monadic operations largely comes down to understanding how those two ms (instances of monadic context) get combined. I like to think of the Writer monad as an example for understanding join. Writer can be used to log operations: ma carries a log, (a -> mb) produces another log on that first mb, and the final mb combines both those logs.
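This can be demonstrated without mtl: base (4.9 and later) defines a Monad instance for pairs, Monoid w => Monad ((,) w), which behaves like a minimal Writer with the first component as the log. (The step names below are made up for illustration.)

```haskell
-- base's instance Monoid w => Monad ((,) w) acts as a minimal Writer:
-- the String is the log, the Int is the value
step1 :: (String, Int)
step1 = ("did step1; ", 3)

step2 :: Int -> (String, Int)
step2 x = ("did step2; ", x * 2)

main :: IO ()
main = print (step1 >>= step2)   -- ("did step1; did step2; ",6)
```

Bind has combined the two logs (the two ms) by concatenation while threading the value through.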

(A bad example to think of here is the Maybe monad, because Just + Just = Just and Nothing + anything = Nothing (or F# Some and None) are so uninformative that you overlook that something important is going on. It's tempting to think of Just as simply a condition for proceeding and Nothing as simply a flag to halt--like signposts left behind as the computation proceeds. That's a reasonable impression, since the final Just or Nothing appears to be created from scratch at the last step, with nothing transferred into it from the previous ones. But really you need to focus on how the Justs and Nothings combine at every step.)

The issue crystallized for me while reading Miran Lipovaca's Learn You a Haskell For Great Good!, Chapter 12, the last section on Monad Laws (http://learnyouahaskell.com/a-fistful-of-monads#monad-laws), under Associativity. The requirement is: "Doing (ma >>= f) >>= g is just like doing ma >>= (\x -> f x >>= g)" [I use ma for m]. Well, on both sides the argument passes first to f and then to g. So what does he mean, "It's not easy to see how those two are equal"? It's not easy to see how they could be different!

The difference is in the associativity of the joinings of the ms (contexts)--which the bindings do, along with mapping. Bind unwraps, or goes around, the m to get at the a which f is applied to--but that's not all. The first m (on ma) is held while f generates a second m (on mb). Then bind combines--joins--both ms. The key to bind is as much in the join as it is in the unwrap (map). I think confusion over join is indicative of fixating on the unwrapping aspect of bind--getting the a out of ma to match the signature of f's argument--while overlooking the fact that the two ms (from ma and then mb) need to be reconciled. (Discarding the first m may be the appropriate way to handle it in some cases, such as Maybe--but that's not true in general, as Writer illustrates.)

On the left, we bind ma to f first, then to g second. So the log will be like: ("before f" + "after f") + "after g". On the right, while the functions f and g are applied in the same order, we now bind to g first. So the log will be like: "before f" + ("after f" + "after g"). The parens are not in the strings, so the log is the same either way and the law is observed. (Whereas if the second log had come out as "after f" + "after g" + "before f", then we would be in mathematical trouble!)
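Using base's pair monad (Monoid w => Monad ((,) w)) as a stand-in Writer, both groupings can be checked to produce the same log. (The log strings here are invented for illustration.)

```haskell
-- pair-as-Writer: the String component is the log
ma :: (String, Int)
ma = ("before f; ", 1)

f, g :: Int -> (String, Int)
f x = ("after f; ", x + 1)
g x = ("after g; ", x * 10)

main :: IO ()
main = do
  print ((ma >>= f) >>= g)          -- ("before f; after f; after g; ",20)
  print (ma >>= (\x -> f x >>= g))  -- the same pair: associativity holds
```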

Recasting bind as fmap plus join for Writer, we get fmap f ma, where f :: a -> mb, resulting in m(mb). Think of the first m, on ma, as "before f". The f gets applied to the a inside that first m, and now a second m (or mb) arrives--inside the first m, where the mapping took place. Think of the second m, on mb, as "after f". So m(mb) = ("before f" ("after f" b)). Now we use join to collapse the two logs--the two ms--into a new m. Writer uses a monoid, and we concatenate. Other monads combine contexts in other ways, obeying the laws. Which is maybe the main part of understanding them.
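The same decomposition, again with base's pair monad standing in for Writer (step names invented for illustration):

```haskell
import Control.Monad (join)

ma :: (String, Int)
ma = ("before f; ", 1)

f :: Int -> (String, Int)
f x = ("after f; ", x + 1)

main :: IO ()
main = do
  print (fmap f ma)         -- ("before f; ",("after f; ",2)) : the m(mb)
  print (join (fmap f ma))  -- ("before f; after f; ",2) : logs concatenated
  print (ma >>= f)          -- identical: bind is join after fmap
```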

RomnieEE