
In recent implementations of Classic FRP, for instance reactive-banana, there are event streams and signals, which are step functions (reactive-banana calls them behaviours, but they are nevertheless step functions). I've noticed that Elm only uses signals and doesn't differentiate between signals and event streams. Also, reactive-banana allows you to go from event streams to signals (edit: and it's sort of possible to act on behaviours using reactimate', although it's not considered good practice), which means that in theory we could apply all the event stream combinators to signals/behaviours by first converting the signal to an event stream, applying the combinator, and then converting back. So, given that it's in general easier to use and learn just one abstraction, what is the advantage of having separate signals and event streams? Is anything lost by using just signals and converting all the event stream combinators to operate on signals?
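
For reference, the two directions I have in mind look roughly like this in reactive-banana's current API (just a sketch): stepper builds the step function from an event stream, and <@ samples it back at the occurrences of some event.

import Reactive.Banana

-- event stream -> signal: hold the latest occurrence as a step function
eventToSignal :: MonadMoment m => a -> Event a -> m (Behavior a)
eventToSignal = stepper

-- sampling the signal at the occurrences of another event
signalAtEvent :: Behavior a -> Event b -> Event a
signalAtEvent b e = b <@ e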

edit: The discussion has been very interesting. The main conclusion I took away from it is that behaviours/event sources are both needed for mutually recursive definitions (feedback) and for having an output depend on two inputs (a behaviour and an event source) but only cause an action when one of them changes (<@>).

miguel.negrao
  • Umm, as far as I understood it, Behaviours are *not* step functions but continuous over time? – Bergi Apr 10 '14 at 13:26
  • http://conal.net/papers/push-pull-frp/ might be a good read – Bergi Apr 10 '14 at 13:29
  • Yes, behaviours are not step functions, but as far as I understand, what reactive-banana calls behaviours are actually step functions, and elm and reactive-web call them signals. Thanks for the link, I've read that paper already. – miguel.negrao Apr 10 '14 at 13:36
  • @Bergi I think theoretically they should be continuous but in practical implementations true continuity is hard to achieve, that's why they're implemented as such. – xji Sep 11 '16 at 06:37
  • @miguel.negrao I think another main conclusion is that behaviors are instantaneous while events can be accumulative, i.e. they support updates instead of always depending only on the current value. – xji Sep 11 '16 at 06:38
  • @JIXiang Yes, they only can be sampled every so often in actual implementations. But the theoretical difference is important, it changes how we think about them. – Bergi Sep 11 '16 at 09:56
  • @miguel.negrao Or "event streams" to be more precise. I was trying to figure out the FRP implementation of WebSharper https://github.com/intellifactory/websharper.ui.next/blob/master/docs/FRP.md – xji Sep 11 '16 at 09:59

4 Answers


(Clarification: In reactive-banana, it is not possible to convert a Behavior back to an Event. The stepper function is a one-way ticket. There is a changes function, but its type indicates that it is "impure" and it comes with a warning that it does not preserve the semantics.)

I believe that having two separate concepts makes the API more elegant. In other words, it boils down to a question of API usability. I think that the two concepts behave sufficiently differently that things flow better if you have two separate types.

For example, the direct product for each type is different. A pair of Behaviors is equivalent to a Behavior of pairs

(Behavior a, Behavior b) ~ Behavior (a,b)

whereas a pair of Events is equivalent to an Event of a direct sum:

(Event    a, Event    b) ~ Event (EitherOrBoth a b)

If you merge both types into one, then neither of these equivalences will hold anymore.
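
To make the two equivalences concrete, here is a small sketch in terms of reactive-banana-style combinators, with These (from the these package) standing in for EitherOrBoth:

import Control.Applicative (liftA2)
import Data.These (These (..))  -- from the `these` package
import Reactive.Banana

-- Behaviors pair up pointwise via the Applicative instance.
pairB :: Behavior a -> Behavior b -> Behavior (a, b)
pairB = liftA2 (,)

-- Events merge into a sum: at any moment the left, the right, or both occur.
pairE :: Event a -> Event b -> Event (These a b)
pairE ea eb = unionWith both (This <$> ea) (That <$> eb)
  where
    -- only called on simultaneous occurrences; the left argument is always
    -- built with This and the right with That
    both (This a) (That b) = These a b
    both l        _        = l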

However, one of the main reasons for the separation of Event and Behavior is that the latter does not have a notion of changes or "updates". This may seem like an omission at first, but it is extremely useful in practice, because it leads to simpler code. For instance, consider a monadic function newInput that creates an input GUI widget that displays the text indicated in the argument Behavior,

input <- newInput (bText :: Behavior String)

The key point now is that the text displayed does not depend on how often the Behavior bText may have been updated (to the same or a different value), only on the actual value itself. This is a lot easier to reason about than the other case, where you would have to think about what happens when two successive event occurrences have the same value. Do you redraw the text while the user edits it?

(Of course, in order to actually draw the text, the library has to interface with the GUI framework and does keep track of changes in the Behavior. This is what the changes combinator is for. However, this can be seen as an optimization and is not available from "within FRP".)
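
As a rough sketch of what such an interface layer might look like (Widget and setWidgetText here are hypothetical stand-ins for a GUI framework's API, not part of reactive-banana):

import Reactive.Banana
import Reactive.Banana.Frameworks

-- Hypothetical stand-ins for a real GUI framework's widget API.
data Widget = Widget
setWidgetText :: Widget -> String -> IO ()
setWidgetText _ s = putStrLn ("set text: " ++ s)

-- Keep a widget's displayed text in sync with a Behavior.
bindText :: Widget -> Behavior String -> MomentIO ()
bindText widget bText = do
  initial <- valueBLater bText                 -- current value of the Behavior
  liftIOLater (setWidgetText widget initial)   -- draw it once the network is set up
  eText <- changes bText                       -- implementation-level updates
  reactimate' (fmap (setWidgetText widget) <$> eText)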

The other main reason for the separation is recursion. Most Events that recursively depend on themselves are ill-defined. However, recursion is always allowed if you have mutual recursion between an Event and a Behavior

e = ((+) <$> b) <@> einput
b = stepper 0 e

There is no need to introduce delays by hand; it just works out of the box.
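
In the current reactive-banana API, where stepper is monadic, the same feedback loop can be written with RecursiveDo; a sketch:

{-# LANGUAGE RecursiveDo #-}
import Reactive.Banana
import Reactive.Banana.Frameworks

-- The Behavior holds the running total; each input occurrence samples it
-- to produce the next total. No explicit delay is needed.
runningSum :: Event Int -> MomentIO (Behavior Int)
runningSum einput = mdo
  let e = ((+) <$> b) <@> einput
  b <- stepper 0 e
  return b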

Heinrich Apfelmus
  • Your answer is very enlightening. There is just one thing that troubles me. – miguel.negrao Apr 11 '14 at 16:01
  • I find that it is very commonplace to want to react on a behaviour, because a behaviour can unify several behaviours into just one (e.g. (Behavior a, Behavior b) -> Behavior (a,b)). For instance, if you have a Behaviour with the x coordinate and another with the y coordinate, you probably want to merge them into a point and then reactimate on that point such that the updates happen when either x or y is updated. This seems like such a basic thing to do that it seems weird to me that it is not a normal operation in reactive-banana (one would have to use changes and reactimate'). What am I missing? – miguel.negrao Apr 11 '14 at 16:02
  • Another thing, I don't think there is a way in elm to say that you want a signal obtained through an Applicative functor (lift or – miguel.negrao Apr 11 '14 at 16:13
  • I have updated my answer to touch on the "react to Behavior" point. Basically, the idea is that Behaviors don't include a notion of changes, so you never react to them (from within FRP, at least). The only way to get the value of a Behavior is the `apply` combinator. This is a good thing, because it saves you from the trouble of reasoning about the case where the Behavior has updated, but its value is still the same. – Heinrich Apfelmus Apr 12 '14 at 09:24
  • So to summarize, using changes by itself is bad, but (changes b) >>= reactimate' is ok? If that is the case, why not just provide reactimate' :: Frameworks t => Behavior t (IO ()) -> Moment t (), which would be the function above, and not expose changes at all? – miguel.negrao Apr 12 '14 at 19:02
  • Well, it's still bad in that `reactimate'` allows you to observe the event that breaks semantics. You need to make sure that the `IO` action is idempotent in some sense. I mean, it's not going to kill anyone, but the onus of proof is on you, not the library. I didn't combine `changes` with `reactimate'`, because in some very specific circumstances, you want to merge an `Event` that you got from `changes` with something else. – Heinrich Apfelmus Apr 12 '14 at 20:54
  • Ok, but then is there a "good" way to achieve the same without using reactimate'? This: `xy = (\a b -> (a,b)) <$> stepper 0.0 xE <*> stepper 0.0 yE; out = xy` – miguel.negrao Apr 12 '14 at 22:14
  • If you really want to merge events in a Behavior-like way, then you'll have to roll your own. Here is an example that does this: the [`Tidings` type](http://hackage.haskell.org/package/threepenny-gui-0.4.1.0/docs/Reactive-Threepenny.html#g:10) from `threepenny-gui`. – Heinrich Apfelmus Apr 13 '14 at 07:59
  • @HeinrichApfelmus: I disagree that the lack of `changes` in `Behavior` leads to simpler code. First, separating signals and streams leads to much more complex code in other ways that overwhelm this advantage IME. Second, reasoning about this in Elm is quite simple and doesn't seem to suffer from the drawback you mention. – John L Apr 13 '14 at 23:31
  • The main problem I have with `changes` is that it's an artifact of an operational model rather than the essential semantic model. In other words, an abstraction leak. Also, the name "changes" is misleading, because (IIUC), there can be "changes" even in a totally constant behavior (e.g., "changing" from 3 to 3). In the original FRP denotation, such implementation details are deliberately hidden, resulting in simplicity for users and optimizability for implementations. – Conal Apr 14 '14 at 04:52
  • @Conal: I agree that `changes` isn't really the correct name, but it's a useful shorthand. Otherwise I would reply that my experience with signals-and-streams APIs directly contradicts your assertions that it provides simplicity for users and optimizability for implementations (at least no more so than other models). But one of those is rather subjective, so I think there's room for alternative viewpoints. – John L Apr 14 '14 at 06:58
  • I understand now why using 'changes' might be problematic, but what is the problem of assigning an action to be associated with a behaviour to be run as often as the frp system thinks it should be run, either on a fixed rate or on some other more efficient timing, that would be up to the system.This would make no assumption on when the behaviour changes, it would just say that we want some entity outside the frp world to be in sync with a behaviour. Isn't this exactly how Yampa and other arrow based systems work ? Don't they just run one big IO action whenever the frp system has updated ? – miguel.negrao Apr 14 '14 at 14:06
  • @JohnL: One could replace the name "changes" with something more accurate, and the "main problem" I mentioned would remain, and probably be more clearly apparent. That problem being incompatibility with a simple denotation that is free of operational/implementation details such as update propagation. As for simplicity, we can reduce subjectivity by first applying semantic precision, i.e., state precisely what the denotation is. When the beguiling informality/hand-waving is gone, math remains, and we can measure simplicity without fooling ourselves. – Conal Apr 14 '14 at 16:54
  • @miguel.negrao Well, if there is no problem with the semantics, then chances are that you can already express it in terms of the existing combinators: `myReactimate behavior = reactimate (behavior – Heinrich Apfelmus Apr 15 '14 at 11:29
  • @JohnL Well, I did find the "no notion of updates" to be useful when modeling [widgets in graphical user interfaces](https://github.com/HeinrichApfelmus/threepenny-gui/blob/master/doc/design-widgets.md#three-principles-for-representing-mvc-concepts-with-frp), and I agree with Conal that the semantics are definitely simpler. That said, I also think that this may not be the end of the story yet, and I'm happy to discuss code example where having explicit updates helps. – Heinrich Apfelmus Apr 15 '14 at 11:36
  • In any case, comment threads on Stackoverflow are rather unwieldy. Do we want to move the discussion someplace else, for example to [reddit](http://www.reddit.com/r/haskell/comments/22uf79/great_discussion_about_why_having_both_events_and/)? – Heinrich Apfelmus Apr 15 '14 at 11:40

Something critically important to me is lost, namely the essence of behaviors, which is (possibly continuous) variation over continuous time. Precise, simple, useful semantics (independent of a particular implementation or execution) is often lost as well. Check out my answer to "Specification for a Functional Reactive Programming language", and follow the links there.

Whether in time or in space, premature discretization thwarts composability and complicates semantics. Consider vector graphics (and other spatially continuous models like Pan's). The same point applies to premature finitization of data structures, as explained in Why Functional Programming Matters.

Conal
  • BTW, continuous time is the *single fundamental idea* that grew into FRP (as mentioned in [Early inspirations and new directions in functional reactive programming](http://conal.net/blog/posts/early-inspirations-and-new-directions-in-functional-reactive-programming)), applying the elegance and rigor of denotational semantics. – Conal Apr 26 '14 at 19:44

I don't think there's any benefit to using the signals/behaviors abstraction over Elm-style signals. As you point out, it's possible to create a signal-only API on top of the signal/behavior API (not at all ready for use, but see https://github.com/JohnLato/impulse/blob/dyn2/src/Reactive/Impulse/Syntax2.hs for an example). I'm pretty sure it's also possible to write a signal/behavior API on top of an Elm-style API. That would make the two APIs functionally equivalent.

WRT efficiency, with a signals-only API the system should have a mechanism where only signals that have updated values will cause recomputations (e.g. if you don't move the mouse, the FRP network won't re-calculate the pointer coordinates and redraw the screen). Provided this is done, I don't think there's any loss of efficiency compared to a signals-and-streams approach. I'm pretty sure Elm works this way.
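
One hypothetical way to encode that mechanism on top of an Event/Behavior library (a sketch of the idea only, not how Elm is actually implemented) is to represent a signal as a Behavior paired with its update Event, and to drop updates that don't change the value before anything downstream recomputes:

import Reactive.Banana

-- Hypothetical Elm-style signal: a current value plus its update event.
data Signal a = Signal { current :: Behavior a, updates :: Event a }

-- Propagate only updates that actually change the value, so downstream
-- computations can be skipped when nothing changed.
dropRepeats :: Eq a => a -> Signal a -> Moment (Signal a)
dropRepeats x0 (Signal _ e) = do
  bPrev <- stepper x0 e                          -- previous value (stepper updates just after e)
  let eChanged = filterApply ((/=) <$> bPrev) e  -- keep only real changes
  bOut <- stepper x0 eChanged
  return (Signal bOut eChanged)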

I don't think the continuous-behavior issue makes any difference here (or really at all). What people mean by saying behaviors are continuous over time is that they are defined at all times (i.e. they're functions over a continuous domain); the behavior itself isn't a continuous function. But we don't actually have a way to sample a behavior at any time; they can only be sampled at times corresponding to events, so we can't use the full power of this definition!

Semantically, starting from these definitions:

Event    == for some t ∈ T: [(t,a)]
Behavior == ∀ t ∈ T: t -> b

since behaviors can only be sampled at times where events are defined, we can create a new domain TX where TX is the set of all times t at which Events are defined. Now we can loosen the Behavior definition to

Behavior == ∀ t ∈ TX: t -> b

without losing any power (i.e. this is equivalent to the original definition within the confines of our frp system). Now we can enumerate all times in TX to transform this to

Behavior == ∀ t ∈ TX: [(t,b)]

which is identical to the original Event definition except for the domain and quantification. Now we can change the domain of Event to TX (by the definition of TX), and the quantification of Behavior (from forall to for some) and we get

Event    == for some t ∈ TX: [(t,a)]
Behavior == for some t ∈ TX: [(t,b)]

and now Event and Behavior are semantically identical, so they could obviously be represented using the same structure in an FRP system. We do lose a bit of information at this step; if we don't differentiate between Event and Behavior, we don't know that a Behavior is defined at every time t, but in practice I don't think this really matters much. What Elm does, IIRC, is require both Events and Behaviors to have values at all times and just use the previous value for an Event if it hasn't changed (i.e. change the quantification of Event to forall instead of changing the quantification of Behavior). This means you can treat everything as a signal and it all Just Works; it's just implemented so that the signal domain is exactly the subset of time that the system actually uses.
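
A rough Haskell rendering of these semantic types (denotations only, with Double as a stand-in for continuous time; not an implementation):

type Time = Double                         -- stand-in for continuous time T

newtype Behavior a = Behavior (Time -> a)  -- defined at every time
newtype Event    a = Event    [(Time, a)]  -- occurrences at some times

-- A Behavior is only ever observed at event occurrences, so sampling
-- produces exactly the [(t, b)] shape used above:
sampleAt :: Behavior a -> Event b -> Event a
sampleAt (Behavior f) (Event occs) = Event [ (t, f t) | (t, _) <- occs ]

-- After restricting both to the sample times TX of a given run, each
-- collapses to a list of (time, value) pairs, i.e. an Elm-style signal.
newtype Signal a = Signal [(Time, a)]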

I think this idea was presented in a paper (which I can't find now, anyone else have a link?) about implementing FRP in Java, perhaps from POPL '14? Working from memory, so my outline isn't as rigorous as the original proof.

There's nothing to stop you from creating a more-defined Behavior via e.g. `pure someFunction`; this just means that within an FRP system you can't make use of that extra defined-ness, so nothing is lost by a more restricted implementation.

As for notional signals such as time, note that it's impossible to implement an actual continuous-time signal using typical programming languages. Since the implementation will necessarily be discrete, converting that to an event stream is trivial.

In short, I don't think anything is lost by using just signals.

John L
  • Afaik, people *do* mean that behaviours are continuous functions over time. You for example would describe a moving thing as a behaviour of its coordinates. You can get the derivative of that (over time), and have a behaviour for its speed. Of course you're right that we can sample such a behaviour only at discrete time steps, but it's nonetheless defined to be able to yield different values at arbitrary moments. – Bergi Apr 10 '14 at 23:20
  • @Bergi: what about behaviors defined via `stepper`? Or any sort of accumulation over events? Such functions very often have discontinuities. Certainly *some* behaviors will be continuous, but I don't know of any FRP system that requires *all* behaviors be continuous, not even `reactive` (Conal Elliott's original push-pull implementation). – John L Apr 11 '14 at 04:12
  • Thanks for the detailed answer. It's what I suspected. But actually, I think in pull-based FRP systems with intrinsic time, behaviours really are different from step functions, since they can be recalculated in between event occurrences, for instance by depending on the time behaviour (i.e. physical simulations). I think the distinction between behaviours and signals only disappears when using purely push-based systems, since nothing happens in between event occurrences. – miguel.negrao Apr 11 '14 at 10:02
  • Also, the arrowed FRP systems tend to be completely continuous in the semantics, even representing events with signal functions of Maybes, although some of those are again push-based, so nothing happens in between occurrences (Yampa). Of course, on push systems you can still use a timer to iterate time at a fixed rate. – miguel.negrao Apr 11 '14 at 10:02
  • Sorry for being mathematically inaccurate. I meant them to be [piecewise](https://en.wikipedia.org/wiki/Piecewise_defined_function) continuous, instead of being [piecewise constant](https://en.wikipedia.org/wiki/Step_function) as your answer seems to suggest. You're right that since we only care about the TX domain, it should be enough to represent them the same way as events. Only - as you say - we are losing the semantic information about their continuous definedness. And probably lots of efficiency, so `[(t, t->b)]` would be better. – Bergi Apr 11 '14 at 12:44
  • @Bergi: the whole point is that the information that is "lost" is completely inaccessible anyway, so there's no point to retaining it. WRT efficiency, direct implementations of these semantics are inefficient anyway, and semantically I'm not sure how `[(t,t->b)]` would be useful without introducing leaks. – John L Apr 11 '14 at 20:06
  • @miguel.negrao: the idea with arrowed frp is close to the point I'm making. The semantics are continuous, but they aren't defined over continuous time. Rather they're defined over a discrete subset of time. You can lift in a function that's defined over all time, but you can only sample the output within steps of the system. Even in pull-based systems it works essentially the same way semantically. – John L Apr 11 '14 at 20:12
    "TX is the set of all times t at which Events are defined". I wonder what makes you think that TX is anything less than all of continuous time. Maybe you're saying that for a particular execution of a particular implementation, only finitely many temporal samplings are made. Those sampling times have nothing to do with the (execution- and implementation-independent) semantics. – Conal Apr 14 '14 at 01:48
  • @Conal: I agree with everything you wrote after "Maybe you're saying". The key observation here (as I understand it, still haven't found the paper) is that the semantics *don't change* if you restrict the domain to be the specific temporal samplings of a given run. Then you can use a finite domain for the implementation. At that point `changes` can actually have reasonable semantics. And in turn, there's nothing to prevent using a much simpler signal-only API. – John L Apr 14 '14 at 06:52
  • @JohnL: Here's an argument for restricting Haskell to have only finite data structures: any given program run can sample a data structure at only finitely many locations (paths into a structure), so the semantics don't change if we restrict sampling to that finite set. Similarly for spatially continuous graphics vs discrete (e.g., vector vs bitmap). And for time. Of course, in all these cases what's lost is [*modularity*](http://www.cse.chalmers.se/~rjmh/Papers/whyfp.html), i.e., defining things independent of their consumptions. (*Why Functional Programming Matters*.) – Conal Apr 14 '14 at 17:04
  • @JohnL: If you find that paper, we can take a look. Also, to find out whether it's true that "`changes` can actually have a reasonable semantics", how about trying to write down a denotation? Otherwise, it's too easy to be fooled by vague assumptions and operational mental habits. If you'd like help, feel free to email or IM me. – Conal Apr 14 '14 at 17:13
  • @Conal: possibly I'm thinking of "Functional Reactive Programming With Liveness Guarantees" by Alan Jeffrey, http://ect.bell-labs.com/who/ajeffrey/papers/icfp13.pdf. The abstract looks right anyway. – John L Apr 14 '14 at 21:56

I unfortunately have no references in mind, but I distinctly remember different reactive authors claiming this choice is just for efficiency. You expose both to give the programmer a choice of which implementation of the same idea is more efficient for the problem at hand.

I might be lying now, but I believe Elm implements everything as event streams under the hood. Things like time wouldn't be so nice as event streams, though, since there would be an infinite number of occurrences in any time frame. I'm not sure how Elm solves this, but I think it's a good example of something that makes more sense as a signal, both conceptually and in implementation.

monocell