286

I've been reading a lot of stuff about functional programming lately, and I can understand most of it, but the one thing I just can't wrap my head around is stateless coding. It seems to me that simplifying programming by removing mutable state is like "simplifying" a car by removing the dashboard: the finished product may be simpler, but good luck making it interact with end-users.

Just about every user application I can think of involves state as a core concept. If you write a document (or a SO post), the state changes with every new input. Or if you play a video game, there are tons of state variables, beginning with the positions of all the characters, who tend to move around constantly. How can you possibly do anything useful without keeping track of changing values?

Every time I find something that discusses this issue, it's written in really technical functional-ese that assumes a heavy FP background that I don't have. Does anyone know a way to explain this to someone with a good, solid understanding of imperative coding but who's a complete n00b on the functional side?

EDIT: A bunch of the replies so far seem to be trying to convince me of the advantages of immutable values. I get that part. It makes perfect sense. What I don't understand is how you can keep track of values that have to change, and change constantly, without mutable variables.

Rex M
Mason Wheeler
  • 2
    See this: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming – Sasha Chedygov Jun 20 '09 at 03:27
  • 1
    My personal humble opinion is that it's like strength and money. The law of diminishing returns applies. If you are fairly strong, there may be little incentive to get slightly stronger, but it doesn't hurt to work at it (and some people do, with a passion). The same applies to global mutable state. It is my personal preference to accept that as my coding skill progresses it is good to limit the amount of global mutable state in my code. It may never be perfect, but it is good to work towards minimizing global mutable state. – AturSams Apr 25 '17 at 16:09
  • Like with money, a point will be reached where investing more time into it is no longer highly useful, and other priorities will rise to the top. If, for instance, you reach the greatest amount of strength possible (per my metaphor), it may not serve any useful purpose and could even become a burden. But it is still good to strive towards that possibly unattainable goal and invest moderate resources into it. – AturSams Apr 25 '17 at 16:09
  • 9
    Briefly, in FP, functions never modify state. Eventually they'll return something that _replaces_ the current state. But the state is never modified (mutated) in-place. – jinglesthula Mar 28 '18 at 22:30
  • There are ways to get statefulness without mutation (using the stack from what I understand), but this question is in a sense beside the point (even though it's a great one). Hard to talk about succinctly, but here's a post which hopefully answers your question https://medium.com/@jbmilgrom/why-functional-and-object-oriented-programming-are-often-juxtaposed-1017699112d7. The TLDR is that the semantics of even a stateful functional program are immutable, however communication b/w runs of the program function are handled. – jbmilgrom Mar 10 '20 at 13:19
  • Also, if you are worried about efficiency of immutable data structures see https://en.wikipedia.org/wiki/Persistent_data_structure. Can find a nice explanation here (https://augustl.com/blog/2019/you_have_to_know_about_persistent_data_structures/) with reference to Clojure, which relies heavily on this implementation strategy – jbmilgrom Mar 10 '20 at 13:30
  • I don't think every application has state as a core concept. The application has a series of events that happened as its core concept. The events can be "folded" to get the current state. The series of events is the source of truth; the state is just a point in time. – Srki Rakic May 19 '20 at 03:54

18 Answers

171

Or if you play a video game, there are tons of state variables, beginning with the positions of all the characters, who tend to move around constantly. How can you possibly do anything useful without keeping track of changing values?

If you're interested, here's a series of articles which describe game programming with Erlang.

You probably won't like this answer, but you won't get functional programming until you use it. I can post code samples and say "Here, don't you see?" -- but if you don't understand the syntax and underlying principles, your eyes just glaze over. From your point of view, it looks as if I'm doing the same thing as an imperative language, but setting up all kinds of boundaries to purposefully make programming more difficult. From my point of view, you're just experiencing the Blub paradox.

I was skeptical at first, but I jumped on the functional programming train a few years ago and fell in love with it. The trick with functional programming is being able to recognize patterns, particularly variable assignments, and move the imperative state to the stack. A for-loop, for example, becomes recursion:

// Imperative
let printTo x =
    for a in 1 .. x do
        printfn "%i" a

// Recursive
let printTo x =
    let rec loop a = if a <= x then printfn "%i" a; loop (a + 1)
    loop 1

It's not very pretty, but we get the same effect with no mutation. Of course, wherever possible, we like to avoid looping altogether and just abstract it away:

// Preferred
let printTo x = seq { 1 .. x } |> Seq.iter (fun a -> printfn "%i" a)

The Seq.iter method will enumerate through the collection and invoke the anonymous function for each item. Very handy :)

I know, printing numbers isn't exactly impressive. However, we can use the same approach with games: hold all state on the stack and create a new object with our changes in the recursive call. In this way, each frame is a stateless snapshot of the game: each iteration simply creates a brand-new object with the desired changes for whatever objects need updating. The pseudocode for this might be:

// imperative version
pacman = new pacman(0, 0)
while true
    if key = UP then pacman.y--
    elif key = DOWN then pacman.y++
    elif key = LEFT then pacman.x--
    elif key = RIGHT then pacman.x++
    render(pacman)

// functional version
let rec loop pacman =
    render(pacman)
    let x, y = switch(key)
        case LEFT: pacman.x - 1, pacman.y
        case RIGHT: pacman.x + 1, pacman.y
        case UP: pacman.x, pacman.y - 1
        case DOWN: pacman.x, pacman.y + 1
    loop(new pacman(x, y))

The imperative and functional versions behave identically, but the functional version clearly uses no mutable state; it keeps everything on the stack. The nice thing about this approach is that, if something goes wrong, debugging is easy: all you need is a stack trace.
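For the curious, here's that functional game loop as a runnable Python sketch (the names `Pacman`, `step`, and `run` are mine, keys arrive as a pre-recorded list, and rendering is omitted):

```python
from functools import reduce
from typing import NamedTuple

class Pacman(NamedTuple):
    x: int
    y: int

def step(p: Pacman, key: str) -> Pacman:
    # Every keypress builds a brand-new Pacman value; the old one is untouched.
    if key == "LEFT":
        return Pacman(p.x - 1, p.y)
    if key == "RIGHT":
        return Pacman(p.x + 1, p.y)
    if key == "UP":
        return Pacman(p.x, p.y - 1)
    if key == "DOWN":
        return Pacman(p.x, p.y + 1)
    return p

def run(start: Pacman, keys) -> Pacman:
    # The "game loop" is just a fold of the input stream over the step function.
    return reduce(step, keys, start)
```

Each intermediate `Pacman` plays the role of one frame's snapshot; a real loop would render each snapshot before folding in the next input.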

This scales up to any number of objects in the game, because all objects (or collections of related objects) can be rendered in their own thread.

Just about every user application I can think of involves state as a core concept.

In functional languages, rather than mutating the state of objects, we simply return a new object with the changes we want. It's more efficient than it sounds. Data structures, for example, are very easy to represent as immutable structures. Stacks, in particular, are notoriously easy to implement:

using System;

namespace ConsoleApplication1
{
    static class Stack
    {
        public static Stack<T> Cons<T>(T hd, Stack<T> tl) { return new Stack<T>(hd, tl); }
        public static Stack<T> Append<T>(Stack<T> x, Stack<T> y)
        {
            return x == null ? y : Cons(x.Head, Append(x.Tail, y));
        }
        public static void Iter<T>(Stack<T> x, Action<T> f) { if (x != null) { f(x.Head); Iter(x.Tail, f); } }
    }

    class Stack<T>
    {
        public readonly T Head;
        public readonly Stack<T> Tail;
        public Stack(T hd, Stack<T> tl)
        {
            this.Head = hd;
            this.Tail = tl;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Stack<int> x = Stack.Cons(1, Stack.Cons(2, Stack.Cons(3, Stack.Cons(4, null))));
            Stack<int> y = Stack.Cons(5, Stack.Cons(6, Stack.Cons(7, Stack.Cons(8, null))));
            Stack<int> z = Stack.Append(x, y);
            Stack.Iter(z, a => Console.WriteLine(a));
            Console.ReadKey(true);
        }
    }
}

The code above constructs two immutable lists, appends them together to make a new list, and prints the result. No mutable state is used anywhere in the application. It looks a little bulky, but that's only because C# is a verbose language. Here's the equivalent program in F#:

type 'a stack =
    | Cons of 'a * 'a stack
    | Nil

let rec append x y =
    match x with
    | Cons(hd, tl) -> Cons(hd, append tl y)
    | Nil -> y

let rec iter f = function
    | Cons(hd, tl) -> f(hd); iter f tl
    | Nil -> ()

let x = Cons(1, Cons(2, Cons(3, Cons(4, Nil))))
let y = Cons(5, Cons(6, Cons(7, Cons(8, Nil))))
let z = append x y
iter (fun a -> printfn "%i" a) z

No mutable state is necessary to create and manipulate lists. Nearly all data structures can be easily converted into their functional equivalents. I wrote a page here which provides immutable implementations of stacks, queues, leftist heaps, red-black trees, and lazy lists. Not a single snippet of code contains any mutable state. To "mutate" a tree, I create a brand new one with the new node I want. This is very efficient, because I don't need to make a copy of every node in the tree; I can reuse the old ones in the new tree.
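To make the node-reuse point concrete, here's a minimal Python sketch (a plain, unbalanced binary search tree rather than one of the structures above; `Node` and `insert` are my own names):

```python
from typing import NamedTuple, Optional

class Node(NamedTuple):
    value: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def insert(node, value):
    # "Mutating" the tree builds new nodes only along one root-to-leaf path;
    # every subtree not on that path is reused as-is.
    if node is None:
        return Node(value)
    if value < node.value:
        return Node(node.value, insert(node.left, value), node.right)
    if value > node.value:
        return Node(node.value, node.left, insert(node.right, value))
    return node

t = insert(insert(insert(None, 5), 3), 8)
t2 = insert(t, 9)   # new root and right spine; t itself is unchanged
```

The left subtree of `t2` is the very same object as the left subtree of `t`, so the "copy" costs only the length of one path.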

For a more significant example, I also wrote this SQL parser, which is totally stateless (or at least my code is stateless; I don't know whether the underlying lexing library is).

Stateless programming is just as expressive and powerful as stateful programming; it just requires a little practice to train yourself to start thinking statelessly. Of course, "stateless programming when possible, stateful programming where necessary" seems to be the motto of most impure functional languages. There's no harm in falling back on mutable state when the functional approach just isn't as clean or efficient.

James Jenkinson
Juliet
  • 7
    I like the Pacman example. But that could solve one problem only to raise another: What if something else holds a reference to the existing Pacman object? Then it won't get garbage-collected and replaced; instead you end up with two copies of the object, one of which is invalid. How do you handle this problem? – Mason Wheeler Jun 21 '09 at 19:18
  • 10
    Obviously you need to create a new "something else" with the new Pacman object ;) Of course, if we take that route too far, we end up recreating the object graph for our entire world every time something changes. A better approach is described here (http://prog21.dadgum.com/26.html): rather than having objects update themselves and all of their dependencies, it's much easier to have them pass messages about their state to an event loop which handles all of the updating. This makes it much easier to decide which objects in the graph need updating, and which ones don't. – Juliet Jun 22 '09 at 13:42
  • 6
    @Juliet, I have one doubt - in my totally imperative mindset, recursion must end at some point, otherwise you'll eventually produce a stack overflow. In the recursive pacman example, how is the stack kept at bay - is the object implicitly popped at the beginning of the function? – BlueStrat Jul 27 '15 at 15:53
  • 9
    @BlueStrat - good question ... if it's a "tail call" ... ie the recursive call is the last thing in the function ... then the system doesn't need to generate a new stack frame ... it can just reuse the previous one. This is a common optimization for functional programming languages. https://en.wikipedia.org/wiki/Tail_call – reteptilian Oct 27 '15 at 20:23
  • @reteptilian - thanks for clearing this up for me - sorry for thanking so late – BlueStrat Nov 20 '15 at 23:36
  • Thanks for the link to the series of articles about programming a game in Erlang. Pure gold. – mydoghasworms Apr 06 '16 at 12:48
  • Helpful explanation but what about persistent state? How would you build an application that maintains customer records? And how does a functional program handle an enormous amount of data such an Internet search engine that you'd typically build with a distributed database? And if the answer is somehow that the whole thing is kept in memory on a stack, then how do functional programs recover from a hardware failure? – Michael Osofsky Jan 02 '17 at 20:25
  • 4
    @MichaelOsofsky, when interacting with databases and APIs, there is always an 'outside world' which has state to communicate with. In this case, you cannot go 100% functional. It is important to keep this 'unfunctional' code isolated and abstracted away so there is only one entry and one exit to the outside world. This way you can keep the rest of your code functional. – Chielt Apr 19 '17 at 16:29
  • @Juliet, "Debugging is easy, all you need is a stack trace" how does it work in case of a tail recursion? – vrnithinkumar Nov 04 '17 at 18:01
  • I would tweak this myself but I'm not sure I have it right, and I wouldn't want to step on any toes. When you say "The code above constructs two immutable lists, appends them together to make a new list, and appends the results." -- you mean *to the console*, right? The explicit object in the former clause makes the implicit object in the latter harder to determine (disambiguation of the method named "append" as opposed to stream insertion.) Re. large data: maybe consider a VM snapshot as immutable state; the snapshot is const wrt. the session but not wrt. merge/commit operations. – John P Dec 25 '17 at 07:57
  • 1
    @vrnithinkumar This was already discussed (@reteptilian etc.) but I believe the idea is that in a single stack, you would have the base state, diffs, and a new state to be used as the base state for the tail call. At that point an assertion of some kind would reveal that the product is invalid, and the stack would contain the exact step from a valid state to an invalid one. Otherwise, yes, TCO would leave you with a stack trace only as useful as with mutable state - you're stuck with the necropsy. Disclaimer - I'm fairly new to FP & debugging (working on that brought me here.) – John P Dec 25 '17 at 08:17
  • State does not go away entirely from the programs, but functional programs tend to push it out & make it as explicit as possible e.g. functions will now take state and actions and return state. And then we also have fancy data structures under the hood for maintaining values. – jzyamateur Dec 01 '18 at 11:02
80

Short answer: you can't.

So what's the fuss about immutability then?

If you're well-versed in an imperative language, then you know that "globals are bad". Why? Because they introduce (or have the potential to introduce) some very hard-to-untangle dependencies in your code. And dependencies are not good; you want your code to be modular: parts of the program should influence other parts as little as possible. And FP brings you to the holy grail of modularity: no side effects at all. You just have your f(x) = y. Put x in, get y out. No changes to x or anything else. FP makes you stop thinking about state and start thinking in terms of values. All of your functions simply receive values and produce new values.

This has several advantages.

First off, no side effects means simpler programs that are easier to reason about: no worrying that introducing a new part of the program is going to interfere with and crash an existing, working part.

Second, this makes program trivially parallelizable (efficient parallelization is another matter).

Third, there are some possible performance advantages. Say you have a function:

double x = 2 * x

Now you put a value of 3 in, and you get a value of 6 out. Every time. But you can do that in an imperative language as well, right? Yep. But the problem is that in an imperative language, you can do even more. I can do:

int y = 2;
int double(x){ return x * y; }

but I could also do

int y = 2;
int double(x){ return x * (y++); }

The imperative compiler doesn't know whether I'm going to have side effects or not, which makes it more difficult to optimize (i.e. double 2 needn't be 4 every time). The functional one knows I won't - hence, it can optimize every time it sees "double 2".

Now, even though creating new values every time seems incredibly wasteful in terms of computer memory for complex types of values, it doesn't have to be so. Because if you have f(x) = y, and the values x and y are "mostly the same" (e.g. trees which differ only in a few leaves), then x and y can share parts of their memory, precisely because neither of them will ever mutate.
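A tiny Python sketch of that sharing, using a cons-style list (names are mine):

```python
from typing import NamedTuple, Optional

class Cons(NamedTuple):
    head: int
    tail: "Optional[Cons]" = None

def push(stack, value):
    # The "new" list is one fresh cell pointing at the old list; sharing the
    # old cells is safe precisely because nothing can ever mutate them.
    return Cons(value, stack)

s1 = push(push(None, 1), 2)   # [2, 1]
s2 = push(s1, 3)              # [3, 2, 1]
s3 = push(s1, 4)              # [4, 2, 1]
```

`s2` and `s3` are "mostly the same" as `s1`, and all three share the cells for [2, 1] rather than copying them.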

So if this immutable thing is so great, why did I answer that you can't do anything useful without mutable state? Well, without mutability, your entire program would be one giant f(x) = y function. And the same would go for all parts of your program: just functions, and functions in the "pure" sense at that. As I said, this means f(x) = y every time. So, e.g., readFile("myFile.txt") would need to return the same string value every time. Not too useful.

Therefore, every FP language provides some means of mutating state. "Pure" functional languages (e.g. Haskell) do this using somewhat scary concepts such as monads, while "impure" ones (e.g. ML) allow it directly.

And of course, functional languages come with a host of other goodies which make programming more efficient, such as first-class functions etc.

oggy
  • 2
    I am guessing it is useful as long as you hide the global, the filesystem. If you consider it as a second parameter and let other processes return a new reference to the filesystem every time they modify it with filesystem2 = write(filesystem1, fd, pos, "string"), and let all processes exchange their references to the filesystem, we would get a much cleaner picture of the operating system. – eel ghEEz Jan 22 '15 at 21:21
  • @eelghEEz, this is the same approach Datomic takes to databases. – Jason Nov 20 '15 at 01:48
  • 1
    +1 for the clear and concise comparison between paradigms. One suggestion is `int double(x){ return x * (++y); }` since the current one will still be 4, although still having an unadvertised side effect, whereas `++y` will return 6. – BrainFRZ Oct 06 '16 at 11:42
  • @eelghEEz I'm not sure of an alternative, really, is anyone else? To introduce information into a (pure-) FP context, you "take a measurement", e.g. "at timestamp X, the temperature is Y". If someone asks for the temperature, they may implicitly mean X=now, but they can't possibly be asking for the temperature as a universal function of time, right? FP deals with immutable state, and you have to create an immutable state - from internal *and* external sources - from a mutable one. Indices, timestamps, etc. are useful but orthogonal to mutability - like VCS are to version control itself. – John P Dec 25 '17 at 08:37
30

Note that saying functional programming does not have 'state' is a little misleading and might be the cause of the confusion. It definitely has no 'mutable state', but it can still have values that are manipulated; they just cannot be changed in place (i.e. you have to create new values from the old ones).

This is a gross oversimplification, but imagine you had an OO language where all the properties on classes are set only once, in the constructor, and all methods are static functions. You could still perform pretty much any calculation by having methods take objects containing all the values they need for their calculations and then return new objects with the result (maybe even a new instance of the same class).

It may be 'hard' to translate existing code into this paradigm, but that is because it really requires a completely different way of thinking about code. As a side effect, though, in most cases you get a lot of opportunity for parallelism for free.

Addendum: (Regarding your edit of how to keep track of values that need to change)
They would be stored in an immutable data structure of course...

This is not a suggested 'solution', but the easiest way to see that this will always work is that you could store these immutable values in a map (dictionary/hashtable)-like structure, keyed by a 'variable name'.

Obviously in practical solutions you'd use a saner approach, but this does show that, worst case, if nothing else worked, you could 'simulate' mutable state with such a map carried around through your invocation tree.
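Made runnable in Python (the `assign`/`lookup` names are mine), that worst-case simulation looks like this:

```python
def assign(env, name, value):
    # Return a new environment instead of mutating the old one.
    return {**env, name: value}

def lookup(env, name):
    return env[name]

# Thread the environment through the computation by hand.
e0 = {}
e1 = assign(e0, "x", 1)
e2 = assign(e1, "y", lookup(e1, "x") + 1)
e3 = assign(e2, "x", lookup(e2, "x") + lookup(e2, "y"))
```

Every `eN` is a frozen snapshot: `e1["x"]` is still 1 even after the "reassignment" that produced `e3`.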

jerryjvl
  • 2
    OK, I changed the title. Your answer seems to lead to an even worse problem, though. If I have to recreate every object every time something in its state changes, I'll spend all my CPU time doing nothing but constructing objects. I'm thinking about game programming here, where you've got lots of things moving around on the screen (and off-screen) at once that need to be able to interact with each other. The whole engine has a set framerate: everything you're going to do, you have to do in X number of milliseconds. Surely there's a better way than constantly recycling entire objects? – Mason Wheeler Jun 20 '09 at 01:21
  • 4
    The beauty of it is that the imutability is on the language, not on the implementation. With a few tricks, you can have imutable state in the language while the implementation is in fact changing the state in place. See for instance Haskell's ST monad. – CesarB Jun 20 '09 at 02:14
  • 4
    @Mason: The point being that the compiler can much better decide where it is (thread)safe to change the state in-place than you can. – jerryjvl Jun 20 '09 at 02:30
  • I think for games you should avoid immutable for any parts where speed doesn't matter. While an immutable language might optimize for you, nothing is going to be faster than modifying memory which CPUs are fast at doing. And so if it turns out there are 10 or 20 places where you need imperative I think you should just avoid immutable altogether unless you can modularize it for very separated areas like game menus. And game logic in particular could be a nice place to use immutable because I feel it's great for doing complex modelling of pure systems like business rules. – LegendLength Jul 16 '17 at 14:37
  • @LegendLength you're contradicting yourself. – Ixx Mar 11 '18 at 10:11
19

I think there's a slight misunderstanding. Pure functional programs have state. The difference is how that state is modeled. In pure functional programming, state is manipulated by functions that take some state and return the next state. Sequencing through states is then achieved by passing the state through a sequence of pure functions.

Even global mutable state can be modeled this way. In Haskell, for example, a program is a function from a World to a World. That is, you pass in the entire universe, and the program returns a new universe. In practice, though, you only need to pass in the parts of the universe in which your program is actually interested. And programs actually return a sequence of actions that serve as instructions for the operating environment in which the program runs.

You wanted to see this explained in terms of imperative programming. OK, let's look at some really simple imperative programming in a functional language.

Consider this code:

int x = 1;
int y = x + 1;
x = x + y;
return x;

Pretty bog-standard imperative code. It doesn't do anything interesting, but that's OK for illustration. I think you will agree that there's state involved here: the value of the x variable changes over time. Now, let's change the notation slightly by inventing a new syntax:

let x = 1 in
let y = x + 1 in
let z = x + y in z 

Putting in parentheses makes it clearer what this means:

let x = 1 in (let y = x + 1 in (let z = x + y in (z)))

So you see, state is modeled by a sequence of pure expressions that bind the free variables of the following expressions.

You will find that this pattern can model any kind of state, even IO.
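If it helps, the parenthesized `let` chain above can be transcribed almost mechanically into nested single-argument functions; this Python rendering (my own, not the answer's notation) shows that nothing is ever reassigned:

```python
def example():
    # let x = 1 in (let y = x + 1 in (let z = x + y in z))
    return (lambda x:
            (lambda y:
             (lambda z: z)(x + y))(x + 1))(1)
```

Each `lambda` binds one "variable" exactly once, just like each `let`.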

Apocalisp
  • Is that kind of like a Monad? – CMCDragonkai Jul 29 '14 at 03:49
  • Would you consider this: A is declarative at level 1. B is declarative at level 2; it considers A to be imperative. C is declarative at level 3; it considers B to be imperative. As we increase the abstraction layer, a language always considers languages lower on the abstraction ladder to be more imperative than itself. – CMCDragonkai Jul 29 '14 at 03:57
14

It's just different ways of doing the same thing.

Consider a simple example such as adding the numbers 3, 5, and 10. Imagine thinking about doing that by first changing the value of 3 by adding 5 to it, then adding 10 to that "3", then outputting the current value of "3" (18). This seems patently ridiculous, but it is in essence the way that state-based imperative programming is often done. Indeed, you can have many different "3"s that have the value 3, yet are different. All of this seems odd, because we have been so ingrained with the, quite enormously sensible, idea that the numbers are immutable.

Now think about adding 3, 5, and 10 when you take the values to be immutable. You add 3 and 5 to produce another value, 8, then you add 10 to that value to produce yet another value, 18.

These are equivalent ways to do the same thing. All of the necessary information exists in both methods, but in different forms. In one the information exists as state and in the rules for changing state. In the other the information exists in immutable data and functional definitions.

Wedge
14

Here's how you write code without mutable state: instead of putting changing state into mutable variables, you put it into the parameters of functions. And instead of writing loops, you write recursive functions. So for example this imperative code:

f_imperative(y) {
  local x;
  x := e;
  while p(x, y) do
    x := g(x, y)
  return h(x, y)
}

becomes this functional code (Scheme-like syntax):

(define (f-functional y) 
  (letrec (
     (f-helper (lambda (x y)
                  (if (p x y) 
                     (f-helper (g x y) y)
                     (h x y)))))
     (f-helper e y)))

or this Haskellish code

f_fun y = h x_final y
   where x_initial = e
         x_final   = loop x_initial
         loop x = if p x y then loop (g x y) else x

As to why functional programmers like to do this (which you did not ask): the more pieces of your program that are stateless, the more ways there are to put the pieces together without having anything break. The power of the stateless paradigm lies not in statelessness (or purity) per se, but in the ability it gives you to write powerful, reusable functions and combine them.

You can find a good tutorial with lots of examples in John Hughes's paper Why Functional Programming Matters.

DNA
Norman Ramsey
10

I'm late to the discussion, but I wanted to add a few points for people who are struggling with functional programming.

  1. Functional languages maintain exactly the same state updates as imperative languages, but they do it by passing the updated state to subsequent function calls. Here is a very simple example of traveling down a number line. Your state is your current location.

First the imperative way (in pseudocode)

moveTo(dest, cur):
    while (cur != dest):
         if (cur < dest):
             cur += 1
         else:
             cur -= 1
    return cur

Now the functional way (in pseudocode). I'm leaning heavily on the ternary operator because I want people from imperative backgrounds to actually be able to read this code. So if you don't use the ternary operator much (I always avoided it in my imperative days) here is how it works.

predicate ? if-true-expression : if-false-expression

You can chain the ternary expression by putting a new ternary expression in place of the false-expression

predicate1 ? if-true1-expression :
predicate2 ? if-true2-expression :
else-expression

So with that in mind, here's the functional version.

moveTo(dest, cur):
    return (
        cur == dest ? cur :
        cur < dest ? moveTo(dest, cur + 1) : 
        moveTo(dest, cur - 1)
    )

This is a trivial example. If this were moving people around in a game world, you'd have to introduce side effects like drawing the object's current position on the screen and introducing a bit of delay in each call based on how fast the object moves. But you still wouldn't need mutable state.

  2. The lesson is that functional languages "mutate" state by calling the function with different parameters. Obviously this doesn't really mutate any variables, but that's how you get a similar effect. This means you'll have to get used to thinking recursively if you want to do functional programming.

  3. Learning to think recursively is not hard, but it does take both practice and a toolkit. That small section in the "Learn Java" book where they used recursion to calculate factorial does not cut it. You need a toolkit of skills, like making iterative processes out of recursion (this is why tail recursion is essential for functional languages), continuations, invariants, etc. You wouldn't do OO programming without learning about access modifiers, interfaces, etc. Same thing for functional programming.
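As a footnote, the functional `moveTo` above runs nearly verbatim as Python, since Python's conditional expressions chain just like the ternary (a sketch, with a snake_case name of my own):

```python
def move_to(dest, cur):
    # The "updated" position travels as an argument; no variable is reassigned.
    return (cur if cur == dest else
            move_to(dest, cur + 1) if cur < dest else
            move_to(dest, cur - 1))
```

Each recursive call receives the next position as a fresh parameter binding, which is exactly the "mutation by calling with different parameters" described in point 2.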

My recommendation is to do the Little Schemer (note that I say "do" and not "read") and then do all the exercises in SICP. When you're done, you'll have a different brain than when you started.

8

It is in fact quite easy to have something which looks like mutable state, even in languages without it.

Consider a function with type s -> (a, s). Translating from Haskell syntax, this means a function which takes one parameter of type "s" and returns a pair of values of types "a" and "s". If s is the type of our state, this function takes one state and returns a new state, plus possibly a value (you can always return "unit", a.k.a. (), which is sort of equivalent to "void" in C/C++, as the "a" type). If you chain several calls of functions with types like this (passing the state returned by one function to the next), you have "mutable" state (in fact, each function creates a new state and abandons the old one).
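Here's a minimal Python sketch of that `s -> (a, s)` shape: a counter whose "mutation" is just returning the next state (the names are my own):

```python
def tick(s):
    # Pure function: state in, (observed value, next state) out.
    return (s, s + 1)

def run_two(s0):
    # Chain the calls by feeding each returned state into the next call.
    a, s1 = tick(s0)
    b, s2 = tick(s1)
    return (a + b, s2)
```

Nothing is ever overwritten; `s0`, `s1`, and `s2` all continue to exist, and the "current" state is simply whichever one you choose to pass along.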

It might be easier to understand if you imagine the mutable state as the "space" where your program is executing, and then think of the time dimension. At instant t1, the "space" is in a certain condition (say for example some memory location has value 5). At a later instant t2, it is in a different condition (for example that memory location now has value 10). Each of these time "slices" is a state, and it is immutable (you cannot go back in time to change them). So, from this point of view, you went from the full spacetime with a time arrow (your mutable state) to a set of slices of spacetime (several immutable states), and your program is just treating each slice as a value and computing each of them as a function applied to the previous one.

OK, maybe that was not easier to understand :-)

It might seem inefficient to explicitly represent the whole program state as a value which has to be created only to be discarded the next instant (just after a new one is created). For some algorithms this might be natural, but when it is not, there is another trick. Instead of a real state, you can use a fake state which is nothing more than a marker (let's call the type of this fake state State#). This fake state exists from the point of view of the language and is passed around like any other value, but the compiler completely omits it when generating the machine code. It serves only to mark the sequence of execution.

As an example, suppose the compiler gives us the following functions:

readRef :: Ref a -> State# -> (a, State#)
writeRef :: Ref a -> a -> State# -> (a, State#)

Translating from these Haskell-like declarations, readRef receives something which resembles a pointer or a handle to a value of type "a", and the fake state, and returns the value of type "a" pointed to by the first parameter and a new fake state. writeRef is similar, but changes the value pointed to instead.

If you call readRef, passing it the fake state returned by writeRef (perhaps with calls to unrelated functions in between; these state values create a "chain" of function calls), it will return the value that was written. You can call writeRef again with the same pointer/handle and it will write to the same memory location, but since conceptually it returns a new (fake) state, the (fake) state is still immutable (a new one has been "created"). The compiler will call the functions in the order it would have to call them if there were a real state variable to be computed, but the only state that actually exists is the full (mutable) state of the real hardware.
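This chained fake-state style is essentially what GHC's ST monad packages up: the state token is threaded implicitly through each bind, so mutation stays local to a pure function. A small sketch using the real Data.STRef API:

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, writeSTRef)

-- Internally, every step in the ST monad threads a State# token exactly
-- like the readRef/writeRef chain above; the monad hides the plumbing.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0                                      -- allocate a mutable cell
  mapM_ (\i -> readSTRef acc >>= writeSTRef acc . (+ i)) [1 .. n]
  readSTRef acc                                          -- only the value escapes
```

From the outside, sumTo is an ordinary pure function; the mutation cannot leak.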

(Those who know Haskell will notice I have simplified things a lot and omitted several important details. For those who want more detail, take a look at Control.Monad.State from the mtl, and at the ST s and IO (aka ST RealWorld) monads.)

You might wonder why do it in such a roundabout way, instead of simply having mutable state in the language. The real advantage is that you have reified your program's state. What was implicit before (your program state was global, allowing for things like action at a distance) is now explicit. Functions which do not receive and return the state cannot modify it or be influenced by it; they are "pure". Even better, you can have separate state threads, and, with a bit of type magic, they can be used to embed an imperative computation within a pure one without making it impure (the ST monad in Haskell is the one normally used for this trick; the State# I mentioned above is in fact GHC's State# s, used by its implementation of the ST and IO monads).

CesarB
  • 39,945
  • 6
  • 58
  • 84
7

Functional programming avoids mutable state and emphasizes functionality. There's never any such thing as no state, but the state might be immutable or baked into the architecture of what you're working with. Consider the difference between a static web server that just serves files off the filesystem and a program that implements a Rubik's cube. The former will be implemented in terms of functions that turn a request into a file path, and that file path into a response built from the contents of the file. Virtually no state is needed beyond a tiny bit of configuration (the filesystem's "state" is really outside the scope of the program: it works the same way regardless of what state the files are in). In the latter, though, you need to model the cube, and how operations on that cube change its state.
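To make the cube point concrete, here is a hedged toy sketch (a 1-D "cube" of my own invention, not a real Rubik's cube model): every operation is a pure function from one cube value to a new one.

```haskell
-- A toy stand-in for the cube: a move never updates the cube in place,
-- it returns a brand-new value describing the next configuration.
type Cube = [Int]

rotate :: Cube -> Cube
rotate xs = drop 1 xs ++ take 1 xs

-- A whole game is then just a fold of moves over the starting position.
applyMoves :: [Cube -> Cube] -> Cube -> Cube
applyMoves moves start = foldl (flip ($)) start moves
```

Each intermediate configuration is a distinct value; "the cube changed" just means a later function received a later value.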

Jherico
  • 26,370
  • 8
  • 58
  • 82
  • When I was more anti-functional I wondered how it could be good when something like a hard drive is mutable. My C# classes all had mutable state and could very logically simulate a hard drive or any other device, whereas with functional programming there was a mismatch between the models and the actual machines they were modelling. After delving further into functional programming I've come to realize the benefits outweigh that issue by a fair bit. And if it were physically possible to invent a hard drive that made a copy of itself, it would actually be useful (like journalling already does). – LegendLength Jul 16 '17 at 14:46
5

In addition to the great answers others have given, think about the Integer and String classes in Java. Instances of these classes are immutable, but that doesn't make the classes useless. Immutability gives you some safety: you know that if you use a String or Integer instance as the key to a Map, the key cannot change out from under you. Compare this with the Date class in Java:

Date date = new Date();
mymap.put(date, date.toString());
// Some time later:
date.setTime(new Date().getTime());

You have silently changed a key in your map! Working with immutable objects, such as in Functional Programming, is a lot cleaner. It's easier to reason about what side effects occur -- none! This means it's easier for the programmer, and also easier for the optimizer.

Eddie
  • 51,249
  • 21
  • 117
  • 141
  • 2
    I understand that, but it doesn't answer my question. Keeping in mind that a computer program is a model of some real-world event or process, if you can't change your values, then how do you model something that changes? – Mason Wheeler Jun 20 '09 at 02:49
  • Well, you can certainly do useful things with the Integer and String classes. It's not like their immutability means you cannot have mutable state. – Eddie Jun 20 '09 at 03:19
  • @Mason Wheeler - By understanding that a thing and its state are two different "things". What pacman is doesn't change from time A to time B. Where pacman is does change. When you move from time A to time B, you get a new combination of pacman + state... which is the same pacman, different state. Not changed state... different state. – RHSeeger Jul 02 '09 at 20:18
4

Using some creativity and pattern matching, stateless games have been created:

as well as rolling demos:

and visualizations:

Paul Sweatte
  • 22,871
  • 7
  • 116
  • 244
4

For highly interactive applications such as games, Functional Reactive Programming is your friend: if you can formulate the properties of your game's world as time-varying values (and/or event streams), you are ready! These formulae will sometimes be even more natural and intent-revealing than mutating state: for a moving ball, you can directly use the well-known law x = v * t. Better still, game rules written this way compose better than object-oriented abstractions do. For example, the ball's speed can itself be a time-varying value, one that depends on the event stream of the ball's collisions. For more concrete design considerations, see Making Games in Elm.
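A hedged sketch of the idea (the type names are mine; real FRP libraries are much richer): a "behavior" is simply a function of time, so the ball's position needs no mutable variable at all.

```haskell
type Time = Double

-- A time-varying value is just a function of time.
type Behavior a = Time -> a

-- x = v * t, written directly as a behavior
position :: Double -> Behavior Double
position v t = v * t

-- Rendering a frame means sampling behaviors, not reading variables.
samples :: Behavior a -> [Time] -> [a]
samples b ts = map b ts
```

Sampling position 2 at times 0, 1 and 2 gives 0, 2 and 4: the "motion" is recovered by asking the behavior at successive instants.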

thSoft
  • 19,314
  • 5
  • 82
  • 97
3

That's the way FORTRAN would work without COMMON blocks: you'd write subroutines that used only the values you passed in and local variables. That's it.

Object-oriented programming brought state and behavior together, but it was a new idea when I first encountered it in C++ back in 1994.

Geez, I was a functional programmer when I was a mechanical engineer and I didn't know it!

duffymo
  • 293,097
  • 41
  • 348
  • 541
  • 2
    I'd disagree that this is something you can pin on OO. Languages before OO encouraged coupling state and algorithms. OO just provided a better way to manage it. – Jason Baker Jun 20 '09 at 03:03
  • "Encouraged" - perhaps. OO make it an explicit part of the language. You can do encapsulation and information hiding in C, but I'd say that OO languages make it a lot easier. – duffymo Jun 20 '09 at 10:37
2

Bear in mind: functional languages are Turing complete. Therefore, any useful task you would perform in an imperative language can be done in a functional language. At the end of the day, though, I think there's something to be said for a hybrid approach. Languages like F# and Clojure (and I'm sure others) encourage stateless design, but allow for mutability when necessary.

Jason Baker
  • 171,942
  • 122
  • 354
  • 501
  • Just because two languages are Turing complete does not mean they can perform the same tasks. What it means is they can perform the same computation. Brainfuck is Turing complete, but I'm fairly certain it can't communicate over a TCP stack. – RHSeeger Jul 02 '09 at 20:19
  • 2
    Sure it can. Given the same access to hardware as say C, it can. That doesn't mean that it would be practical, but the possibility is there. – Jason Baker Jul 03 '09 at 00:58
2

You can't have a purely functional language that is useful in complete isolation. There will always be some level of mutability you have to deal with; IO is one example.

Think of functional languages as just another tool. They are good for certain things, but not others. The game example you gave might not be the best fit for a functional language; at the very least, the screen has mutable state that pure FP can't do anything about. The way you think about problems, and the type of problems you solve with FP, will be different from the ones you are used to with imperative programming.

Up.
  • 909
  • 9
  • 20
1

By using lots of recursion.

Tic Tac Toe in F# (A functional language.)
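A hedged sketch of the pattern in Haskell-like form (a toy countdown, not the linked Tic Tac Toe): the recursive call takes the place of the mutable game-state variable.

```haskell
-- One turn maps the current state to the next, or Nothing when the
-- game ends; recursing with the new state replaces mutation.
play :: (s -> Maybe s) -> s -> [s]
play step s = s : maybe [] (play step) (step s)

-- toy rules: count down to zero
countdown :: Int -> Maybe Int
countdown 0 = Nothing
countdown n = Just (n - 1)
```

Running play countdown 3 produces the history [3,2,1,0]: every "update" is just the next recursive call's argument.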

Spencer Ruport
  • 34,215
  • 11
  • 81
  • 141
  • Of course, tail-recursion can be implemented very efficiently, since compilers can convert it into a loop. – Eddie Jun 20 '09 at 02:39
0

JavaScript provides very clear examples of the different ways of approaching mutable versus immutable state/values within its core, because the ECMAScript specifications never settled on a universal standard, so you must keep memorizing (or double-checking) which functions return a new object and which modify the original object passed to them. If your entire language is immutable, you know you are always getting a new (copied and possibly modified) result, and you never have to worry about accidentally modifying a variable before passing it into a function.

Do you know which returns a new object and which changes the original of the following examples?

Array.prototype.push()
String.prototype.slice()
Array.prototype.splice()
String.prototype.trim()
Supamic
  • 143
  • 3
  • 4
-3

This is very simple. You can use as many variables as you want in functional programming...but only if they're local variables (contained inside functions). So just wrap your code in functions, pass values back and forth among those functions (as passed parameters and returned values)...and that's all there is to it!

Here's an example:

function ReadDataFromKeyboard() {
    $input_values = $_POST;
    return $input_values;
}
function ProcessInformation($input_values) {
    if ($input_values['a'] > 10)
        return ($input_values['a'] + $input_values['b'] + 3);
    else if ($input_values['a'] > 5)
        return ($input_values['b'] * 3);
    else
        return ($input_values['b'] - $input_values['a'] - 7);
}
function DisplayToPage($data) {
    print "Based on your input, the answer is: ";
    print $data;
    print "\n";
}

/* begin: */
DisplayToPage (
    ProcessInformation (
        ReadDataFromKeyboard()
    )
);
John Doe
  • 817
  • 7
  • 9