Saturday, August 30, 2014

laziness, backtracking and explicit io threading

When I first heard about laziness in programming languages I thought "That's just an optimization technique". Then I saw the example of an infinite lazy list and I thought "That's just a stream, umm I mean a memoized stream [1]". Well yes, but streams are uglier and harder to think about than lists, and this quickly gets worse for more complex structures. So I've come around to being rather sympathetic to laziness by default.
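The "infinite lazy list" idea can be sketched in a strict language too; here is a minimal Python illustration (my own, not from any of the languages discussed) using generators, where nothing is computed until a consumer demands elements:

```python
from itertools import count, islice

# An "infinite list" of the naturals, produced lazily by a generator:
# no element exists until a consumer asks for it.
naturals = count(0)

# A lazily mapped "infinite list" of squares; still nothing computed.
squares = (n * n for n in naturals)

# Forcing only the first five elements computes only those five.
first_five = list(islice(squares, 5))
print(first_five)  # [0, 1, 4, 9, 16]
```

This is the stream view of a lazy list; the difference from true laziness by default is that here the programmer had to opt in by writing generators.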

And laziness seems closely related to an important emerging trend in programming, which is to allow the programmer, as much as possible, to write the program as specifications and tests rather than in the traditional style of "step by step instructions to a complete idiot". This is also related to the trend that the programmer gets to deal with entities with mathematically nice semantics rather than entities with hardware-level semantics (and this is an important aim of Wombat).

If we're lazy then we need something to inspire us to do anything at all, and in programming it is the need to do IO that provides the inspiration. And, of course, we need to do the IO in the correct order, so you come around to the need to explicitly tell the compiler when IO is needed and in what order. In Haskell that is done with the IO monad. I won't explain the details since Mercury has a similar but conceptually simpler system.

I vaguely knew that the Mercury programming language (mercurylang.org) also had explicit threading of IO. So when I learnt about laziness, I naturally assumed that Mercury was also lazy and that was the reason for the IO threading. Now that I have (finally) started to learn Mercury [2] I see that this is wrong. Mercury is a logic programming language, in the tradition of Prolog, but much more hard core about being declarative. The reason it is explicit about IO is because it supports backtracking when searching for a logical result. So it needs to thread IO to prevent backtracking into an IO operation that has already happened.

The way Mercury does this is that procedures which want to do IO (starting necessarily with main) have:
  • an extra input parameter representing the world before the procedure is called. This parameter is marked "will be destroyed" so that it can't be reused after the procedure is called.
  • an extra output parameter representing the world after the procedure returns. This value is marked "unique", meaning that there are no other copies of it. And naturally this has to be used as the input parameter of the next IO operation.
This could get tedious, but there is syntactic sugar to handle the bookkeeping. An obvious question is: Why does Mercury have eager evaluation instead of lazy, since it is declarative? Is there some interaction with logic programming and backtracking that makes laziness difficult? I don't know the answer to that, but here's a different question. 
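The in/out world pair can be mimicked in any language. Here is a toy Python sketch of the idea (the names `World`, `write_line` and the runtime `destroyed` check are my own inventions, not Mercury's; Mercury enforces uniqueness statically through its mode system): each IO action consumes the old world token and returns a fresh unique one, which forces the operations into a single linear order.

```python
class World:
    """Toy stand-in for Mercury's unique io state."""
    def __init__(self, log=()):
        self.log = tuple(log)   # the observable effects so far
        self.destroyed = False

    def _consume(self):
        # Mercury's mode system rules this out at compile time; here we
        # check at runtime that a world token is used at most once.
        if self.destroyed:
            raise RuntimeError("world token reused after being destroyed")
        self.destroyed = True

def write_line(text, world):
    # Destroy the incoming world and return a new unique world
    # that records the effect.
    world._consume()
    return World(world.log + (text,))

w0 = World()
w1 = write_line("hello", w0)
w2 = write_line("world", w1)
print(w2.log)  # ('hello', 'world')
# write_line("again", w0) would raise: w0 was already destroyed,
# i.e. you cannot backtrack into an IO operation that already happened.
```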

How do you handle the important need for IO concurrency? In Haskell, the folks at Facebook recently released Haxl, their library for concurrently reading from multiple sources. However, they are careful to claim only to handle reading. So I'm guessing that handling concurrent changes to the world is not so easy.

In Mercury one can write code (I hesitate to say "function") where spawned procedures can fan out and rejoin (actually I don't yet know how to rejoin before end of program, but I'm sure it is possible). Anyway this is all nicely explained by Mark Brown in this email: http://www.mercurylang.org/list-archives/users/2014-August/007757.html. You need to learn some Mercury to understand it, but I recommend that.

----------------------------------------------------------------
[1] A stream is just a procedure (taking no input) which either returns an end-of-stream marker or it returns a pair consisting of the current value (like the head of a list) and the next procedure to call (like the tail of the list). By a memoized stream I mean that all those procedures are memoized (which means those procedures quickly return their result rather than recalculating it when called more than once).
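The footnote's definition can be written down directly. A sketch in Python (the helper names `memoized`, `count_from` and `take` are mine): a stream is a zero-argument procedure returning either None (end of stream) or a pair of the current value and the next procedure, and `functools.lru_cache` does the memoizing.

```python
from functools import lru_cache

def memoized(proc):
    # Cache the zero-argument procedure's single result so repeated
    # calls return it without recomputation.
    return lru_cache(maxsize=None)(proc)

def count_from(n):
    # A stream: a no-argument procedure returning either None
    # (end of stream) or a (head, tail_procedure) pair.
    @memoized
    def stream():
        return (n, count_from(n + 1))
    return stream

def take(stream, k):
    # Force the first k cells of a stream into an ordinary list.
    out = []
    while k > 0:
        cell = stream()
        if cell is None:
            break
        head, stream = cell
        out.append(head)
        k -= 1
    return out

print(take(count_from(10), 4))  # [10, 11, 12, 13]
```

Memoization is what makes this behave like a lazy list rather than a plain stream: calling the same cell twice returns the cached pair instead of recomputing it.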

[2] Learning Mercury is easier than on my previous attempt (15 years ago) because of the rather nice, though unfinished, tutorial by Ralph Becket which is at the top of their documentation page.

-----------------------------------------------------------------
Update 2014-09-20: I just saw Bob Harper's post "The Point of Laziness". He basically endorses memoized Streams as doing the job of lazy lists. I'm inclined to be convinced, and Wombat's neat terse procedure format should suit. It is a challenge to handle more complex lazy structures nicely with procedures.

Update 2016-09-28: The current spec calls for values to start as holes and be passed around like that until set. This does some of the job of laziness. Streams can be created in iterator/yield style, making them much easier to use.

4 comments:

  1. Strict evaluation was a deliberate choice for Mercury, primarily because it can be implemented more efficiently.

    There's no fundamental problem with implementing lazy evaluation, though, and Mercury implementations are in fact allowed to provide optional lazy function evaluation if they want - see the Semantics section of the Mercury Language Reference Manual for details:

    https://mercurylang.org/information/doc-release/mercury_ref/Semantics.html

    1. Thanks for that. The first external comment on this blog :-).

      Actually, as a recent convert to laziness, I'm rather excited by the different sort of efficiency you get by moving every subexpression to the earliest point where it could be calculated, but not doing the actual calculation until it is really needed. The exception is that calculations which might be needed can be done speculatively whenever you've got computing resources not doing anything else.

  2. Explicit IO

    This is part of the reason, the other part is that without explicit marking of IO the language would not be declaratively pure. We believe that being declaratively pure is important because it makes programs more reliable. Consider a predicate:

    :- pred do_something(x::in, y::out) is det.

    By inspection, we can see that it takes some type "x" as input and produces a result whose type is "y". We can also see that it is deterministic and does not do any IO (interact with the world outside your program). However, if Mercury were not pure, then this predicate might or might not interact with the world; we couldn't tell from its declaration. We believe that it's useful to be able to see at a glance whether a predicate can interact with the world or modify data. Firstly, this helps programmers debug their programs, and secondly, it can help the compiler compile (even optimize) the program.

  3. Laziness:

    "Why does Mercury have eager evaluation instead of lazy, since it is declarative?"

    Firstly, because it is faster. A lazy language must, as it reads every piece of data, check whether that data has been evaluated yet. This test takes time and is performed very frequently. For example, a binary tree is not one piece of data but N pieces of data, so looking up an item in a balanced binary tree causes log N such checks. These add up and slow a program down. The Glasgow Haskell Compiler goes to great lengths to try to optimize this, unsuccessfully attempting to solve a problem that can be designed out by simply using an eager language with a lazy module in the standard library.

    The second problem, and probably the more important one, is that reasoning about performance is very difficult in a lazy language. Consider any arbitrary predicate or function: is it slow or fast? It might be simple, without any loops or recursion, so you would assume it is fast; however, if the data it is passed hasn't been evaluated yet, it will have to evaluate that data first, which could be slow. A related problem is known as a "space leak": this occurs when the chain of unevaluated "thunks" is larger than the eventual result of the computation. This can create programs that use excessive amounts of memory. These are common problems in Haskell; just ask the developers of darcs.
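    The "chain of thunks" can be mimicked in a strict language. A toy Python sketch (the names `lazy_sum` and `strict_sum` are mine, and this only imitates the shape of the problem): the lazy-style fold builds one suspended addition per element, while the strict fold keeps a single accumulated number at every step.

```python
def lazy_sum(ns):
    # Instead of adding eagerly, suspend each addition as a
    # zero-argument thunk, as a lazy language would by default.
    acc = lambda: 0
    for n in ns:
        # Bind the current acc and n into a new thunk.
        acc = (lambda a, m: (lambda: a() + m))(acc, n)
    return acc  # a chain of len(ns) unevaluated thunks

def strict_sum(ns):
    # Strict left fold: one machine integer at every step.
    acc = 0
    for n in ns:
        acc += n
    return acc

thunk = lazy_sum(range(100))
# Nothing has been added yet; forcing the thunk walks the whole
# chain, using space proportional to the input rather than to the
# (small) final result.
print(thunk())                  # 4950
print(strict_sum(range(100)))   # 4950
```

    With a large enough input, forcing the chain exhausts the call stack, which is the strict-language analogue of a space leak.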

    Another problem is that it's difficult to write a debugger for a lazy language. A required feature of a debugger is being able to provide a stack trace. This is difficult in a lazy language because the evaluation of a function is not always related to when and where that function was called.

    Laziness is still very useful and can help programmers express certain kinds of problems. However it's not needed most of the time so a language that is strict-by-default is faster, easier to engineer (writing a debugger) and easier to use (easier to understand your program's performance).
