On 2014-03-04, at 23:31, Greg Ewing <firstname.lastname@example.org> wrote:
Steven D'Aprano wrote:
What I have in my head is some vague concept that the Python evaluation rules will somehow know when to evaluate the thunk and when to treat it as an object, which is (as I understand it) what happens in Algol.
But Algol has the benefit of static typing -- the procedure being called explicitly declares whether the argument is to be passed by name or value. Python has no idea about that at compile time.
That's not really a blocker, though; Haskell thunks are implicit and not type-encoded. A name may correspond to an (unforced) thunk or to a strict value: an already-forced thunk, whether through a previous implicit forcing, through an explicit forcing (a strictness annotation), or through a decision of the strictness analyser.
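To make the Haskell behaviour concrete, here is a minimal Python sketch of call-by-need: a thunk wraps an unevaluated expression, is forced at most once, and thereafter behaves as a plain cached value. (The `Thunk` class and its `force` method are illustrative names, not part of any proposal in this thread.)

```python
class Thunk:
    """Delays evaluation of a zero-argument callable until first forced,
    then caches the result (call-by-need, as in Haskell)."""
    _UNFORCED = object()  # sentinel: no value has been computed yet

    def __init__(self, expr):
        self._expr = expr
        self._value = Thunk._UNFORCED

    def force(self):
        if self._value is Thunk._UNFORCED:
            self._value = self._expr()  # evaluate exactly once
            self._expr = None           # drop the closure after forcing
        return self._value

calls = []
t = Thunk(lambda: calls.append("eval") or 41 + 1)
print(len(calls))   # 0 -- nothing evaluated yet
print(t.force())    # 42 -- first force evaluates the expression
print(t.force())    # 42 -- cached; the expression is not re-run
print(len(calls))   # 1
```

The open question for Python is exactly the one raised below: in Haskell the compiler inserts the `force` calls; in Python something at runtime would have to decide when a mere mention of `t` counts as a use.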
b = [0, `1 + thunk`]  # delays evaluation and creates a thunk object
                      # equivalent to `1 + some_expression`
c = b[1]              # now evaluates the thunk object
d = f(2, thunk)       # evaluates thunk in f's scope
e = g(3, `thunk`)     # passes the un-evaluated thunk object to g
When exactly does implicit evaluation of a thunk object occur? Does `b[1]` give you an unevaluated thunk object? What if b is a custom sequence type implemented in Python -- how does its __getitem__ method avoid evaluating the thunk object prematurely?
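The __getitem__ hazard can be sketched concretely. This is a contrived example (the `Thunk` and `LoggingList` names are illustrative, not from the proposal): if merely formatting a value counts as using it, then a pure-Python container that innocently logs or repr()s its items forces the thunk before the caller ever sees it.

```python
class Thunk:
    """A first-class thunk whose repr() counts as an implicit use."""
    def __init__(self, expr):
        self._expr = expr
        self.forced = False

    def __repr__(self):
        self.forced = True              # touching the value forces it
        return repr(self._expr())

class LoggingList:
    """A pure-Python sequence that innocently repr()s items it hands out."""
    def __init__(self, items):
        self._items = list(items)

    def __getitem__(self, i):
        item = self._items[i]
        print(f"getitem -> {item!r}")   # merely logging forces the thunk
        return item

seq = LoggingList([0, Thunk(lambda: 1 + 1)])
x = seq[1]          # __getitem__'s repr call already evaluated the thunk
print(x.forced)     # True -- the caller never saw it unevaluated
```

Nothing in Python's object model lets `LoggingList.__getitem__` handle the item without risking exactly this kind of premature evaluation.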
None of these problems occur in Algol, because its thunks are not first-class values (you can't store them in arrays, etc.) and it uses static type information to tell when to create and evaluate them.
There are definitely difficulties in deciding exactly when, and by what rule, a thunk gets forced.