
On Fri, Feb 17, 2017 at 04:02:01PM -0600, Abe Dillon wrote:
> I'm fairly novice, so I could be way off base here, but it seems like the inevitable conclusion to this problem is something like JIT compilation, right? (admittedly, I know very little about JIT compilation)
No. JIT compilation delays *compiling* the code to run-time. This is a proposal for delaying *running* the code until such time as some other piece of code actually needs the result.

An example might help. Suppose we want to get the one millionth prime number, a task of moderate difficulty that may take a while:

    print("Start")
    result = get_nth_prime(10**6)
    print("Done")
    print(result)

On my computer, using a pure-Python implementation, it takes about 11 seconds to find the millionth prime, 15485863, so there will be a delay of about 11 seconds between printing Start and Done, but printing the result is instantaneous. That's true regardless of when and how the code is compiled.

(Where a JIT compiler is useful is that it may be able to use runtime information available to the interpreter to compile all or some of the Python code to efficient machine code, allowing the function to run faster. That's how PyPy works.)

If we make the code *delayed*, the situation is different:

    print("Start")
    result = delayed: get_nth_prime(10**6)  # I dislike this syntax
    print("Done")
    print(result)

Now Start and Done are printed virtually instantaneously, but there is an 11-second delay *after* Done is printed, when the result is reified (made real; the calculation is actually performed) and printed.
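You can already emulate this in today's Python with an explicit thunk. Here is a minimal sketch (Lazy is a made-up helper class, not an existing builtin, and get_nth_prime is the same function assumed above):

    class Lazy:
        """Wrap a zero-argument callable; compute its value at most once, on demand."""
        def __init__(self, func):
            self._func = func
            self._computed = False
            self._value = None

        def force(self):
            # Reify: run the computation on first access, cache the result.
            if not self._computed:
                self._value = self._func()
                self._computed = True
            return self._value

    print("Start")
    result = Lazy(lambda: get_nth_prime(10**6))
    print("Done")          # printed immediately
    print(result.force())  # the 11-second pause happens here

The difference with the proposal is that here the caller must explicitly call force(); the proposed syntax would make reification implicit at the point of use.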
> Python seems to be accumulating a lot of different approaches to achieving very similar things: asynchronous and/or lazy execution. We have generators, futures, asyncio, async/await, and probably more that I'm not thinking of. It seems like it should be possible for the interpreter to determine when an expression absolutely *must* be evaluated in many cases.
If the people debating this proposal cannot even agree on when the expression must be evaluated, how could the interpreter do it?
> I know code with side-effects, especially I/O-related side-effects, would be difficult or impossible to manage within that context (the interpreter wouldn't be able to know that a write to a file has to occur before a read from that file, for instance).
I think side-effects are a red herring. The obvious rule is: side-effects occur when the delayed thunk is reified. If you care about the actual timing of the side-effects, then don't use delayed evaluation. If you don't care, then who cares if the side-effect is delayed?
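To make that rule concrete, using the hypothetical Lazy helper sketched above:

    def log_and_compute():
        print("side-effect happens now")  # the side-effect
        return 42

    thunk = Lazy(log_and_compute)  # nothing printed yet
    print("thunk created")
    print(thunk.force())           # prints the message, then 42

The side-effect fires at the force() call, not when the thunk is created, which is exactly what delayed evaluation means.

--
Steve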