
On Wed, Oct 21, 2020 at 3:38 AM Steven D'Aprano <steve@pearwood.info> wrote:
> On Wed, Oct 21, 2020 at 02:37:02AM +1100, Chris Angelico wrote:
>> Do you have any details to back this up? You're not just asking for a proposal to be accepted, you're actually asking for (quite a bit of) money, and then hoping to find a contractor to do the actual work.
>
> Payment is on delivery. At each stage, if the contractor fails to deliver the promised gains, they get nothing.
>
> (I believe that Mark is being polite by referring to a generic contractor. I think he is referring to himself.)
It's a little unclear from the proposal, as there was something about whether a suitable contractor could be found, but sure. TBH I'd be happier with this proposal as a direct offer/request for money than as "hey let's go look for potential coders", but it sounds like that's the plan anyway?
>> That means you're expecting that anyone would be able to achieve this, given sufficient development time.
>
> With sufficient time, maybe the horse will learn to sing.
>
> https://everything2.com/title/And+maybe+the+horse+will+learn+how+to+sing
>
> But I don't think Mark believes *anyone* will be able to improve performance. If it were so easy that anyone could do it, Python would already be blazingly fast.
Yeah. And the "anyone" part is the concern I had - that the proposal was asking for funding and then for a search for a contractor. But if it's "pay me and I'll write this", then it's a bit more concrete.
>> BIG BIG concern: You're basically assuming that all this definition of performance is measured for repeated executions of code.
>
> That's not what the proposal says.
>
> "Performance should improve for all code, not just loop-heavy and numeric code."
>
> In fact Mark goes further: he says that he's willing to allow some performance degradation on loop-heavy code, if the overall performance increases.
"Overall performance" is a myth, and there's no way that CPython will magically be able to execute any code with the exact same performance improvement. So my question is: what happens to startup performance, what happens to short scripts, what happens to the interpreter's load times, etc, etc, etc? It could be that all code becomes faster, but only after it's been run a few times. That would be great for, say, a web app - it handles a request, goes back and waits for another, and then handles the next one a bit faster - but not for a command-line script. (And yes, I'm aware that it'd theoretically be possible to dump the compiler state to disk, but that has its own risks.)
>> What would happen if $2M were spent on improving PyPy3 instead?
>
> Then both of the PyPy3 users will be very happy *wink*
>
> (I know, that's a terrible, horrible, inaccurate slight on the PyPy community, which is very probably thriving, and I would delete it if I hadn't already hit Send.)
Yes, you're being horribly insulting to the third user of PyPy3, who is probably right now warming up his interpreter so he can send you an angry response :)

I guess my biggest concern with this proposal is that it's heavy on mathematically pure boasts and light on actual performance metrics, and I'm talking here about the part where (so we're told) the code is all done and it just takes a swipe of a credit card to unlock it. And without the ability to run it myself, I can't be sure that it'd actually give *any* performance improvement on my code or use-cases. So there's a lot that has to be taken on faith, and I guess I'm just a bit dubious of how well it'd fulfil that.

ChrisA