On Thu, 12 Mar 2020 at 16:42, Eric Wieser firstname.lastname@example.org wrote:
Do you have an example of an application where this sort of micro-optimisation is significant enough to justify a fairly major language change?
As an example of these optimizations being valuable, see https://github.com/numpy/numpy/pull/7997, which claims the optimization I described at the beginning resulted in a 1.5x-2x speedup.
I was amused by the first comment in that thread: "First, let me say that this is terrifying and this comment should not be taken as condoning it."
but that didn't need a language change to implement...
The language change I'm proposing isn't about _implementing_ these optimizations, it's about letting users exploit them without giving up the ability to name their variables.
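To make the intent concrete, here's a rough sketch of what `y = (del x)` might desugar to in today's Python (the function name and the desugaring are my own illustration, not part of the proposal; the refcount check is a CPython detail):

```python
import sys

def use_and_release():
    x = [0] * 1000
    # Proposed spelling:  y = (del x)
    # One plausible desugaring in today's Python:
    y = x
    del x  # x no longer names the object...
    # ...so y now holds the only reference. getrefcount reports 2
    # because its argument adds a temporary reference (CPython detail).
    assert sys.getrefcount(y) == 2
    return y

val = use_and_release()
```

The point is that the value survives the `del` because the assignment target takes over the reference; only the *name* x goes away.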
Fair point. But I still question the cost vs benefit of this. After all, this is a *language* change you're proposing - not just limited to CPython with its particular implementation of GC. Consider
x = some_complex_object()
y = (del x)
What is to stop a Python implementation (not CPython, maybe, but some other one) from discarding the object in x as soon as it's unreferenced by the (del x) expression? Then y wouldn't be able to hold that value. After all, in theory, after the first line our object has a refcount of 1, then the (del x) changes that refcount to 0, then y = (the value) changes it back to 1. But it could be discarded in that period when it has refcount 0.
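For what it's worth, CPython's immediate reclamation at refcount zero is easy to observe with a weakref; this demonstrates a CPython implementation detail, not behaviour guaranteed by the language (the `Big` class is a stand-in for some_complex_object()):

```python
import weakref

class Big:
    """Stand-in for some_complex_object()."""

obj = Big()
r = weakref.ref(obj)

alias = obj   # two strong references to the object
del obj       # one reference left; the object survives
assert r() is not None

del alias     # zero references; CPython reclaims it immediately
assert r() is None
```

An implementation without refcounting (or one that collects eagerly mid-expression) is free to behave differently, which is exactly why the semantics need to be pinned down.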
The above paragraph talked about refcounts, and about precisely when objects are discarded. None of which is part of the language definition. Can you describe your intended semantics for (del x) *without* using implementation-specific terminology? Because that would be important if this were to become a language feature.
Just to be clear - I see what you want to achieve here, and I understand why it might be important to you. But the bar for a *language* feature is by necessity higher than that.