Look at the following code.
    from time import sleep

    def foo(a, b):
        x = a + b
        if not x:
            return None
        sleep(1)  # a big calculation that does not use x
        return a * b
This code DECREFs x only when the frame is exited (at the return statement). But since we can clearly see that x is not needed during the sleep (which stands in for a big calculation), we could insert a "del x" statement before the sleep.
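Concretely, the transformed function would look like this (a sketch of what such a compiler pass might emit, with the del inserted at the point where x becomes dead):

```python
from time import sleep

def foo(a, b):
    x = a + b
    if not x:
        return None
    del x         # inserted: x is never used past this point
    sleep(1)      # a big calculation that does not use x
    return a * b
```

The behavior is unchanged; only the lifetime of x shrinks.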
I think our compiler is smart enough to find at least *some* cases where it could safely insert such del instructions, and this would potentially save some memory. (For example, in the code above, if a and b are large lists, x is an even larger list, and its memory could be freed earlier.)
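The effect is observable today with a manual del. Here is a small experiment (the helper make_temp and the sizes are made up for illustration) that uses tracemalloc to compare the memory still held at the point where the "big calculation" would run, with and without an early del:

```python
import tracemalloc

def make_temp(n, early_del):
    """Build a large temporary x and report traced memory held
    at the point where the 'big calculation' would start."""
    a = list(range(n))
    b = list(range(n))
    x = a + b              # large temporary, ~2n elements
    if early_del:
        del x              # free x before the long tail of the function
    # ... big calculation that does not use x would run here ...
    current, _peak = tracemalloc.get_traced_memory()
    return current

tracemalloc.start()
with_del = make_temp(100_000, early_del=True)
without_del = make_temp(100_000, early_del=False)
tracemalloc.stop()

print(with_del < without_del)  # True: x's memory was released earlier
```

This only demonstrates the principle; whether a compiler pass would find enough *large* temporaries in real programs to matter is exactly the question being raised.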
For sure, we could manually insert del statements in code where it matters to us. But if the compiler could do it in all code, whether it matters to us or not, it would probably catch some useful places we wouldn't even have thought of, and most programs might see a modest memory saving.
Can anyone tear this idea apart?
My guess: it would overwhelmingly free tiny objects, giving a literally unmeasurable (just theoretically provable) memory saving, at the cost of extra trips around the eval loop. So not really attractive to me. But when I leave "large" temp objects hanging and give a rip, I already stick in "del" statements anyway. Very rarely, but it happens.
Which is addressing it at a higher level than any other feedback you're going to get ;-) Of course there can be visible consequences when people are playing with introspection gimmicks.