I see why a cycle that has multiple objects with a __del__ method is a problem. Once you call __del__ on one of the objects, it's no longer usable by
the others, and it's not clear which order is correct.
My question concerns the case where a cycle of objects contains only one
object which has a __del__.
I think a correct strategy to collect the entire cycle is the same one
used on a single object. On a single object, Python uses:
1. Temporarily revive the object.
2. Call __del__.
3. Unrevive the object (if --refcount == 0 then we're done; otherwise it
was resurrected).
We can apply this to the whole cycle:
1. Temporarily revive the entire cycle (each of its objects).
2. Call __del__ on the object that has one.
3. Unrevive each object of the cycle.
Step 1 will allow __del__ to run safely. Since there is only one
__del__ in the cycle, there is no danger of its references
disappearing from "under its feet".
(Some code restructuring will probably be necessary, because of
assumptions that are hard-coded into slot_tp_del and subtype_dealloc.)
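The cycle variant of the strategy can be sketched at the Python level. This is a conceptual sketch only; the function name and the list-of-holders trick are my invention, standing in for what CPython would do in C at the refcount level:

```python
import gc

def collect_cycle_with_one_del(cycle_objects):
    # Step 1 ("revive"): hold strong references to every object in the
    # cycle, so no refcount can reach zero while the finalizer runs.
    holders = list(cycle_objects)
    # Step 2: call the single __del__ present in the cycle; its
    # references cannot disappear from under its feet.
    for obj in holders:
        finalizer = getattr(type(obj), '__del__', None)
        if finalizer is not None:
            finalizer(obj)
    # Step 3 ("unrevive"): drop our references and let the cycle be
    # collected normally (unless __del__ resurrected something).
    del holders
    gc.collect()

# Tiny demonstration: a two-object cycle with one finalizer.
log = []

class WithDel(object):
    def __del__(self):
        log.append('finalized')

class Plain(object):
    pass

a, b = WithDel(), Plain()
a.partner, b.partner = b, a     # the cycle
collect_cycle_with_one_del([a, b])
assert log == ['finalized']
```

Note that the explicitly-called __del__ may run a second time when the object is actually deallocated; a real implementation would have to suppress that, which is exactly the restructuring issue raised later in the thread.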
I believe this enhancement is important, because:
A. When using existing code, you do not control whether its objects
have a __del__. In my experience, the majority of these cases have only
a single __del__-containing object in their cycles.
B. Python's exit cleanup calls __del__ in the wrong order, and
Python's runtime is full of cycles. (Each global is part of a cycle,
including the class objects themselves:
class->dict->function->func_globals.)
These cycles very often have only one __del__ method.
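The class->dict->function->func_globals cycle claimed in B is easy to verify. (Python 3 spells func_globals as __globals__; the exec into a fresh namespace below is only there to make the check self-contained.)

```python
# Define a class with one method in a fresh "module" namespace.
src = """
class Example:
    def method(self):
        pass
"""
namespace = {}
exec(src, namespace)

# The cycle: class -> __dict__ -> function -> __globals__ -> class.
func = namespace['Example'].__dict__['method']
assert func.__globals__ is namespace
assert func.__globals__['Example'] is namespace['Example']
```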
Some examples of the problem posed by B:
http://www.google.com/search?q=ignored+%22%27NoneType%27+object+has+no+attribute%22+%22__del__+of%22&btnG=Search
Ugly workarounds exist even in the standard library [subprocess]: "def
__del__(self, sys=sys):".
Example:
    import os

    class RunningFile(object):
        filename = '/tmp/running'
        def __init__(self):
            open(self.filename, 'wb')
        def __del__(self):
            os.unlink(self.filename)

    running_file = RunningFile()
The deller object is in a cycle as described above [as well as the
Deller class itself]. When Python exits, it could call
deller.__del__() and then collect the cycle. But Python does the wrong
thing here, and gets rid of the globals before calling __del__:

    Exception exceptions.AttributeError: "'NoneType' object has no
    attribute 'unlink'" in
Eyal Lotem wrote:
I don't know what you're trying to get at with this example. There isn't any cyclic GC involved at all, just reference counting. And before the module globals are cleared, running_file is still referenced, so calling its __del__ method early would be an outright error in the interpreter (as far as I know, getting __del__ methods to run is one of the *reasons* for clearing the module globals).

It's a fact of Python development: __del__ methods cannot safely reference module globals, because those globals may be gone by the time that method is invoked.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
http://www.boredomandlaziness.org
Nick Coghlan wrote:
It's a fact of Python development: __del__ methods cannot safely reference module globals, because those globals may be gone by the time that method is invoked.
Speaking of this, has there been any more thought given to the idea of dropping the module clearing and just relying on cyclic GC?

-- Greg
On Sat, Jun 28, 2008 at 5:39 PM, Greg Ewing
Nick Coghlan wrote:
It's a fact of Python development: __del__ methods cannot safely reference module globals, because those globals may be gone by the time that method is invoked.
Speaking of this, has there been any more thought given to the idea of dropping the module clearing and just relying on cyclic GC?
No, but it is an intriguing thought nevertheless. The module clearing causes nothing but trouble...

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
_______________________________________________
Python-Dev mailing list
http://mail.python.org/mailman/listinfo/python-dev
eyal.lotem+pyutils <at> gmail.com
This is exactly what my post tried to address. I assumed it was clear that module clearing is the wrong solution, and that it was also clear that, due to the cycles I mentioned (global.__class__.__dict__['any_method'].func_globals['global'] is global), all globals that have a __del__ will not be collectible. Therefore, I proposed a solution for cycles with a __del__ in them. Only with this solution is it possible to replace module clearing with normal garbage collection.
A more generic solution would be to pass a special flag to the GC when it is called as part of interpreter shutdown. This flag would instruct it to not put aside objects with finalizers, but first call all their __del__ methods and then collect them all at once regardless of whether they are part of a cycle or not.
On Jun 29, 3:04 pm, Antoine Pitrou
eyal.lotem+pyutils <at> gmail.com
writes: This is exactly what my post tried to address. I assumed it was clear that module clearing is the wrong solution, and that it was also clear that due to the cycles I mentioned (global.__class__.__dict__['any_method'].func_globals['global'] is global), all globals that have a __del__ will not be collectible. Therefore, I proposed a solution to cycles with a __del__ in them. Only with this solution it is possible to replace module clearing with normal garbage collection.
A more generic solution would be to pass a special flag to the GC when it is called as part of interpreter shutdown. This flag would instruct it to not put aside objects with finalizers, but first call all their __del__ methods and then collect them all at once regardless of whether they are part of a cycle or not.
That would be no worse than what happens now - but it's still not perfect (__del__ ordering issues). Also, you would need to temporarily revive the cycles as mentioned above (to avoid access to partially destructed objects).
eyal.lotem+pyutils <at> gmail.com
That would be no worse than what happens now - but it's still not perfect (__del__ ordering issues). Also, you would need to temporarily revive the cycles as mentioned above (to avoid access to partially destructed objects).
The idea is to call all __del__'s *before* any object in the cycle is deallocated (that is, call them manually rather than as part of deallocating them). That way you shouldn't have the issues mentioned above.
On Jun 29, 3:36 pm, Antoine Pitrou
eyal.lotem+pyutils <at> gmail.com
writes: That would be no worse than what happens now - but it's still not perfect (__del__ ordering issues). Also, you would need to temporarily revive the cycles as mentioned above (to avoid access to partially destructed objects).
The idea is to call all __del__'s *before* any object in the cycle is deallocated (that is, call them manually rather than as part of deallocating them). That way you shouldn't have the issues mentioned above.
Firstly, as I said above: you will still have __del__ ordering issues. Secondly, the destructor itself currently calls __del__, so if you call __del__ before any deallocation, it will get called again as part of the deallocation. This might be a technicality, but it will still probably require some code restructuring to work around (or make that code even more hairy).
On Jun 29, 5:12 pm, "eyal.lotem+pyut...@gmail.com"
On Jun 29, 3:36 pm, Antoine Pitrou
wrote: eyal.lotem+pyutils <at> gmail.com
writes: That would be no worse than what happens now - but it's still not perfect (__del__ ordering issues). Also, you would need to temporarily revive the cycles as mentioned above (to avoid access to partially destructed objects).
The idea is to call all __del__'s *before* any object in the cycle is deallocated (that is, call them manually rather than as part of deallocating them). That way you shouldn't have the issues mentioned above.
Firstly, as I said above: you will still have __del__ ordering issues. Secondly, the destructor itself currently calls __del__, so if you call __del__ before any deallocation, it will get called again as part of the deallocation. Might be a technicality but it will still probably require some code restructuring to work around (or making that code even more hairy).
Additionally, there is another problem: if the cycle is not temporarily revived and you call __del__ manually, it may break the cycle by removing the references. Thus, objects in the cycle will go down to refcount==0 during your attempt to call the __del__'s of the objects in the cycle. The only sane thing is to temporarily revive the entire cycle so you can safely call its __del__'s. Then, you might want to disable the normal __del__ calling that occurs as part of the later destruction of the cycle.
eyal.lotem+pyutils <at> gmail.com
Additionally, there is another problem: if the cycle is not temporarily revived and you call __del__ manually, it may break the cycle by removing the references. Thus, objects in the cycle will go down to refcount==0 during your attempt to call the __del__'s of the objects in the cycle.
But if we use gcmodule's current linked-list mechanism, wouldn't this situation be correctly handled by _PyObject_GC_UNTRACK? That is, before the object is destroyed, the gc module already automatically removes it from its current "collection". Therefore, walking the collection (in this case, the list of unreachable objects) still does the right thing. (ISTM the gc relies heavily on this property.)
Then, you might want to disable the normal __del__ calling that occurs as part of the later destruction of the cycle.
I was thinking a flag in the PyObject header would do the trick but there aren't any flags in the PyObject header... *gasp*.
Antoine Pitrou
I was thinking a flag in the PyObject header would do the trick but there aren't any flags in the PyObject header... *gasp*.
Actually, we only care about GC-tracked objects (the others are deallocated simply when their refcount falls to zero), so this could be another special value in the gc_refs field, together with a specific macro.
Additionally, there is another problem: if the cycle is not temporarily revived and you call __del__ manually, it may break the cycle by removing the references. Thus, objects in the cycle will go down to refcount==0 during your attempt to call the __del__'s of the objects in the cycle. The only sane thing is to temporarily revive the entire cycle so you can safely call its __del__'s. Then, you might want to disable the normal __del__ calling that occurs as part of the later destruction of the cycle.
I still don't understand what "revive the cycle" means. You will need to incref the object for which you call __del__, that's all.

Regards,
Martin
Firstly, as I said above: you will still have __del__ ordering issues.
Can you please elaborate? What would such __del__ ordering issues be?
Secondly, the destructor itself currently calls __del__, so if you call __del__ before any deallocation, it will get called again as part of the deallocation. Might be a technicality but it will still probably require some code restructuring to work around (or making that code even more hairy).
There could be a global barricade for calling __del__: you first call all __del__s of existing objects, then set the barricade, and then start breaking cycles. This could even be done with the current approach to module clearing. Regards, Martin
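This barricade can be sketched at the Python level. The sketch is illustrative only; the `barricade` flag and `shutdown` helper are invented names, and a real version would live inside the interpreter:

```python
class Guarded(object):
    barricade = False          # hypothetical "__del__ already ran" flag

    def __init__(self, log):
        self.log = log

    def __del__(self):
        if type(self).barricade:
            return             # phase 2: cycles are being broken; stay quiet
        self.log.append('finalized')

def shutdown(objects):
    # Phase 1: call every __del__ while every object is still intact.
    for obj in objects:
        type(obj).__del__(obj)
    # Phase 2: raise the barricade, then start breaking cycles /
    # clearing modules; dealloc-time __del__ calls are now no-ops.
    for obj in objects:
        type(obj).barricade = True

log = []
pair = [Guarded(log), Guarded(log)]
pair[0].peer, pair[1].peer = pair[1], pair[0]   # a reference cycle
shutdown(pair)
assert log == ['finalized', 'finalized']

import gc
del pair
gc.collect()                   # the cycle dies; its finalizers do nothing
assert log == ['finalized', 'finalized']
```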
Since it's already possible for __del__-containing cycles to survive interpreter shutdown, I don't see that this issue ought to be a showstopper for elimination of module clearing. Also, it seems to me that the kind of cycles module clearing is designed to break, i.e. those between classes, functions and the module dict, are unlikely to contain objects with __del__ methods, so there wouldn't be much of a problem in practice.

-- Greg
That would be no worse than what happens now - but it's still not perfect (__del__ ordering issues). Also, you would need to temporarily revive the cycles as mentioned above (to avoid access to partially destructed objects).
I don't quite understand what you mean by "revive cycles". There is no need to revive any object in a cycle, as all objects are still alive (or else they wouldn't be garbage).

Regards,
Martin
On Sat, Jun 28, 2008 at 10:52 PM, Guido van Rossum
No, but it is an intriguing thought nevertheless. The module clearing causes nothing but trouble...
Already gone in python-safethread. Then again, so is __del__. ;) (Replaced with __finalize__.)

-- Adam Olsen, aka Rhamphoryncus
On Jun 28, 6:21 pm, Nick Coghlan
It's a fact of Python development: __del__ methods cannot safely reference module globals, because those globals may be gone by the time that method is invoked.

That's because globals are being cleaned up in an ad-hoc manner, instead of just using cyclic GC that supports cycles.
The deller object is in a cycle as described above [as well as the Deller class itself].
I think you are mistaken here. The RunningFile instance in the above code is *not* part of a cycle. It doesn't have any instance variables (i.e. its __dict__ is empty), and it only refers to its class, which (AFAICT) doesn't refer back to the instance.
When Python exits, it could call deller.__del__() and then collect the cycle. But Python does the wrong thing here, and gets rid of the globals before calling __del__: Exception exceptions.AttributeError: "'NoneType' object has no attribute 'unlink'" in
> ignored
This is a different issue. For shutdown, Python doesn't rely on cyclic garbage collection (only). Instead, all modules get forcefully cleared, causing this problem.
I believe applying the above enhancement would solve these problems.
No, they wouldn't.

To work around the real problem in your case, put everything that the destructor uses into an instance or class attribute:

    class RunningFile(object):
        filename = '/tmp/running'
        _unlink = os.unlink
        def __init__(self):
            open(self.filename, 'wb')
        def __del__(self):
            self._unlink(self.filename)

Regards,
Martin
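A runnable rendition of this workaround (adapted to take the filename as a parameter and use tempfile, so it can be executed anywhere; the point is that __del__ only touches attributes bound on the class or instance, never module globals):

```python
import os
import tempfile

class RunningFile(object):
    # Everything the destructor needs is captured as a class attribute
    # at class-creation time, so __del__ never reads a module global
    # (which may already have been set to None at interpreter shutdown).
    _unlink = os.unlink

    def __init__(self, filename):
        self.filename = filename
        with open(self.filename, 'wb'):
            pass

    def __del__(self):
        self._unlink(self.filename)

path = os.path.join(tempfile.mkdtemp(), 'running')
rf = RunningFile(path)
assert os.path.exists(path)
del rf            # CPython: refcount hits zero, __del__ runs immediately
assert not os.path.exists(path)
```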
On Jun 28, 6:32 pm, "Martin v. Löwis"
The deller object is in a cycle as described above [as well as the Deller class itself].
I think you are mistaken here. The RunningFile instance in above code is *not* part of a cycle. It doesn't have any instance variables (i.e. its __dict__ is empty), and it only refers to its class, which (AFAICT) doesn't refer back to the instance.
As I explained above, it *is* part of a cycle: """including the class objects themselves: class->dict->function->func_globals""". Note: running_file.__class__.__dict__['__init__'].func_globals['running_file'] is running_file.
When Python exits, it could call deller.__del__() and then collect the cycle. But Python does the wrong thing here, and gets rid of the globals before calling __del__: Exception exceptions.AttributeError: "'NoneType' object has no attribute 'unlink'" in
> ignored This is a different issue. For shutdown, Python doesn't rely on cyclic garbage collection (only). Instead, all modules get forcefully cleared, causing this problem.
I know. I assumed Python does not rely on cyclic garbage collection for shutdown because it wouldn't work, as _all_ globals that have *any* instance method will be part of a cycle, and any of them which have a __del__ will not be collected.
I believe applying the above enhancement would solve these problems.
No, they wouldn't.
To work around the real problem in your case, put everything that the destructor uses into an instance or class attribute:
    class RunningFile(object):
        filename = '/tmp/running'
        _unlink = os.unlink
        def __init__(self):
            open(self.filename, 'wb')
        def __del__(self):
            self._unlink(self.filename)

I *mentioned* this workaround. What I propose is not a workaround but a solution. You wouldn't need to clean up module globals ad-hoc'ishly, because the cyclic collection would collect your object, even with its __del__.
Please read my entire mail before replying to it. Thanks!

Eyal
As I explained above, it *is* part of a cycle: """including the class objects themselves: class->dict->function->func_globals""".
Ah, right. I must have missed that explanation.
I know. I assumed Python does not rely on cyclic garbage collectioin for shutdown, because it wouldn't work, as _all_ globals that have *any* instance method will be part of a cycle, and any of them which have a __del__ will not be collected.
No. The mechanism for cleaning modules at shutdown time predates cyclic GC, and was not removed because "it wouldn't work". This specific issue certainly contributes to the fact that it doesn't work, but there might be other problems as well (such as extension modules holding onto objects participating in cycles).
I *mentioned* this workaround. What I propose is not a workaround but a solution. You wouldn't need to clean up module globals ad-hoc'ishly, because the cyclic collection would collect your object, even with its __del__.
I don't think it solves the problem. You seem to be assuming that any such cycle will contain only a single global object with an __del__. However, as modules refer to each other, I very much doubt that is the case.
Please read my entire mail before replying to it. Thanks!
I really, really tried. I read it three times before replying. However, I found it really, really difficult to follow your writing, as it was mixing problem statement and solution, so that I couldn't tell what paragraph was about what. English is not my native language, complicating communication further. Please accept my apologies.

Regards,
Martin
On Sun, Jun 29, 2008 at 9:00 PM, "Martin v. Löwis"
I apologize for my tone as well; it just seemed frustrating that all the replies seemed to completely ignore the core point I was trying to make, and to claim that the problem I was trying to solve was not a problem at all, but a behavior I was unaware of... Mixing in another quote from another mail:
That would be no worse than what happens now - but it's still not perfect (__del__ ordering issues). Also, you would need to temporarily revive the cycles as mentioned above (to avoid access to partially destructed objects).
I don't quite understand what you mean by "revive cycles". There is no need to revive any object in a cycle, as all objects are still alive (or else they wouldn't be garbage).

By "revive cycles", I mean make sure that they are referenced by an independent referrer (one that won't go away as part of the __del__ calling process). This is similar to how the tp_dealloc code increases the refcount (actually sets it to 1, because it was certainly 0 when entering the destructor) before calling the __del__ slot. Without reviving the object before calling its __del__ in the destructor, and without reviving the objects of a cycle before calling their __del__'s, the __del__ Python code may be exposed to "dead objects" (refcount==0).
Consider the cycle:

    a.x = b
    b.x = a

Let's suppose the a object has a __del__. Let's assume each object in the cycle has a refcount of 1 (and the cycle should die). Now let's say this is a's __del__ code:

    def __del__(self):
        self.x = None

Running it will set b's refcount to 0 and call its destructor, which will set a's refcount to 0 and also call its destructor. But its __del__ is currently running - so "self" must not have a refcount of 0. If you only incref 'a' before calling __del__, then you are probably all right, as long as there is only one __del__.
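The refcount effect described here can be made visible with sys.getrefcount, calling the finalizer by hand (Python 3 syntax; in the real collector this bookkeeping happens in C):

```python
import sys

class A(object):
    def __del__(self):
        self.x = None          # breaks the cycle from inside the finalizer

class B(object):
    pass

a, b = A(), B()
a.x, b.x = b, a                # the two-object cycle from the example

refs_before = sys.getrefcount(b)
A.__del__(a)                   # manual call, as the collector would make
refs_after = sys.getrefcount(b)

# b lost exactly the reference that a.x held; without an independent
# holder, this is how refcounts can fall to zero mid-collection.
assert refs_after == refs_before - 1
```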
Can you please elaborate? What would such __del__ ordering issues be?
    class A(object):
        def __del__(self):
            print self.x.attribute

    class B(object):
        def __del__(self):
            print "B is going down!"
            del self.attribute

    a = A()
    b = B()
    a.x = b
    b.attribute = 1

If you call b's __del__ first, then a's __del__ will fail. If you call a's __del__ first, then all is well. Of course you can create true cyclic dependencies for which no order will work, and it's pretty clear there is no way to deduce the right order anyway. This is what I mean by "ordering issues".
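A runnable rendition of this ordering example (Python 3 syntax, with the finalizers invoked manually so the order is explicit):

```python
class A(object):
    def __del__(self):
        return self.x.attribute    # only works while b is still intact

class B(object):
    def __del__(self):
        del self.attribute

a, b = A(), B()
a.x = b
b.attribute = 1

# Wrong order: B's finalizer deletes the attribute first,
# so A's finalizer then fails.
B.__del__(b)
try:
    A.__del__(a)
    wrong_order_failed = False
except AttributeError:
    wrong_order_failed = True
assert wrong_order_failed

# Right order on a fresh pair: A first, then B - no error.
a2, b2 = A(), B()
a2.x = b2
b2.attribute = 1
assert A.__del__(a2) == 1
B.__del__(b2)

# Restore the attributes so interpreter-exit finalization stays quiet.
b.attribute = b2.attribute = 1
```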
There could be a global barricade for calling __del__: you first call all __del__s of existing objects, then set the barricade, and then start breaking cycles. This could even be done with the current approach to module clearing.
Note that the __del__'s themselves may be breaking cycles, and refcounts will go to 0 - unless you temporarily revive (incref) the entire cycle first.
I still don't understand what "revive the cycle" means. You will need to incref the object for which you call __del__, that's all.
Unless there are multiple __del__'s in the cycle.
Regards, Martin
By "revive cycles", I mean make sure that they are referenced by an independent referrer (one that won't go away as part of the __del__ calling process).
I think this is a) unfortunate terminology (the cycle is not dead, so there is no need to revive it), and b) unnecessary, as calling __del__ will add a reference anyway (in the implicit self parameter). So the object won't go away as long as __del__ runs. It might go away immediately after __del__ returns, which may or may not be a problem.
This is similar to how the tp_dealloc code increases the refcount (actually sets it to 1, because it was certainly 0 when entering the destructor) before calling the __del__ slot. Without reviving the object before calling its __del__ in the destructor, and without reviving the objects of a cycle before calling their __del__'s, the __del__ Python code may be exposed to "dead objects" (refcount==0).
No, that can't happen and, AFAICT, is *not* the reason why tp_dealloc resurrects the object. Instead, if it didn't resurrect it, tp_dealloc might become recursive, deallocating the object twice.
Consider the cycle: a.x = b b.x = a
Lets suppose the a object has a __del__. Lets assume each object in the cycle has a refcount of 1 (and the cycle should die). Now lets say this is a's __del__ code: def __del__(self): self.x = None
Running it will set 'b's refcount to 0 and call its destructor, which will set 'a's refcount to 0 and also call its destructor. But its __del__ is currently running - so "self" must not have a refcount of 0.
And it won't, because (say) PyObject_CallMethod (to call __del__) calls PyObject_GetAttrString, which returns a bound method which refers to im_self for the entire life of
If you only incref on 'a' before calling __del__, then you are probably alright, as long as there is only one __del__.
Why would you think so? We explicitly call one __del__. Assume that breaks the cycle, causing another object with __del__ to go to refcount zero. Now, tp_dealloc is called, raises the refcount, calls __del__ of the other object, and releases its storage.
Can you please elaborate? What would such __del__ ordering issues be?
If you call b's __del__ first, then a's __del__ will fail. If you call a's __del__ first, then all is well. Of course you can create true cyclic dependencies for which no order will work, and it's pretty clear there is no way to deduce the right order anyway. This is what I mean by "ordering issues".
I see. As we are in interpreter shutdown, any such exceptions should be ignored (as exceptions in __del__ are, anyway). Programs involving such cycles should be considered broken, and be rewritten to avoid them (which I claim is always possible, and straightforward).
Note that the __del__'s themselves may be breaking cycles and refcounts will go to 0 - unless you temporarily revive (incref) the entire cycle first.
See above - you shouldn't need to.
I still don't understand what "revive the cycle" means. You will need to incref the object for which you call __del__, that's all.
Unless there are multiple __del__'s in the cycle.
Not even then.

Regards,
Martin
participants (8)

- "Martin v. Löwis"
- Adam Olsen
- Antoine Pitrou
- Eyal Lotem
- eyal.lotem+pyutils@gmail.com
- Greg Ewing
- Guido van Rossum
- Nick Coghlan