
[Raymond]
None of those can compete with my proposed C implementation, which pre-allocates around 50 cells at a time and does its reads and writes through array lookups. It avoids all the expensive resizing of lists and uses C ints instead of PyInts.
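(As a rough Python-level sketch of what such a block-allocated queue might look like -- the 50-cell block size mirrors the figure above, but the class and names here are purely illustrative; the real proposal works on raw C arrays of ints rather than Python lists:)

    BLOCKSIZE = 50    # illustrative; "around 50 cells at a time" as above

    class BlockQueue:
        def __init__(self):
            self.blocks = [[None] * BLOCKSIZE]   # list of preallocated blocks
            self.head = 0                        # index of next item to pop
            self.tail = 0                        # index of next free slot

        def append(self, x):
            if self.tail == len(self.blocks) * BLOCKSIZE:
                self.blocks.append([None] * BLOCKSIZE)   # grab a new block
            self.blocks[self.tail // BLOCKSIZE][self.tail % BLOCKSIZE] = x
            self.tail += 1

        def popleft(self):
            if self.head == self.tail:
                raise IndexError("pop from empty queue")
            x = self.blocks[self.head // BLOCKSIZE][self.head % BLOCKSIZE]
            self.head += 1
            if self.head == BLOCKSIZE:      # first block fully consumed
                del self.blocks[0]          # release it and rebase indices
                self.head -= BLOCKSIZE
                self.tail -= BLOCKSIZE
            return x

No element is ever shifted or copied on append or popleft; the only per-operation cost beyond the store/load is the occasional allocation or release of a fixed-size block.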
Josiah Carlson <jcarlson@uci.edu> writes:
That's pretty crazy. If I had a say, I'd be +1 just because I've seen so many people using their own {append, pop(0)} queues before.
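(The do-it-yourself pattern in question usually looks something like the sketch below; the names are illustrative. Each pop(0) shifts every remaining element down one slot, so a long queue pays O(n) per dequeue:)

    queue = []
    for item in range(5):
        queue.append(item)       # enqueue at the tail: amortized O(1)
    while queue:
        print(queue.pop(0))      # dequeue at the head: O(n), shifts the rest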
[Zack]
As a user of Python, I am +1 on adding a *really really fast* queue data structure somewhere, but -0 on its being something other than the existing Queue class. Can't we just optimize the one we've got? Ideally without losing the thread safety guarantees (but making it usable in an unthreaded program would be nice)?
You miss one thing -- the Queue class exists *specifically* for thread communication. It has very different semantics than I'd want for a generic queue. E.g. with Queue, if you try to get something out of an empty queue, your thread *blocks*. That's only appropriate for applications that expect this. (Imagine the newbie questions to c.l.py. :-)

My own ideal solution would be a subtle hack to the list implementation to make pop(0) and insert(0, x) go a lot faster -- this can be done by adding one or two extra fields to the list head and allowing empty space at the front of the list as well as at the end.

--Guido van Rossum (home page: http://www.python.org/~guido/)
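(A rough Python sketch of the idea Guido describes -- the real change would be a patch to the C list implementation, and the class and field names here are invented for illustration. The point is that an extra offset field into a buffer with slack at the front lets pop(0) and insert(0, x) usually just move the offset instead of shifting every element:)

    class OffsetList:
        def __init__(self, reserve=8):
            self.buf = [None] * reserve   # storage with slack at the front
            self.offset = reserve         # index of the first real element
            self.length = 0

        def insert0(self, x):
            if self.offset == 0:                      # no slack left in front
                slack = max(8, self.length)
                self.buf = [None] * slack + self.buf  # regrow with fresh slack
                self.offset = slack
            self.offset -= 1
            self.buf[self.offset] = x
            self.length += 1

        def pop0(self):
            if self.length == 0:
                raise IndexError("pop from empty list")
            x = self.buf[self.offset]
            self.buf[self.offset] = None   # drop the reference
            self.offset += 1
            self.length -= 1
            return x

In the common case both operations become a pointer/counter adjustment; only when the front slack is exhausted does insert0 pay for a copy, just as append already does when the tail is full.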