
[Kurt B. Kaiser]
Note that if no pop(0)'s are ever done on the list there is no wrap around and the slowdown is limited to the access of blocksize plus a test; no modulo arithmetic is necessary. This is also the case if the list doesn't grow and items are only removed, including list[0]. And in the latter case, the memmoves associated with internal deletions will dominate the performance in all implementations.
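
For a concrete sense of the memmove cost under discussion, here is a rough timing sketch (not part of the thread itself) comparing pop(0), an internal deletion, and pop() from the end on a stock CPython list. The absolute numbers vary by machine; the point is that front pops and internal deletions shift all the trailing items, while end pops do not.

# Rough timing sketch (not from the thread): on a stock CPython list,
# pop(0) and internal deletions memmove the trailing items, so their
# cost grows with list length; pop() from the end does not.
from timeit import timeit

for size in (1_000, 10_000, 100_000):
    reps = size - 1   # leave one element so no pop ever fails
    front  = timeit('lst.pop(0)', setup=f'lst = list(range({size}))', number=reps)
    middle = timeit('del lst[len(lst) // 2]', setup=f'lst = list(range({size}))', number=reps)
    back   = timeit('lst.pop()', setup=f'lst = list(range({size}))', number=reps)
    print(f'{size:>7} items:  pop(0) {front:.4f}s   del middle {middle:.4f}s   pop() {back:.4f}s')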
[Timbot]
Neither case holds for a queue; the question that started this was whether to provide a fancy queue implementation in C, and (of course) Guido would rather see Python lists efficiently usable for that purpose too. I suspected that might not be a realistic hope, and now I'm starting to *think* it too <wink>.
It needn't be especially fancy. Here is what I had in mind:

n = 50

class fifo(object):

    def __init__(self):
        # block[0:n] is for data and block[n] is a link field
        self.head = self.tail = [None] * (n+1)
        self.headptr = self.tailptr = 0

    def push(self, x):
        if self.headptr == n:
            newblock = [None] * (n+1)
            self.head[n] = newblock
            self.head = newblock
            self.headptr = 0
        self.head[self.headptr] = x
        self.headptr += 1

    def pop(self):
        if self.tail is self.head and self.tailptr >= self.headptr:
            raise IndexError
        if self.tailptr == n:
            self.tail = self.tail[n]
            self.tailptr = 0
        x = self.tail[self.tailptr]
        self.tailptr += 1
        return x

The memory manager is called once for every 50 queue insertions and memory is freed after every 50 pops. Outside of that, every push and pop is just a fast array access and pointer increment. Long queues incur about a 15-20% space overhead as compared to a straight-list implementation. Speedwise, this beats the socks off of the current approach.

Raymond Hettinger
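
As a quick sanity check (not part of the original message), the class above can be exercised like this: pushes spanning several 50-slot blocks come back out in FIFO order, and popping an empty queue raises IndexError. Following each exhausted block's link field in pop() is what lets whole blocks be released back to the allocator once their 50 items have been consumed.

# Usage sketch for the fifo class above (illustrative, assumes the
# definition from Raymond's message is already in scope).
q = fifo()
for i in range(130):                # spans three 50-slot blocks
    q.push(i)
assert [q.pop() for _ in range(130)] == list(range(130))

try:
    q.pop()                         # queue is now empty
except IndexError:
    print('empty queue raises IndexError, as expected')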