list.pop(0) vs. collections.deque

Steve Howell showell30 at
Sat Jan 23 00:27:57 CET 2010

On Jan 22, 3:17 pm, Christian Heimes <li... at> wrote:
> Steve Howell wrote:
> > That maybe would be an argument for just striking the paragraph from
> > the tutorial.  If it's rare that people pop the head off the list in
> > code that is performance critical or prominent, why bother to even
> > mention it in the tutorial?
> How else are you going to teach new Python developers that they should
> favor append() and pop() in place of insert() and pop(0)? :)
> > Impossible is a strong word.  You could be lazy about giving the
> > memory back, and just wait till the whole list is garbage collected.
> > I don't think there's anything in Python's contract that says memory
> > has to be freed the exact moment you stop using it, especially since
> > we're talking about doing an O(N) memmove just to free up one
> > pointer's worth of memory.
> Your proposal would increase the size of every list object by
> sizeof(ptr) or ssize_t (32 or 64 bits) in order to store the information
> about where the first element is. It would also unnecessarily complicate
> the code and possibly slow down a lot of list operations.
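For readers following the thread: the tutorial's advice amounts to reaching for collections.deque whenever you need fast pops from the front. A minimal illustration:

```python
from collections import deque

# A deque supports O(1) appends and pops at both ends;
# list.pop(0) must instead shift every remaining element left.
queue = deque(["a", "b", "c"])
queue.append("d")        # enqueue on the right
first = queue.popleft()  # dequeue from the left in O(1)
print(first, list(queue))  # a ['b', 'c', 'd']
```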

64 bits per list is negligible.

Adding a check for (ilow == 0) is even more negligible.

You are not "unnecessarily" complicating code for something as widely
used as Python lists if it achieves the desired benefit at minimal
cost.  You are complicating the code, but not "unnecessarily."
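The trade-off being debated can be sketched in pure Python. (The actual proposal concerns CPython's C internals in listobject.c; the `LazyList` class below is a hypothetical illustration of the "keep a head offset, compact lazily" idea, not code from any real patch.)

```python
class LazyList:
    """Hypothetical sketch: instead of memmoving on every pop(0),
    track a head offset and compact only occasionally, so the
    O(n) move is amortized over many O(1) pops."""

    def __init__(self, items=()):
        self._items = list(items)
        self._head = 0  # index of the logical first element

    def pop0(self):
        item = self._items[self._head]
        self._items[self._head] = None  # drop our reference so the object can be collected
        self._head += 1
        # Compact once the dead prefix dominates, amortizing the O(n) move.
        if self._head > len(self._items) // 2:
            del self._items[:self._head]
            self._head = 0
        return item

    def __len__(self):
        return len(self._items) - self._head
```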

> > I know the Python programmer could simply iterate through the list
> > rather than popping off unused elements, but that just means that you
> > not only tie up the memory for the pointers just as long, but you also
> > prevent the objects themselves from being garbage collected.
> > In my case I'm not really concerned about giving the memory back right
> > away, it's more about keeping my code simple.  Once I'm done with an
> > element, I do just want to pop it off and keep all the simplicity of
> > the lists, but I am concerned enough about speed that, for a 1000-
> > element list, I don't want to be doing 1000 memmoves for an average of
> > 500 pointers, which effectively moves about a million bytes around for
> > no reason.
> The simplicity of Python is gained with some performance drawbacks. You
> have to learn to use Python algorithms. You can't simply reimplement a
> fast C algorithm and expect it to be fast in Python, too.

I actually do expect Python to solve performance problems for me that
are more easily solved in core than in Python itself.  So I guess
that's where we differ.
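The disagreement is easy to quantify. A rough timeit comparison of draining a 1000-element container from the front (absolute numbers vary by machine; the point is the asymptotic gap):

```python
import timeit

# Drain 1000 elements from the front, 200 times each.
# list.pop(0) is O(n) per pop; deque.popleft() is O(1).
list_time = timeit.timeit(
    "xs = list(range(1000))\nwhile xs: xs.pop(0)",
    number=200,
)
deque_time = timeit.timeit(
    "xs = deque(range(1000))\nwhile xs: xs.popleft()",
    setup="from collections import deque",
    number=200,
)
print(f"list.pop(0): {list_time:.4f}s  deque.popleft(): {deque_time:.4f}s")
```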

More information about the Python-list mailing list