[Numpy-discussion] Double-ended queues

Will Furnass will at thearete.co.uk
Fri Nov 9 12:16:46 EST 2012

On Tue, 25 Sep 2012 07:23:54 -0600, Charles R Harris wrote:

> On Tue, Sep 25, 2012 at 6:50 AM, Nathaniel Smith <njs at pobox.com> wrote:
>> On Tue, Sep 25, 2012 at 12:31 PM, Sturla Molden <sturla at molden.no>
>> wrote:
>> > On 25.09.2012 11:38, Nathaniel Smith wrote:
>> >
>> >> Implementing a ring buffer on top of ndarray would be pretty
>> >> straightforward and probably work better than a linked-list
>> >> implementation.
>> >
>> > Amazingly, many do not know that a ringbuffer is simply an array
>> > indexed modulus its length:
>> >
>> > foo = np.zeros(n)
>> > i = 0
>> > while 1:
>> >     foo[i % n]  # access ringbuffer
>> >     i += 1
>> Good trick, but to be reliable I think you need to either be willing
>> for i to overflow into a long (arbitrary width) integer, or else make
>> sure that i is an unsigned integer and that n is 2**k where k <=
>> sizeof(i)? Just doing i %= n on each pass through the loop might be
>> less error-prone.
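
The in-place wrap suggested above can be sketched directly; a minimal illustration (the buffer size and the values written are arbitrary):

```python
import numpy as np

n = 8
buf = np.zeros(n)
i = 0
for value in range(20):
    buf[i] = value   # write into the current slot
    i += 1
    i %= n           # wrap in place; i never exceeds n - 1, so no overflow
```

Because `i` is reduced modulo `n` on every pass, it stays bounded regardless of how long the loop runs, which avoids the integer-overflow concern entirely.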
>> > Also, instead of writing a linked list, consider collections.deque.
>> > A deque is by definition a double-ended queue. It is just waste of
>> > time to implement a deque (double-ended queue) and hope it will
>> > perform better than Python's standard lib collections.deque object.
>> The original poster is using collections.deque now, but wants a version
>> that supports efficient vectorized operations.
> The C++ stdlib has an efficient deque object, but it moves through
> memory.
> Hmm, it wouldn't be easy to make that work with numpy arrays what with
> views and all.
> Efficient circular lists are often implemented using powers of two so
> that modulo indexing can be done using a mask.
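
The power-of-two masking mentioned above is easy to check; a small sketch (the variable names are illustrative):

```python
# Assumes n is a power of two: in that case i % n == i & (n - 1),
# which replaces an integer division with a single bitwise AND.
n = 16
mask = n - 1
wrapped = [i & mask for i in range(40)]
```

The identity holds only when `n` is a power of two, which is why circular buffers are often sized that way.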

What is meant by 'moves through memory'?  Are the reasons for avoiding 
creating ndarray/ufunc-like interfaces to Cython-wrapped STL objects to 
do with the complexity of coding such interfaces or with the efficiency 
of the result?

Given that I'm simply wanting to go from

for t in timestep_arr:
    for i in range(len(deque1)):
        deque2[i] += some_stuff[t] * deque1[i] / some_value
possibly_push_front(deque1, deque2)
possibly_pop_back(deque1, deque2)

to

for t in timestep_arr:
    deque2 += some_stuff[t] * deque1 / some_value
possibly_push_front(deque1, deque2)
possibly_pop_back(deque1, deque2)

would it make sense to use a Cython-wrapped STL deque class for deque1 
and deque2, but also give them __add__, __div__ and __mul__ methods to 
provide ufunc-like capabilities?
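
One way such a wrapper might look, as a rough sketch only: the RingDeque class below is hypothetical (it is not an existing NumPy or Cython API), backing a fixed-capacity double-ended queue with an ndarray so the contents can be pulled out for vectorized arithmetic.

```python
import numpy as np

class RingDeque:
    """Sketch of a fixed-capacity deque backed by an ndarray.

    Hypothetical illustration only: just enough of the interface
    (push_front, pop_back, and an ordered view) to show how ndarray
    storage could support vectorized operations on a deque.
    """

    def __init__(self, capacity):
        self._buf = np.zeros(capacity)
        self._head = 0   # index of the front element
        self._len = 0

    def push_front(self, value):
        if self._len == len(self._buf):
            raise IndexError("deque is full")
        self._head = (self._head - 1) % len(self._buf)
        self._buf[self._head] = value
        self._len += 1

    def pop_back(self):
        if self._len == 0:
            raise IndexError("deque is empty")
        tail = (self._head + self._len - 1) % len(self._buf)
        self._len -= 1
        return self._buf[tail]

    def view(self):
        """Contents in front-to-back order (fancy indexing copies)."""
        idx = (self._head + np.arange(self._len)) % len(self._buf)
        return self._buf[idx]
```

With something like this, the inner loop could become an ndarray expression over `view()` arrays, though note that fancy indexing returns a copy, so updates would need to be written back (or done via fancy-index assignment on the underlying buffer).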

Apologies if I'm missing something fairly obvious - I'm fairly new to 
NumPy.


Will Furnass
