On Dec 4, 2007 3:05 AM, David Cournapeau <david@ar.media.kyoto-u.ac.jp> wrote:
Gael Varoquaux wrote:
> On Tue, Dec 04, 2007 at 02:13:53PM +0900, David Cournapeau wrote:
>
>> With recent kernels, you can get really good latency if you do it right
>> (around 1-2 ms worst case under high load, including high IO pressure).
>>
>
> As you can see on my page, I indeed measured less than 1 ms latency on
> Linux under load, with a kernel more than a year old. These things have
> gotten much better recently, and with a preemptible kernel you should be
> able to get 1 ms easily. Going below 0.5 ms without using a realtime OS
> (i.e. a realtime kernel, under Linux) is really pushing it.
>
Yes, 1 ms has been possible for quite a long time; the problem was how to
get there (kernel patches, special permissions, etc. Many of those
problems are now gone). I've read that you can get around 0.2 ms and
even below (worst case) with the latest kernels plus RT preempt (that is,
you still use Linux, not RTLinux). Below 1 ms does not make much sense
for audio applications, so I don't know much about that range :)

But I am really curious whether you can get those numbers with Python,
because of malloc, the GC and co. I mean, for example, 0.5 ms of latency
on a 1 GHz CPU means a budget of something like 500,000 CPU cycles, and
I can imagine a garbage collection cycle taking that many cycles,
without even considering pages of virtual memory that have been swapped
out (in that case, we are talking millions of cycles).
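A quick back-of-envelope sketch of that cycle budget (the 1 GHz clock and
the 0.5 ms deadline are the figures from the paragraph above):

```python
# Latency budget in CPU cycles: deadline (seconds) times clock rate (Hz).
cpu_hz = 1_000_000_000   # 1 GHz clock, as assumed above
deadline_s = 0.5e-3      # 0.5 ms latency target

cycles_budget = int(cpu_hz * deadline_s)
print(cycles_budget)     # 500000
```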

If the garbage collector is causing a slowdown, it is possible to turn it
off. Then you have to be careful to break reference cycles manually;
non-cyclic garbage will still get picked up by reference counting, so you
can ignore that. Figuring out references in the context of numpy might be
a little tricky, given that views imply references, but it's probably not
impossible.
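A minimal sketch of what that looks like with the standard `gc` module
(the `Node` class is just an illustrative stand-in for any objects that
form a cycle):

```python
import gc

# Turn off the cyclic garbage collector to avoid collector pauses.
# Reference counting keeps working and frees acyclic objects immediately.
gc.disable()

# Acyclic garbage is reclaimed by refcounting the moment the last
# reference disappears -- no collector pass needed.
data = [0.0] * 1000
data = None  # freed right here

# Cyclic garbage is NOT reclaimed while the collector is off, so cycles
# must be broken by hand (or avoided, e.g. via the weakref module).
class Node:
    def __init__(self):
        self.other = None

a, b = Node(), Node()
a.other, b.other = b, a   # create a reference cycle
a.other = None            # break the cycle manually
a = b = None              # now refcounting can free both nodes

# At a non-time-critical moment, a manual pass can sweep any leftovers;
# gc.collect() returns the number of unreachable objects it found.
leftover = gc.collect()
```

In a realtime loop one would call `gc.collect()` only between deadlines,
if at all.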

-tim

cheers,

David
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion



--
.  __
.   |-\
.
.  tim.hochberg@ieee.org