numpy performance and random numbers
g.bogle at auckland.no.spam.ac.nz
Tue Dec 22 00:22:52 CET 2009
> On 19 Des, 14:06, Carl Johan Rehn <car... at gmail.com> wrote:
>> Matlab and numpy have (by chance?) the exact names for the same
> Common ancestry: NumPy and Matlab borrowed the name from IDL.
> LabView, Octave and SciLab use the name randn as well.
>> So the basic question is, how can I speed up random number
> The obvious thing would be to compile ziggurat yourself, and turn on
> optimization flags for your hardware.
> P.S. Be careful if you consider using more than one processor.
> Multithreading is a very difficult issue with PRNGs, because it is
> difficult to guarantee that the streams are truly independent. You can,
> however, use a producer-consumer pattern: one thread constantly produces
> random numbers (writing into a buffer or pipe) and one or more other
> threads consume them.
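The producer-consumer idea quoted above can be sketched in a few lines of Python. This is just an illustration of the pattern, not the ziggurat code under discussion; the block sizes, seed, and queue depth are arbitrary choices, and it uses NumPy's stock normal generator rather than a compiled ziggurat.

```python
import threading
import queue
import numpy as np

def producer(q, block=100_000, n_blocks=10):
    """Generate blocks of normal variates and push them into the buffer."""
    rng = np.random.default_rng(12345)
    for _ in range(n_blocks):
        q.put(rng.standard_normal(block))
    q.put(None)  # sentinel: tells the consumer no more data is coming

def consume(q):
    """Drain the buffer, accumulating a simple statistic as 'work'."""
    total = 0.0
    count = 0
    while True:
        block = q.get()
        if block is None:
            break
        total += block.sum()
        count += block.size
    return total, count

# A bounded queue acts as the buffer between the two threads: the
# producer blocks when it gets too far ahead of the consumer.
q = queue.Queue(maxsize=4)
t = threading.Thread(target=producer, args=(q,))
t.start()
total, count = consume(q)
t.join()
```

Handing over whole blocks rather than single numbers matters here: per-item queue traffic would swamp any speedup from generating in a separate thread.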
In case you're interested, I've made a multi-thread version of ziggurat
(actually in Fortran for use with OpenMP). It is a simple enough mod: there is
an additional argument (the thread number) in each call to the ziggurat
functions, and ziggurat maintains separate state variables for each thread (not
just the seed).
There was one non-trivial issue. To avoid cache collisions, the seed values (an
array corresponding to jsr in the original code) need to be spaced sufficiently
far apart. Without this measure the performance was disappointingly slow.
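To make the padding point concrete, here is a small Python/NumPy sketch of the same idea (the actual mod described above is Fortran/OpenMP). Each thread's jsr seed is given a full row of 16 uint32 words, which assumes a 64-byte cache line, so two threads never update words in the same line (false sharing). The update step is Marsaglia's SHR3 xorshift, the generator ziggurat uses for its seeds; the thread count and seed values are arbitrary.

```python
import numpy as np

NTHREADS = 4
PAD = 16  # 16 x 4-byte words = 64 bytes, i.e. one cache line per thread
MASK = 0xFFFFFFFF  # keep arithmetic in 32 bits

# Only column 0 of each row holds the jsr seed; the rest is padding so
# that each thread's state occupies its own cache line.
state = np.zeros((NTHREADS, PAD), dtype=np.uint32)
state[:, 0] = [123456789, 362436069, 521288629, 88675123]

def shr3(thread):
    """One SHR3 xorshift step on the given thread's private seed."""
    jsr = int(state[thread, 0])
    jsr ^= (jsr << 13) & MASK
    jsr ^= jsr >> 17
    jsr ^= (jsr << 5) & MASK
    state[thread, 0] = jsr
    return jsr
```

Without the padding (PAD = 1), the four seeds would sit in the same cache line, and every update by one thread would invalidate the line in the other cores' caches, which is exactly the slowdown described above.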
I agree with your recommendation: ziggurat is really fast.