On Thu, Jan 22, 2009 at 10:26 AM, Sturla Molden email@example.com wrote:
On 1/22/2009 3:17 PM, Jesse Noller wrote:
Interesting that you bring this up - while I'm not in the know about OpenMP, I have been sketching out some improvements to threading and multiprocessing that follow similar thinking.
Here is a toy example of what I have in mind. Say you want to compute the DFT of some signal (real apps would use an FFT in C for this, but never mind). In Python, an O(n**2) algorithm would look something like this:
    from math import pi, cos, sin

    def real_dft(x):
        ''' DFT for a real valued sequence x '''
        r = []
        N = len(x)
        M = N//2 + 1 if N % 2 else N//2
        for n in range(M):
            s = 0j
            for k in range(N):
                tmp = 2*pi*k*n/N
                s += x[k] * (cos(tmp) - 1j*sin(tmp))
            r.append(s)
        return r
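For what it's worth, the sequential version runs as-is once the math imports are in place. A quick sanity check, using the fact that a unit impulse has a flat spectrum (every bin equal to 1):

```python
from math import pi, cos, sin

def real_dft(x):
    # O(n**2) DFT of a real valued sequence, as in the listing above
    r = []
    N = len(x)
    M = N//2 + 1 if N % 2 else N//2
    for n in range(M):
        s = 0j
        for k in range(N):
            tmp = 2*pi*k*n/N
            s += x[k] * (cos(tmp) - 1j*sin(tmp))
        r.append(s)
    return r

# A unit impulse has a flat spectrum: every bin should be 1.
bins = real_dft([1.0, 0.0, 0.0, 0.0])
```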
Then, one could 'magically' transform this algorithm into a parallel one simply by inserting directives from the 'pymp' module:
    def real_dft(x):
        ''' DFT for a real valued sequence x, parallelized '''
        r = []
        N = len(x)
        M = N//2 + 1 if N % 2 else N//2
        with Pool() as pool:
            for n in pool.parallel(range(M)):
                s = 0j
                for k in range(N):
                    tmp = 2*pi*k*n/N
                    s += x[k] * (cos(tmp) - 1j*sin(tmp))
                with pool.ordered():
                    r.append(s)
        return r
The idea is that 'parallelizing' a sequential algorithm like this is much easier than writing a parallel one from scratch using the abstractions in threading or multiprocessing.
Interesting - this is a slightly more extreme set of changes than I was thinking of. A good way to approach this would be not as a patch against python-core (unless you find bugs), but rather as a separate package hosted outside of core and posted to PyPI. I think it has merit - but it would need more use/eyeballs than just a few of us.