[IPython-dev] Performance sanity check: 7.21s to scatter 5000X1000 float array to 7 engines

Fernando Perez fperez.net at gmail.com
Sun Jan 13 01:53:17 EST 2008


Hi Anand,

On Jan 12, 2008 4:57 PM, Anand Patil <anand.prabhakar.patil at gmail.com> wrote:

> When I did get mpi4py up and running, I ran this script:
>
> from numpy import *
> C=ones((100,10),dtype=float)
>
> from ipython1 import *
> import ipython1.kernel.api as kernel
> rc = kernel.RemoteController(('127.0.0.1',10105))
> rc.resetAll()
> rc.executeAll('from mpi4py import MPI as mpi')
> rc.executeAll('from numpy import *')
> rc.push(0,C=C)
> rc.execute(0,'mpi.COMM_WORLD.Send(C,1)')
>
> When C was 10 by 10, the last line sent it like a champ, but when C
> was 10 by 100 or larger the ipengines hung up altogether, I had to
> kill them with KILL (though mysteriously the log says they received
> TERM). Most of the log follows this email, I lost the top bit because
> I was running them in a screen.

Well, that code is unfortunately incorrect MPI: you have a send
without a matching receive, so it's no surprise (and has nothing to do
with ipython or mpi4py) that it got wedged.
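
The key to unwedging it is that the matching receive has to be posted on
the other engine at the same time as the send.  One way to get that is to
push rank-conditional code to all engines in a single call, so both sides
are running at once.  A rough, untested sketch (the Recv call is a guess at
the pickle-based API of your mpi4py release, so double-check its signature):

    rc.push(0, C=C)   # assumes, like your script, that engine 0 has MPI rank 0
    # Both branches go out in one executeAll, so the send on engine 0 and the
    # receive on engine 1 can actually match; a Send of anything but a tiny
    # message won't complete until the matching receive is posted.
    code = '\n'.join([
        "if mpi.COMM_WORLD.rank == 0:",
        "    mpi.COMM_WORLD.Send(C, 1)",
        "elif mpi.COMM_WORLD.rank == 1:",
        "    C = mpi.COMM_WORLD.Recv(0)",  # check the Recv signature in your mpi4py docs
    ])
    rc.executeAll(code)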

Here's a simple example of a session doing a trivial scatter:

In [1]: import mpi4py.MPI as mpi
[tlon:28505] mca: base: component_find: unable to open osc pt2pt: file
not found (ignored)

In [2]: import ipython1.kernel.api as kernel

In [3]: rc = kernel.RemoteController(('127.0.0.1',10105))

In [4]: rc.getIDs()
Out[4]: [0, 1, 2, 3]

In [5]: rc.activate
------> rc.activate()

In [6]: autopx
Auto Parallel Enabled
Type %autopx to disable

In [7]: import mpi4py.MPI as MPI
<Results List>
[0] In [1]: import mpi4py.MPI as MPI
[1] In [1]: import mpi4py.MPI as MPI
[2] In [1]: import mpi4py.MPI as MPI
[3] In [1]: import mpi4py.MPI as MPI


In [8]: rank,size = MPI.COMM_WORLD.rank, MPI.COMM_WORLD.size
<Results List>
[0] In [2]: rank,size = MPI.COMM_WORLD.rank, MPI.COMM_WORLD.size
[1] In [2]: rank,size = MPI.COMM_WORLD.rank, MPI.COMM_WORLD.size
[2] In [2]: rank,size = MPI.COMM_WORLD.rank, MPI.COMM_WORLD.size
[3] In [2]: rank,size = MPI.COMM_WORLD.rank, MPI.COMM_WORLD.size

In [9]: root = size//2
<Results List>
[0] In [3]: root = size//2
[1] In [3]: root = size//2
[2] In [3]: root = size//2
[3] In [3]: root = size//2

In [21]: sendbuf=None
<Results List>
[0] In [14]: sendbuf=None
[1] In [14]: sendbuf=None
[2] In [14]: sendbuf=None
[3] In [14]: sendbuf=None


In [22]: if rank==root:
    sendbuf = np.random.rand(size,10)
   ....:
<Results List>
[0] In [15]: if rank==root:
    sendbuf = np.random.rand(size,10)

[1] In [15]: if rank==root:
    sendbuf = np.random.rand(size,10)

[2] In [15]: if rank==root:
    sendbuf = np.random.rand(size,10)

[3] In [15]: if rank==root:
    sendbuf = np.random.rand(size,10)



In [24]: print sendbuf
<Results List>
[0] In [16]: print sendbuf
[0] Out[16]: [[ 0.59273955  0.91941819  0.18332745  0.50236023  0.23186433  0.40603873
   0.60863188  0.79174977  0.41726042  0.25742354]
 [ 0.45079829  0.74095739  0.50687041  0.45614561  0.80468414  0.82021551
   0.78716086  0.93041007  0.02055786  0.39692305]
 [ 0.6522603   0.38565446  0.00305974  0.4883121   0.91963356  0.93035331
   0.16671677  0.695877    0.88859014  0.01461159]
 [ 0.56898343  0.13195333  0.56278637  0.70708685  0.65832335  0.26670947
   0.17980937  0.34002591  0.36724169  0.36621309]]

[1] In [16]: print sendbuf
[1] Out[16]: None

[2] In [16]: print sendbuf
[2] Out[16]: None

[3] In [16]: print sendbuf
[3] Out[16]: None



In [25]: recvbuf = p.Scatter(sendbuf)
<Results List>
[0] In [17]: recvbuf = p.Scatter(sendbuf)
[1] In [17]: recvbuf = p.Scatter(sendbuf)
[2] In [17]: recvbuf = p.Scatter(sendbuf)
[3] In [17]: recvbuf = p.Scatter(sendbuf)


In [26]: print rank, recvbuf
<Results List>
[0] In [18]: print rank, recvbuf
[0] Out[18]: 2 [ 0.6522603   0.38565446  0.00305974  0.4883121   0.91963356  0.93035331
  0.16671677  0.695877    0.88859014  0.01461159]

[1] In [18]: print rank, recvbuf
[1] Out[18]: 0 [ 0.59273955  0.91941819  0.18332745  0.50236023  0.23186433  0.40603873
  0.60863188  0.79174977  0.41726042  0.25742354]

[2] In [18]: print rank, recvbuf
[2] Out[18]: 3 [ 0.56898343  0.13195333  0.56278637  0.70708685  0.65832335  0.26670947
  0.17980937  0.34002591  0.36724169  0.36621309]

[3] In [18]: print rank, recvbuf
[3] Out[18]: 1 [ 0.45079829  0.74095739  0.50687041  0.45614561  0.80468414  0.82021551
  0.78716086  0.93041007  0.02055786  0.39692305]


The above is a slightly modified version of one of the mpi4py tests.
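
In case it's useful, here is roughly the same scatter as a self-contained
script.  Note that the session above assumes numpy was already imported as
np on the engines and that p is a communicator object set up before the
excerpt starts; also, this sketch uses the buffer-style
Scatter(sendbuf, recvbuf, root=...) form, which may or may not match the
mpi4py version you have installed:

    # scatter_demo.py -- run with something like: mpiexec -n 4 python scatter_demo.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.rank, comm.size
    root = size // 2

    sendbuf = None
    if rank == root:
        # one row of 10 floats per process
        sendbuf = np.random.rand(size, 10)

    # each process gets one row back
    recvbuf = np.empty(10, dtype='d')
    comm.Scatter(sendbuf, recvbuf, root=root)
    print rank, recvbuf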

MPI isn't the most pleasant API to work with, so if you want to use it I'd
recommend reading one of the MPI tutorials on the net; I'm pretty sure it
will save you a lot of time.

Cheers,

f
