Hi,
I'm trying to run mpi4py on my 4 machines, but I need a parallelized version of Python. I tried to compile one with Python 2.5 and mpich2, but mpich2 won't let me build the dynamic/shared libraries it needs, and trying with the static ones produces a lot of header errors from both. Is yt-trunk capable of running Python in parallel?
Without a parallel Python, I run mpdboot -n 4 and then, in an interactive python session:

import MPI
rank, size = MPI.COMM_WORLD.rank, MPI.COMM_WORLD.size
print 'Hello World! I am process', rank, 'of', size
Hello World! I am process 0 of 1

That reports only 1 process, not 4, and mpirun -np 4 python just hangs. mpi4py is installed on all 4 nodes.
Thanks.
R.Soares
Hi!
yt-trunk is now parallelized. Not all tasks work in parallel, but projections, profiles (if done in 'lazy' mode), and halo finding (if you use the SS_HopOutput module) are parallelized. Slices are almost done, and the new covering grid will be. It's not documented yet, but those tasks should all run in parallel. We will be rolling out a 1.5 release relatively soon -- likely shortly after I defend my thesis in April -- that will have documentation and so forth.
I'm surprised you can't compile against the mpich libraries in a shared fashion. Unfortunately, I'm not an expert on MPI implementations, so I can't quite help out there. In my personal experience with OpenMPI, I have not needed static linking except when running on some form of Linux without a loader -- the previous discussion about this was related to Kraken, which runs a Cray-specific form of Linux called "Compute Node Linux." I don't actually know offhand (anybody else?) of any non-Cray supercomputing machines out there that require static linking as opposed to a standard installation of Python. (I'm sure they exist, I just don't know of them!)
As for the second part: to actually get multiple MPI processes, you usually have to launch the executable via mpirun. (On other MPI implementations, the launcher could be called something different.) One option for this -- if you're running off trunk -- would be to do something like:
mpirun -np 4 python my_script.py --parallel
where the file my_script.py has something like:
--
from yt.mods import *
pf = EnzoStaticOutput("my_output")
pc = PlotCollection(pf, center=[0.5,0.5,0.5])
pc.add_projection("Density", 0)
--
The projection would be executed in parallel, in this case. (There is a command line interface called 'yt' that also works in parallel, but it's still a bit in flux.) You can't just run "python" because of the way the stdin and stdout streams work; you have to supply a script, so that it can proceed without input from the user. (IPython's parallel fanciness notwithstanding, which we do not use in yt.)
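Incidentally, the same goes for your mpi4py hello-world test: it needs to live in a script launched under mpirun, not be typed into an interactive interpreter. A minimal sketch, assuming the standard mpi4py import (the file name hello.py is just a placeholder, and a bare "import MPI" may or may not work depending on how mpi4py was installed):
--
# hello.py -- each MPI process reports its rank and the communicator size
from mpi4py import MPI

comm = MPI.COMM_WORLD
print 'Hello World! I am process', comm.rank, 'of', comm.size
--
run with something like
mpirun -np 4 python hello.py
which should print four lines, one per rank, if the MPI setup itself is working.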
But, keep in mind, running "mpirun -np 4" by itself, without setting up a means of distributing tasks (usually via a host list of some kind), will run all four processes on the current machine. I am, unfortunately, not really qualified to speak to setting up MPI implementations. But please do let us know if you have problems with the yt aspects of this!
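For what it's worth -- and someone who actually administers MPI should correct me -- with mpich2's mpd launcher that usually means listing your four machines, one per line, in a file such as mpd.hosts (the node names below are made up):
--
node01
node02
node03
node04
--
and then booting the ring and launching with something like
mpdboot -n 4 -f mpd.hosts
mpirun -np 4 python my_script.py --parallel
The exact option names vary between MPI implementations, so treat this as a rough sketch.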
-Matt
On Thu, Feb 12, 2009 at 6:59 PM, rsoares dlleuz@xmission.com wrote:
Hi again,
I just realized that I should say a couple important caveats --
-Matt
On Thu, Feb 12, 2009 at 7:12 PM, Matthew Turk matthewturk@gmail.com wrote:
I recommend using OpenMPI. I have been able to build OpenMPI on multiple platforms and then build mpi4py against it without any customization. As Matt has said, though, you won't see any benefit from running in parallel until your simulations are at least 256^3 cells.
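For reference, the sequence I have in mind is roughly the following (the prefix paths are just placeholders; the --mpicc option is the one mpi4py's install instructions describe for pointing the build at a particular compiler wrapper):
--
./configure --prefix=$HOME/local/openmpi
make
make install
--
and then, from the mpi4py source directory:
--
python setup.py build --mpicc=$HOME/local/openmpi/bin/mpicc
python setup.py install --prefix=$HOME/local
--
Just make sure that the same OpenMPI's mpirun and libraries come first in your PATH and LD_LIBRARY_PATH on all four machines.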
On Thu, Feb 12, 2009 at 8:16 PM, Matthew Turk matthewturk@gmail.com wrote: