Re: [yt-users] mpi4py and yt

Here's how it works. mpi4py is a module like any other: you build it with the same python installation that you built all the other modules with, via python setup.py build and python setup.py install. In order for that to work, you need some MPI libraries installed. As I said, I prefer openmpi for this because it was the easiest for me to install and to build mpi4py against. Before you do the build and install in the mpi4py directory, you'll need to edit the .cfg file (I can't remember exactly what it's called) so that the installation has the proper paths to your MPI install.
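Roughly, the sequence looks like this (a sketch only; the openmpi path is just a placeholder for wherever your MPI actually lives):

cd mpi4py-<version>
# edit the .cfg file shipped with mpi4py so it points at your MPI install,
# e.g. something under /usr/local/openmpi
python setup.py build
python setup.py install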
When you've got mpi4py properly built, you will be able to run some yt operations in parallel in the following manner.
1. Whatever you want to do needs to be in some python script. As far as I know, you can't run in parallel by entering lines directly into the interpreter. Here's an example:
### Start
from yt.mods import *
from yt.config import ytcfg

pf = EnzoStaticOutput("EnzoRuns/cool_core_rediculous/DD0252/DD0252")
pc = PlotCollection(pf, center=[0.5,0.5,0.5])
pc.add_projection("Density", 0)

if ytcfg.getint("yt", "__parallel_rank") == 0:
    pc.save("DD0252")
### End

That if statement at the end ensures that the final image save is done by the root process only. The nice thing is that this script can be run in exactly the same form in serial, too.
2. Let's say this script is called proj.py. You'll run it like this:

mpirun -np 4 python proj.py --parallel
If you don't include the --parallel, you'll see 4 instances of your proj.py script running separately, each one doing the entire projection rather than working together.
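A quick way to confirm the processes really are cooperating is to print the same ytcfg option the script above uses (just a sketch; without --parallel I'd expect every copy to report rank 0):

### Start
from yt.config import ytcfg
# each cooperating process should print a distinct rank from 0 to N-1;
# if every copy prints 0, they are running independently
print "I am parallel rank", ytcfg.getint("yt", "__parallel_rank")
### End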
Hope that helps,
Britton
On Fri, Feb 13, 2009 at 11:15 PM, rsoares dlleuz@xmission.com wrote:
Which Python do you parallelize to install mpi4py into - or do you build/use mpi4py without Python, and then how?
R.Soares
Britton Smith wrote:
I recommend using openmpi. I have been able to build openmpi on multiple platforms and then build mpi4py with it without any customization. As Matt has said, though, you won't see any benefit from running in parallel until your simulations are at least 256^3 cells.
On Thu, Feb 12, 2009 at 8:16 PM, Matthew Turk <matthewturk@gmail.com> wrote:
Hi again,
I just realized that I should say a couple important caveats --
1. We haven't released 'yt-trunk' as 1.5 yet because it's not quite done or stable. It's going well, and many people use it for production-quality work, but it's not really stamped-and-completed.
2. I should *also* note that you won't really get a lot out of parallel yt unless you have relatively large datasets or relatively large amounts of computation on each cell while creating a derived field. It might end up being a bit more work than you're looking for, if you just want to get some plots out quickly.
-Matt
On Thu, Feb 12, 2009 at 7:12 PM, Matthew Turk <matthewturk@gmail.com> wrote:
Hi!
yt-trunk is now parallelized. Not all tasks work in parallel, but projections, profiles (if done in 'lazy' mode) and halo finding (if you use the SS_HopOutput module) are now parallelized. Slices are almost done, and the new covering grid will be. It's not documented, but those tasks should all run in parallel. We will be rolling out a 1.5 release relatively soon, likely shortly after I defend my thesis in April, that will have documentation and so forth.
I'm surprised you can't compile against the mpich libraries in a shared fashion. Unfortunately, I'm not an expert on MPI implementations, so I can't quite help out there. In my personal experience, using OpenMPI, I haven't needed to, except when running on some form of linux without a loader -- the previous discussion about this was related to Kraken, which runs a Cray-specific form of linux called "Compute Node Linux." I don't actually know offhand (anybody else?) of any non-Cray machines at supercomputing centers out there that require static linking as opposed to a standard installation of Python. (I'm sure they do, I just don't know of them!)
As for the second part, usually when instantiating you have to run the executable via mpirun. (On other MPI implementations, this could be something different.) One option for this -- if you're running off trunk -- would be to do something like:
mpirun -np 4 python my_script.py --parallel
where the file my_script.py has something like:
--
from yt.mods import *
pf = EnzoStaticOutput("my_output")
pc = PlotCollection(pf, center=[0.5,0.5,0.5])
pc.add_projection("Density",0)
pc.save("hi_there")
--
The projection would be executed in parallel, in this case. (There is a command line interface called 'yt' that also works in parallel, but it's still a bit in flux.) You can't just run "python" because of the way the stdin and stdout streams work; you have to supply a script, so that it can proceed without input from the user. (IPython's parallel fanciness notwithstanding, which we do not use in yt.)
But keep in mind, running "mpirun -np 4" by itself, without setting up a means of distributing tasks (usually via a tasklist) will run them all on the current machine. I am, unfortunately, not really qualified to speak to setting up MPI implementations. But please do let us know if you have problems with the yt aspects of this!
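(As a sanity check once mpi4py is built, a tiny script along these lines, run through mpirun, should report distinct ranks; if every process says "0 of 1", the most likely cause is that the bindings were built against a different MPI than the mpirun you're invoking. The "from mpi4py import MPI" spelling is the usual one; a bare "import MPI" depends on how mpi4py was installed.)

--
from mpi4py import MPI
# COMM_WORLD spans all the processes started by mpirun
comm = MPI.COMM_WORLD
print 'Hello World! I am process', comm.rank, 'of', comm.size
--

Run it with something like "mpirun -np 4 python hello_mpi.py" -- the filename is just a placeholder.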
-Matt
On Thu, Feb 12, 2009 at 6:59 PM, rsoares <dlleuz@xmission.com> wrote:
Hi,
I'm trying to run mpi4py on my 4 machines, but I need a parallelized version of Python. I tried to compile one with Python 2.5 and mpich2, but mpich2 won't let me build the dynamic/shared libraries it needs. Trying with the static ones involves a lot of header errors from both. Is yt-trunk capable of doing python in parallel?
Without parallel-python, I mpdboot -n 4, then:

python
>>> import MPI
>>> rank, size = MPI.COMM_WORLD.rank, MPI.COMM_WORLD.size
>>> print 'Hello World! I am process', rank, 'of', size
Hello World! I am process 0 of 1
>>>
not 4 processes, and mpirun -np 4 python just hangs. mpi4py is installed on all 4 nodes.
Thanks.
R.Soares

Britton's done a great job here; I just have one note -- inside the actual save process, yt checks whether it's on the root processor, so you don't necessarily need the if statement here. It absolutely won't hurt, however, to make the check.
I should also note, all of this machinery is only in trunk right now.
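In other words, reusing the names from Britton's script, either of the forms below should produce a single set of images when run off trunk (a sketch only; the explicit check remains the safe, version-independent way):

--
from yt.mods import *
from yt.config import ytcfg

pf = EnzoStaticOutput("EnzoRuns/cool_core_rediculous/DD0252/DD0252")
pc = PlotCollection(pf, center=[0.5,0.5,0.5])
pc.add_projection("Density",0)

# Option 1: explicit root-process check (works the same in serial and parallel)
if ytcfg.getint("yt", "__parallel_rank") == 0:
    pc.save("DD0252")

# Option 2: on current trunk, save() itself only writes from the root process,
# so an unconditional call is equivalent there:
# pc.save("DD0252")
--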
-Matt

yt/Python 2.6 is of course compatible with openmpi then?

Well, yes.
On Sat, Feb 14, 2009 at 1:03 PM, rsoares dlleuz@xmission.com wrote:
yt/Python 2.6 is of course compatible with openmpi then?

I meant that I use mpich2 on all my computers, some of which are "server" only, with no GUI stuff. I'm not familiar with openmpi at all. Do you know if it is better than mpich?

On Sat, Feb 14, 2009 at 1:56 PM, rsoares dlleuz@xmission.com wrote:
I meant that I use mpich2 on all my computers, some of which are "server" only, with no GUI stuff.
I don't really see what this has to do with anything.
I think I've covered this already, but I'll say it again. Openmpi works. It installs, it runs with both enzo and yt. It works. I use it. If you can get mpich2 to work, then use that. If not, I've given you an alternative that I know works.
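If you do go the openmpi route, the usual way to spread the processes across your four machines is a hostfile (the host names below are placeholders; mpich2 has its own machinefile mechanism for the same thing):

# hosts.txt -- one line per machine
node1 slots=1
node2 slots=1
node3 slots=1
node4 slots=1

mpirun -np 4 --hostfile hosts.txt python proj.py --parallel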
participants (3)
- Britton Smith
- Matthew Turk
- rsoares