On 8 Nov 2009, at 19:06, Matthew Turk wrote:
I'm not sure what the deal is with OpenMPI on Ranger. I'll see if I can replicate this and dig a bit deeper on Ranger tomorrow...
I tested some more on Ranger, and I think it's Ranger's fault. Parallel projections work with mvapich+icc and openmpi+icc, but fail with openmpi+gcc.
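(For the record, I'm switching between those stacks with the modules system, roughly like below; the exact module names on Ranger may differ from what I show here, so treat them as placeholders:

  module swap intel gcc        # compiler: icc -> gcc
  module swap mvapich openmpi  # MPI stack: mvapich -> openmpi
  module list                  # double-check the active compiler/MPI pair

and then rebuilding mpi4py against whichever MPI is loaded.)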
When I try to start a batch job on Ranger with "ibrun tacc_affinity" while using openmpi+gcc, tacc_affinity fails, so Python is never called.
I can get around this if I don't call tacc_affinity to control the processor/core placement, i.e.
ibrun mpi4py my_script.py --parallel
This is when I get the errors I posted. I'm not sure whether this is worth debugging on yt's side, because I think Ranger is at fault here.
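In case anyone wants to reproduce this, here's a rough sketch of the kind of SGE job script I mean (job name, wayness, queue, and runtime are placeholders, not my exact settings):

  #!/bin/bash
  #$ -N yt_parallel_test
  #$ -pe 16way 16
  #$ -q normal
  #$ -l h_rt=00:30:00
  #$ -cwd
  # no "ibrun tacc_affinity" wrapper here -- calling ibrun directly
  # avoids the tacc_affinity failure, though the projection errors
  # still show up with openmpi+gcc
  ibrun mpi4py my_script.py --parallel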
Sorry about the false alarm! But I guess it's good to note this.
John