
Hi all,

I'm not sure what list to send this to since it is about inline yt in enzo, but I will try it here. I'm trying to do some simple inline tasks for a non-cosmology problem (one that is not currently in the enzo-dev repo). I'm trying to do some slices; eventually I want to do some 1-d profiles and derived quantities.

I have a script that works on one processor (with MPI on), but when I try to use more than one processor I get odd results. The slices I get appear to contain only part of the simulation domain (so there is stuff in part of the image and the rest is blank), and I get key errors from some processors saying they can't find the Density field. The error is pasted below. I'm guessing something about parallel yt is not working correctly? I should also mention that this run does not use AMR. My yt script is also pasted below, along with the enzo parameter file (which is a little jumbled, sorry).

The yt version I'm using is the current one (I checked out the install script today). I did have to comment out the following line in yt-x86_64-shared/src/yt-hg/yt/frontends/enzo/data_structures.py:

#self.periodicity = ensure_tuple(self.parameters["LeftFaceBoundaryCondition"] == 3)

The enzo version I'm using is the tip of the enzo-dev-mom fork, which diverged from enzo-dev after changeset e01ad22. I glanced through the accepted pull requests, but nothing jumped out at me as a solution.

Any ideas would be appreciated.

Thanks,
Christine

Error:

yt : [INFO ] 2013-04-02 16:00:50,454 Parameters: current_time = 4.81533679704e-05
yt : [INFO ] 2013-04-02 16:00:50,454 Parameters: domain_dimensions = [98 98 98]
yt : [INFO ] 2013-04-02 16:00:50,455 Parameters: current_time = 4.81533679704e-05
yt : [INFO ] 2013-04-02 16:00:50,455 Parameters: domain_left_edge = [ 0. 0. 0.]
yt : [INFO ] 2013-04-02 16:00:50,455 Parameters: domain_dimensions = [98 98 98]
yt : [INFO ] 2013-04-02 16:00:50,455 Parameters: current_time = 4.81533679704e-05
yt : [INFO ] 2013-04-02 16:00:50,455 Parameters: domain_right_edge = [ 1. 1. 1.]
yt : [INFO ] 2013-04-02 16:00:50,455 Parameters: domain_left_edge = [ 0. 0. 0.]
yt : [INFO ] 2013-04-02 16:00:50,455 Parameters: cosmological_simulation = 0.0
yt : [INFO ] 2013-04-02 16:00:50,455 Parameters: domain_dimensions = [98 98 98]
yt : [INFO ] 2013-04-02 16:00:50,455 Parameters: domain_right_edge = [ 1. 1. 1.]
yt : [INFO ] 2013-04-02 16:00:50,456 Parameters: cosmological_simulation = 0.0
yt : [INFO ] 2013-04-02 16:00:50,456 Parameters: domain_left_edge = [ 0. 0. 0.]
yt : [INFO ] 2013-04-02 16:00:50,456 Parameters: domain_right_edge = [ 1. 1. 1.]
yt : [INFO ] 2013-04-02 16:00:50,456 Parameters: current_time = 4.81533679704e-05
yt : [INFO ] 2013-04-02 16:00:50,456 Parameters: cosmological_simulation = 0.0
yt : [INFO ] 2013-04-02 16:00:50,456 Parameters: domain_dimensions = [98 98 98]
yt : [INFO ] 2013-04-02 16:00:50,457 Parameters: current_time = 4.81533679704e-05
yt : [INFO ] 2013-04-02 16:00:50,456 Parameters: domain_left_edge = [ 0. 0. 0.]
yt : [INFO ] 2013-04-02 16:00:50,457 Parameters: domain_dimensions = [98 98 98]
yt : [INFO ] 2013-04-02 16:00:50,457 Parameters: domain_right_edge = [ 1. 1. 1.]
yt : [INFO ] 2013-04-02 16:00:50,457 Parameters: cosmological_simulation = 0.0
yt : [INFO ] 2013-04-02 16:00:50,457 Parameters: domain_left_edge = [ 0. 0. 0.]
yt : [INFO ] 2013-04-02 16:00:50,457 Parameters: current_time = 4.81533679704e-05
yt : [INFO ] 2013-04-02 16:00:50,457 Parameters: domain_right_edge = [ 1. 1. 1.]
yt : [INFO ] 2013-04-02 16:00:50,457 Parameters: domain_dimensions = [98 98 98]
yt : [INFO ] 2013-04-02 16:00:50,457 Parameters: cosmological_simulation = 0.0
yt : [INFO ] 2013-04-02 16:00:50,457 Parameters: domain_left_edge = [ 0. 0. 0.]
yt : [INFO ] 2013-04-02 16:00:50,458 Parameters: domain_right_edge = [ 1. 1. 1.]
yt : [INFO ] 2013-04-02 16:00:50,458 Parameters: cosmological_simulation = 0.0
yt : [INFO ] 2013-04-02 16:00:50,459 Gathering a field list (this may take a moment.)
yt : [INFO ] 2013-04-02 16:00:50,459 Gathering a field list (this may take a moment.)
yt : [INFO ] 2013-04-02 16:00:50,460 Gathering a field list (this may take a moment.)

Two of the processes then print the same traceback:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "./user_script.py", line 19, in main
    pc = PlotCollection(pf)
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/visualization/plot_collection.py", line 120, in __init__
    v,self.c = pf.h.find_max("Density") # @todo: ensure no caching
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/data_objects/object_finding_mixin.py", line 61, in find_max
    mg, mc, mv, pos = self.find_max_cell_location(field, finest_levels)
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/data_objects/object_finding_mixin.py", line 74, in find_max_cell_location
    source.quantities["MaxLocation"]( field, lazy_reader=True)
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/data_objects/derived_quantities.py", line 92, in __call__
    return self._call_func_lazy(args, kwargs)
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/data_objects/derived_quantities.py", line 99, in _call_func_lazy
    rv = self.func(GridChildMaskWrapper(g, self._data_source), *args, **kwargs)
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/data_objects/derived_quantities.py", line 669, in _MaxLocation
    if data[field].size > 0:
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/data_objects/derived_quantities.py", line 60, in __getitem__
    data = self.data_source._get_data_from_grid(self.grid, item)
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/data_objects/data_containers.py", line 95, in save_state
    tr = func(self, grid, field, *args, **kwargs)
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/data_objects/data_containers.py", line 2645, in _get_data_from_grid
    tr = grid[field]
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/data_objects/grid_patch.py", line 157, in __getitem__
    self.get_data(key)
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/data_objects/grid_patch.py", line 200, in get_data
    self._generate_field(field)
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/data_objects/grid_patch.py", line 147, in _generate_field
    raise exceptions.KeyError(field)
KeyError: 'Density'

yt : [INFO ] 2013-04-02 16:00:50,892 Max Value is 1.00000e-24 at 0.0051020408163265 0.0051020408163265 0.0051020408163265 in grid EnzoGrid_0001 at level 0 (0, 0, 0)
yt : [INFO ] 2013-04-02 16:00:50,892 Created plot collection with default plot-center = [0.0051020408163265302, 0.0051020408163265302, 0.0051020408163265302]
yt : [INFO ] 2013-04-02 16:00:51,222 Added slice of Density at y = 0.00510204081633 with 'center' = [0.0051020408163265302, 0.0051020408163265302, 0.0051020408163265302]
yt : [INFO ] 2013-04-02 16:00:52,250 Saved cycle00000001_Slice_y_Density.png
['z-velocity', 'Temperature', 'GasEnergy', 'Density', 'TotalEnergy', 'x-velocity', 'y-velocity']

python script:

from yt.mods import *

def main():
    pf = EnzoStaticOutputInMemory()
    pc = PlotCollection(pf)
    pc.add_slice("Density",1)
    pc.save()
    print pf.h.field_list

enzo parameter file:

#
# AMR PROBLEM DEFINITION FILE: TestStarParticle
#
PythonTopGridSkip           = 1
CellFlaggingMethod          = 0
PPMDiffusionParameter       = 0
PPMSteepeningParameter      = 0
DensityUnits                = 1.0e-24
LengthUnits                 = 3.01802501047e+20
#RadiativeCooling           = 1
StarParticleFeedback        = 2
RefineBy                    = 2
CourantSafetyNumber         = 0.4
OutputCoolingTime           = 1
dtDataDump                  = 0.1
TopGridDimensions           = 98 98 98
TestStarParticleEnergy      = 0.00104392468495
MaximumRefinementLevel      = 0
Initialdt                   = 4.81533668755e-05
TestStarParticleDensity     = 1.0
ProblemType                 = 90
PPMFlatteningParameter      = 0
StopTime                    = 0.3
TopGridRank                 = 3
StarFeedbackKineticFraction = 1.0
StaticHierarchy             = 1
Gamma                       = 1.66667
StarMassEjectionFraction    = 0.25
TimeUnits                   = 3.15e13
TestStarParticleStarMass    = 100.0
StarEnergyToThermalFeedback = 5.59e-6
HydroMethod                 = 0
DualEnergyFormalism         = 1
#CycleSkipDataDump          = 1
OutputTemperature           = 1

Hi Christine,

The only thing I notice is that there don't seem to be P??? processor-number prefixes on your yt output. Mine looks like this (using the yt and enzo-dev tips on RotatingCylinder):

P001 yt : [DEBUG ] 2013-04-02 20:57:27,208 Received buffer of min 0.0 and max 3.34041847571e-21 (data: 1.66943640208e-23 3.34041847571e-21)
P000 yt : [DEBUG ] 2013-04-02 20:57:27,209 Received buffer of min 0.0 and max 3.34041847571e-21 (data: 1.66943640208e-23 3.34041847571e-21)
P001 yt : [INFO ] 2013-04-02 20:57:27,306 Added slice of Density at y = 0.5546875 with 'center' = [0.4296875, 0.5546875, 0.4296875]
P000 yt : [INFO ] 2013-04-02 20:57:27,310 Added slice of Density at y = 0.5546875 with 'center' = [0.4296875, 0.5546875, 0.4296875]
P001 yt : [DEBUG ] 2013-04-02 20:57:27,325 Received buffer of min 0.0 and max 3.34041847571e-21 (data: 1.66943640208e-23 3.34041847571e-21)
P000 yt : [DEBUG ] 2013-04-02 20:57:27,331 Received buffer of min 0.0 and max 3.34041847571e-21 (data: 1.66943640208e-23 3.34041847571e-21)
P001 yt : [INFO ] 2013-04-02 20:57:27,401 Saved cycle00000001_Slice_y_Density.png

I think this means it isn't picking up the parallel part of yt, as you suspected. Does running a similar script on a data dump, like

mpirun -np 2 python your_script.py --parallel

yield different information in the logs? If you get the same thing (without the P??? prefix), then I'd try pip install mpi4py (or re-installing mpi4py some other way) and make sure it picks up the same mpirun that you are running enzo with; a quick check is sketched below the quoted message.

Hope that helps,
Sam

On Tue, Apr 2, 2013 at 5:26 PM, Christine Simpson <csimpson@astro.columbia.edu> wrote:
Hi all,
I'm not sure what list to send this to since it is about inline yt in enzo, but I will try it here. I'm trying to do some simple inline tasks for a non-cosmology problem (one that is not currently in the enzo-dev repo). I'm trying to do some slices; I eventually want to do some 1-d profiles and derived quantities.
I have a script that works on one processor (with MPI on), but when I try to use more than one processor I get odd results. The slices I get appear to contain only part of the simulation domain (so there is stuff in part of the image and the rest is blank) and I get key errors from some processors saying they can't find the Density field. The error is pasted below. I'm guessing something about parallel yt is not working correctly? I should also mention that this run does not use AMR. My yt script is also pasted below, along with the enzo parameter file (which is a little jumbled, sorry).
The yt version I'm using is the current one (I checked out the install script today). I did have to comment out the following line in yt-x86_64-shared/src/yt-hg/yt/frontends/enzo/data_structures.py:
#self.periodicity = ensure_tuple(self.parameters["LeftFaceBoundaryCondition"] == 3)
The enzo version I'm using is the tip of the enzo-dev-mom fork which diverged from enzo-dev after changeset e01ad22. I glanced through the accepted pull requests, but nothing jumped out at me as being a solution.
Any ideas would be appreciated.
Thanks Christine
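
Following up on Sam's suggestion above, a minimal sanity check that mpi4py is installed and is talking to the same MPI as enzo is just to print the rank from each process. This is a sketch; the filename is arbitrary, and it should be launched with the same mpirun and python used for the inline runs:

# check_mpi4py.py -- run with, e.g.: mpirun -np 2 python2.7 check_mpi4py.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
# Each process should report a distinct rank; if every process claims
# to be rank 0 of 1, mpi4py is not seeing the MPI launcher at all.
print "rank %d of %d" % (comm.Get_rank(), comm.Get_size())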

Hi Christine,

I think Sam's thoughts look promising; let's hope that works out.
The yt version I'm using is the current one (I checked out the install script today). I did have to comment out the following line in yt-x86_64-shared/src/yt-hg/yt/frontends/enzo/data_structures.py:
#self.periodicity = ensure_tuple(self.parameters["LeftFaceBoundaryCondition"] == 3)
The enzo version I'm using is the tip of the enzo-dev-mom fork which diverged from enzo-dev after changeset e01ad22. I glanced through the accepted pull requests, but nothing jumped out at me as being a solution.
With respect to this issue, the corresponding change to Enzo happened here:

https://bitbucket.org/enzo/enzo-dev/pull-request/131/adding-leftfaceboundary...

but that's probably not the source of your real problems.

Good luck!

--
Stephen Skory
s@skory.us
http://stephenskory.com/
510.621.3687 (google voice)
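
For reference, rather than deleting the periodicity line outright, a possible fallback is to use the parameter only when it is actually present. This is a sketch, untested, and only a guess at how the surrounding frontend code behaves; the periodic default in the else branch is an assumption:

# in yt/frontends/enzo/data_structures.py, in place of the commented-out line:
if "LeftFaceBoundaryCondition" in self.parameters:
    # Enzo boundary condition 3 means periodic
    self.periodicity = ensure_tuple(
        self.parameters["LeftFaceBoundaryCondition"] == 3)
else:
    # parameter not written by this run; assume a periodic domain
    self.periodicity = (True, True, True)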

Hi all,

Thanks for your observations and suggestions. I had neglected to install mpi4py, which was the original problem. I installed that and I can now run parallel yt scripts; however, I'm still having trouble with inline yt. I've pasted the error I now get below. It is not very informative (to me at least); the keyboard interrupt is the symptom, not the cause of the problem, I think. I'm doing this on Trestles, and I tried to use their parallel debugger ddt to get more information. ddt seems to indicate that one of the processes is looking for a file called mpi4py.MPI.c in the /tmp directory, which I don't really understand and which may be a red herring. I don't have any problems with single-processor jobs. I installed yt using shared libraries by adding the --enable-shared flag to the configure statement for python in the install script. I've also pasted the enzo make file that I'm using below. I'm thinking that I have somehow messed up the libraries or include files. If anyone has successfully used inline yt on Trestles and has any advice, I'd love to hear it.

Thanks for all your help,
Christine

Error:

MPI_Init: NumberOfProcessors = 3
warning: the following parameter line was not interpreted: TestStarParticleEnergy = 0.00104392468495
warning: the following parameter line was not interpreted: TestStarParticleDensity = 1.0
warning: the following parameter line was not interpreted: TestStarParticleStarMass = 100.0
****** ReadUnits: 2.748961e+37 1.000000e-24 3.018025e+20 3.150000e+13 *******
Global Dir set to .
Initialdt in ReadParameterFile = 4.815337e-05
InitializeNew: Starting problem initialization.
Central Mass: 6813.382812
Allocated 1 particles
Initialize Exterior
ExtBndry: BoundaryRank = 3
ExtBndry: GridDimension = 104 104 104
ExtBndry: NumberOfBaryonFields = 6
InitializeExternalBoundaryFace
SimpleConstantBoundary FALSE
End of set exterior
InitializeNew: Initial grid hierarchy set
InitializeNew: Partition Initial Grid 0
Enter CommunicationPartitionGrid.
PartitionGrid (on all processors): Layout = 1 1 3 NumberOfNewGrids = 3
GridDims[0]: 98
GridDims[1]: 98
GridDims[2]: 33 32 33
StartIndex[0]: 0
StartIndex[1]: 0
StartIndex[2]: 0 33 65
Call ZeroSUS on TopGrid
ENZO_layout 1 x 1 x 3
Grid structure: 1576
SubGrids structure: 4728
Re-set Unigrid = 0
Grid distribution
Delete OldGrid
OldGrid deleted
Exit CommunicationPartitionGrid.
InitializeNew: Finished problem initialization.
Initializing Python interface
Successfully read in parameter file StarParticleTest.enzo.
INITIALIZATION TIME = 9.38615084e-01
Beginning parallel import block.
MPI process (rank: 1) terminated unexpectedly on trestles-12-20.local
Exit code -5 signaled from trestles-12-20
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "./user_script.py", line 1, in <module>
    from yt.pmods import *
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/pmods.py", line 364, in <module>
    from yt.mods import *
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/pmods.py", line 234, in __import_hook__
    q, tail = __find_head_package__(parent, name)
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/pmods.py", line 323, in __find_head_package__
    q = __import_module__(head, qname, parent)
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/pmods.py", line 268, in __import_module__
    pathname,stuff,ierror = mpi.bcast((pathname,stuff,ierror))
  File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/pmods.py", line 201, in bcast
    return MPI.COMM_WORLD.bcast(obj,root)
KeyboardInterrupt
Caught fatal exception:
   'Importing user_script failed!'
   at InitializePythonInterface.C:108
Backtrace:
BT symbol: ./enzo.exe [0x41ff8a]
BT symbol: ./enzo.exe [0x727e14]
BT symbol: ./enzo.exe [0x421147]
BT symbol: /lib64/libc.so.6(__libc_start_main+0xf4) [0x3c0121d994]
BT symbol: ./enzo.exe(__gxx_personality_v0+0x3d9) [0x41fea9]
terminate called after throwing an instance of 'EnzoFatalException'

Make file:

#=======================================================================
#
# FILE:        Make.mach.trestles
#
# DESCRIPTION: Makefile settings for the Trestles Resource at SDSC/UCSD
#
# AUTHOR:      John Wise (jwise@astro.princeton.edu)
#
# DATE:        07 Dec 2010
#
#=======================================================================

MACH_TEXT  = Trestles
MACH_VALID = 1
MACH_FILE  = Make.mach.trestles

MACHINE_NOTES = "MACHINE_NOTES for Trestles at SDSC/UCSD: \
        Load these modules, \
        'module add intel/11.1 mvapich2/1.5.1p1'"

#-----------------------------------------------------------------------
# Compiler settings
#-----------------------------------------------------------------------

LOCAL_MPI_INSTALL    = /home/diag/opt/mvapich2/1.5.1p1/intel/
LOCAL_PYTHON_INSTALL = /home/csimpson/yt-x86_64-shared/
#LOCAL_COMPILER_DIR  = /opt/pgi/linux86-64/10.5
LOCAL_COMPILER_DIR   = /opt/intel/Compiler/11.1/072
LOCAL_HYPRE_INSTALL  =

# With MPI

MACH_CPP     = cpp
MACH_CC_MPI  = $(LOCAL_MPI_INSTALL)/bin/mpicc   # C compiler when using MPI
MACH_CXX_MPI = $(LOCAL_MPI_INSTALL)/bin/mpicxx  # C++ compiler when using MPI
MACH_FC_MPI  = $(LOCAL_MPI_INSTALL)/bin/mpif90  # Fortran 77 compiler when using MPI
MACH_F90_MPI = $(LOCAL_MPI_INSTALL)/bin/mpif90  # Fortran 90 compiler when using MPI
MACH_LD_MPI  = $(LOCAL_MPI_INSTALL)/bin/mpicxx  # Linker when using MPI

# Without MPI

MACH_CC_NOMPI  = $(LOCAL_COMPILER_DIR)/bin/intel64/icc   # C compiler when not using MPI
MACH_CXX_NOMPI = $(LOCAL_COMPILER_DIR)/bin/intel64/icpc  # C++ compiler when not using MPI
MACH_FC_NOMPI  = $(LOCAL_COMPILER_DIR)/bin/intel64/ifort # Fortran 77 compiler when not using MPI
MACH_F90_NOMPI = $(LOCAL_COMPILER_DIR)/bin/intel64/ifort # Fortran 90 compiler when not using MPI
MACH_LD_NOMPI  = $(LOCAL_COMPILER_DIR)/bin/intel64/icpc  # Linker when not using MPI

#-----------------------------------------------------------------------
# Machine-dependent defines
#-----------------------------------------------------------------------

# Defines for the architecture; e.g. -DSUN, -DLINUX, etc.
MACH_DEFINES = -DLINUX -DH5_USE_16_API

#-----------------------------------------------------------------------
# Compiler flag settings
#-----------------------------------------------------------------------

MACH_CPPFLAGS = -P -traditional
MACH_CFLAGS   =
MACH_CXXFLAGS =
MACH_FFLAGS   =
MACH_F90FLAGS =
MACH_LDFLAGS  =

#-----------------------------------------------------------------------
# Precision-related flags
#-----------------------------------------------------------------------

MACH_FFLAGS_INTEGER_32 = -i4
MACH_FFLAGS_INTEGER_64 = -i8
MACH_FFLAGS_REAL_32    = -r4
MACH_FFLAGS_REAL_64    = -r8

#-----------------------------------------------------------------------
# Optimization flags
#-----------------------------------------------------------------------

MACH_OPT_WARN  = -Wall  # Flags for verbose compiler warnings
MACH_OPT_DEBUG = -O0 -g # Flags for debugging
# Flags for high conservative optimization
#MACH_OPT_HIGH = -O1 -ftz -mieee-fp -fp-speculation=off -prec-sqrt -prec-div
MACH_OPT_HIGH  = -O2
# Note that this breaks determinism, which is why it's commented out!
# MACH_OPT_AGGRESSIVE = -O3 # Flags for aggressive optimization
# This is the best we can do, from what I can tell.
#MACH_OPT_AGGRESSIVE = -O1 -ftz -mieee-fp -fp-speculation=off -prec-sqrt -prec-div

#-----------------------------------------------------------------------
# Includes
#-----------------------------------------------------------------------

LOCAL_INCLUDES_MPI    =
LOCAL_INCLUDES_HDF5   = -I/home/csimpson/yt-x86_64-shared/include # HDF5 includes
LOCAL_INCLUDES_HYPRE  =
LOCAL_INCLUDES_PAPI   = # PAPI includes
LOCAL_INCLUDES_PYTHON = -I$(LOCAL_PYTHON_INSTALL)/include/python2.7 \
                        -I$(LOCAL_PYTHON_INSTALL)/lib/python2.7/site-packages/numpy/core/include

MACH_INCLUDES        = $(LOCAL_INCLUDES_HDF5)
MACH_INCLUDES_PYTHON = $(LOCAL_INCLUDES_PYTHON)
MACH_INCLUDES_MPI    = $(LOCAL_INCLUDES_MPI)
MACH_INCLUDES_HYPRE  = $(LOCAL_INCLUDES_HYPRE)
MACH_INCLUDES_PAPI   = $(LOCAL_INCLUDES_PAPI)

#-----------------------------------------------------------------------
# Libraries
#-----------------------------------------------------------------------

LOCAL_LIBS_MPI    =
LOCAL_LIBS_HDF5   = -L/home/csimpson/yt-x86_64-shared/lib -lhdf5 # HDF5 libraries
LOCAL_LIBS_HYPRE  =
LOCAL_LIBS_PAPI   = # PAPI libraries
LOCAL_LIBS_PYTHON = -L$(LOCAL_PYTHON_INSTALL)/lib -lpython2.7 \
                    -lreadline -ltermcap -lutil

#LOCAL_LIBS_MACH = -L$(LOCAL_COMPILER_DIR)/lib \
#                  -lpgf90 -lpgf90_rpm1 -lpgf902 -lpgf90rtl -lpgftnrtl -lrt
LOCAL_LIBS_MACH  = -L$(LOCAL_COMPILER_DIR)/lib/intel64 -lifcore -lifport

MACH_LIBS        = $(LOCAL_LIBS_HDF5) $(LOCAL_LIBS_MACH)
MACH_LIBS_MPI    = $(LOCAL_LIBS_MPI)
MACH_LIBS_HYPRE  = $(LOCAL_LIBS_HYPRE)
MACH_LIBS_PAPI   = $(LOCAL_LIBS_PAPI)
MACH_LIBS_PYTHON = $(LOCAL_LIBS_PYTHON)

Hi Christine,

On Thu, Apr 11, 2013 at 12:34 AM, Christine Simpson <csimpson@astro.columbia.edu> wrote:
Hi all,
Thanks for your observations and suggestions. I had neglected to install mpi4py, which was the original problem. I installed that and I can run parallel yt scripts, however, I'm still having trouble with using inline yt. I've pasted the error I now get below. It is not very informative (to me at least); the keyboard interrupt is the symptom, not the cause of the problem, I think. I'm doing this on trestles and I tried to use their parallel debugger ddt to get some more information. ddt seems to indicate that one of the processes is looking for a file called mpi4py.MPI.c in the /tmp directory, which I don't really understand, and maybe is a red herring. I don't have any problems with single processor jobs. I installed yt using shared libraries by adding the --enable-shared flag to the configure statement for python in the install script. I've also pasted the enzo make file that I'm using below. I'm thinking that I somehow have messed up the libraries or include files . If anyone has successfully used inline yt on trestles and has any advice, I'd love to hear it.
So this could probably be better covered by the documentation, but the inline yt process looks for a script called user_script.py in the yt directory, within which it will call the main() function. This function can get access to the in-memory output by doing something like "pf = EnzoStaticOutputInMemory()", which will query the appropriate items. Note that you can't access raw data like "sphere['Density']", but you can do operations like "sphere.quantities['Extrema']('Density')" and so on; anything that uses an opaque object is fine, but arrays of concatenated data generally aren't.

If you do have the script "user_script.py" in your directory, then this generally means that there's a syntax error or something else preventing it from being imported. I think if you have gotten this far and you don't have user_script.py, you are probably fine for the libraries and so on. If you do have it, are you able to run it with "python2.7 user_script.py"?

-Matt
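
As a concrete illustration of the pattern Matt describes, a user_script.py along these lines might look like the following. This is a sketch from memory of the yt-2.x API, untested; the sphere radius is an arbitrary choice, and the derived quantity is the Extrema example mentioned above:

from yt.mods import *

def main():
    # Enzo's inline-Python hook imports user_script.py and calls main() itself,
    # so nothing needs to be executed at module level.
    pf = EnzoStaticOutputInMemory()

    # Opaque objects and derived quantities are fine inline; concatenated raw
    # arrays (e.g. sphere["Density"]) generally are not.
    center = (pf.domain_left_edge + pf.domain_right_edge) / 2.0
    sphere = pf.h.sphere(center, 0.25)  # radius in code units, arbitrary here

    # Extrema returns one (min, max) pair per requested field.
    mi, ma = sphere.quantities["Extrema"]("Density")[0]
    print "Density extrema:", mi, ma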

Hi Matt,

So I have a script called user_script.py in my run directory, where I have the enzo executable, parameter file, etc., and from which I'm running the enzo simulation. What do you mean by 'yt directory'? Do you mean site-packages? The script I'm testing is just this (from the yt docs):

from yt.pmods import *

def main():
    pf = EnzoStaticOutputInMemory()
    pc = PlotCollection(pf)
    pc.add_slice("Density",1)
    pc.save()

If I just type python2.7 user_script.py on the command line, it just returns (although I have to change yt.pmods to yt.mods). I mean, there's no enzo output in memory unless it is running with an enzo simulation, right?

Thanks for your help,
Christine

On Apr 11, 2013, at 2:55 AM, Matthew Turk wrote:
Hi Christine,
On Thu, Apr 11, 2013 at 12:34 AM, Christine Simpson <csimpson@astro.columbia.edu> wrote:
Hi all,
Thanks for your observations and suggestions. I had neglected to install mpi4py, which was the original problem. I installed that and I can run parallel yt scripts, however, I'm still having trouble with using inline yt. I've pasted the error I now get below. It is not very informative (to me at least); the keyboard interrupt is the symptom, not the cause of the problem, I think. I'm doing this on trestles and I tried to use their parallel debugger ddt to get some more information. ddt seems to indicate that one of the processes is looking for a file called mpi4py.MPI.c in the /tmp directory, which I don't really understand, and maybe is a red herring. I don't have any problems with single processor jobs. I installed yt using shared libraries by adding the --enable-shared flag to the configure statement for python in the install script. I've also pasted the enzo make file that I'm using below. I'm thinking that I somehow have messed up the libraries or include fil es . If anyone has successfully used inline yt on trestles and has any advice, I'd love to hear it.
So this could probably be better covered by the documentation, but the inline yt process looks for a script called user_script.py in the yt directory, within which it will call the main() function. This function can get access to the in-memory output by doing something like "pf = EnzoStaticOutputInMemory()" which will query the appropriate items. Note that you can't access raw data like "sphere['Density']" but you can do operations like "sphere.quantities['Extrema']('Density')" and so on; anything that uses an opaque object is fine, but arrays of concatenated data generally aren't.
If you do have the script "user_script.py" in your directory, then this generally means that there's a syntax error or something else preventing it from being imported. I think if you have gotten this far and you don't have user_script.py, you probably are fine for the libraries and so on. If you do have it, are you able to run it with "python2.7 user_script.py" ?
-Matt

On Thu, Apr 11, 2013 at 3:00 PM, Christine Simpson <csimpson@astro.columbia.edu> wrote:
Hi Matt,
So I have a script called user_script.py in my run directory where I have the enzo executable, parameter file, etc. and from which I'm running the enzo simulation. What do you mean by 'yt directory'? Do you mean site-packages? The script I'm testing is just this (from the yt docs):
Oops! No, I meant exactly what you're doing. This looks precisely correct to me. And, more to the point, I completely did not see that this was all evident from your message last night. (This should be a lesson for me: be fully awake when replying.)

I am a bit curious about a few things. You built with shared libraries, which is exactly the way we fixed similar-but-not-identical problems in the past. So the first question is whether or not we can track this down to a specific import or library. Can you run a couple of different user_script.py files?

type 1:

import numpy as np

def main():
    print "Type 1"

type 2:

from yt.mods import *

def main():
    print "Type 2"

type 3:

from mpi4py import MPI

def main():
    print "Type 3"

My suspicion is that pmods may be causing issues, or that by some chance mpi4py is not compiled correctly (it has a bit of a funny compilation system) and that's where the problem is coming from. Note that unfortunately KeyboardInterrupt is code for a wide variety of things here, as anything that throws a signal gets interpreted that way.

One last thing to check is the output of ldd on the MPI.so file inside your yt site-packages directory, under the mpi4py subdirectory, versus the output of ldd for enzo.exe. These should both link to the same MPI library.

Hope that helps,
-Matt
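
To make that ldd comparison easier, the full path of the compiled mpi4py extension can be printed directly. This is a small sketch; the grep pattern in the comments is just for convenience:

# print the location of mpi4py's compiled MPI extension
import mpi4py.MPI
print mpi4py.MPI.__file__
# then compare the MPI libraries each binary links against, e.g.:
#   ldd /path/printed/above/MPI.so | grep -i mpi
#   ldd ./enzo.exe | grep -i mpi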
from yt.pmods import *
def main():
pf = EnzoStaticOutputInMemory()
pc = PlotCollection(pf) pc.add_slice("Density",1) pc.save()
If I just type python2.7 user_script.py on the command line, it just returns (although I have to change yt.pmods to yt.mods). I mean, there's no enzo output in memory unless it is running with an enzo simulation, right?
Thanks for your help,
Christine
On Apr 11, 2013, at 2:55 AM, Matthew Turk wrote:
Hi Christine,
On Thu, Apr 11, 2013 at 12:34 AM, Christine Simpson <csimpson@astro.columbia.edu> wrote:
Hi all,
Thanks for your observations and suggestions. I had neglected to install mpi4py, which was the original problem. I installed that and I can run parallel yt scripts, however, I'm still having trouble with using inline yt. I've pasted the error I now get below. It is not very informative (to me at least); the keyboard interrupt is the symptom, not the cause of the problem, I think. I'm doing this on trestles and I tried to use their parallel debugger ddt to get some more information. ddt seems to indicate that one of the processes is looking for a file called mpi4py.MPI.c in the /tmp directory, which I don't really understand, and maybe is a red herring. I don't have any problems with single processor jobs. I installed yt using shared libraries by adding the --enable-shared flag to the configure statement for python in the install script. I've also pasted the enzo make file that I'm using below. I'm thinking that I somehow have messed up the libraries or include fi l es . If anyone has successfully used inline yt on trestles and has any advice, I'd love to hear it.
So this could probably be better covered by the documentation, but the inline yt process looks for a script called user_script.py in the yt directory, within which it will call the main() function. This function can get access to the in-memory output by doing something like "pf = EnzoStaticOutputInMemory()" which will query the appropriate items. Note that you can't access raw data like "sphere['Density']" but you can do operations like "sphere.quantities['Extrema']('Density')" and so on; anything that uses an opaque object is fine, but arrays of concatenated data generally aren't.
If you do have the script "user_script.py" in your directory, then this generally means that there's a syntax error or something else preventing it from being imported. I think if you have gotten this far and you don't have user_script.py, you probably are fine for the libraries and so on. If you do have it, are you able to run it with "python2.7 user_script.py" ?
-Matt
Thanks for all your help Christine
Error:
MPI_Init: NumberOfProcessors = 3 warning: the following parameter line was not interpreted: TestStarParticleEnergy = 0.00104392468495 warning: the following parameter line was not interpreted: TestStarParticleDensity = 1.0 warning: the following parameter line was not interpreted: TestStarParticleStarMass = 100.0 ****** ReadUnits: 2.748961e+37 1.000000e-24 3.018025e+20 3.150000e+13 ******* Global Dir set to . Initialdt in ReadParameterFile = 4.815337e-05 InitializeNew: Starting problem initialization. Central Mass: 6813.382812 Allocated 1 particles Initialize Exterior ExtBndry: BoundaryRank = 3 ExtBndry: GridDimension = 104 104 104 ExtBndry: NumberOfBaryonFields = 6 InitializeExternalBoundaryFace SimpleConstantBoundary FALSE End of set exterior InitializeNew: Initial grid hierarchy set InitializeNew: Partition Initial Grid 0 Enter CommunicationPartitionGrid. PartitionGrid (on all processors): Layout = 1 1 3 NumberOfNewGrids = 3 GridDims[0]: 98 GridDims[1]: 98 GridDims[2]: 33 32 33 StartIndex[0]: 0 StartIndex[1]: 0 StartIndex[2]: 0 33 65 Call ZeroSUS on TopGrid ENZO_layout 1 x 1 x 3 Grid structure: 1576 SubGrids structure: 4728 Re-set Unigrid = 0 Grid distribution Delete OldGrid OldGrid deleted Exit CommunicationPartitionGrid. InitializeNew: Finished problem initialization. Initializing Python interface Successfully read in parameter file StarParticleTest.enzo. INITIALIZATION TIME = 9.38615084e-01 Beginning parallel import block. MPI process (rank: 1) terminated unexpectedly on trestles-12-20.local Exit code -5 signaled from trestles-12-20 Traceback (most recent call last): File "<string>", line 1, in <module> File "./user_script.py", line 1, in <module> from yt.pmods import * File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/pmods.py", line 364, in <module> from yt.mods import * File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/pmods.py", line 234, in __import_hook__ q, tail = __find_head_package__(parent, name) File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/pmods.py", line 323, in __find_head_package__ q = __import_module__(head, qname, parent) File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/pmods.py", line 268, in __import_module__ pathname,stuff,ierror = mpi.bcast((pathname,stuff,ierror)) File "/home/csimpson/yt-x86_64-shared/src/yt-hg/yt/pmods.py", line 201, in bcast return MPI.COMM_WORLD.bcast(obj,root) KeyboardInterrupt Caught fatal exception:
'Importing user_script failed!' at InitializePythonInterface.C:108
Backtrace:
BT symbol: ./enzo.exe [0x41ff8a]
BT symbol: ./enzo.exe [0x727e14]
BT symbol: ./enzo.exe [0x421147]
BT symbol: /lib64/libc.so.6(__libc_start_main+0xf4) [0x3c0121d994]
BT symbol: ./enzo.exe(__gxx_personality_v0+0x3d9) [0x41fea9]
terminate called after throwing an instance of 'EnzoFatalException'
Make file:
#=======================================================================
#
# FILE:        Make.mach.trestles
#
# DESCRIPTION: Makefile settings for the Trestles Resource at SDSC/UCSD
#
# AUTHOR:      John Wise (jwise@astro.princeton.edu)
#
# DATE:        07 Dec 2010
#
#=======================================================================

MACH_TEXT  = Trestles
MACH_VALID = 1
MACH_FILE  = Make.mach.trestles

MACHINE_NOTES = "MACHINE_NOTES for Trestles at SDSC/UCSD: \
	Load these modules, \
	'module add intel/11.1 mvapich2/1.5.1p1'"

#-----------------------------------------------------------------------
# Compiler settings
#-----------------------------------------------------------------------

LOCAL_MPI_INSTALL    = /home/diag/opt/mvapich2/1.5.1p1/intel/
LOCAL_PYTHON_INSTALL = /home/csimpson/yt-x86_64-shared/
#LOCAL_COMPILER_DIR  = /opt/pgi/linux86-64/10.5
LOCAL_COMPILER_DIR   = /opt/intel/Compiler/11.1/072
LOCAL_HYPRE_INSTALL  =

# With MPI

MACH_CPP     = cpp
MACH_CC_MPI  = $(LOCAL_MPI_INSTALL)/bin/mpicc   # C compiler when using MPI
MACH_CXX_MPI = $(LOCAL_MPI_INSTALL)/bin/mpicxx  # C++ compiler when using MPI
MACH_FC_MPI  = $(LOCAL_MPI_INSTALL)/bin/mpif90  # Fortran 77 compiler when using MPI
MACH_F90_MPI = $(LOCAL_MPI_INSTALL)/bin/mpif90  # Fortran 90 compiler when using MPI
MACH_LD_MPI  = $(LOCAL_MPI_INSTALL)/bin/mpicxx  # Linker when using MPI

# Without MPI

MACH_CC_NOMPI  = $(LOCAL_COMPILER_DIR)/bin/intel64/icc   # C compiler when not using MPI
MACH_CXX_NOMPI = $(LOCAL_COMPILER_DIR)/bin/intel64/icpc  # C++ compiler when not using MPI
MACH_FC_NOMPI  = $(LOCAL_COMPILER_DIR)/bin/intel64/ifort # Fortran 77 compiler when not using MPI
MACH_F90_NOMPI = $(LOCAL_COMPILER_DIR)/bin/intel64/ifort # Fortran 90 compiler when not using MPI
MACH_LD_NOMPI  = $(LOCAL_COMPILER_DIR)/bin/intel64/icpc  # Linker when not using MPI

#-----------------------------------------------------------------------
# Machine-dependent defines
#-----------------------------------------------------------------------

# Defines for the architecture; e.g. -DSUN, -DLINUX, etc.
MACH_DEFINES = -DLINUX -DH5_USE_16_API

#-----------------------------------------------------------------------
# Compiler flag settings
#-----------------------------------------------------------------------

MACH_CPPFLAGS = -P -traditional
MACH_CFLAGS   =
MACH_CXXFLAGS =
MACH_FFLAGS   =
MACH_F90FLAGS =
MACH_LDFLAGS  =

#-----------------------------------------------------------------------
# Precision-related flags
#-----------------------------------------------------------------------

MACH_FFLAGS_INTEGER_32 = -i4
MACH_FFLAGS_INTEGER_64 = -i8
MACH_FFLAGS_REAL_32    = -r4
MACH_FFLAGS_REAL_64    = -r8

#-----------------------------------------------------------------------
# Optimization flags
#-----------------------------------------------------------------------

MACH_OPT_WARN  = -Wall  # Flags for verbose compiler warnings
MACH_OPT_DEBUG = -O0 -g # Flags for debugging
# Flags for high conservative optimization
#MACH_OPT_HIGH = -O1 -ftz -mieee-fp -fp-speculation=off -prec-sqrt -prec-div
MACH_OPT_HIGH  = -O2
# Note that this breaks determinism, which is why it's commented out!
# MACH_OPT_AGGRESSIVE = -O3 # Flags for aggressive optimization
# This is the best we can do, from what I can tell.
#MACH_OPT_AGGRESSIVE = -O1 -ftz -mieee-fp -fp-speculation=off -prec-sqrt -prec-div

#-----------------------------------------------------------------------
# Includes
#-----------------------------------------------------------------------

LOCAL_INCLUDES_MPI    =
LOCAL_INCLUDES_HDF5   = -I/home/csimpson/yt-x86_64-shared/include # HDF5 includes
LOCAL_INCLUDES_HYPRE  =
LOCAL_INCLUDES_PAPI   = # PAPI includes
LOCAL_INCLUDES_PYTHON = -I$(LOCAL_PYTHON_INSTALL)/include/python2.7 \
                        -I$(LOCAL_PYTHON_INSTALL)/lib/python2.7/site-packages/numpy/core/include

MACH_INCLUDES        = $(LOCAL_INCLUDES_HDF5)
MACH_INCLUDES_PYTHON = $(LOCAL_INCLUDES_PYTHON)
MACH_INCLUDES_MPI    = $(LOCAL_INCLUDES_MPI)
MACH_INCLUDES_HYPRE  = $(LOCAL_INCLUDES_HYPRE)
MACH_INCLUDES_PAPI   = $(LOCAL_INCLUDES_PAPI)

#-----------------------------------------------------------------------
# Libraries
#-----------------------------------------------------------------------

LOCAL_LIBS_MPI    =
LOCAL_LIBS_HDF5   = -L/home/csimpson/yt-x86_64-shared/lib -lhdf5 # HDF5 libraries
LOCAL_LIBS_HYPRE  =
LOCAL_LIBS_PAPI   = # PAPI libraries
LOCAL_LIBS_PYTHON = -L$(LOCAL_PYTHON_INSTALL)/lib -lpython2.7 \
                    -lreadline -ltermcap -lutil

#LOCAL_LIBS_MACH = -L$(LOCAL_COMPILER_DIR)/lib \
#                  -lpgf90 -lpgf90_rpm1 -lpgf902 -lpgf90rtl -lpgftnrtl -lrt
LOCAL_LIBS_MACH  = -L$(LOCAL_COMPILER_DIR)/lib/intel64 -lifcore -lifport

MACH_LIBS        = $(LOCAL_LIBS_HDF5) $(LOCAL_LIBS_MACH)
MACH_LIBS_MPI    = $(LOCAL_LIBS_MPI)
MACH_LIBS_HYPRE  = $(LOCAL_LIBS_HYPRE)
MACH_LIBS_PAPI   = $(LOCAL_LIBS_PAPI)
MACH_LIBS_PYTHON = $(LOCAL_LIBS_PYTHON)

Thanks so much for your help! So I did some tests. I ran each test on 3 processors for 10 cycles and set PythonTopGridSkip = 1. The hierarchy is static (so no AMR).
type 1:
import numpy as np
def main(): print "Type 1"
This worked: "Type 1" printed a total of 60 times, which is ncycles*nproc*2 (for the two calls to Python, one at the end of evolve level and one at the end of evolve hierarchy).
type 2:
from yt.mods import *
def main(): print "Type 2"
This failed. The error I get is:

Successfully read in parameter file StarParticleTest.enzo.
INITIALIZATION TIME = 4.38757896e-01
yt : [INFO ] 2013-04-11 13:14:08,924 Global parallel computation enabled: 1 / 3
yt : [INFO ] 2013-04-11 13:14:08,924 Global parallel computation enabled: 2 / 3
yt : [INFO ] 2013-04-11 13:14:08,924 Global parallel computation enabled: 0 / 3
MPI process (rank: 0) terminated unexpectedly on trestles-1-10.local
Exit code -5 signaled from trestles-1-10
type 3:
from mpi4py import MPI
def main(): print "Type 3"
This works and prints out Type 3 a total of 60 times as in case 1.
One last thing to check is the output of ldd on the MPI.so file inside your yt site-packages directory, under the mpi4py subdirectory, versus the output of ldd for enzo.exe. These should both link to the same MPI library.
I'm not sure what the MPI library is called, but all the libraries listed for MPI.so from the yt site-packages directory and for the enzo executable look the same. They are pasted below.

So it looks like it is something with yt.mods/yt.pmods? (I wasn't quite sure which one to use, by the way.) I can try putting some print statements in that file to see how far it gets. Are there likely problem places that I should focus on?

Thanks
Christine

For MPI.so in the yt site-packages directory:

(yt-x86_64-shared)[csimpson@trestles-login1 mpi4py]$ ldd MPI.so
    libpython2.7.so.1.0 => /home/csimpson/yt-x86_64-shared/lib/libpython2.7.so.1.0 (0x00002b9930541000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b993091d000)
    librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x00002b9930b39000)
    libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00002b9930d3e000)
    libibumad.so.3 => /usr/lib64/libibumad.so.3 (0x00002b9930f4b000)
    libdl.so.2 => /lib64/libdl.so.2 (0x00002b9931152000)
    librt.so.1 => /lib64/librt.so.1 (0x00002b9931356000)
    libimf.so => /opt/intel/Compiler/11.1/072/lib/intel64/libimf.so (0x00002b993155f000)
    libsvml.so => /opt/intel/Compiler/11.1/072/lib/intel64/libsvml.so (0x00002b99318f4000)
    libm.so.6 => /lib64/libm.so.6 (0x00002b9931b0a000)
    libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002b9931d8d000)
    libintlc.so.5 => /opt/intel/Compiler/11.1/072/lib/intel64/libintlc.so.5 (0x00002b9931f9c000)
    libc.so.6 => /lib64/libc.so.6 (0x00002b99320da000)
    libutil.so.1 => /lib64/libutil.so.1 (0x00002b9932431000)
    /lib64/ld-linux-x86-64.so.2 (0x000000344ba00000)

For enzo.exe:

(yt-x86_64-shared)[csimpson@trestles-login1 mpi4py]$ ldd ~/temp_mom/enzo-dev-mom/src/enzo/enzo.exe
    libhdf5.so.7 => /home/csimpson/yt-x86_64-shared/lib/libhdf5.so.7 (0x00002b9496c96000)
    libifcore.so.5 => /opt/intel/Compiler/11.1/072/lib/intel64/libifcore.so.5 (0x00002b9497139000)
    libifport.so.5 => /opt/intel/Compiler/11.1/072/lib/intel64/libifport.so.5 (0x00002b94973b1000)
    libpython2.7.so.1.0 => /home/csimpson/yt-x86_64-shared/lib/libpython2.7.so.1.0 (0x00002b94974ea000)
    libreadline.so.5 => /usr/lib64/libreadline.so.5 (0x0000003d96c00000)
    libtermcap.so.2 => /lib64/libtermcap.so.2 (0x000000346bc00000)
    libutil.so.1 => /lib64/libutil.so.1 (0x0000003452200000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x000000344ca00000)
    librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x000000344ce00000)
    libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x000000344c200000)
    libibumad.so.3 => /usr/lib64/libibumad.so.3 (0x00002b94978c8000)
    libdl.so.2 => /lib64/libdl.so.2 (0x000000344c600000)
    librt.so.1 => /lib64/librt.so.1 (0x000000344da00000)
    libm.so.6 => /lib64/libm.so.6 (0x00002b9497acf000)
    libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00000035f8a00000)
    libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00000035f8e00000)
    libc.so.6 => /lib64/libc.so.6 (0x000000344be00000)
    libz.so.1 => /home/csimpson/yt-x86_64-shared/lib/libz.so.1 (0x00002b9497d53000)
    libimf.so => /opt/intel/Compiler/11.1/072/lib/intel64/libimf.so (0x00002b9497f6a000)
    libintlc.so.5 => /opt/intel/Compiler/11.1/072/lib/intel64/libintlc.so.5 (0x00002b94982fe000)
    /lib64/ld-linux-x86-64.so.2 (0x000000344ba00000)
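As an aside, the comparison Matt suggests can be automated. Below is a hypothetical sketch (not from the thread) that runs ldd on both binaries and prints any shared libraries that resolve to different paths; the two file paths are guesses based on the listings above and would need adjusting on another system.

# compare_ldd.py -- hypothetical helper for the ldd comparison discussed above.
import subprocess

def resolved_libs(path):
    # Return a {library name: resolved path} dict parsed from `ldd path`.
    out = subprocess.check_output(["ldd", path])
    libs = {}
    for line in out.splitlines():
        parts = line.split("=>")
        if len(parts) != 2:
            continue
        name = parts[0].strip()
        fields = parts[1].strip().split()
        if not fields or fields[0].startswith("("):   # no resolved path
            continue
        libs[name] = fields[0]
    return libs

if __name__ == "__main__":
    # Paths guessed from the listings above; adjust for your own install.
    mpi_so = "/home/csimpson/yt-x86_64-shared/lib/python2.7/site-packages/mpi4py/MPI.so"
    enzo = "/home/csimpson/temp_mom/enzo-dev-mom/src/enzo/enzo.exe"
    a, b = resolved_libs(mpi_so), resolved_libs(enzo)
    for name in sorted(set(a) & set(b)):
        if a[name] != b[name]:
            print name, "differs:", a[name], "vs", b[name]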

Hi Christine,

This is a total stab in the dark, but have you tried running on four processors? I think in principle it should work on 3 cores, but I bet it's not a commonly used configuration.

-Nathan

Hi Nathan,

Yes, I get the same errors with 2 and 4 processors; the parity of the processor count doesn't seem to matter. I'm using 3 processors only because, for the problem I'm doing, I don't want enzo to draw a grid boundary through the center of the simulation box (which it would do for an even number of processors).

Thanks
Christine
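As a quick sanity check of that reasoning, here is a small back-of-the-envelope sketch (not from the thread). It assumes a simple 1-D slab decomposition of the 98-cell top grid along z, like the Layout = 1 1 3 that enzo reports earlier, and shows where the slab boundaries land for 2 versus 3 processors.

# Where do slab boundaries fall for a 98-cell top grid split along z?
# Assumes an even 1-D block decomposition; illustrative only.
def slab_boundaries(ncells, nslabs):
    # Interior cell indices where the slab boundaries fall.
    base, extra = divmod(ncells, nslabs)
    sizes = [base + (1 if i < extra else 0) for i in range(nslabs)]
    edges, start = [], 0
    for s in sizes[:-1]:
        start += s
        edges.append(start)
    return edges

for n in (2, 3):
    print n, "slabs -> boundaries at cells", slab_boundaries(98, n)
# 2 slabs -> boundaries at cells [49]     -- exactly the midplane (z = 49/98 = 0.5)
# 3 slabs -> boundaries at cells [33, 66] -- enzo's actual split above is 33/32/33
#                                            (cuts at 33 and 65); either way the
#                                            center of the box is untouched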

Hi Christine,

On Thu, Apr 11, 2013 at 7:29 PM, Christine Simpson <csimpson@astro.columbia.edu> wrote:
Thanks so much for your help!
So I did some tests. I ran each test on 3 processors for 10 cycles and set PythonTopGridSkip = 1. The hierarchy is static (so no amr).
Okay, that's good to hear -- it eliminates some common problems.
type 1:
import numpy as np
def main(): print "Type 1"
This worked. Type 1 printed a total of 60 times, which is ncycles*nproc*2 (for the 2 calls to python, one at the end of evolve level and the one at the end of evolve hierarchy).
type 2:
from yt.mods import *
def main(): print "Type 2"
This failed. The error I get is
Successfully read in parameter file StarParticleTest.enzo.
INITIALIZATION TIME = 4.38757896e-01
yt : [INFO ] 2013-04-11 13:14:08,924 Global parallel computation enabled: 1 / 3
yt : [INFO ] 2013-04-11 13:14:08,924 Global parallel computation enabled: 2 / 3
yt : [INFO ] 2013-04-11 13:14:08,924 Global parallel computation enabled: 0 / 3
MPI process (rank: 0) terminated unexpectedly on trestles-1-10.local
Exit code -5 signaled from trestles-1-10
Awesome. This tells us a *lot* of information. For starters, a failing numpy import is a very common problem that can be difficult to fix. Fortunately, that's not the case here! :)

So here's a good start -- in the directory ~/.yt you may or may not have a file named 'config'. If you add to that file something like:

[yt]
loglevel: 1

then you can get a lot more info about how things proceed. For me it looks like this:

http://paste.yt-project.org/show/3357/

The next step is to put print statements in yt/mods.py around these areas:

http://paste.yt-project.org/show/3358/

Specifically, I'm curious whether it gets past the startup tasks. After that, there is a series of import blocks that look like:

from yt.something \
    import flux_capacitor

or something. Each of these blocks might be grabbing something.

Now, one very *last* thing: can you run one last script, identical to the ones before, except testing whether h5py can be imported? It occurs to me that's another source of error that might be pernicious.

I'm *really* sorry this is so annoying -- getting this up and running is a huge priority for me! For what it's worth, part of the issue here is that yt and the simulation code have to live in the same executable; the next generation of in situ stuff will hopefully avoid this and make problems like this go away. But that's a year or more off.

Anyhow, I'm really interested in getting this working for you, so please let me know if any of that helps at all -- the unexplained error (-5) doesn't narrow it down much, unfortunately.

Thanks again,
Matt
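For reference, the h5py check Matt asks for could look like the hypothetical "type 4" script below, in the same style as the three tests above (the printed text is just illustrative):

# type 4 (hypothetical): does h5py import cleanly inside inline Python?
import h5py

def main():
    # If this prints ncycles * nproc * 2 times, like types 1 and 3 did,
    # then h5py is not the import that hangs the parallel import block.
    print "Type 4: h5py imported fine"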

Hi Matt,

So I put some print statements into yt/mods.py. It does not get past this line (the print statement before it prints on all processors; the one after it doesn't print at all):

from yt.data_objects.api import \
    BinnedProfile1D, BinnedProfile2D, BinnedProfile3D, \
    data_object_registry, \
    derived_field, add_field, add_grad, FieldInfo, \
    ValidateParameter, ValidateDataField, ValidateProperty, \
    ValidateSpatial, ValidateGridType, \
    TimeSeriesData, AnalysisTask, analysis_task, \
    ParticleTrajectoryCollection, ImageArray

This is what gets output:

INITIALIZATION TIME = 4.36470985e-01
yt : [DEBUG ] 2013-04-11 20:44:55,716 Set log level to 1
yt : [DEBUG ] 2013-04-11 20:44:55,719 Set log level to 1
yt : [DEBUG ] 2013-04-11 20:44:55,720 Set log level to 1
yt : [DEBUG ] 2013-04-11 20:44:55,931 SIGUSR1 registered for traceback printing
yt : [DEBUG ] 2013-04-11 20:44:55,931 SIGUSR2 registered for IPython Insertion
yt : [DEBUG ] 2013-04-11 20:44:55,931 SIGUSR1 registered for traceback printing
yt : [DEBUG ] 2013-04-11 20:44:55,931 SIGUSR2 registered for IPython Insertion
yt : [DEBUG ] 2013-04-11 20:44:55,932 SIGUSR1 registered for traceback printing
yt : [DEBUG ] 2013-04-11 20:44:55,932 SIGUSR2 registered for IPython Insertion
yt : [INFO ] 2013-04-11 20:44:55,946 Global parallel computation enabled: 2 / 3
Point1
yt : [INFO ] 2013-04-11 20:44:55,947 Global parallel computation enabled: 0 / 3
yt : [INFO ] 2013-04-11 20:44:55,947 Global parallel computation enabled: 1 / 3
Point1
Point1
Point2
Point3
Point2
Point3
Point2
Point3
P001 yt : [WARNING ] 2013-04-11 20:44:55,993 Log Level is set low -- this could affect parallel performance!
P000 yt : [WARNING ] 2013-04-11 20:44:55,993 Log Level is set low -- this could affect parallel performance!
P002 yt : [WARNING ] 2013-04-11 20:44:55,993 Log Level is set low -- this could affect parallel performance!
MPI process (rank: 1) terminated unexpectedly on trestles-10-27.local
Exit code -5 signaled from trestles-10-27

I tried importing just one item at a time from yt/data_objects/api, but all the ones I tried failed. If I comment out the line, the subsequent import statement fails. I tested importing h5py, and that worked fine.

Christine
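One possible next step, sketched here as a hypothetical per-rank probe (not something from the thread): tag each import with the MPI rank so a hang shows which rank stalls and on which module. The module list is only a guess at part of the chain yt/mods.py walks through.

# Hypothetical per-rank import probe, in the style of the type 1-3 tests.
from mpi4py import MPI
import sys

rank = MPI.COMM_WORLD.Get_rank()

def probe(modname):
    print "[rank %d] importing %s ..." % (rank, modname)
    sys.stdout.flush()
    __import__(modname)
    print "[rank %d] %s imported" % (rank, modname)
    sys.stdout.flush()

def main():
    # Guessed subset of what yt/mods.py pulls in; adjust as needed.
    for m in ("numpy", "h5py", "yt.funcs", "yt.data_objects.api"):
        probe(m)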