On Mon, Jul 13, 2015 at 7:42 AM, Britton Smith <brittonsmith@gmail.com> wrote:

Hi all,

I've recently been trying to use yt's inline analysis functionality with Enzo and am having some difficulty getting it to work in parallel. I am using the development tip of yt. In serial, everything works fine, but in parallel, I get the following error: http://paste.yt-project.org/show/5694/

It seems that the issue is that yt is not correctly identifying which grids are available on a given processor for the EnzoDatasetInMemory object. Does anyone have an idea of how to fix this? Has anyone else seen this?

For reference, my user_script is just this:

    import yt
    from yt.frontends.enzo.api import EnzoDatasetInMemory

    def main():
        ds = EnzoDatasetInMemory()
        ad = ds.all_data()
        print ad.quantities.total_quantity("cell_mass")

Thanks for any help,

Britton
On Mon, Jul 13, 2015 at 1:51 PM, Matthew Turk <matthewturk@gmail.com> wrote:

Hi Britton,

What looks suspicious to me is the way it's using grid.id. This might lead to an off-by-one error. Can you try it with grid.id - grid._id_offset and see if that clears it up?
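For context, a toy illustration of the off-by-one concern (grid.id and grid._id_offset are the real yt attributes mentioned above; the surrounding values are made up):

    # Enzo grid ids in yt start at grid._id_offset (1 for Enzo), while
    # typical per-grid bookkeeping arrays are 0-based, so indexing such an
    # array directly with grid.id can be off by one.
    id_offset = 1                      # what grid._id_offset is for Enzo
    grid_ids = [1, 2, 3, 4]            # what grid.id looks like for Enzo grids
    owner_ranks = [0, 0, 1, 1]         # hypothetical 0-based per-grid ownership array
    for gid in grid_ids:
        # Using gid alone would skip the first entry and overrun the end.
        print(gid, owner_ranks[gid - id_offset])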
On Mon, Jul 13, 2015 at 2:21 PM, Britton Smith <brittonsmith@gmail.com> wrote:

Hi Matt,

Thanks for your help. Adjusting by grid._id_offset did not work, but I can see that what is happening is that all processors are trying to call _read_field_names using grid 1, when only processor 0 owns that grid. I will look into why now, but if you have any intuition about where to check next, that would be awesome.

Thanks,
Britton
On Mon, Jul 13, 2015, 8:35 AM Britton Smith <brittonsmith@gmail.com> wrote:

Hi again,

Maybe this is a clue. In _generate_random_grids, self.comm.rank is 0 for all processors, which would explain why N-1 cores are trying to get grids that don't belong to them. Interestingly, mylog.info prints out the correct rank for each of them.

Britton
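To make the failure mode concrete, here is a small self-contained sketch (not yt's actual _generate_random_grids code; the function and arrays are hypothetical) of why a wrongly-reported rank makes every processor sample grids it does not own:

    import numpy as np

    def pick_local_grids(grid_ids, owner_ranks, my_rank, n_samples=2, seed=0):
        # Keep only the grids this processor owns, then sample a few of them
        # to probe for available field names.
        local = [gid for gid, owner in zip(grid_ids, owner_ranks) if owner == my_rank]
        if not local:
            return []
        rng = np.random.RandomState(seed)
        return sorted(rng.choice(local, size=min(n_samples, len(local)), replace=False))

    # If self.comm.rank is reported as 0 everywhere (the symptom above), every
    # processor samples rank 0's grids and then fails to read them locally.
    print(pick_local_grids(grid_ids=[1, 2, 3, 4], owner_ranks=[0, 0, 1, 1], my_rank=0))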
On Mon, Jul 13, 2015 at 2:38 PM, Matthew Turk <matthewturk@gmail.com> wrote:

That sounds like a new communicator got pushed to the top of the stack when it should not have been, perhaps in a rogue parallel_objects call.
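For readers unfamiliar with that mechanism, a minimal standalone sketch of how yt's parallel_objects splits work across sub-communicators (this assumes yt's top-level parallel_objects helper and mpi4py are available, and is not the Enzo inline setup; run under e.g. mpirun -np 4):

    import yt
    yt.enable_parallelism()

    # parallel_objects divides the processors into njobs groups and pushes a
    # sub-communicator for each group onto yt's internal communicator stack.
    # Inside the loop, rank queries against that sub-communicator can return 0
    # on every processor, even though the global MPI ranks differ.
    for work_item in yt.parallel_objects(range(4), njobs=4):
        print(work_item)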
On Mon, Jul 13, 2015, 8:58 AM Britton Smith <brittonsmith@gmail.com> wrote:

Your tip led me to the right answer. The call to parallel_objects was happening in the derived quantity, where each processor is being made into its own comm in which it is rank 0. The issue is that they then try to identify fields and incorrectly think of themselves as rank 0 when choosing which grids to look at. If I simply access ds.index right after creating the dataset, the problem goes away. This should probably just be added to the bottom of the __init__ for EnzoDatasetInMemory. Does that sound right?

Britton
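Applied to the original user_script, the workaround described here is a one-line addition (a sketch mirroring the Python 2-style script posted above; only the ds.index line is new):

    import yt
    from yt.frontends.enzo.api import EnzoDatasetInMemory

    def main():
        ds = EnzoDatasetInMemory()
        ds.index  # build the grid index on the global communicator before any derived quantities run
        ad = ds.all_data()
        print ad.quantities.total_quantity("cell_mass")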
On Mon, Jul 13, 2015 at 3:01 PM, Matthew Turk <matthewturk@gmail.com> wrote:

Yup, it does. Nice detective work!
On Mon, Jul 13, 2015 at 7:07 AM, Britton Smith <brittonsmith@gmail.com> wrote:

Thanks for your help!
On Wed, Feb 17, 2016 at 11:56 PM, Pengfei Chen <madcpf@gmail.com> wrote:

Hi Matt and Britton,

I get the same error message with the same user_script. I'm sorry, but I still don't know the right way to correct this or what .index means. Could you please clarify that?

Thank you,
Pengfei
Hi Pengfei,

For any dataset, the .index attribute associated with it is the structure responsible for figuring out how the data is organized. For Enzo data, this contains the AMR hierarchy, including the positions, levels, and sizes of all of the AMR grids. The ds.index object does not get initialized automatically when the dataset is loaded, but only after it is accessed for the first time.

For this particular problem, ds.index needs to be created before any parallel analysis gets done. The solution is to put a line in your user_script with ds.index just after you load the dataset. In other words, your user_script should look like:

    ds = EnzoDatasetInMemory()
    ds.index
    # start analysis here

Please let us know if this doesn't work.

Britton
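As a standalone illustration of the lazy .index behavior described above (a sketch only; the dataset path is hypothetical and any on-disk Enzo output would do):

    import yt

    ds = yt.load("DD0046/DD0046")   # hypothetical path to an Enzo output
    ds.index                        # first access triggers hierarchy construction
    print(ds.index.grids.size)      # the flat array of grid objects is now populated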
participants (3)
- Britton Smith
- Matthew Turk
- Pengfei Chen