Hi Carla,

This error from your traceback:
  File "/beegfs/home/user/yt-conda/src/yt-git/yt/frontends/enzo/data_structures.py", line 828, in _parse_enzo2_parameter_file
    self.refine_by = self.parameters["RefineBy"]
P005 yt : [ERROR    ] 2019-05-28 14:12:18,759 KeyError: 'RefineBy'

suggests that whatever file is being opened is missing that parameter. Can you try loading some of the datasets individually? My only other thought is that somehow the EnzoSimulation is trying to open a different file than the one that is intended. For example, it could be trying to open something like 'RD0017/RD0017' instead of 'RD0017/RedshiftOutput0017'.
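A quick way to check would be something like this (the path is just an example; substitute any one of your outputs):
import yt

# Load a single output directly, bypassing EnzoSimulation, to see whether
# its parameter file parses on its own and actually contains RefineBy.
ds = yt.load("RD0017/RedshiftOutput0017")
print(ds.parameters.get("RefineBy"))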

If that doesn't work, then perhaps we can try to work this out on the Slack channel. I'm home sick today, but should be around tomorrow or the next day.

Britton

On Tue, May 28, 2019 at 3:41 PM C Bernhardt <bernhardt.phd@gmail.com> wrote:
Hi Britton,

> For whatever reason, it looks like the dataset parameter files are missing the RefineBy parameter, which is rather peculiar. If you add the following line to your parameter file:
> RefineBy = 2
> it should at least get by this error, although it may crash somewhere else if the parameter file is incomplete.

Thanks for the suggestion, but this was already in my parameter file before the run (and still remains). My best guess is that some of the data got corrupted in the transfer. But any other suggestions are welcome!

Kind regards,
Carla 

On Tue, May 28, 2019 at 4:32 PM Britton Smith <brittonsmith@gmail.com> wrote:
Hi Carla,

For whatever reason, it looks like the dataset parameter files are missing the RefineBy parameter, which is rather peculiar. If you add the following line to your parameter file:
RefineBy = 2
it should at least get by this error, although it may crash somewhere else if the parameter file is incomplete.
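If you want to add it to all of your outputs at once, a quick loop like this should do it (untested, so treat it as a sketch; the glob pattern is a guess at your output naming, so adjust it):
import glob

# Append a RefineBy line to any parameter file that does not already have one.
for pfile in sorted(glob.glob("RD????/RedshiftOutput????")):
    with open(pfile) as f:
        has_refine_by = any(line.split("=")[0].strip() == "RefineBy" for line in f)
    if not has_refine_by:
        with open(pfile, "a") as f:
            f.write("RefineBy               = 2\n")
        print("added RefineBy to", pfile)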

Also, I see why simply replacing the parameter files was not good enough. It looks like yt checks for the existence of the .hierarchy files to confirm that it's an Enzo dataset. You could probably get away with adding empty .hierarchy files for each directory, but I can't confirm that at the moment.
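Something along these lines would create them (again untested, and assuming the usual RD####/RedshiftOutput#### naming):
import glob, os

# Create an empty .hierarchy file next to each parameter file so that yt's
# Enzo frontend recognizes the output (the naming pattern is an assumption).
for pfile in sorted(glob.glob("RD????/RedshiftOutput????")):
    hierarchy = pfile + ".hierarchy"
    if not os.path.exists(hierarchy):
        open(hierarchy, "w").close()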

Britton

On Tue, May 28, 2019 at 2:23 PM C Bernhardt <bernhardt.phd@gmail.com> wrote:
Hi Britton and yt-users,

Thank you for your advice. I tried including find_outputs=True again; no difference. I tried creating redshift dump directories and copying the dataset parameter files there (FYI, that gives the error: P005 yt : [ERROR    ] 2019-05-24 10:21:08,895 Couldn't figure out output type for RD0017/RedshiftOutput0017). I tried putting the location of the dataset parameter files in storage, which led to the same result as not having the files mentioned at all.

Since then I have given up on making this work, copied the files back from storage, and tried running from the beginning. That produces this error. Has anyone come across this error and worked out what causes it? I had previously run Rockstar successfully on this dataset before moving the files. Any suggestions would be appreciated.

Kind regards,
Carla

On Tue, May 21, 2019 at 1:18 PM Britton Smith <brittonsmith@gmail.com> wrote:
Hi Carla,

It looks like the time-series object is failing to find any valid datasets. You can test this by removing the call to the RockstarHaloFinder and doing this after es.get_time_series:
for ds in es:
    print(ds)

I'm guessing that will not print anything. I imagine you'll just need to put find_outputs=True back, but if that's not the case, then something has changed about the way the datasets are being named such that the EnzoSimulation can't find anything with the names it expects.
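Put together, the test would look roughly like this (a sketch only; swap in the parameter file name and any keywords your script normally passes):
import yt

# Load the simulation object (not a single dataset) and build the time series.
# "simulation.par" is a placeholder for whatever parameter file your script uses.
es = yt.simulation("simulation.par", "Enzo", find_outputs=True)
es.get_time_series()

# If nothing prints here, the time series found no valid datasets.
for ds in es:
    print(ds)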

Once you get that fixed, I think the other issue you'll encounter is that Rockstar will be confused about which dataset to resume from. You'll need to make sure that the datasets.txt file has all the previous datasets in it. I think the only way to accomplish this is to recreate the original dataset directories and just put the dataset parameter files inside them.
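For example, something like this would rebuild the directories with only the parameter files in them (a rough sketch; the storage path and naming pattern are placeholders, so adjust both):
import glob, os, shutil

# Recreate each output directory in the run area and copy in only its parameter file.
storage = "/path/to/storage"
for pfile in sorted(glob.glob(os.path.join(storage, "RD????", "RedshiftOutput????"))):
    outdir = os.path.basename(os.path.dirname(pfile))
    os.makedirs(outdir, exist_ok=True)
    shutil.copy(pfile, outdir)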

Britton

On Mon, May 20, 2019 at 3:30 PM C Bernhardt <bernhardt.phd@gmail.com> wrote:
Dear yt-users,

I am using this script to run Rockstar, and it has worked great for me for years. Here is the situation now: I ran Rockstar a few weeks ago, and since then I have several additional Enzo data dump outputs that I would like to add to my merger trees (done with consistent-trees). To meet data quota requirements, I needed to move several of my outputs to storage (I already have the Rockstar outputs for these), which I believe is causing my Rockstar restart to fail (see error here).

In the script mentioned above, I tried removing find_outputs=True on line 51 and adding the keyword initial_redshift=11.749 to get_time_series() on line 59. I keep getting the same error. I have also updated datasets.txt, as well as rockstar.cfg and restart.cfg, with the desired RESTART_SNAP and NUM_SNAPS.
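For reference, the relevant calls now look roughly like this (paraphrased; the parameter file name is a placeholder, not my actual file):
import yt

# Roughly what the relevant lines look like after my edits: find_outputs=True
# removed, initial_redshift added to restrict the time series.
es = yt.simulation("simulation.par", "Enzo")
es.get_time_series(initial_redshift=11.749)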

What am I missing? Is there a way to restart Rockstar with some of the previous files (already processed by Rockstar) missing? Thanks in advance.

Kind regards,
--
Carla Bernhardt
PhD Student
Universität Heidelberg
ZAH Institut für Theoretische Astrophysik
_______________________________________________
yt-users mailing list -- yt-users@python.org
To unsubscribe send an email to yt-users-leave@python.org