Hi,
Is there any way in yt to get a script to delete (or ignore) all the .yt
and .harray files and clear its memory? I run into errors when I'm
re-doing a plot, or when I've re-run the simulation and overwritten the last
data set I looked at with yt. I have to exit yt, delete the above files
by hand, and restart, which is a bit of a pain.
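In the meantime, a small helper along these lines can clear the cached files before re-loading. This is plain Python rather than a yt command, and it assumes the .yt and .harray files sit alongside the data:

```python
import glob
import os

def clear_yt_cache(data_dir="."):
    """Remove yt's cached .yt and .harray files so the next load()
    re-reads the (possibly overwritten) dataset from scratch."""
    removed = []
    for pattern in ("*.yt", "*.harray"):
        for path in glob.glob(os.path.join(data_dir, pattern)):
            os.remove(path)
            removed.append(path)
    return removed
```

Calling `clear_yt_cache("../DD0040")` (or whichever directory holds the outputs) before `load(...)` avoids the exit-and-delete-by-hand cycle, though it won't release memory already held by a loaded hierarchy.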
Elizabeth
Hi Bretton, Matt,
All your advice came in handy; thank you very much! I already did some
trials with the files you (Matt) gave me, and they worked. Now I'm running
experiments on other simulations (with particles, of course). Hopefully there won't
be any issues.
Greetings
Hi everyone,
I've come across an error while trying to use yt for parallel analysis on Kraken, and I'm wondering if it's something people have seen before.
I have a simple script that makes slices of a few different quantities. When I ran it with 12 tasks on Kraken, it worked fine for the first four datasets but crashed during the fifth one. I tried a simpler script that just makes a slice of density for that dataset, and it crashed again with the same error:
IndexError: arrays used as indices must be of integer (or boolean) type
When I ran this code again with only 1 task (still on Kraken, everything else the same), it worked fine. More of the output, as well as my job script and plotting script, are below.
Since it works in serial, and works for some datasets, the problem seems to be related to parallelism. Is there a simple explanation, or am I doing something wrong?
--Greg
(Last few lines of output; the parallel tasks print the same traceback interleaved, deduplicated here)
    mg, mc, mv, pos = self.find_max_cell_location(field, finest_levels)
  File "/lustre/scratch/proj/sw/yt/current/lib/python2.7/site-packages/yt-2.1stable-py2.7-linux-x86_64.egg/yt/data_objects/object_finding_mixin.py", line 69, in find_max_cell_location
    max_grid = self.grids[mg]
IndexError: arrays used as indices must be of integer (or boolean) type
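As an aside on the error itself: numpy raises this IndexError whenever the index array (`mg` here) is not integer-typed, which can happen if a per-task reduction comes back as a float. A minimal numpy illustration of the mechanism (not yt's actual code path):

```python
import numpy as np

grids = np.arange(10)

# An integer-typed index array works:
mg_ok = np.array([3], dtype="int64")
assert grids[mg_ok][0] == 3

# But a float-typed index array, e.g. the result of a reduction that
# silently promoted to float, raises the IndexError quoted above:
mg_bad = np.array([3.0])
try:
    grids[mg_bad]
except IndexError as err:
    print(err)  # arrays used as indices must be of integer (or boolean) type
```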
(Job script)
#!/bin/bash
#PBS -A TG-AST090040
#PBS -l size=12,walltime=2:00:00
#PBS -j oe
#PBS -N slicer
cd /lustre/scratch/oshea/nested_ics_real1/analysis
ls
pwd
export MPLCONFIGDIR=${PBS_O_WORKDIR}/.matplotlib/
[ ! -d ${MPLCONFIGDIR} ] && mkdir ${MPLCONFIGDIR}
module unload PrgEnv-pgi
module load PrgEnv-gnu
module load yt
aprun -n 12 python ./simple_slice.py --parallel
(simple_slice.py)
from yt.mods import *
DD = 40
fileName = "../DD%s/DD%s" % ((str(DD)).zfill(4), (str(DD)).zfill(4))
pf = load(fileName)
pc = PlotCollection(pf)
pc.add_slice("Density", 0)
pc.save("./%s" % str(DD).zfill(4))
Hi,
I've written a script to calculate the contours in the potential field but I'm having a couple of problems:
Firstly, the script is too slow: it has been running for several days and still hasn't completed. I don't think it has even reached the contour calculation; it is still reading in the data set.
It is printing to the screen:
yt : [INFO ] 2011-07-28 18:52:23,722 Getting field PotentialField from 1720
yt : [INFO ] 2011-07-28 18:52:51,350 Getting field dx from 1720
yt : [INFO ] 2011-07-28 18:53:00,316 Getting field dy from 1720
yt : [INFO ] 2011-07-28 18:53:05,817 Getting field dz from 1720
yt : [INFO ] 2011-07-28 18:53:18,258 Getting field x from 1720
yt : [INFO ] 2011-07-28 18:54:21,300 Getting field y from 1720
yt : [INFO ] 2011-07-28 18:55:23,956 Getting field PotentialField from 1720
yt : [INFO ] 2011-07-28 18:55:47,502 Getting field dx from 1720
yt : [INFO ] 2011-07-28 18:55:55,878 Getting field dy from 1720
yt : [INFO ] 2011-07-28 18:56:01,247 Getting field dz from 1720
yt : [INFO ] 2011-07-28 18:56:13,819 Getting field x from 1720
yt : [INFO ] 2011-07-28 18:57:00,063 Getting field y from 1720
.
.
.
This looks to me like it is somehow getting stuck on grid number 1720. Is that right? It creates a slice just fine, but a slice might well avoid that particular grid.
Secondly, occasionally when I run it I see this error:
---> 52 escapevel = sqrt(2.0)*data["SubtractBackgroundPotential"]/sqrt(fabs(data["SubtractBackgroundPotential"]))
53 return (escapevel)
54
NameError: global name 'sqrt' is not defined
I have no idea why it is suddenly complaining about sqrt. Sometimes a run is fine, and then I'll re-run and it'll hit this error. Is there an import that might only sometimes get called?
I've attached my script below and would appreciate any advice! (I'm about to return from Japan via Seoul so I may have my internet connection knocked out for a couple of days; I'm very sorry if there is a delay before I respond).
Thank you,
Elizabeth
#!/usr/bin/python
from yt.mods import *
from yt.analysis_modules.level_sets.api import identify_contours
import pickle
def _DiskRadius(field, data):
    center = 0.5*(data.pf.domain_right_edge - data.pf.domain_left_edge)
    DW = data.pf.domain_right_edge - data.pf.domain_left_edge
    dradius = na.zeros(data["x"].shape, dtype='float64')
    for i, ax in enumerate('xy'):
        r = na.abs(data[ax] - center[i])
        dradius += na.minimum(r, na.abs(DW[i]-r))**2.0
    na.sqrt(dradius, dradius)
    return dradius
add_field("DiskRadius", function=_DiskRadius)
def _SubtractBackgroundPotential(field, data):
    np = 500
    source = data.pf.h.disk((16,16,16), (0,0,1), 20, 15.6e-3/pf['kpc'])
    profile = BinnedProfile1D(source, np, "DiskRadius", 0.1, 18.0, log_space=False)
    profile.add_fields("PotentialField", weight="CellVolume")
    # print profile["PotentialField"], profile["DiskRadius"]
    pot = na.zeros(data["PotentialField"].shape, dtype='float64')
    for r in range(np):
        # unlike where, this produces a true/false array of the same size as data
        index = ((data["DiskRadius"] >= profile["DiskRadius"][r])
                 & (data["DiskRadius"] < profile["DiskRadius"][r+1]))
        # oddly, it's ok to then put this straight back into data to pull out the right index. Python magic
        backpot = profile["PotentialField"][r]+(profile["PotentialField"][r+1]-profile["PotentialField"][r])*(data["DiskRadius"][index]-profile["DiskRadius"][r])/(profile["DiskRadius"][r+1]-profile["DiskRadius"][r])
        pot[index] = data["PotentialField"][index]-backpot
    pot[(pot==0)] = -1.0e-10
    return pot
add_field("SubtractBackgroundPotential", function=_SubtractBackgroundPotential)
def _EscapeVelocity(field, data):
    escapevel = na.zeros(data["SubtractBackgroundPotential"].shape, dtype="float64")
    escapevel = sqrt(2.0)*data["SubtractBackgroundPotential"]/sqrt(fabs(data["SubtractBackgroundPotential"]))
    return (escapevel)
add_field("EscapeVelocity", function=_EscapeVelocity)
def _NegEscapeVelocity(field, data):
    # sign flip to allow multi-contour finding schemes work in yt
    return (-1*data["EscapeVelocity"])
add_field("NegEscapeVelocity", function=_NegEscapeVelocity)
# Grab data
fn = "GravPotential/DD0301/GT_BTAccel_256AMR4_PeHeat_sf5_SNe_0301"
pf = load(fn)
dd = pf.h.all_data()
min, max = dd.quantities["Extrema"]("NegEscapeVelocity")
contouredclouds = dd.extract_connected_sets("NegEscapeVelocity", 12, 15.0, max, log_space=False)
Hi everyone,
I was making some projections using the YT module installed on Nautilus and encountered an odd issue: The projection is broken up and appears to be sort of tiled and jumbled around. You can see the result here: http://galactica.icer.msu.edu/~bcrosby/data/1024box/images/RedshiftOutput00…
It looks similar to problems that arise from off-axis projections, but these projections are straight down the axis. I'm puzzled about what could be causing this, as the same script produced normal, smooth images earlier. Here's the plotting call that I used:
fn = "RD%04i/RedshiftOutput%04i" %(fnum,fnum)
pf = load(fn)
pc = PlotCollection(pf, center=[0.5,0.5,0.5])
for axis in range(3):
    p = pc.add_projection("particle_density", axis)
Any clues as to what might be going on?
Thanks,
Brian
Hi Bretton,
After I ran the command you gave me, this appears:
['0', '(do', 'not', 'modify')']
I guess the reason is what you said, but now I have another problem:
I can't find in the documentation any example I can use HOP with;
there are examples of commands to use, but none of them say what parameter file
they are using. If you can help, I'd be very grateful (I already am,
actually).
Thank you.
Miguel
Hi yt users,
I'm new to yt, and I'm already having problems trying to use HOP.
When I run these commands in Python, this comes out:
>>> from yt.mods import *
>>> from yt.analysis_modules.halo_finding.api import *
>>> pf = load("data0004")
>>> halo_list = HaloFinder(pf)
yt INFO 2011-07-21 19:38:42,838 Getting the binary hierarchy
yt INFO 2011-07-21 19:38:42,846 Finished with binary hierarchy reading
yt INFO 2011-07-21 19:38:42,849 Adding Phi to list of fields
Warning: invalid value encountered in sqrt
Warning: invalid value encountered in sqrt
yt WARNING 2011-07-21 19:38:43,009 No particle_type, no creation_time, so not distinguishing.
yt INFO 2011-07-21 19:38:43,013 Getting ParticleMassMsun using ParticleIO
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/yt-unknown/src/yt-hg/yt/analysis_modules/halo_finding/halo_objects.py", line 1821, in __init__
    self._mpi_allsum((self._data_source["ParticleMassMsun"][select]).sum(dtype='float64'))
  File "/usr/local/yt-unknown/src/yt-hg/yt/data_objects/data_containers.py", line 279, in __getitem__
    self.get_data(key)
  File "/usr/local/yt-unknown/src/yt-hg/yt/data_objects/data_containers.py", line 1918, in get_data
    self.particles.get_data(field)
  File "/usr/local/yt-unknown/src/yt-hg/yt/data_objects/particle_io.py", line 92, in get_data
    conv_factors)
  File "/usr/local/yt-unknown/src/yt-hg/yt/frontends/enzo/io.py", line 154, in _read_particles
    filenames, ids, conv_factors = zip(*sorted(zip(filenames, ids, conv_factors)))
ValueError: need more than 0 values to unpack
Does anyone know how to fix this problem?
Thanks in advance, Miguel Gonzalez.
Hi yt users,
I recently moved from the stable 2.1 branch to the development 2.2 branch.
Then I tried to run a simple slice script that worked with 2.1, but it crashes.
My simple script is as follow:
-------------------------------------
from yt.mods import *
from yt.analysis_modules.api import EnzoSimulation
import matplotlib.pylab as pylab
pf = load("DD0000/DD0000")
# create density slices
pc = PlotCollection(pf, center=[0.5,0.5,0.5])
pc.add_slice("Density", 0)
pc.save("DD0000")
--------------------------------------
and the error message is
Traceback (most recent call last):
  File "SliceDenItr.py", line 1, in <module>
    from yt.mods import *
  File "/home/jhchoi/common/lib/python2.6/site-packages/yt-2.2dev-py2.6-linux-x86_64.egg/yt/mods.py", line 43, in <module>
    from yt.utilities.cookbook import Intent
  File "/home/jhchoi/common/lib/python2.6/site-packages/yt-2.2dev-py2.6-linux-x86_64.egg/yt/utilities/cookbook.py", line 30, in <module>
    import argparse
ImportError: No module named argparse
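For context: argparse only entered the standard library in Python 2.7 (and 3.2), and the paths above show a Python 2.6 install, so the 2.2 development branch's `import argparse` has nothing to find. A small guard along these lines shows the situation and the usual fix (installing the separate argparse backport package):

```python
import sys

try:
    import argparse  # in the standard library from Python 2.7 / 3.2 on
except ImportError:
    # Under Python 2.6 the module comes from a separate backport:
    #   pip install argparse
    sys.exit("argparse not found; install the backport with "
             "'pip install argparse' or switch to Python 2.7")

# If the import succeeded, the module works as usual:
parser = argparse.ArgumentParser(description="argparse is available")
print(parser.description)
```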
Does anyone know the reason for this error and a way to fix it?
Thanks in advance,
Junhwan Choi
--
--------------------------------------------------------------
Jun-Hwan Choi, Ph.D.
Department of Physics and Astronomy, University of Kentucky
Tel: (859) 897-6737 Fax: (859) 323-2846
Email: jhchoi(a)pa.uky.edu URL: http://www.pa.uky.edu/~jhchoi
--------------------------------------------------------------
Hi,
I'm trying to install yt on the xt4 machine in Tokyo, which obnoxiously refuses to let you use wget.
I scp-ed a copy from my desktop to the xt4 and then told install_script.sh where to find yt so it didn't try to download the packages. I also removed 'done' from each of the packages in src.
Unfortunately, I get:
gcc -fPIC -fPIC -o bzip2. bzip2.o -L. -lbz2
bzip2.o: file not recognized: File format not recognized
xt4 does use modules for loading packages, but I don't know if I'm missing a module or if my hand-modified setup is flawed.
Elizabeth
Hi,
I was wondering if it is possible to specify two different sources of
input to create a derived field. Currently, with a single source, I have a
derived field defined as:
def _HII_HFraction(field, data):
    return data["HII_Density"]/(data["HI_Density"]+data["HII_Density"])
add_field("HII_HFraction", function=_HII_HFraction,
          units=r"\frac{\rho_{HII}}{\rho_H}")
And I can access the total quantity in parallel:
pf=load(file)
dd = pf.h.all_data()
dd.quantities["TotalQuantity"]("HII_HFraction")[0]
But I'm dealing with
pf1=load(file1)
pf2=load(file2)
pf1 has the HI_Density field data and pf2 has the HII_Density field data.
Is there a way to create a derived field and use TotalQuantity to operate
on the data in parallel?
From
G.S.
P.S. I guess my alternative is to glue the two HDF5 files into one, but
I want to avoid that if possible.
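One serial workaround, assuming the two outputs share an identical grid layout so the flattened field arrays line up cell by cell, is to pull the raw arrays out of each hierarchy and combine them with plain numpy outside the derived-field machinery. The function below is an illustrative sketch, not a yt API:

```python
import numpy as np

def hii_h_fraction(hi_density, hii_density):
    """Combine HI densities from one output and HII densities from
    another (assumed to lie on identical grids, in identical order)
    into the ionized-hydrogen fraction. Returns the per-cell fraction
    and its sum, a serial stand-in for TotalQuantity."""
    hi = np.asarray(hi_density, dtype="float64")
    hii = np.asarray(hii_density, dtype="float64")
    frac = hii / (hi + hii)
    return frac, frac.sum()
```

In practice `hi_density` and `hii_density` would come from something like `pf1.h.all_data()["HI_Density"]` and `pf2.h.all_data()["HII_Density"]` (hypothetical usage, valid only if both outputs flatten their cells in the same order). This gives up yt's parallel decomposition, so it is a stopgap rather than a replacement for gluing the HDF5 files.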