Hi all,
Currently, yt only provides the default cubic spline kernel function for smoothing. I’m thinking about adding more kernel functions, making it possible to choose among them, and also letting users supply their own. In my own research, for example, a quintic spline kernel is used, so I expect this functionality would be useful to other users as well.
My current plan goes like this:
1. Add several common kernel functions besides the current one.
2. Build a naming system.
3. Put a kwarg in the appropriate places to pass the name of the kernel function. For now, I’m thinking about putting it in the ParticleSmoothOperation class as a parameter of __init__.
4. Reserve a special kwarg value that can be used to pass user-provided kernel functions (a rough sketch is below).
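To make steps 2-4 a bit more concrete, here is a rough sketch of what I have in mind. The names kernel_name, sph_kernels, and get_kernel are purely illustrative and are not existing yt API:

import numpy as np

def cubic_spline(q):
    # Standard cubic spline kernel shape (unnormalized), support q = r/h in [0, 1)
    w = np.zeros_like(q, dtype=float)
    inner = q < 0.5
    w[inner] = 1.0 - 6.0 * q[inner]**2 + 6.0 * q[inner]**3
    outer = (q >= 0.5) & (q < 1.0)
    w[outer] = 2.0 * (1.0 - q[outer])**3
    return w

# Step 2: a simple naming system mapping names to kernel functions.
sph_kernels = {"cubic": cubic_spline}

# Steps 3-4: resolve the kwarg, reserving the name "custom" for a
# user-provided callable.
def get_kernel(kernel_name="cubic", custom_kernel=None):
    if kernel_name == "custom":
        return custom_kernel
    return sph_kernels[kernel_name]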
I’m not very experienced with yt, so please correct me if there are any mistakes or omissions! Any suggestions are much appreciated!
Thanks,
Bili
Hi folks,
I'm trying to install the dev branch of yt and I'm hitting some stumbling
blocks. (For those of you who I talked to about this in May, the short
version is that I've managed to recreate the same issues I had on my laptop
but now on my desktop.)
The first thing I tried was the
http://bitbucket.org/yt_analysis/yt/raw/yt/doc/install_script.sh script,
which fails at installing HDF5 (http://pastebin.com/QSnKqHtW). The same
thing happens for the stable branch install script, and when I try this on
a different machine (laptop vs desktop, but similar setups and python
installations).
I can install the stable branch no problem just by doing pip install yt. But
I'd still like the dev branch and/or source code. When I do hg clone
https://bitbucket.org/yt_analysis/yt and then python setup.py develop, I
get:
[oak:~/yt] molly% python setup.py develop
Cython is a build-time requirement for the source tree of yt.
Please either install yt from a provided, release tarball,
or install Cython (version 0.22 or higher).
You may be able to accomplish this by typing:
pip install -U Cython
Which, naturally, I did, and it says it's successfully installed, but I get
the same error when I then rerun setup.py. (I think this is the same
path we went through when trying to get it working on my laptop, which is
why we eventually used conda to get the stable version + source on my
laptop.) FWIW, I don't seem to be able to uninstall cython, though it's
somewhat unclear why:
[oak:~/yt] molly% pip uninstall cython
Not uninstalling Cython at
/usr/stsci/ssb/python/lib/python2.7/site-packages/Cython-0.21.1-py2.7-macosx-10.6-x86_64.egg,
outside environment /Users/molly/ssbvirt/ssb-osx
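One thing I can check (though I'm not sure this is actually the cause) is which Cython the interpreter picks up:

# Just a guess at the cause, not a confirmed diagnosis: check which Cython
# this python actually imports, and from where.
import Cython
print(Cython.__version__)
print(Cython.__file__)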
Soo...suggestions? It looks like I can get the binary no problem but if I
want the source + dev branch, there are issues.
--Molly
New issue 1055: RAMSES units error with boxlen != 1
https://bitbucket.org/yt_analysis/yt/issues/1055/ramses-units-error-with-bo…
Ji-hoon Kim:
Dear All,
The current way of assigning length_unit and density_unit doesn't seem to work properly for a RAMSES dataset with boxlen != 1. FYI, I am testing yt-3.2 on the Ramses entry for the AGORA isolated disk comparison (no RT). As I understand it, yt-dev had been working fine for this particular dataset until PR #1610 was merged. I think part of the problem -- though maybe not the entirety of the problem -- is that "unit_l" gets multiplied by boxlen twice (frontend/ramses/data_structures.py; first on line 498, then on line 521). I think this alone may warrant a reinvestigation of the RAMSES units in yt.
```
#!python
length_unit = self.parameters['unit_l'] * boxlen  # boxlen enters length_unit here...
density_unit = self.parameters['unit_d'] / boxlen**3
mass_unit = density_unit * length_unit**3
...
self.density_unit = self.quan(density_unit, 'g/cm**3')
self.length_unit = self.quan(length_unit * boxlen, "cm")  # ...and again here
self.mass_unit = self.quan(mass_unit, "g")
```
I tried the seemingly most obvious fix of removing the second boxlen multiplication, but it still fails, now with densities that are too low (off by boxlen**3).
```
#!python
>>> pf.all_data()["density"]
YTArray([ 2.87245413e-38, 2.87426855e-38, 2.87426181e-38, ...,
3.07532664e-38, 3.07330200e-38, 3.07943311e-38]) g/cm**3
>>> pf.all_data()["density"].median()
3.09606619982e-33 g/cm**3
```
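To make the bookkeeping explicit, here is a toy comparison of the current double-multiplication path versus a single-multiplication path like the pymses convention quoted in the PS below. The numbers are made up purely for illustration and are not taken from the AGORA dataset:
```
#!python
# Made-up illustrative values; not taken from any real RAMSES output.
unit_l, unit_d, boxlen = 3.0e24, 1.0e-29, 100.0

# Current code path: boxlen enters length_unit twice and divides density_unit.
length_unit_current = (unit_l * boxlen) * boxlen
density_unit_current = unit_d / boxlen**3

# Single-multiplication (pymses-like) path: boxlen enters length_unit once,
# and density_unit is left as unit_d.
length_unit_single = unit_l * boxlen
density_unit_single = unit_d

print(length_unit_current / length_unit_single)    # boxlen
print(density_unit_current / density_unit_single)  # 1 / boxlen**3
```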
FWIW, I did learn about the history behind data_structures.py by reading through previous conversations dating back to last year -- among Ricarda, Nathan, Desika, Ben, Sam, Matt, and others. Especially after going through its lengthy history of multiple changes and reverts, I wasn't certain that I could *find and verify* the correct way of setting units for all Ramses users and datasets out there. But once Ricarda and the broader yt-Ramses user base come up with the **most general** way of assigning units, I will be more than happy to test the fix on the dataset I have.
Many thanks as always to the yt community!
Best wishes,
Ji-hoon
* * *
PS. Below are just my two cents.
IMHO, *as far as length_unit and density_unit are concerned*, I think the pre-PR#1610-era solution is more consistent with pymses (snippet in https://bitbucket.org/yt_analysis/yt/issues/939/units-error-in-ramses-front…). Please see below for a comparison.
However, there is one important outstanding issue. Judging from the comments in the pymses snippet, the “unit_mass” in pymses seems to be *only* for DM particles, and the boxlen renormalization is *only* applied when computing “unit_mass”. (Am I correct here?) In contrast, “mass_unit” in yt could be used for any entity, including star particles. This might explain the issue Desika originally reported with stellar masses in another AGORA-disk run (http://lists.spacepope.org/pipermail/yt-dev-spacepope.org/2015-June/019345.…). Unfortunately, however, this is pure speculation based *not* on extensive exposure to various Ramses datasets, *but* only on my reading of the pymses snippet.
[A] Pre-PR#1610-era length_unit and density_unit (e.g. changeset 242b9f739613):
```
#!python
length_unit = self.parameters['unit_l']
density_unit = self.parameters['unit_d']
mass_unit = density_unit * (length_unit * boxlen)**3
...
self.density_unit = self.quan(density_unit, 'g/cm**3')
self.length_unit = self.quan(length_unit * boxlen, "cm")
self.mass_unit = self.quan(mass_unit, "g")
```
[B] Compare with the pymses code snippet:
```
#!python
info_dict["unit_length"] = float(par_dict["unit_l"])*C.cm
info_dict["unit_density"] = float(par_dict["unit_d"])*C.g/C.cm**3
...
# Boxlen renormalisation : used only for dark matter particles,
# so we do this just before computing the mass unit
info_dict["unit_length"] = info_dict["unit_length"] * info_dict["boxlen"]
info_dict["unit_mass"] = info_dict["unit_density"] \
* info_dict["unit_length"]**3
```
I am sure that Ricarda and other experienced Ramses users will be better positioned to suggest the *most general* solution. I look forward to hearing their insights. (Questions I couldn't quite answer for myself: Would any suggested changes cause trouble in other units such as velocity, pressure, or the B-field? If my reading of the pymses code is correct, how could yt possibly assign two mass_units?)
Hi all,
Does anyone remember why max_level was removed from the projection data
object?
It looks like the max_level keyword argument in ProjectionPlot is
vestigial: it is currently present in the API but doesn't actually do
anything.
I don't think it would be crazy to make max_level work again, but I don't have
context about why it was removed in the first place.
Thanks for any insight anyone can provide,
-Nathan
New issue 1054: Derived magnetic quantities are broken under the chombo frontend
https://bitbucket.org/yt_analysis/yt/issues/1054/derived-magnetic-quantitie…
Mark Krumholz:
Various quantities that are derived from the magnetic field, including 'plasma_beta' and 'magnetic_pressure', do not work with chombo in the current development branch. The problem is a naming convention conflict: yt/fields/magnetic_field.py defines 'magnetic_energy' to be the magnetic energy density (i.e., energy per unit volume), but the chombo frontend calls all per unit volume quantities X_density (e.g., 'magnetic_energy_density'), and redefines 'magnetic_energy' to be the energy per unit mass, consistent with how 'kinetic_energy' and 'thermal_energy' are defined. This breaks everything in magnetic_field.py that expects the 'magnetic_energy' field to be an energy density.
As far as I can tell this issue is specific to the chombo frontend. From a code standpoint the fix is trivial: just get rid of the 'magnetic_energy' field in the chombo frontend, and revert to the one defined generically. However, this would break the naming convention for chombo, because it would make 'magnetic_energy' an energy per unit volume, while 'thermal_energy' and 'kinetic_energy' are energies per unit mass. The cleanest fix is therefore probably to change all the energy quantity names in the same way, for example changing 'magnetic_energy' to 'specific_magnetic_energy', and similarly for the other quantities. This would probably be a good change on its own, since it's a bit confusing to define 'magnetic_energy' as an energy per unit mass, and the same goes for the other quantities.
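To illustrate the proposed renaming, here is a rough schematic. These are not the actual field definitions in yt or the chombo frontend, just a sketch of the intended convention:
```python
import numpy as np

# Schematically, the generic 'magnetic_energy' field is an energy *density*,
# B**2 / (8 * pi), which 'magnetic_pressure' and 'plasma_beta' build on.
def _magnetic_energy(field, data):
    return (data["magnetic_field_x"]**2 +
            data["magnetic_field_y"]**2 +
            data["magnetic_field_z"]**2) / (8.0 * np.pi)

# Under the proposed naming, the per-unit-mass quantity would get an
# unambiguous name instead of shadowing 'magnetic_energy'.
def _specific_magnetic_energy(field, data):
    return _magnetic_energy(field, data) / data["density"]
```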
The problem is that this fix is likely to break some analysis scripts that rely on the current naming convention. I would therefore like to ask other chombo users to weigh in on the best course of action. If there is consensus that it's ok to change the definitions of the energies, I can submit a patch quite easily.
Responsible: atmyers
New issue 1053: add_field fails in Python 3
https://bitbucket.org/yt_analysis/yt/issues/1053/add_field-fails-in-python-3
astrofrog:
The following example:
```python
import os
from yt import load
ds = load(os.path.join('DD0010', 'moving7_0010'))
def _dust_density(field, data):
return data["density"].in_units('g/cm**3') * 0.01
ds.add_field("dust_density", function=_dust_density, units='g/cm**3')
```
fails in Python 3 with the following error:
```python
Traceback (most recent call last):
File "derived.py", line 9, in <module>
ds.add_field("dust_density", function=_dust_density, units='g/cm**3')
File "/Users/tom/miniconda3/envs/production/lib/python3.4/site-packages/yt/data_objects/static_output.py", line 910, in add_field
deps, _ = self.field_info.check_derived_fields([name])
File "/Users/tom/miniconda3/envs/production/lib/python3.4/site-packages/yt/fields/field_info_container.py", line 343, in check_derived_fields
self.ds.derived_field_list = list(sorted(dfl))
TypeError: unorderable types: str() < tuple()
```
This is with yt 3.2 in Python 3.4. The issue is that in Python 3, strings can no longer be compared to tuples, so the sorting fails.
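Here is a standalone illustration of the behavior, along with one possible workaround via a sort key. The `normalize` helper is just illustrative, not a proposed yt patch:
```python
# Mixing plain field names and (field_type, field_name) tuples, as
# derived_field_list can, breaks sorted() under Python 3.
dfl = ["dust_density", ("gas", "density")]

try:
    sorted(dfl)
except TypeError as err:
    print(err)  # unorderable types: str() < tuple()

# One possible workaround: map every entry to a tuple before comparing.
def normalize(name):
    return name if isinstance(name, tuple) else ("", name)

print(sorted(dfl, key=normalize))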
Hi all,
There should now be conda packages for yt 3.2 in the official anaconda
package repository. If you use anaconda, you should be able to get an
installation of yt 3.2 by doing "conda install yt".
If you already have yt installed via conda, you can do "conda update yt" to
update your installation in-place. If you have a pip-managed installation
of yt (for example, if you are running the development version directly
from the yt source), you might need to do "pip uninstall yt" before
doing "conda install yt".
These packages should work on the 64-bit Linux, OS X, and Windows flavors of
Anaconda Python. Please let us know if you have any issues so we can fix
them in the next release.
I'd like to personally thank Continuum Analytics for creating these
binaries and distributing them in the official Anaconda package repository.
-Nathan
New issue 1052: Field access tests intermittently fail under python3
https://bitbucket.org/yt_analysis/yt/issues/1052/field-access-tests-intermi…
Nathan Goldbaum:
There seems to be an issue with the way the field access tests work under python3.
This can (only sometimes, unfortunately) be reproduced by running `nosetests fields` and waiting until the `gas` field tests run. The first failure will usually be for accessing `('gas', 'averaged_density')`.
Some hints:
* only happens for field access tests with `nprocs != 1`. This doesn't actually use multiple processors; instead, the stream dataset is segmented into multiple grids, so it does have something to do with the selection system in nontrivial grid hierarchies.
* Might have something to do with the `__hash__` function implementation for selector objects. The definition is in `selection_routines.pyx`.
* Might have something to do with hash randomization in Python 3. I was able to get the errors to stop happening by setting `PYTHONHASHSEED=0` before running nose (see the snippet after this list).
* Happens intermittently. I was able to reproducibly trigger this earlier today but am now no longer able to trigger it at all.
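For reference, this is how I tested the hash-randomization hypothesis (just a way to reproduce with a fixed seed, not a fix):
```python
# Run the field tests with hash randomization disabled; if the intermittent
# failures go away, that supports the hash-randomization hypothesis.
import os
import subprocess

env = dict(os.environ, PYTHONHASHSEED="0")
subprocess.call(["nosetests", "fields"], env=env)
```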
I'm honestly mystified. I'm going ahead and creating an issue in case my findings are useful for others who want to investigate.
Unfortunately this blocks turning on the test bot for python3 CI.