The documentation sprint is next Monday and Tuesday for those of you who
want to participate. I'll send out another email regarding that in the
next day or so.
In preparation for that, though, I wanted to request input from the
developer community on something related to the docs.
Right now, the cookbook page contains a lot of recipes for doing various
things, and I think it is hugely beneficial to the community to maintain
this (I personally use this page a lot too!). However, with the advent of
ipython notebooks over the last year, we are faced with a question: should
we move toward incorporating more notebooks into our documentation, and
specifically, do we want to transfer the existing cookbook to a series
of notebooks for each task?
Pros:
--Portability: users can download an entire notebook both to see how it
works and to execute it locally on their own machine
--Illustrative: interim steps in a recipe can produce output that shows
up inline in the notebook, instead of a single script that generates an
image/output only at the end (as is the case in the current cookbook)
--Narrative: notebooks provide more space for narrating each step, instead
of confining any narrative to comments in the recipe itself
Cons:
--Work: it is going to take a decent amount of work to move all of the
recipes over from the existing cookbook to individual notebooks
--Bulking up the repo: in the current paradigm, images associated with
each recipe are generated dynamically on the server by executing each
script, thereby minimizing the number of files that need to be tracked by
mercurial. By moving to notebooks with embedded images, we'd potentially
increase the footprint of the repository substantially, especially if
there were frequent updates of individual notebooks.
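The repo-bloat concern could be softened by stripping embedded output before committing. Here is a minimal sketch, assuming the notebook JSON layout (either the old worksheet-based format or a newer top-level "cells" list); `strip_outputs` is a hypothetical helper, not an existing tool:

```python
import json

def strip_outputs(nb):
    """Remove embedded outputs/images from code cells, in place."""
    # Older IPython notebooks keep cells under "worksheets";
    # newer ones keep them at the top level -- handle both.
    cell_lists = [ws.get("cells", []) for ws in nb.get("worksheets", [])]
    if "cells" in nb:
        cell_lists.append(nb["cells"])
    for cells in cell_lists:
        for cell in cells:
            if cell.get("cell_type") == "code":
                cell["outputs"] = []
    return nb

# Usage sketch: strip before committing so the repo tracks only source.
# with open("recipe.ipynb") as f:
#     nb = json.load(f)
# with open("recipe.ipynb", "w") as f:
#     json.dump(strip_outputs(nb), f, indent=1)
```

Run as a commit hook, this would keep the tracked files close to the size of the current recipe scripts.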
I also like the yt bootcamp notebooks that Matt put together a year ago. I
think they are great for getting new users up to speed on how to use
various aspects of the code. Perhaps these notebooks could make their way
into the beginning of the cookbook for a more streamlined approach to the
documentation.
So now is your chance to vote:
Move cookbook to ipython notebooks? +/- 0-1?
Move yt bootcamp to cookbook? +/- 0-1?
University of Arizona
New issue 687: Load test data without an explicit path
It would be nice if we could load data that lives in the test_data_dir config path without using the full, explicit path.
That way this would work from any folder:
pf = load('IsolatedGalaxy/galaxy0030/galaxy0030')
+/- 1? I'll happily implement this if no one has any objections.
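The lookup could be as simple as trying the path as given and then falling back to the configured directory. A sketch, assuming nothing about yt internals; the function name and the config plumbing are hypothetical:

```python
import os

def resolve_data_path(fn, test_data_dir):
    """Resolve fn against cwd first, then against test_data_dir.

    test_data_dir would come from the yt config in practice.
    """
    if os.path.exists(fn):
        return fn
    candidate = os.path.join(test_data_dir, fn)
    if os.path.exists(candidate):
        return candidate
    return fn  # fall through and let load() raise its usual error
```

`load()` could call this once on its argument before doing anything else, so existing absolute/relative paths keep working unchanged.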
Titus Brown just posted this to the SWCarpentry discussion list:
I thought people here may be interested. And, it occurs to me that in
yt, these rules are things we have attempted to support, but without
codifying them -- and, we could do a better job of supporting them.
In particular, I think rules 5 and 7 are things we could do a better
job of supporting.
As an example:
* FRBs are difficult to store
* Underlying slices/projections/etc are difficult to store (the raw
data is not, but the intermediate products are)
* Profiles are not easily saved
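Until the viz/data split happens, intermediate 2D products can at least be dumped generically. A minimal sketch using numpy; the function names and the `.npz` layout are illustrative, not existing yt API:

```python
import numpy as np

def save_intermediate(arr, fields, fn="intermediate.npz"):
    """Persist an intermediate product (e.g. the array behind an
    FRB or a profile) plus the field names needed to re-plot it."""
    np.savez(fn, data=np.asarray(arr), fields=np.asarray(fields))

def load_intermediate(fn="intermediate.npz"):
    npz = np.load(fn)
    return npz["data"], list(npz["fields"])
```

This only covers the raw arrays, not the full object state, but it is enough to satisfy "record intermediate results" for most plotting workflows.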
I think over time as we split the viz layer further from the data, and
make the data accessible more easily through the viz, these will be
improved. But, it's something to think about. And, the article is a
good read, too!
New issue 685: add codeline on install page explicitly telling the user to run the activate script
Some users just scan the install docs for the lines they need to run in order to set up yt, instead of actually reading all of the text. Let's add a single line to the install yt page showing, in code-mode, how to run the activate script, since without running it yt will break.
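Something like the following, shown as a literal code block on the install page (the path below assumes the install script's default layout, so treat it as an example to adjust):

```shell
# Activate the yt environment before using yt; without this step
# the yt commands will not be on your PATH and imports will fail.
source yt-x86_64/bin/activate
```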
So I've been playing with the new conda installed yt. In principle, I
really like the idea: a simple, completely isolated python distribution
that does dependency management. However, I'm running into some issues
that could become big problems for yt down the road. My issue is that
I'm trying to install Dedalus into the yt-provided conda install. However,
to do that, I need to interface with external libraries--ones that I do
*not* want to have built as conda packages. The two that come up first are
MPI and FFTW. Since the Dedalus FFTW bindings are actually used in yt for
its FFT analysis, this is relevant for yt too.
At a supercomputing center, one would very much want to use the center's
MPI and FFT libraries rather than some binary blob from conda, so I think
we need to figure out how to support this. The specific problem I'm
hitting is that if
I build FFTW (for example) myself, I want to link against the system libm.
But conda provides its OWN libm which, it turns out, is not compatible
with the one FFTW was built against: I believe conda's libm was built
against a different glibc than my system libm, which FFTW links to. I
don't know how to solve this problem, since I don't really
understand dynamic libraries, nor conda. Does anyone have any ideas about
this? Is this something I should take up with the conda list?
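One quick way to see which libm a given Python actually resolves is to ask the loader directly. This is a diagnostic sketch, not a fix; on a conda python the resolved library can live inside the conda prefix rather than in the system paths:

```python
import ctypes
import ctypes.util

# Ask the dynamic loader which libm this process picks up.
libm_name = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_name)

# Sanity check that we really got a working math library:
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
```

Comparing the resolved path here against `ldd` output for your self-built FFTW would show whether the two ends of the link are picking up different copies of libm.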
My apologies if this is unclear. I am not fully sure I understand what is
going on myself.
New issue 681: Cannot link against png provided by ubuntu 13.04 libpng12-devel package
I think this is due to multiarch, which is bandaided over in setup.py.
$ python setup.py develop
Since you are using multiarch distro it's hard to detect
whether library matches the header file. We will assume
it does. If you encounter any build failures please use
proper cfg files to provide path to the dependencies
Reading PNG location from png.cfg failed.
Please place the base directory of your
PNG install in png.cfg and restart.
(ex: "echo '/usr/local/' > png.cfg" )
You can locate the path by looking for png.h
$ locate libpng.so
$ locate png.h
@xarthisius, do you see a workaround?
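A possible workaround sketch, following the error message's own suggestion; the multiarch library path below is typical for Ubuntu 13.04 but is an assumption, not confirmed from this report:

```shell
# Point png.cfg at the install prefix, as the build error suggests.
# On multiarch Ubuntu, png.h sits in /usr/include/ while the library
# itself usually lives under /usr/lib/x86_64-linux-gnu/.
echo '/usr/' > png.cfg
```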