A while back we were intending to backport bugfixes from the yt branch to
the stable branch and then regularly issue bugfix releases. This hasn't
happened, and new users in particular feel the pain this causes.
Does someone want to volunteer to backport bugfix pull requests to the
stable branch? It should be more or less automatic, although you will
likely need to do it one pull request at a time, and it will be hard to
review. This would be a great task to do at the SciPy sprints, relatively
straightforward if tedious, if anyone is going to that.
Should we just abandon this idea and go ahead and release 3.2 sooner rather
than later?
Is there a way to sustainably make sure bugfixes eventually make their way
to the stable branch, either with or without changing our development
process?
At yesterday's yt team meeting we discussed the need to deal with open PRs
on a more regular basis, so we've decided to start a weekly PR triage
hangout. If you think you'll participate, here's a doodle poll
<http://doodle.com/a5943ysntaddmc4f> for times. Treat the poll as "your
average summer week" (as much as that concept exists), rather than these
specific dates. To respect everyone's other commitments and schedules, each
session will run for only one hour.
Here's the link again: http://doodle.com/a5943ysntaddmc4f
We will have a yt developer team meeting tomorrow at 2pm EDT, 11am PDT.
These are regularly scheduled meetings to discuss the state of the
project. The meeting will be held in a yt hangout and all are welcome to
attend and voice their opinions. We may need to invite people to the
hangout, so if you'd like to come, just let me know and I'll make sure
you're included. Below is a rough agenda, but if there's something you'd
like to discuss, bring it along.
Adopting astropy code of conduct
Improving the PR review process
regular PR review hangouts
some other thing
Project infrastructure evaluation
Is the doc system working?
Is Slack a good idea?
Is the bug tracker working for us?
This is interesting. We do something similar in our current test setup, but
it's nice to see this sort of thing become part of a testing library.
Looking into moving from nose to py.test should definitely be on our radar
given the migration of the Python community in that direction over the past
year or two. py.test has some interesting features (like test fixtures)
that might make it easier for us to write tests going forward. That said,
the answer testing framework is quite complicated, and it might take some
effort to port that. I hope to look at this myself in the coming months,
but if anyone wants to take up the slack and investigate the feasibility of
moving us to py.test while I start my new position, feel free to try.
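As a taste of the fixtures mentioned above, here is a minimal sketch of how
a py.test fixture works. The names (load_sample, sample_grid) are made up
for illustration and are not yt's actual test code:

```python
import pytest

def load_sample():
    # Stand-in for an expensive dataset load we'd want to share
    return {"density": [1.0, 2.0, 3.0]}

@pytest.fixture
def sample_grid():
    # py.test injects this into any test that lists it as an argument
    return load_sample()

def test_density_positive(sample_grid):
    assert all(d > 0 for d in sample_grid["density"])
```

The appeal is that setup code lives in one place and tests simply declare
which fixtures they need by argument name.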
---------- Forwarded message ----------
From: *Thomas Robitaille* <thomas.robitaille(a)gmail.com>
Date: Thursday, June 25, 2015
Subject: [Matplotlib-users] ANN: pytest-mpl v0.2
I have just released a small plugin for py.test that wraps the image
comparison functionality in matplotlib.testing, for use in other
packages that use py.test as the testing framework instead of nose:
The idea is to make it easy to write a test such as:

@pytest.mark.mpl_image_compare
def test_plot():
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    return fig
which can then be run in three ways:
- Running py.test as usual will simply check that the tests run, but won't
check whether the figure is correct.
- Running py.test with the --mpl option will make sure that the figure
produced by the test is the same as a reference image
- Running py.test with the --mpl-generate-path option will generate the
reference images from the tests themselves.
There are a number of other options, including ways to pass arguments to
savefig, customizing the image names, or setting the tolerance for the
comparison. All the documentation is contained in the README.md file:
You can install this plugin with:
pip install pytest-mpl
I would welcome any feedback and/or contributions!
New issue 1037: Units system has problems in comparing between cosmological datasets with different scale factors
At present, when you try to compare/combine quantities from two separate datasets, yt defaults to internally converting those quantities to non-comoving physical units before making the comparison/operation. I think this *should* be the default behavior as it covers the bulk of the use cases, but it would be good to allow for comparisons/combinations using comoving units (of different scale factors) as well.
I recently ran into this issue in trying to compare two separate datasets at different redshifts (i.e., scale factors) from a cosmological simulation where the units were enzo's `code_length`, which is in comoving units. In the end, I had to just grab the `.v` values out of the `YTArray` in order to manually subtract them, because subtracting using the `YTArray` yielded incorrect answers.
Here is an example demonstrating this problem. Let's say a galaxy sits at the center of this box at both z=99 and z=0. We can see how far it has moved in comoving units by subtracting its position at z=0 from its position at z=99.
and its output:
I think this could be addressed by just creating an additional attribute on `YTArrays` for converting to comoving units when making comparisons/operations with other `YTArrays`.
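To make the arithmetic concrete, here is a sketch with plain floats (not `YTArray`s; the numbers are made up for illustration) of why converting to physical units first gives a spurious result for a comoving comparison:

```python
# Illustrative arithmetic only, using plain floats rather than YTArrays.
# Suppose a galaxy sits at the same comoving coordinate at two epochs:
a_z99 = 1.0 / (1.0 + 99.0)  # scale factor at z = 99
a_z0 = 1.0                  # scale factor at z = 0
x_comoving = 50.0           # comoving position in code_length

# Comparing directly in comoving units: no displacement, as expected.
d_comoving = x_comoving - x_comoving  # 0.0

# Converting each operand to physical units first (the current default)
# mixes the two epochs and reports a large spurious displacement:
d_physical = x_comoving * a_z0 - x_comoving * a_z99  # 49.5
```

An opt-in comoving comparison would amount to bringing both operands to a common scale factor before the subtraction, rather than to physical units.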
New issue 1036: BinnedProfile1D seems to be broken
It seems that BinnedProfile1D does not work correctly in yt 3.1, at least in some cases. Using it with a derived field appears to produce unpredictable results. An example script that should work, but doesn't seem to, is at http://paste.yt-project.org/show/5648/. Since I am informed that BinnedProfile1D is obsolete and has been replaced by create_profile, perhaps it should just be removed, or deprecation warnings should be added.
New issue 1035: defining a bounding box causes units problems in tipsy data sets
FYI, this may be related to [PR # 1524](https://bitbucket.org/yt_analysis/yt/pull-request/1524/added-functi...
If I run the following on the tipsy data set:
import numpy as np
import yt

fname = 'TipsyGalaxy/galaxy.00300'
boxsize = 100
bbox = [[-boxsize, boxsize],
        [-boxsize, boxsize],
        [-boxsize, boxsize]]
ds = yt.load(fname, bounding_box=bbox)
px = yt.ProjectionPlot(ds, "z", ('deposit', 'Gas_density'), center='m')
I'll get an image back out that shows the galaxy as only 50 cm across! However, if I run the same code but replace the load command with:
ds = yt.load(fname)
then no problem.
Another way this manifests itself is if I load with the bounding_box set in the load command, then:
In : ds.length_unit
Out: 1.0 kpc
In : ds.domain_left_edge
Out: YTArray([-100., -100., -100.]) code_length
In : ds.domain_left_edge.in_units('kpc')
Out: YTArray([ -3.24077929e-20, -3.24077929e-20, -3.24077929e-20]) kpc
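Incidentally, the magnitude of that converted value suggests the code_length is being interpreted as centimeters somewhere. A quick sanity check of the arithmetic (the cm-per-kpc constant is written out by hand here, not taken from yt):

```python
CM_PER_KPC = 3.0857e21  # approximate centimeters per kiloparsec

# If -100 code_length were silently treated as -100 cm, converting to
# kpc would give a value on the order of the one reported above:
val_kpc = -100.0 / CM_PER_KPC  # approximately -3.24e-20
```

So the symptom is consistent with the bounding_box path dropping the kpc length unit and falling back to cm.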
I've looked through frontends/tipsy/data_structures and can't seem to figure out what's doing this.
Again, this might not be related to this PR, but it sure seems similar.