New issue 1094: Update installations docs to include information about SSL errors
When people try to run the install script and get the error:
ImportError: cannot import name HTTPSHandler
They need to update their SSL library or install with Miniconda instead.
This problem occurs frequently and should be documented.
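A quick diagnostic we could suggest in the docs (a sketch only; the import location differs between Python 2 and Python 3) to confirm that the interpreter was built without SSL support, which is what this ImportError indicates:

```python
# Check whether this Python has SSL support; the HTTPSHandler import
# error from the install script means it does not.
try:
    from urllib.request import HTTPSHandler  # Python 3 location
except ImportError:
    try:
        from urllib2 import HTTPSHandler  # Python 2 location
    except ImportError:
        HTTPSHandler = None

if HTTPSHandler is None:
    print("This Python lacks SSL support; update your SSL library "
          "or install with Miniconda.")
else:
    print("HTTPSHandler is available; SSL support looks fine.")
```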
I tried to install yt on my laptop using the install script, but when I
run the script it shows the following error:
File "get-pip.py", line 82, in bootstrap
File "/tmp/tmpTGocNa/pip.zip/pip/__init__.py", line 15, in <module>
File "/tmp/tmpTGocNa/pip.zip/pip/vcs/subversion.py", line 9, in <module>
File "/tmp/tmpTGocNa/pip.zip/pip/index.py", line 30, in <module>
File "/tmp/tmpTGocNa/pip.zip/pip/wheel.py", line 35, in <module>
File "/tmp/tmpTGocNa/pip.zip/pip/_vendor/distlib/scripts.py", line 14, in
File "/tmp/tmpTGocNa/pip.zip/pip/_vendor/distlib/compat.py", line 31, in
ImportError: cannot import name HTTPSHandler
I also tried installing get-pip.py separately as root first, but it still
shows the same error. I don't fully understand what the error means.
How can I get rid of this error and install yt properly?
Recently I opened a PR (
https://bitbucket.org/yt_analysis/yt/pull-requests/1784) to improve the
time to create instances of yt Dataset objects.
I found that our use of the Python builtin 'type' function to associate
data object factories (e.g. the ds.all_data() function) with Dataset
instances was a large contributor. Overhead from sympy dominates the load
time for a single dataset, but since sympy uses caching, in the long term
the calls to 'type' come to dominate the runtime.
'type', the way we were using it, is equivalent to a class definition, so
a long time ago things were set up such that all of the data object
definitions were given names like YTSliceBase, but you would get back an
instance of YTSlice, since the 'type' function was dynamically defining a
subclass that ensured the dataset instance was passed to the data
object's constructor.
I found that we can do this a lot faster using functools.partial, at the
cost of all data objects having incorrect class names (i.e. ds.all_data()
becomes an instance of YTRegionBase instead of YTRegion).
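A minimal sketch of the two approaches, using stand-in classes rather than the real yt internals (`YTRegionBase`, `FakeDataset`, and the factory helpers here are illustrative only, not yt's actual code):

```python
import functools

class YTRegionBase:
    """Stand-in for a yt data object class (illustrative only)."""
    def __init__(self, ds, center):
        self.ds = ds
        self.center = center

class FakeDataset:
    """Stand-in for a yt Dataset instance."""

ds = FakeDataset()

# Old approach: 'type' dynamically defines a subclass named without
# "Base", whose __init__ closes over the dataset instance.
def make_factory_with_type(ds, base):
    name = base.__name__.replace("Base", "")
    def __init__(self, *args, **kwargs):
        base.__init__(self, ds, *args, **kwargs)
    return type(name, (base,), {"__init__": __init__})

YTRegion = make_factory_with_type(ds, YTRegionBase)
obj = YTRegion(center=(0.5, 0.5, 0.5))
print(type(obj).__name__)  # YTRegion

# Faster approach: functools.partial pre-binds the dataset, but
# instances keep the original "Base" class name.
region_factory = functools.partial(YTRegionBase, ds)
obj2 = region_factory(center=(0.5, 0.5, 0.5))
print(type(obj2).__name__)  # YTRegionBase
```

The partial avoids defining a new class per dataset, which is where the per-instantiation cost of 'type' comes from.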
I think the correct thing to do is to have all yt data objects drop the
"Base" part of their name and just forget about this type indirection
business. On the other hand, the data object API is about as core to yt as
anything else is, and I worry that doing this might break user scripts.
Does anyone think people are actually importing YTRegionBase or other yt
data object classes and using them in their code? Our documented entry point
is via the data object creation member functions, although we do mention
the names of these classes in the API docs. Should I define a set of
aliases for compatibility so people's scripts don't break? Is this whole
approach too risky?
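If we do drop the "Base" names, one possible compatibility shim (purely a sketch; the class here is a stand-in, not the real yt API) would keep the old names around as deprecated aliases that warn on use:

```python
import warnings

class YTRegion:
    """Stand-in for a renamed yt data object (illustrative only)."""
    def __init__(self, center=None):
        self.center = center

def deprecated_alias(new_cls, old_name):
    """Build a subclass under the old name that warns on instantiation."""
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "%s is deprecated; use %s instead" % (old_name, new_cls.__name__),
            DeprecationWarning, stacklevel=2)
        new_cls.__init__(self, *args, **kwargs)
    return type(old_name, (new_cls,), {"__init__": __init__})

# Old scripts importing YTRegionBase would keep working, with a warning.
YTRegionBase = deprecated_alias(YTRegion, "YTRegionBase")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    obj = YTRegionBase(center=(0.5, 0.5, 0.5))

print(caught[0].category.__name__)  # DeprecationWarning
```

Since the alias is a subclass, isinstance checks against the new class still pass for code holding old-style instances.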
Thanks for your help and advice,
Sarah Sharp, formerly the maintainer of the Linux USB stack, recently
stepped down from Linux kernel development due to the toxic nature of that
community. While we're miles ahead of the tenor on the Linux kernel mailing list,
there are always ways we can improve the community.
Just today Sarah published this post on her blog, which has lots of
concrete suggestions for making communities more welcoming.
Are there things on these lists that the yt community could be doing that
we aren't right now? There are lots of project ideas here, some much bigger
than others, so I'm just throwing it out here in the hopes that will pique
the interest of a few of you to implement some of these suggestions.
Sorry for the novel of an e-mail! I wanted to be as detailed as possible;
I hope it isn't too much.
Right now our test suite does a lot of test yielding. Under this paradigm,
every yielded test counts as a "dot" in the nose testing output and
contributes to the number of tests printed at the end of the test suite. In
addition, yielding tests makes it so that if a test function has a failing
test, all of the tests yielded by that function will be run despite the
fact that one or more tests failed.
It's definitely nice to see that we're running thousands of tests, and I'd
be happy to continue doing so, except I've learned that this approach adds
some technical hurdles for people who actually need to deal with the test
suite.
Personally, I've noticed that it makes debugging failing tests much more
annoying than it needs to be. Since a test is yielded, the traceback
generated by a failing or erroring test ends up somewhere in nose rather
than in the yt test suite function that actually yielded the test. This can
make it difficult to determine where a failing test is coming from in the
test suite, particularly if the test that gets yielded is just an assert.
In addition, Kacper tells me that our practice of yielding tests makes it
difficult to simplify our Jenkins setup.
I'd like to propose a modification to the yt codebase and developer guide.
Rather than encouraging that all asserts and test classes be yielded, we
should instead *not* yield them and just call them as regular functions or
classes. To make that concrete, the tests in the NMSU ART answer tests
would change from yielding each assert and test class to calling or
instantiating each one directly.
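A schematic before-and-after of the pattern (the `check_field` helper and the field data are made up for illustration; the real NMSU ART tests use yt's answer-testing utilities):

```python
# Schematic only: check_field stands in for yt's asserts and answer-test
# classes.
def check_field(value, expected):
    assert value == expected, "%r != %r" % (value, expected)

FIELDS = [("density", 1.0), ("temperature", 2.0)]
RESULTS = {"density": 1.0, "temperature": 2.0}

# Before: each check is yielded, so nose counts it as a separate test
# and keeps running the rest even after one fails.
def test_d9p_yielded():
    for name, expected in FIELDS:
        yield check_field, RESULTS[name], expected

# After: the checks are plain calls, so a failure's traceback points at
# the exact line here, and nose reports one test for the function.
def test_d9p():
    for name, expected in FIELDS:
        check_field(RESULTS[name], expected)

test_d9p()  # runs all checks inline
print("all checks passed")
```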
Each individual assert and test class instantiation would no longer count
as an individual test in nose's test statistics. Instead, referring to the
example above, the test_d9p function would be the only test reported by
nose, even though the test function does many asserts and instantiates many
test class instances.
For me, the main win would be that it would be easier to determine which
exact test failed, because the traceback reported by nose due to the test
failure would include the line in the test file that produced a failing
assert or caused an unhandled exception to be raised.
To make that concrete, I modified the `test_dimensionless` function to
force a test to fail. Running `nosetests units` in the root of the
repository, the resulting traceback does *not* include the
`test_dimensionless` function itself. If I instead make it so the failing
test is not yielded, I get a much nicer traceback that points directly at
the failing line.
I can see a few reasons why we might not want to do this.
1. This is an invasive, large-scale change. It might be automatable (I
think I could write an emacs macro to do this, for example), but it would
in the end be difficult to review.
2. Since test functions would terminate on the first failure, it might lead
to annoying debug cycles where one assert gets fixed but then the next
assert fails, forcing people to wait for the full test suite to be rerun
just to get to the next failing test.
For the second point, I think we can remedy this just by improving our
testing docs to encourage people to run tests locally as much as possible
and also to explain better how to run only a specific test function from
the test suite.
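For reference, nose already supports selecting individual tests from the command line, which is the main thing those docs would need to cover (the paths below are illustrative, not actual yt test paths):

```shell
# run every test under one subpackage
nosetests yt/units

# run a single test module
nosetests yt/units/tests/test_units.py

# run one specific test function within that module,
# using nose's file:function selector syntax
nosetests yt/units/tests/test_units.py:test_dimensionless
```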
If people are interested in making this modification globally, I think I
could take it on as part of my general efforts to clean up the codebase.
New issue 1092: Nyx and Castro particles are broken
We used to support Nyx and Castro particles, but it was broken at some point:
ds = yt.load('nyx_small/nyx_small_00000/')
ad = ds.all_data()
AttributeError Traceback (most recent call last)
<ipython-input-7-3933388c1ff5> in <module>()
2 ds = yt.load('nyx_small/nyx_small_00000/')
3 ad = ds.all_data()
----> 4 ad['particle_position_x']
/Users/atmyers/yt-x86_64/src/yt-hg/yt/data_objects/data_containers.pyc in __getitem__(self, key)
251 return self.field_data[f]
--> 253 self.get_data(f)
254 # fi.units is the unit expression string. We depend on the registry
255 # hanging off the dataset to define this unit object.
/Users/atmyers/yt-x86_64/src/yt-hg/yt/data_objects/data_containers.pyc in get_data(self, fields)
744 read_particles, gen_particles = self.index._read_particle_fields(
--> 745 particles, self, self._current_chunk)
746 for f, v in read_particles.items():
747 self.field_data[f] = self.ds.arr(v, input_units = finfos[f].units)
/Users/atmyers/yt-x86_64/src/yt-hg/yt/geometry/geometry_handler.pyc in _read_particle_fields(self, fields, dobj, chunk)
232 self._chunk_io(dobj, cache = False),
--> 234 fields_to_read)
235 return fields_to_return, fields_to_generate
/Users/atmyers/yt-x86_64/src/yt-hg/yt/utilities/io_handler.pyc in _read_particle_selection(self, chunks, selector, fields)
159 # Here, ptype_map means which particles contribute to a given type.
160 # And ptf is the actual fields from disk to read.
--> 161 psize = self._count_particles_chunks(chunks, ptf, selector)
162 # Now we allocate
163 # ptf, remember, is our mapping of what we want to read
/Users/atmyers/yt-x86_64/src/yt-hg/yt/utilities/io_handler.pyc in _count_particles_chunks(self, chunks, ptf, selector)
130 def _count_particles_chunks(self, chunks, ptf, selector):
131 psize = defaultdict(lambda: 0) # COUNT PTYPES ON DISK
--> 132 for ptype, (x, y, z) in self._read_particle_coords(chunks, ptf):
133 psize[ptype] += selector.count_points(x, y, z, 0.0)
134 return dict(psize.items())
AttributeError: 'IOHandlerBoxlib' object has no attribute '_read_particle_coords'
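From the traceback, the base class expects `_read_particle_coords` to be a generator yielding `(ptype, (x, y, z))` coordinate arrays per chunk, which `_count_particles_chunks` then consumes. A toy sketch of that interface with in-memory data (not the actual yt fix, and the selector is simplified away):

```python
from collections import defaultdict

class IOHandlerSketch:
    """Toy IO handler holding particle coordinates in memory."""
    def __init__(self, coords_by_chunk):
        # {chunk_id: {ptype: (x_list, y_list, z_list)}}
        self._coords = coords_by_chunk

    def _read_particle_coords(self, chunks, ptf):
        # The hook the traceback says IOHandlerBoxlib lacks: yield the
        # coordinates for each particle type in each chunk.
        for chunk in chunks:
            for ptype in ptf:
                x, y, z = self._coords[chunk][ptype]
                yield ptype, (x, y, z)

    def _count_particles_chunks(self, chunks, ptf, selector):
        # Mirrors the counting loop shown in the traceback; here every
        # point is accepted instead of calling selector.count_points.
        psize = defaultdict(int)
        for ptype, (x, y, z) in self._read_particle_coords(chunks, ptf):
            psize[ptype] += len(x)
        return dict(psize)

handler = IOHandlerSketch({
    0: {"io": ([0.1, 0.2], [0.3, 0.4], [0.5, 0.6])},
})
counts = handler._count_particles_chunks(
    [0], {"io": ["particle_position_x"]}, None)
print(counts)  # {'io': 2}
```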
New issue 1091: The VR docs on the dev head are from the "experimental" bookmark
I'm not sure how it happened, but the yt docs on the dev head (i.e. at http://yt-project.org/docs/dev/visualizing/volume_rendering.html) are referencing the new `Scene` class, which is not yet in the dev head, as it was supposed to be confined to the "experimental" head. This means the docs are referencing functionality not yet available in the mainline of the code. I'm worried that perhaps other commits got checked into the main dev branch instead of the experimental branch.
Looking at some basic locations where experimental head changes have occurred, the VR cookbook recipes were also updated.