Hi all,
I'm taking a look at the plotting docs right now.
One of the scripts in the callback documentation currently fails with a
NotImplementedError.
The script is here: http://paste.yt-project.org/show/4857/
And the traceback is here: http://paste.yt-project.org/show/4858/
I'm not sure whether anyone has looked in detail at the clump finder in
3.0. Does anyone know about its status? I'm not even sure if this is an
issue with the clump finder per se.
-Nathan
Hi all,
I'm looking over the answer test failures which came from the change
in output/input units. I spent a good portion of yesterday and some
of this morning digging into this, examining which mesh cells each
particle got deposited in, and I think I've come up with a *reason*,
if not an *answer*.
What seems to be happening is that at some point as a result of the
additional convert_to_units calls, there is a drift in particle
positions at the scale of a few ULPs. Unfortunately, this causes a
handful of particles to shift between zones in the mesh of the
octree -- not enough to change the octree structure, but enough to
cause a difference. I was able to reduce the number of differences
by using IEEE 754 rounding, rather than simple truncation, during
cell assignment.
When the deposition is done, this is only a relative difference of
~1e-16, but when projected (and especially, when projected with a
*weight*) this gets amplified to the point that it triggers our answer
tests to fail.
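To make the failure mode concrete, here is a small sketch (not yt code; the cell width and positions are invented for illustration) of how a single-ULP drift in a particle position can flip its cell index when assignment truncates:

```python
import numpy as np

# Invented example: a 64-cell domain with a particle exactly on a cell edge.
domain_width = 1.0
n_cells = 64
dx = domain_width / n_cells

pos = 0.25                        # sits exactly on the edge of cell 16
drifted = np.nextafter(pos, 0.0)  # shift down by a single ULP (~1e-17)

cell = int(np.floor(pos / dx))              # truncation assigns cell 16
cell_drifted = int(np.floor(drifted / dx))  # the drifted copy lands in 15

print(cell, cell_drifted)  # 16 15
```

The relative difference between the two positions is ~1e-16, but the zone assignment differs by a whole cell, which is all a deposition field needs to change.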
I was able to prove to myself that this is the case by comparing the
results of truncating to float32 precision inside the get() function,
which means all oct-identification for deposition occurs at the scale
of 32 bits rather than 64. I then compared the old results (with the
truncation in precision) to the new results (with the same truncation
in precision) and got identical results, all of which passed the
answer test suite. This doesn't solve the problem, but it points to a
reason for it.
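As an illustration of why the float32 truncation hides the jitter (the numbers are invented; this is not the actual get() code), a ~1e-16 relative difference disappears once both values are rounded to 32 bits:

```python
import numpy as np

# Two positions that differ by one ULP at float64 precision.
a = np.float64(0.3)
b = np.nextafter(a, 1.0)  # ~1e-16 relative difference

print(a == b)                          # False at 64-bit
print(np.float32(a) == np.float32(b))  # True once truncated to 32-bit
```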
Even with that reason, however, I'm really quite dissatisfied with the
idea that we're introducing this jitter in the first place. It seems,
to my eye, to be coming from repeated unit-conversion calls, each of
which introduces slight differences. I've attempted to reduce these
calls in the PR.
There was an additional issue, in that we were iterating over sets of
files, which introduced the possibility of iteration order
differences. That's been addressed in an outstanding PR.
Anyway, I'll send an update once I've completely tracked this down.
-Matt
Hi all,
What should we do (in general) about non-periodic datasets and getting
ghost zones for the domain edges?
This is the reason that the cookbook recipe
simple_slice_with_multiple_fields.py fails -- it's trying to get
vorticity (needs GZs) for a non-periodic dataset.
Fixing the cookbook only requires swapping out the dataset, but I
thought we should probably open this discussion up a bit too.
-Matt
I've been trying to figure out if there is a way to create a HaloCatalog
that contains dark matter and star particles, but as far as I can tell
there isn't one. From what I understand, the following things are true of
the current HaloCatalog implementation (but please correct me if I'm wrong):
- Rockstar doesn't support particles of multiple masses, so it can't be
used for this.
- Only previously created Rockstar halos can be loaded into a HaloCatalog
with halo_pf (not hop or fof).
- Any halo finder can be called when a HaloCatalog is created, but there
isn't a way to pass keyword arguments to the finder, so they are always
run with dm_only = True.
Is there another method that I'm missing that gets around these problems?
If not, it's probably worth adding a way to pass keyword arguments to the
individual halo finders from HaloCatalog, especially if the plan for 3.0 is
to have all halo functionality centered around these catalogs.
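A minimal sketch of what the suggested pass-through could look like (every name here -- run_hop, FINDERS, create_catalog, finder_kwargs -- is hypothetical, not the actual HaloCatalog API): forward user-supplied keyword arguments to the finder instead of hard-coding dm_only = True:

```python
# Hypothetical stand-in for a halo finder that is currently always
# invoked with its dm_only default.
def run_hop(ds, dm_only=True, threshold=160.0):
    return {"finder": "hop", "dm_only": dm_only, "threshold": threshold}

FINDERS = {"hop": run_hop}

def create_catalog(ds, finder_method, finder_kwargs=None):
    # The proposed change: pass the caller's kwargs through to the finder.
    kwargs = finder_kwargs or {}
    return FINDERS[finder_method](ds, **kwargs)

# With a pass-through, a catalog including star particles becomes possible:
result = create_catalog(None, "hop", {"dm_only": False})
print(result["dm_only"])  # False
```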
- Josh
Hi all,
One thing we really tried to do with 3.0 was break all the APIs we
thought we'd need to before release. This included things like ds/pf,
index/hierarchy, the way data selections were made, etc.
It's starting to become clear that we are approaching maturity at
different rates in these initiatives. I am wondering if perhaps we
should de-couple the release from all of the API breakages, and
instead note which interfaces we know are going to change in the
future.
Pragmatically, what this would mean is:
* Release a 3.0 with the old VR and halo finding interfaces
* Release a 3.1 with either the new VR or the new halo finding (or both)
* Do the same for 3.2
This doesn't fit with the usual "major numbers are where APIs break"
philosophy that comes from semantic versioning, but I think from the
perspective of pragmatism, if we identify those sections of the code
that are *going* to change, and we pitch 3.0 as the first part of a
staged release of totally rewritten infrastructure, we can likely come
out okay.
I'd like to put this out there for discussion.
-Matt
I can help fix the halo docs. I had started going through the old pages the
last time I was working on this but got waylaid by traveling. I've got a
bit of time now though!
-Hilary
>
> Message: 1
> Date: Wed, 25 Jun 2014 08:42:53 -0700
> From: Nathan Goldbaum <nathan12343(a)gmail.com>
> To: "yt-dev(a)lists.spacepope.org" <yt-dev(a)lists.spacepope.org>
> Subject: Re: [yt-dev] RFC: Staggered deprecation through 3.1 and 3.2
>
> There are still some vestiges of the old halo machinery in the docs.
> Britton, do you think you'd be up for looking over the halo analysis docs
> and updating it where necessary? I'm specifically referring to some of the
> errors and warnings in the docs build, (see
> http://tests.yt-project.org/job/yt-docs-3.0/lastSuccessfulBuild/consoleFull
> ,
> search for "halo") but it probably also deserves a full proofread.
>
> I haven't felt comfortable touching the halo finding docs since I'm not
> familiar with any of the halo analysis code.
>
> On Wednesday, June 25, 2014, Britton Smith <brittonsmith(a)gmail.com> wrote:
>
> > Getting all the cookbook recipes working is absolutely a blocker for 3.0.
> > I think we're all in agreement on that. Someone should correct me if
> I'm
> > wrong, but all of the existing recipes can and will be made to work by
> > porting existing functionality to yt-3.0. This mostly involves adopting
> > the various newisms, units, field names, etc. We will not release yt-3.0
> > with broken functionality. This is doable on a relatively short
> timescale
> > I think. However, the new VR machinery will not be ready for a bit
> longer,
> > so I think it's ok for that to come in 3.1.
> >
> > As far as halos are concerned, here's the status there:
> > - halo_analysis completely replaces the HaloProfiler. The halo_analysis
> > functionality is fully documented (Thanks Hilary!) and the HaloProfiler
> has
> > been removed from the source and the docs.
> > - all halo finders have been ported and are now compliant with
> > halo_analysis (Thanks again to Hilary).
> > - HaloCatalogTimeSeries is a ways off and will need to be pushed to 3.1
> or
> > 3.2. This is functionality that never existed anywhere, so that's fine.
> > - the other cards on the halo_analysis board are minor enhancements, not
> > blockers
> > - the halo mass function is almost ready and has an open PR that just
> > needs a few things cleaned up and it's ready.
> > - the big thing we are missing is a merger tree as neither of the old
> ones
> > were ported. The idea is for them to work with the halo_analysis and
> > HaloCatalogTimeSeries, which will probably require work akin to a
> redesign.
> > For this reason, this will probably need to wait until 3.1.
> >
> > Britton
> >
> >
> > On Wed, Jun 25, 2014 at 2:35 AM, Cameron Hummels <chummels(a)gmail.com> wrote:
> >
> >> I will continue to work on these, but you're right, we need more people
> >> working on these little issues or this will continue to be a big blocker
> >> and not get done. I can assign specific bugs to the specialists on
> those
> >> issues, if it is helpful (see below).
> >>
> >> For all issues, see this page:
> >> https://trello.com/c/NJMfoXdl/22-be-able-to-run-the-cookbook . Reading
> >> the description can give you some clues as to the source of the error,
> but
> >> the best way to diagnose it is simply to go into the doc cookbook
> directory
> >> and try to run the python script:
> >>
> >> e.g.
> >> $ cd $YT_DEST/doc/source/cookbook
> >> $ python script.py
> >>
> >>
> >>
> >> * VR - KDTree issues (Sam Skillman? Matt Turk?)
> >> ----amrkdtree_downsampling.py
> >> ----camera_movement.py -- something going on with statusbar in brick
> >> counting!
> >> ----opaque_rendering.py -- same as camera_movement.py
> >> ----render_with_box_and_grids.py
> >>
> >> * Light Rays and Light Cones (Britton Smith, John Zuhone, Devin Silvia)
> >> ----make_light_ray.py
> >> ----unique_light_cone_projections.py
> >> ----light_cone_projection.py
> >> ----light_cone_with_halo_mask.py
> >>
> >> * Profiles (Matt Turk, Nathan Goldbaum, other?)
> >> ----global_phase_plots.py
> >> ----profile_with_variance.py
> >> ----rad_velocity.py
> >> ----radial_profile_styles.py?
> >> ----simple_profile.py
> >> ----time_series_profiles.py
> >> ----save_profiles.py
> >>
> >> * Units working with everything in yt (Nathan Goldbaum, Matt Turk, John
> >> Zuhone)
> >> ----hse_field.py
> >> ----simple_off_axis_projection.py
> >>
> >> * Miscellany
> >> ----aligned_cutting_plane.py -- something is wrong with the derived
> >> quantity AngularMomentumVector() in that it doesn't work fully for gas,
> but
> >> seems to work OK for particles.
> >> ----free_free_field.py--creating a new derived field (Matt Turk, Nathan
> >> Goldbaum)
> >> ----simple_slice_with_multiple_fields.py -- vorticity_squared fails as a
> >> field.
> >>
> >>
> >> This is by no means a requirement to work on this, but it would help to
> >> take a look at it to see if you can help correct a small bug here and
> there
> >> if your name is listed here (or even if it isn't!)
> >>
> >> Cameron
> >>
> >>
> >>
> >> On Tue, Jun 24, 2014 at 6:09 PM, Nathan Goldbaum <nathan12343(a)gmail.com> wrote:
> >>
> >>> Ah - I see that now.
> >>>
> >>> Yes, I agree that fixing the cookbook should be a blocker. FWIW that
> >>> card is marked as such. As I said the other day, the blockers are on
> the
> >>> yt-3.0 and the documentation board. We won't do the yt-3.0 release
> until
> >>> all the cards marked as blockers are cleared.
> >>>
> >>> This is a big task - any help we can get on this from anyone following
> >>> along would be much appreciated. There are a lot of little tasks or
> tasks
> >>> that can be completed in a couple of hours by anyone who has a little
> >>> familiarity with yt-3.0.
> >>>
> >>>
> >>> On Tue, Jun 24, 2014 at 6:04 PM, Cameron Hummels <chummels(a)gmail.com> wrote:
> >>>
> >>>> I think all of the cookbook items are blockers to be honest, because
> >>>> the cookbook recipes should be tests that are seen as failing.
> >>>>
> >>>> https://trello.com/c/NJMfoXdl/22-be-able-to-run-the-cookbook
> >>>>
> >>>>
> >>>> On Tue, Jun 24, 2014 at 5:32 PM, Nathan Goldbaum <nathan12343(a)gmail.com> wrote:
> >>>>
> >>>>>
> >>>>>
> >>>>> On Tuesday, June 24, 2014, Cameron Hummels <chummels(a)gmail.com> wrote:
> >>>>>
> >>>>>> I think there remain some issues in the VR working in 3.0, which I
> >>>>>> identified on the cookbook post on the trello board for yt-3.0. For
> >>>>>> example, I know the overlaying grids and overlaying boundaries does
> not
> >>>>>> currently work.
> >>>>>>
> >>>>>
> >>>>> I don't see this on trello. Can you make a card and mark it as a
> >>>>> blocker?
> >>>>>
> >>>>>
> >>>>>> That may be an easy fix, but it's something to keep in mind. I was
> >>>>>> going to work on it last week as I was doing the cookbook update,
> but I
> >>>>>> figured it was just going to get replaced with the scene interface,
> so it
> >>>>>> wasn't worth the time.
> >>>>>>
> >>>>>> I guess I'd still like to have all of the API breakage occur in the
> >>>>>> big jump from 2.x to 3.0, but if people really want to get 3.0 out
> the door
> >>>>>> asap, then perhaps that isn't compatible. Personally, I'm +1 on
> waiting to
> >>>>>> have all the halo+VR stuff in 3.0 instead of 3.1, but if everyone
> else
> >>>>>> wants a 3.0 out sooner, I will not block it. I think having a
> super fancy
> >>>>>> VR and awesome halo interface is one of the big pulls to getting
> people who
> >>>>>> have not yet switched to join 3.0 (from both 2.x as well as
> non-users) even
> >>>>>> if it takes a few more months, but I may be the minority here.
> >>>>>>
> >>>>>> Cameron
> >>>>>>
> >>>>>>
> >>>>>> On Tue, Jun 24, 2014 at 10:08 AM, Nathan Goldbaum <
> >>>>>> nathan12343(a)gmail.com> wrote:
> >>>>>>
> >>>>>>> Ok - I think the script in the issue description is sufficient.
> Let
> >>>>>>> me know if you need something more detailed.
> >>>>>>>
> >>>>>>>
> >>>>>>> On Tue, Jun 24, 2014 at 10:07 AM, Matthew Turk <
> >>>>>>> matthewturk(a)gmail.com> wrote:
> >>>>>>>
> >>>>>>>> That's the one -- you mentioned it in a blockers email a few days
> >>>>>>>> ago.
> >>>>>>>>
> >>>>>>>> On Tue, Jun 24, 2014 at 12:06 PM, Nathan Goldbaum <
> >>>>>>>> nathan12343(a)gmail.com> wrote:
> >>>>>>>> > Sorry - not sure which issue you're talking about - this one
> >>>>>>>> maybe?
> >>>>>>>> >
> >>>>>>>> >
> >>>>>>>>
> https://bitbucket.org/yt_analysis/yt/issue/827/enzo-particle-fields-work-di…
> >>>>>>>> >
> >>>>>>>> >
> >>>>>>>> >
> >>>>>>>> >
> >>>>>>>> > On Tue, Jun 24, 2014 at 10:02 AM, Matthew Turk <
> >>>>>>>> matthewturk(a)gmail.com>
> >>>>>>>> > wrote:
> >>>>>>>> >>
> >>>>>>>> >> Related to that, do you have a reproducible script for the
> >>>>>>>> particle
> >>>>>>>> >> issue you reported? If so, could you add that to either an
> >>>>>>>> issue or a
> >>>>>>>> >> trello card so I can work on it?
> >>>>>>>> >>
> >>>>>>>> >> On Tue, Jun 24, 2014 at 11:58 AM, Nathan Goldbaum <
> >>>>>>>> nathan12343(a)gmail.com>
> >>>>>>>> >> wrote:
> >>>>>>>> >> > I'd be +1 on this plan, although we should note that this is
> >>>>>>>> the plan in
> >>>>>>>> >> > the
> >>>>>>>> >> > release announcement. We may also want to note that there
> are
> >>>>>>>> some
> >>>>>>>> >> > issues
> >>>>>>>> >> > with volume rendering of oct and particle data at the moment
> >>>>>>>> (I believe
> >>>>>>>> >> > that's the case - let me know if I'm wrong there).
> >>>>>>>> >> >
> >>>>>>>> >> > I think that leaves analysis modules and documentation as the
> >>>>>>>> main
> >>>>>>>> >> > blockers
> >>>>>>>> >> > for a 3.0 release.
> >>>>>>>> >> >
> >>>>>>>> >> > -Nathan
> >>>>>>>> >> >
> >>>>>>>> >> >
> >>>>>>>> >> >
> >>>>>>>> >> > On Tue, Jun 24, 2014 at 9:53 AM, John ZuHone <
> >>>>>>>> jzuhone(a)gmail.com> wrote:
> >>>>>>>> >> >>
> >>>>>>>> >> >> +1 on Matt's proposal. -1 on a beta.
> >>>>>>>> >> >>
> >>>>>>>> >> >> My worry about a beta release is that it will slow adoption,
> >>>>>>>> whether
> >>>>>>>> >> >> rightly or wrongly. I think we agree that we're ready to
> >>>>>>>> encourage
> >>>>>>>> >> >> adoption
> >>>>>>>> >> >> of 3.0.
> >>>>>>>> >> >>
> >>>>>>>> >> >> John ZuHone
> >>>>>>>> >> >> Laboratory for High-Energy Astrophysics
> >>>>>>>> >> >> NASA/Goddard Space Flight Center
> >>>>>>>> >> >> 8800 Greenbelt Rd., Mail Code 662
> >>>>>>>> >> >> Greenbelt, MD 20771
> >>>>>>>> >> >> (w) 301-286-2531
> >>>>>>>> >> >> (m) 781-708-5004
> >>>>>>>> >> >> john.zuhone(a)nasa.gov
> >>>>>>>> >> >> jzuhone(a)gmail.com
> >>>>>>>> >> >>
> >>>>>>>> >> >> > On Jun 24, 2014, at 12:38 PM, Matthew Turk <
> >>>>>>>> matthewturk(a)gmail.com>
> >>>>>>>> >> >> > wrote:
> >>>>>>>> >> >> >
> >>>>>>>> >> >> > I think Britton covered the halos, but the VR works as-is.
> >>>>>>>> As far as
> >>>>>>>> >> >> > 3.0beta, I'm a bit nervous about that as we want to avoid
> >>>>>>>> the
> >>>>>>>> >> >> > situation where we are in beta for 1+ years... I am
> worried
> >>>>>>>> about the
> >>>>>>>> >> >> > perception of a "beta" tag. Is that overblown? Would
> >>>>>>>> calling it
> >>>>>>>> >> >> > "yt-3.0-2014" work?
> >>>>>>>> >> >> >
> >>>>>>>> >> >> >> On Tue, Jun 24, 2014 at 10:32 AM, Nathan Goldbaum
> >>>>>>>> >> >> >> <nathan12343(a)gmail.com> wrote:
> >>>>>>>> >> >> >> Do the old VR and halo interfaces work? Not much effort
> >>>>>>>> has gone
> >>>>>>>> >> >> >> into
> >>>>>>>> >> >> >> porting them, I think.
> >>>>>>>> >> >> >>
> >>>>>>>> >> >> >>
> >>>>>>>> >> >> >>> On Tuesday, June 24, 2014, Sam Skillman <
> >>>>>>>> samskillman(a)gmail.com>
> >>>>>>>> >> >> >>> wrote:
> >>>>>>>> >> >> >>>
> >>>>>>>> >> >> >>> I'm +1 on this, particularly since I'm at fault for not
> >>>>>>>> pushing on
> >>>>>>>> >> >> >>> the
> >>>>>>>> >> >> >>> VR
> >>>>>>>> >> >> >>> as much as I'd like to.
> >>>>>>>> >> >> >>>
> >>>>>>>> >> >> >>>
> >>>>>>>> >> >> >>
> >>>>>>>> >> >> >> _______________________________________________
> >>>>>>>> >> >> >> yt-dev mailing list
> >>>>>>>> >> >> >> yt-dev(a)lists.spacepope.org
> >>>>>>>> >> >> >>
> >>>>>>>> http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org
> >>>>>>>> >> >
> >>>>>>>> >> >
> >>>>>>>> >> >
> >>>>>>>> >
> >>>>>>>> >
> >>>>>>>> >
> >>>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>>>> --
> >>>>>> Cameron Hummels
> >>>>>> Postdoctoral Researcher
> >>>>>> Steward Observatory
> >>>>>> University of Arizona
> >>>>>> http://chummels.org
> >>>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>>
> >>
> >>
> >>
> >>
> >
>
Hi all,
Matt has submitted a proposal for a yt sprint at the Scipy conference. I
think the proposal is pretty much pro forma - there will almost certainly
be a yt sprint.
The sprints will be July 11th and 12th. I'm planning on being there for
both days but will have to leave to catch a plane the afternoon of July
12th.
Please reply to this thread if you're interested in participating. Even if
you're not going to be at the scipy conference this year, remote
participation will certainly be doable.
-Nathan
New issue 851: ray object fields are out of order
https://bitbucket.org/yt_analysis/yt/issue/851/ray-object-fields-are-out-of…
Britton Smith:
Ray objects no longer return fields in order along the ray from the start to the end position. This can be seen with the following:
```
#!python
ds = load(...)
ray = ds.ray(start, end)
print(ray["t"])
```
The t values should increase monotonically from 0 to 1, but instead they are scrambled. This can be worked around with an argsort of the fields, but I think this is still a bug, or at least a regression.
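The argsort workaround looks roughly like this (the field values are invented for illustration):

```python
import numpy as np

# Scrambled parametric coordinate "t" (0 at the ray start, 1 at the end),
# and a field returned in the same scrambled order.
t = np.array([0.7, 0.1, 0.4, 0.0, 1.0])
density = np.array([5.0, 1.0, 3.0, 0.5, 9.0])

order = np.argsort(t)            # indices that put t in increasing order
t_sorted = t[order]
density_sorted = density[order]  # reorder every field with the same indices

print(t_sorted)  # now monotonically increasing from 0 to 1
```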
Hi all,
Before 3.0 goes out, how do you all feel about a PR that replaces "pf" with
"ds" across the whole codebase?
I think this is worth doing, since it reduces the cognitive load of dealing
with yt internals. One downside is that it will break user scripts if
they're using internal APIs. It will also be a big, difficult-to-review PR.
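The mechanical part is a word-boundary substitution, which leaves names like my_pf and pfs alone; a toy sketch (the source string is invented, and the real PR would still need a careful review of the diff):

```python
import re

# A stand-in for the contents of a yt source file.
source = 'pf = load("data")\nmy_pf.field_list\npfs = []\n'

# \b restricts the rename to whole-word occurrences of "pf", so
# my_pf and pfs are untouched.
renamed = re.sub(r"\bpf\b", "ds", source)

print(renamed)
# ds = load("data")
# my_pf.field_list
# pfs = []
```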
Now is the time to do this if it ever gets done since there are a large
number of other breaking changes in 3.0.
I'd be happy to do this if there's a consensus that it's worth doing.
Nathan
Hi all,
Kacper and I have been going back and forth a bit on getting FLASH
memory usage down. I thought I knew the reason it was high, but turns
out, I didn't. So after some experimentation, I think the reason is
that the IO chunking was pulling in the full dataset, since FLASH has
only one "file", which left it only one method of subchunking.
The heuristic right now is how many "grids" can be in a single chunk,
and I've set this to be 1000. For FLASH this isn't so bad, but I
wonder if Enzo and other patch datasets that have many grids in a
single file might end up suffering. I don't know if it's too common
for those data types to have >>1000 grids in a file (my outputs never
did) but if it is, then there will be overhead to reading, iterating,
reading, during data IO.
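The heuristic amounts to splitting each file's grid list into fixed-size pieces; a rough sketch (chunk_grids and the 2,500-grid file are invented for illustration, not the PR's actual code):

```python
# Split one file's grids into chunks of at most max_grids, so a
# single-file dataset is read piecewise instead of all at once.
def chunk_grids(grids, max_grids=1000):
    for i in range(0, len(grids), max_grids):
        yield grids[i:i + max_grids]

# A hypothetical file holding 2,500 grids yields three read passes.
sizes = [len(c) for c in chunk_grids(list(range(2500)))]
print(sizes)  # [1000, 1000, 500]
```

For a file with far more than 1000 grids, each chunk boundary is an extra read/iterate cycle, which is the overhead worried about above.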
Anyway, we're playing around with it here:
https://bitbucket.org/yt_analysis/yt/pull-request/962/wip-reduce-memory-usa…
So if you want, try this out and let us know whether it improves or degrades things.
-Matt