yt 4.0.0 was released on July 6th of last year, and there have been many contributions since then.
I would like to propose that we try to cut a 4.1 release before the summer break, roughly a year after 4.0.
I think it’s both a reasonable goal and a good time for an important release. I anticipate no one will have time for coordinated work during summer, and the fall is traditionally not ideal either: many of us are on teaching duty, and it’s also when the Python ecosystem is least stable.
There are a couple of blockers that need to be addressed to get the dev branch back to a release-ready state, namely:
1) In 4.0 we promised (in the form of deprecation warnings) that in 4.1, errors would be raised for ambiguous name-only field keys. Actually implementing this poses a couple of difficulties (see https://github.com/yt-project/yt/issues/3381 and https://github.com/yt-project/yt/issues/3839), but nothing insurmountable.
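To illustrate the ambiguity in question, here is a minimal, self-contained sketch (not yt's actual implementation, and deliberately simplified): fields are identified by (field_type, field_name) tuples, so a bare field name can only be resolved safely when exactly one field type defines it. The `resolve_field` helper and `AmbiguousFieldError` name are hypothetical.

```python
class AmbiguousFieldError(KeyError):
    """Raised when a bare field name matches more than one field type."""


def resolve_field(fields, key):
    """Resolve *key* against a set of (field_type, field_name) tuples.

    A tuple key is returned as-is if present; a bare name resolves only
    when exactly one field type defines it, otherwise an error is raised.
    """
    if isinstance(key, tuple):
        if key in fields:
            return key
        raise KeyError(key)
    matches = [f for f in fields if f[1] == key]
    if not matches:
        raise KeyError(key)
    if len(matches) > 1:
        raise AmbiguousFieldError(
            f"{key!r} is ambiguous: could be any of {sorted(matches)}"
        )
    return matches[0]


# Example field registry: "density" exists for two field types.
fields = {("gas", "density"), ("PartType0", "density"), ("gas", "temperature")}

resolve_field(fields, "temperature")       # unique, resolves to ("gas", "temperature")
resolve_field(fields, ("gas", "density"))  # explicit tuple is always unambiguous
# resolve_field(fields, "density")         # would raise AmbiguousFieldError
```

The deprecation-to-error transition described above amounts to turning the ambiguous case from a warning into a hard failure, pushing users toward the explicit tuple form.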
2) We need to reach a consensus on how the new axis flipping/swapping machinery should behave. There's an open discussion for this here: https://github.com/yt-project/yt/issues/3890
To get a broader (possibly more confusing) view of the TODO list, see the open milestone: https://github.com/yt-project/yt/milestone/17
I’ve highlighted what I think are the most crucial points with the “release critical” label.
You can help by discussing and triaging open issues and PRs to and from the milestone.
It’s also a good time to get feature PRs to the finish line.
We haven't made any big "promises" for yt 4.2 (at least nothing as significant as the ambiguous field work), so I'm hopeful that getting 4.1 out the door will allow us to make more frequent feature releases in the foreseeable future.
Any feedback is most welcome.
We are pleased to announce the v1.0 release of Enzo-E, a new parallel
adaptive mesh refinement magnetohydrodynamics code. Enzo-E is based on
Cello, a highly scalable, fully-distributed array-of-octree parallel
adaptive mesh refinement (AMR) framework, and is a scalable branch
of the original Enzo parallel astrophysics and cosmology application
that has been ported to use Cello. Enzo-E's parallel scalability is enabled by
Charm++ (https://charmplusplus.org/), an advanced parallel runtime system
developed at the University of Illinois. A short paper describing Enzo-E
can be found here
Enzo-E's current capabilities include:
- Three HD/MHD solvers: PPM, PPML, and VL+CT
- Self-gravity with a multigrid Poisson solver
- A wide range of star formation and sink particle algorithms
- Scalable HDF5 I/O
- Radiative cooling and chemistry via connection to the Grackle package
Source code repository: https://github.com/enzo-project/enzo-e
Cello was written by James Bordner and Enzo-E includes contributions from:
Buket Benek Gursoy
Enzo-E / Cello has benefited from the following funding sources, in reverse
chronological order:
NASA TCAN 80NSSC21K1053 (PI: Peeples; Co-PIs: Bryan, Norman, O’Shea,
Irish Research Council New Foundations Scheme (2019; PI: Regan)
NSF OAC-1835402 (PI: Norman; Institutional PIs: Bryan, O’Shea, Wise)
NSF SI2 SSE-1440709 (PI: Norman)
NSF PHY-1104819 (PI: Norman)
NSF AST-0808184 (PI: Norman)