I'm writing with an update to an email I sent a while ago:
At the University of Illinois, we're looking to hire a visiting
research scientist to work full-time on yt and yt-related projects.
This position *can* be remote, and the expected salary will be at or
above $65k (plus benefits). I also want to note that this position
will certainly involve engagement beyond yt's traditional applications.
The job posting is here -- we extended the search, and it will now
close on March 15, with a flexible start date that could be as soon as
This job would focus on the underlying infrastructure of yt, and
would present opportunities to become familiar with other
components of the modern data science ecosystem and the pydata stack,
including things like dask, xarray and so forth.
I think that this position may be particularly interesting to
individuals looking to transition to industry, as well as those who
are interested in pursuing jobs in academia that are more focused on
research software. This job is posted as part of the School of
Information Sciences, but it's within a research group that spans the
National Center for Supercomputing Applications (NCSA), the Astronomy
department, the Institute for Genomic Biology and other groups and
institutes on campus.
This position will also provide ample opportunities for
interdisciplinary work with academics, the open source scientific
software ecosystem (SciPy, PyData, etc) and for travel and
professional development. You'd get to work with folks involved in yt
and you'd get to have a hand in developing and designing some really
fun tools for data analysis and visualization. (*And* we have an
annual sweetcorn festival in Urbana -- $1/cob!)
Please do reach out to let me know if you have any questions about the
opportunity, and if you know anyone you think might be interested, it
would be very helpful if you could pass this information along to them.
I've been working on updating the merge between yt-4.0 and master.
As it stands:
* All travis tests and appveyor tests pass. This includes the
"minimal answers," which I personally verified were correctly updated.
(You can see the HTML outputs of the diffs here:
* py2 tests are all failing, which is to be expected.
* py3 answer tests are failing, which is *also* to be expected.
This means that the answer tests -- which should fail, because *we
have changed behavior for SPH/n-body results* -- are the remaining
item to verify. I am working to prepare a set of verification tests
for the results, which I will post in the pull request and check off
as I manually verify each one.