Fellow yt users:
A few weeks ago, we asked yt users and developers to fill out a brief survey (http://goo.gl/forms/hRNryOWTPO) to provide us with feedback on how well yt is meeting the needs of the community and how we can improve. Thank you to all 39 people who responded; your feedback has given us a great deal to consider as we move forward with the code. We summarize the results of the survey below, beginning with the basic takeaway:
Overall Survey Takeaway:
The survey respondents are generally pleased with yt. It meets their needs, has a wonderful community, is relatively easy to install, and has fair documentation. Major short-term requests were for improvements to the documentation, particularly the API docs and source code comments, as well as more cross-linking within the existing documentation and keeping the docs up to date. Furthermore, people wanted more attention paid to making sure existing code in 3.0 works and to restoring all 2.x functionality in 3.0.
The single biggest takeaway from the survey is that the transition to yt 3.0 has been fraught with difficulties. Many respondents expressed satisfaction with the new functionality in 3.0, but found the overall transition process, in terms of documentation, analysis modules, and community response, to be lacking.
There were 39 unique responses to our survey. 75% of the respondents were graduate students and postdocs, with a smattering of faculty, undergraduates, and researchers. Nearly everyone is at a 4-year university. 50% of the respondents consider themselves intermediate users, 20% novices, 20% advanced users, and 10% gurus.
90% of the respondents use the standalone install script, with several users employing other methods (potentially in addition to the standalone script). 95% of the respondents rated installation as a 3 or better (out of 5), with most people settling on a 4 out of 5. Installation comments were aimed at having better means of installing on remote supercomputing systems and/or making pip installs work more reliably.
72% of respondents gave yt 5 out of 5 for community responsiveness, and 97% rated it 3 or greater. Clearly this is our strong point. There was a very wide distribution of ways in which people contacted the community for help, with the most popular being the mailing lists, the IRC channel, mailing developers directly, and searching Google. Comments in this section were mostly positive, but one user wished for more concrete action to be taken after bugs were reported.
77% of respondents gave 4 or 5 out of 5 for the overall rating of the documentation. Individual docs components were more of a mix. Cookbooks were ranked very highly, and quickstart notebooks and narrative docs were generally ranked well. The two documentation components that seemed to be ranked lower (although still fair) were the API docs and comments in the source code, with 15% of respondents noting that they were "mostly not useful" (i.e., 2 out of 5). There were a lot of comments regarding ways to improve the docs, which I bullet-point here:
- The organization of the docs is difficult to parse; it is hard to find what you're looking for.
- It is hard to know what to search for, so make the command list (i.e., the API docs) more prominent.
- Docs are not always up to date (even between 3.0 and dev).
- There are discrepancies between the API docs and the narrative docs.
- Examples are either too simple or too advanced; more intermediate examples are needed.
- The units docs need more explanation.
- There is not enough source code commenting or API documentation.
- There is not enough cross-linking between docs.
- More FAQ / gotchas entries for common mistakes would help.
- API docs should include more examples and should note how to use all of the options, not just the most common ones.
88% of respondents found yt to meet their research needs (4 or 5 out of 5). Respondents are generally using yt on a variety of datasets including grid data, octree data, particle data, and MHD with only a handful of users dealing with spherical or cylindrical data at present. Nearly all of the frontends are being used by respondents, with a few exceptions: Chombo, Moab, Nyx, Pluto, and non-astro data. Visualization remains the main use of yt with 97% of respondents, but simple analysis received 82% and advanced analysis received 62%. Interestingly, 31% of respondents use halo analysis tools, with only 15% using synthetic observation analysis.
51% of respondents gave yt 5 out of 5 for general satisfaction, with 28% giving 4 out of 5 and 15% giving 3 out of 5. Overall, this is pretty good, but it is probably biased by the self-selection of those who chose to fill out the survey.
Comments on the biggest shortcomings of yt include:
- documentation (see above)
- learning to "think in yt"
- adding new functionality while existing functionality remains broken (or undocumented)
- making sure 3.0 matches all functionality from 2.x
- keeping the documentation up to date
- making the transition from 2.x to 3.0 easier (e.g., how to update scripts)
Things to focus on in the next year:
- documentation (almost unanimously)
- making sure 3.0 can do everything 2.x could
Thank you for all of the valuable feedback. We sincerely appreciate the constructive criticism, which will help us build a better code and community! We will put together a blueprint for how to address these shortcomings soon. Look for it after the holiday break. Have a wonderful holiday!
On behalf of the yt development team,
Cameron

--
Cameron Hummels
Postdoctoral Researcher
Steward Observatory
University of Arizona
yt-dev mailing list