This tweet was brought to my attention. There is good criticism in that thread, so let me respond:
1. It is ridiculous that the YT paper is not cited. It definitely was, last I checked. How that citation was dropped is beyond me. I will make sure that is somehow fixed.
2. The claim that a "single slice plot of a single time step took 13 minutes" is also misleading, and is a mistake I will make sure is corrected. I ran the simulations and wrote those sections; someone else - who is admittedly a Paraview developer - ran the plotting routines, and I should have checked this text more closely. They ran both YT and Paraview from their Apple MacBook. The 13 minutes was how long that MacBook took to volume render the dump, versus Paraview, also driven from a MacBook but using a set of HPC nodes on the backend. It is a feature of Paraview that you can do this.
Thus the comparison is very misleading, I am embarrassed I missed it, and I will make sure it is corrected.
3. That said, there are some real deficiencies in YT for HPC-class datasets:
- The ability to overlay several variables using different color schemes on the same plot. There are times when having a temperature field and a metallicity field, for example, overlaid on a density field is helpful. This was mentioned in the paper.
- The ability to volume render interactively until the plot looks exactly how you want it. With YT you have to keep re-running a script until it looks right. With Paraview - even with a 26 TB dump on our classified systems - files can be opened, rotated, and modified in real time to be exactly how you want in about 5 minutes. This was the "cumbersome" comment.
- Paraview's ability to open a huge file - again, we used Paraview on a 26 terabyte file on our classified side - and let the cluster do all the work in minutes, from the comfort of your personal MacBook. YT was never able to fit that file in memory for volume rendering, no matter how many nodes we tried. This failure was not mentioned in the paper, even though some people working with such huge datasets may be interested.
- Another omission from the paper: we did the same thing with Gadget data, which is arguably used more than Enzo. Long story short, YT could never volume render the data - a known issue for at least 5 years - whereas Paraview both reads and volume renders Gadget data natively. Again, this was not pointed out in print, even though some would be interested.
- The issue of Paraview not reading Enzo natively was true at the time of the analysis but is no longer: there is a development version, by the above developer, that reads Enzo.
- Some of you work with Alex Gagliano. Though he has been working on this water network, he has nothing to do with how any of this paper is worded. Please do not give him grief for what is in it. He has a future paper on the subject, and I guarantee these problems will not be repeated.
Anyway, I think YT is a fine product. I use it almost every day and encourage all students to use it. I am embarrassed this was lost from the paper and will work to restore it. But at the same time, to deny that Paraview comes with many advantages - especially as the world moves toward huge, TB-scale 3D datasets - is a little naive.