For the past couple of days I've been trying to figure out halo profiles and virial radius calculations in yt, which so far has mostly consisted of trying to understand the differences in the "Overdensity" field between yt versions.

The biggest difference is due to yt-2.x using an incorrect redshift scaling for the overdensity calculation. The field is defined as

Matter_Density / (rho_crit_now * data.pf.hubble_constant**2 * (1 + data.pf.current_redshift)**3)

which scales the entire critical density by (1+z)^3 rather than only its matter component.
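For comparison, one corrected form divides by the critical density evaluated at the current redshift, so that only the matter term picks up the (1+z)^3 factor. This is only a minimal sketch, assuming a flat universe with negligible radiation and reusing the variable names above:

    rho_crit_z = rho_crit_now * data.pf.hubble_constant**2 * (
        data.pf.omega_matter * (1 + data.pf.current_redshift)**3
        + data.pf.omega_lambda)
    Overdensity = Matter_Density / rho_crit_z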

This error is fixed in yt 3, but I'm not sure whether people still using legacy versions of yt are aware of it or correcting for it.

After accounting for this, there is still a ~1.5% difference in the mean overdensity of my simulation between yt 2.x and yt 3 (the yt 3 value is lower). This difference comes from the particle_density component of matter_density. The yt 2.x version builds it from the particle masses read from Enzo, deposited with yt's CICDeposit_3. The yt 3 version instead uses the Dark_Matter_Density field written by Enzo, which comes from Enzo's built-in CIC method. As far as I understand, differences between CIC implementations could lead to different values over subdomains of a simulation, but they shouldn't lead to different total masses over the entire (periodic) simulation box.
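This is not either code's actual deposit routine, but a toy 1-d CIC deposit illustrates why the box total should be invariant: the two weights for each particle sum to one, so with periodic wrapping every bit of mass lands in some cell, regardless of implementation details.

    import numpy as np

    def cic_deposit_1d(positions, masses, n_cells, box_size=1.0):
        # Deposit particle masses onto a periodic 1-d grid with cloud-in-cell weights.
        dx = box_size / n_cells
        field = np.zeros(n_cells)
        x = positions / dx - 0.5           # position in cell-center coordinates
        left = np.floor(x).astype(int)     # index of the cell center to the left
        frac = x - left                    # fractional distance toward the next cell
        for i in range(len(masses)):
            field[left[i] % n_cells] += masses[i] * (1.0 - frac[i])
            field[(left[i] + 1) % n_cells] += masses[i] * frac[i]
        return field / dx                  # mass per cell -> density

    rng = np.random.default_rng(0)
    pos = rng.random(1000)                 # positions in [0, 1)
    mass = rng.random(1000)
    rho = cic_deposit_1d(pos, mass, 64)
    # The deposited density integrates back to exactly the total particle mass.
    assert np.isclose((rho * (1.0 / 64)).sum(), mass.sum())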

I went through each grid of my simulation individually, and in every one of them the total mass implied by Dark_Matter_Density (summed over cells and multiplied by the cell volume) is less than the sum of the particle_mass values. The differences range from 0.4% to 3.5% across individual grids. My guess is that the Dark_Matter_Density field is losing the mass that gets deposited into the ghost zones outside the grid boundaries.
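For reference, the kind of per-grid check this involves can be sketched in yt 3 roughly as below; the dataset path is a placeholder and the field tuples assume the Enzo frontend:

    import yt

    ds = yt.load("DD0046/DD0046")   # placeholder: substitute your own Enzo output
    for grid in ds.index.grids:
        # mass implied by Enzo's deposited field vs. the raw particle masses
        dm_mass = (grid["enzo", "Dark_Matter_Density"] *
                   grid["index", "cell_volume"]).sum()
        part_mass = grid["all", "particle_mass"].sum()
        if part_mass.value == 0.0:
            continue                # skip grids that contain no particles
        print(grid, (part_mass - dm_mass) / part_mass)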

I'm not familiar enough with the internal workings of Enzo to know whether this has any impact on the simulations themselves, or whether it is only an issue with the data outputs. Either way, it might be a good idea to have yt go back to calculating the particle_density field from the particle masses and doing the CIC deposition in yt, at least for Enzo data.
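As a point of comparison, yt 3 can already do this kind of deposition itself through its deposit fields, e.g. ("deposit", "all_cic"), which is built from particle_mass rather than from Enzo's on-disk field. A rough sketch (placeholder path again) comparing the two box totals:

    import yt

    ds = yt.load("DD0046/DD0046")   # placeholder Enzo output
    ad = ds.all_data()
    # density deposited by yt itself from particle_mass with its own CIC kernel
    mass_yt = (ad["deposit", "all_cic"] * ad["index", "cell_volume"]).sum()
    # density deposited by Enzo at runtime and written to disk
    mass_enzo = (ad["enzo", "Dark_Matter_Density"] * ad["index", "cell_volume"]).sum()
    print(mass_yt.in_units("Msun"), mass_enzo.in_units("Msun"))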
    - Josh