I don't work with datasets that large, but I know others have in the past.
It's hard to say exactly what's going wrong here without seeing how you construct the cut_region. Would you mind sharing your script?
One likely explanation: cut_regions are defined in terms of other fields, and those fields have to be read off disk for the entire parent data object before your selection can be applied. If you build the cut_region from something like ds.all_data(), that means loading whole-domain fields into memory just to filter them. If you're not already doing so, defining the cut_region on a geometric object (a sphere or box, say) that covers only a small fraction of the domain should be considerably more memory efficient.
Failing that, if you have a cluster available, you should be able to split the memory load over multiple compute nodes by running yt in parallel.
Hope that helps! Please let us know if you have further questions.