
Hi,

We are trying to deal with very large images (they typically do not fit in memory), and for visualization purposes I am trying to apply some downsampling before sending the bits to the graphics renderer. Our files are typically in HDF5 (or NetCDF4) format, so the data on disk is randomly accessible (i.e. it supports NumPy-style indexing), and I noticed that the recently added `skimage.transform.downscale_local_mean()` function (which is just perfect for our purposes) also works with these on-disk arrays.

The problem is that the function takes *ages* to finish. My guess is that this is mainly because accessing data on disk without much care for data locality is far more expensive than doing the same in memory. I suppose the only solution is to rewrite the algorithms we are interested in so that they exploit spatial locality, but I wanted to report this here in case someone has some insight on what to do in this situation.

Thanks,

-- Francesc Alted
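For reference, below is a minimal sketch of the kind of locality-aware rewrite described above: reading the on-disk image in horizontal bands and downscaling each band in memory with `downscale_local_mean()`. It assumes h5py and a 2D dataset; the file name, dataset name, and band size are made up for illustration, and real code would ideally align the bands with the dataset's chunk layout (`dset.chunks`).

```python
# A sketch of block-wise downscaling of a large on-disk image,
# assuming h5py and a 2D HDF5 dataset (names here are hypothetical).
import h5py
import numpy as np
from skimage.transform import downscale_local_mean

def downscale_hdf5(path, dataset, factors=(8, 8), band_rows=4096):
    """Downscale a large 2D on-disk image by reading it in horizontal
    bands, so each read touches a contiguous region of the file."""
    with h5py.File(path, "r") as f:
        dset = f[dataset]
        nrows, _ = dset.shape
        # Round the band height down to a multiple of the row factor so
        # the per-band results tile exactly (no partially averaged rows).
        band_rows -= band_rows % factors[0]
        bands = []
        for start in range(0, nrows, band_rows):
            block = dset[start:start + band_rows, :]  # one contiguous read
            # Note: if the last band's height is not a multiple of
            # factors[0], downscale_local_mean zero-pads it, which
            # slightly biases the last output row(s).
            bands.append(downscale_local_mean(block, factors))
        return np.vstack(bands)

# Hypothetical usage:
# small = downscale_hdf5("image.h5", "data", factors=(16, 16))
```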