Hello,
I think you should all pay attention to standards if you want to work with nD images. For structured data files there is a library that has been used for decades and is very efficient: netCDF (http://www.unidata.ucar.edu/software/netcdf/), and there are already Python tools to deal with it. This library can process data in parallel, which is important when one deals with large datasets. The latest version is built on HDF5, which is the format used in HPC.
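For instance, reading an nD variable with the netCDF4 Python package is straightforward; this is only a minimal sketch, and the file and variable names are placeholders:

    from netCDF4 import Dataset

    ds = Dataset("volume.nc")            # open an existing netCDF file read-only
    vol = ds.variables["intensity"][:]   # load the whole variable as a numpy array
    print(vol.shape, vol.dtype)
    ds.close()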
If your data are not stored in a standard format, the ITK library provides a binding called SimpleITK that allows one to read every format ITK knows.
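In practice, reading an image together with its spacing looks roughly like this (the file name is just a placeholder):

    import SimpleITK as sitk

    img = sitk.ReadImage("scan.mha")     # any format ITK knows
    arr = sitk.GetArrayFromImage(img)    # numpy array, axes ordered z, y, x
    spacing = img.GetSpacing()           # physical voxel spacing, ordered x, y, z
    print(arr.shape, spacing)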
It is important to take into account that operations on pixels depend on the grid spacing; for example, a finite-difference gradient is wrong if it assumes a step of one because the spacing has not been read correctly from the file.
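As a small illustration with numpy (the spacing values here are made up), passing the real spacing changes the result:

    import numpy as np

    vol = np.random.rand(10, 20, 30)   # placeholder 3D data
    dz, dy, dx = 2.0, 0.5, 0.5         # anisotropic voxel spacing from the file header

    g_bad = np.gradient(vol)              # implicitly assumes a unit step on every axis
    g_ok = np.gradient(vol, dz, dy, dx)   # finite differences scaled by the true spacing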
As 3D data often comes from measurement devices (X-ray tomography, MRI, ultrasound), it is important to consider the device information. The DICOM format was made for this specific purpose, and there is a Python binding called pydicom.
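A minimal sketch with pydicom (the file name is a placeholder, and not every file carries every tag):

    import pydicom

    ds = pydicom.dcmread("slice_0001.dcm")   # one slice of a series
    pixels = ds.pixel_array                  # 2D numpy array of raw pixel values
    row_spacing, col_spacing = ds.PixelSpacing
    thickness = ds.SliceThickness            # may be absent in some files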
I know that the scikit-image team doesn't want too many dependencies, but for the nD case I think it is important to use existing libraries.
Best regards
On Friday, 30 November 2012 05:45:49 UTC+1, Juan Nunez-Iglesias wrote:
Hey Guys,
I mentioned this briefly at SciPy, but I would like to reiterate: a lot of data is 3D images these days, and more and more data is being generated that is multi-channel, 3D+t. Therefore, it would be awesome if scikit-image started making more of an effort to support these. In the best case, the dimension of the underlying array can be abstracted away — see
here for example, the functions juicy_center (which extracts the centre of an array, along all dimensions), surfaces (grabs the "border" arrays along each dimension), hollowed (zeroes-out the centre), and more. Otherwise, writing a 3D function that gracefully degrades to 2D when one of the dimensions is 1 is also possible.
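For illustration, a dimension-agnostic helper in this spirit (a sketch, not the actual juicy_center implementation) could look like:

    import numpy as np

    def center(arr, margin=1):
        # Trim `margin` elements from both ends of every dimension,
        # whatever the number of dimensions.
        return arr[tuple(slice(margin, -margin) for _ in range(arr.ndim))]

    print(center(np.arange(25).reshape(5, 5)).shape)   # (3, 3)
    print(center(np.zeros((5, 6, 7, 8))).shape)        # (3, 4, 5, 6)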
In general, the amount of additional effort to make code 3-, 4- or n- dimensional is relatively low when you write the algorithm initially, relative to refactoring a whole bunch of functions later. I'll try to fiddle with whichever code I need, but in the meantime, what do you think about adding a paragraph or a sentence about this issue in the scikit-image
contribute section, so that people at least have this in mind when they are thinking of writing something new?
Thanks,
Juan.