
On 19/09/2013 10:36, Almar Klein wrote:
> 1) Allowing extra attributes for the Image class, like "sampling", which specifies the distance between the pixels. This attribute can then be used by algorithms to take anisotropy into account, and visualization toolkits could use it to scale the image in the correct way automatically. This may not seem a very common use case for 2D images, but 3D data is usually not isotropic.
> Other attributes that I think may be of use for the Image class are "origin", which specifies the location of the top-left pixel relative to an arbitrary coordinate frame, and "meta" for the metadata (e.g. EXIF tags).
Hi,

I am not related to the development of scikit-image, but I guess its goal is to work on images in general and not get too specialised, for instance in medical images.

I am working on a Python interface for a C++ medical imaging library, and I chose to subclass np.ndarray: images keep header information ('dim', 'orientation', 'origin' and 'pixelSize'), and the main feature is to keep track of those spatial coordinates when we slice or resample the array. As Stéfan pointed out, "when you slice out a scalar, or sum, you get an Image object out!", but this has not been an issue so far; I just hide it by overriding:

    def __str__(self):
        if len(self.shape) == 0:
            return str(self.view(np.ndarray))
        else:
            return self.__repr__()

The documentation I placed online might give you more ideas on what could be done with a subclass of np.ndarray dedicated to images: http://www.doc.ic.ac.uk/~kpk09/irtk/#irtk.imread

May I ask: if you were to add sampling information and spatial coordinates, such as an origin for the top-left pixel, how would you input that information into scikit-image? Could you read it directly from the input files, or would you need some user input?

Kind regards,
Kevin
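[Editor's note] The slicing-aware subclass Kevin describes can be sketched roughly as follows. This is an illustration only, not the actual irtk implementation; the class name is invented, and the attribute names are taken from the email. `__array_finalize__` is NumPy's hook for propagating instance attributes through views and slices:

```python
import numpy as np

# Rough illustration of the approach described above: an np.ndarray
# subclass that carries header information and keeps it across slicing.
# Names are taken from the email; this is not the actual irtk code.
class MedicalImage(np.ndarray):
    def __new__(cls, data, pixelSize=None, origin=None):
        obj = np.asarray(data).view(cls)
        obj.pixelSize = pixelSize if pixelSize is not None else (1.0,) * obj.ndim
        obj.origin = origin if origin is not None else (0.0,) * obj.ndim
        return obj

    def __array_finalize__(self, obj):
        # Called for views and slices: copy the header from the parent array.
        if obj is None:
            return
        self.pixelSize = getattr(obj, 'pixelSize', None)
        self.origin = getattr(obj, 'origin', None)

    def __str__(self):
        # Hide the fact that a sliced-out scalar is still a MedicalImage.
        if len(self.shape) == 0:
            return str(self.view(np.ndarray))
        return self.__repr__()

# An anisotropic 3D volume: 2 mm between slices, 0.5 mm in-plane.
vol = MedicalImage(np.zeros((10, 256, 256)), pixelSize=(2.0, 0.5, 0.5))
print(vol[0].pixelSize)  # the slice keeps the header: (2.0, 0.5, 0.5)
```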

On Thu, Sep 19, 2013 at 12:03 PM, Almar Klein <almar.klein@gmail.com> wrote:
> I suppose that the most important bit is that functions that support anisotropy should check whether a "sampling" attribute is present on the given array.
> I don't think we currently have any of these, but for now we can probably include a `sampling` argument in the functions that support it.
Mmm, you seem to be right, but you're *going* to :) The new marching cubes algorithm has a sampling argument, and I think Josh spoke about adding it to some morphological operators. I hope to do a PR on the MCP algorithm soon, which will add support for anisotropy as well.
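[Editor's note] A minimal sketch of the pattern being discussed, i.e. a function that honours a "sampling" attribute when the array carries one, with an explicit argument taking precedence and an isotropic fallback. The function and the attribute semantics are assumptions drawn from this thread, not an actual scikit-image API:

```python
import numpy as np

# Hedged sketch: consume a per-axis "sampling" attribute if present,
# otherwise accept it as an argument, otherwise assume isotropic spacing.
def gradient_magnitude(image, sampling=None):
    if sampling is None:
        # Pick up the attribute if the array happens to carry one.
        sampling = getattr(image, 'sampling', (1.0,) * image.ndim)
    grads = np.gradient(image, *sampling)  # per-axis spacing
    if image.ndim == 1:
        grads = [grads]  # np.gradient returns a bare array for 1D input
    return np.sqrt(sum(g ** 2 for g in grads))

# A ramp rising by 1 per row: with 2.0 mm row spacing, the physical
# gradient magnitude is 0.5 everywhere.
ramp = np.tile(np.arange(10.0)[:, None], (1, 5))
print(gradient_magnitude(ramp, sampling=(2.0, 1.0)))
```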
> I do not necessarily mean that a PointSet represents an image (although it could), but more generally to, for instance, store the locations of detected feature points.
> How do you think this would fit into the scope of image processing? (Asked out of curiosity, not at all to put the idea down.)
That's a good point. I think a PointSet class fits image processing because many image processing algorithms either accept or produce some form of locations or vectors. I would still call it "image processing" when you process the resulting locations/vectors. However, such algorithms probably fall outside the scope of scikit-image. So a better place for a PointSet class would probably be SciPy, but I have a feeling they would not be interested in including it.

> I am not related to the development of scikit-image, but I guess its goal is to work on images in general and not get too specialised, for instance in medical images.
I agree with that. I think that the only parameter of interest to scikit-image is the "sampling" needed to deal with anisotropic arrays. In that sense, it should be sufficient that scikit-image provides an Image class to which arbitrary attributes can be attached. Functions in scikit-image can then check whether the array has a "sampling" attribute, and use it. If a function returns an array with the same shape, it would be nice to also set the sampling on that.

> May I ask: if you were to add sampling information and spatial coordinates, such as an origin for the top-left pixel, how would you input that information into scikit-image? Could you read it directly from the input files, or would you need some user input?
I am not sure I understand. In most cases the attributes are added to the image (i.e. the numpy array) when it is read.
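[Editor's note] A sketch of what is meant here: the reader attaches the attributes at load time, so no user input is needed when the file format stores the spacing itself (as DICOM and NIfTI do). The reader below is hypothetical and its header values are simulated stand-ins; a real reader would parse them from the file:

```python
import numpy as np

class Image(np.ndarray):
    """Minimal ndarray subclass so extra attributes can be attached."""

# Hypothetical reader: a real one would parse the spacing and origin from
# the file header (DICOM, NIfTI, ...) instead of these stand-in values.
def imread_with_header(path=None):
    header = {'pixel_size': (2.0, 0.5, 0.5), 'origin': (0.0, 0.0, 0.0)}
    data = np.zeros((4, 8, 8))
    img = data.view(Image)
    # Attributes are attached here, at read time, not by the user later.
    img.sampling = header['pixel_size']
    img.origin = header['origin']
    img.meta = header
    return img

img = imread_with_header()
print(img.sampling)  # (2.0, 0.5, 0.5)
```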
participants (2)
- Almar Klein
- Kevin Keraudren