Hi,

Many functions in skimage return a numpy array, but the meaning of that array can differ significantly: sometimes it represents an image, sometimes a set of locations or vectors, etc.

Skimage also has an Image class (which inherits from np.ndarray), which I believe is mostly there so that the array is displayed as an image in the IPython notebook. What about taking this idea a bit further? In particular, I am thinking of two things:

1) Allowing extra attributes on the Image class, like "sampling", which specifies the distance between pixels along each axis. Algorithms could use this attribute to take anisotropy into account, and visualization toolkits could use it to scale the image correctly and automatically. This may not seem like a very common use case for 2D images, but 3D data is usually not isotropic.

Other attributes that I think may be of use for the Image class are "origin", which specifies the location of the top-left pixel relative to an arbitrary coordinate frame, and "meta" for metadata (e.g. EXIF tags).
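
To make this more concrete, here is a minimal sketch of what I have in mind. This is not skimage's current Image class; the attribute names "sampling", "origin" and "meta" are just the proposal above, and the defaults are my own assumptions:

    import numpy as np

    class Image(np.ndarray):
        """Sketch of an ndarray subclass carrying extra image attributes."""

        def __new__(cls, array, sampling=None, origin=None, meta=None):
            obj = np.asarray(array).view(cls)
            # Distance between pixels along each axis, e.g. (dz, dy, dx).
            obj.sampling = sampling if sampling is not None else (1.0,) * obj.ndim
            # World position of the first (top-left) pixel.
            obj.origin = origin if origin is not None else (0.0,) * obj.ndim
            # Free-form metadata, e.g. EXIF tags.
            obj.meta = meta if meta is not None else {}
            return obj

        def __array_finalize__(self, obj):
            # Propagate the attributes through views, slices, ufuncs, etc.
            if obj is None:
                return
            self.sampling = getattr(obj, 'sampling', None)
            self.origin = getattr(obj, 'origin', None)
            self.meta = getattr(obj, 'meta', {})

    # An anisotropic 3D volume: 2.5 units between slices, 0.5 in-plane.
    vol = Image(np.zeros((10, 256, 256)), sampling=(2.5, 0.5, 0.5))
    print(vol.sampling)     # (2.5, 0.5, 0.5)
    print(vol[0].sampling)  # the attribute survives slicing

Visualization code could then read vol.sampling to scale the axes, and a filter could, for instance, divide its sigma by the sampling so that smoothing stays isotropic in world coordinates.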

2) Using a PointSet class to represent numpy arrays that are sets of points or vectors. Such a class can also make working with point data much easier, both in internal algorithms and for the end user.

An example class can be seen here: https://gist.github.com/almarklein/6620956. It allows things like appending/removing/inserting/popping individual points, and calculating normals, angles, distances, etc.
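
For those who do not want to follow the link, here is a rough, simplified sketch of the idea. It is not the gist's implementation (the gist, for instance, grows its buffer in place, while this sketch returns new arrays), but it shows the flavor: a point set is simply an (N, ndim) array with some convenience methods built on plain numpy operations:

    import numpy as np

    class PointSet(np.ndarray):
        """Sketch: an (N, ndim) array of points with a few conveniences."""

        def __new__(cls, points):
            return np.atleast_2d(np.asarray(points, dtype=float)).view(cls)

        def appended(self, point):
            # ndarrays have a fixed size, so this sketch returns a new set;
            # the gist linked above resizes the underlying buffer in place.
            return PointSet(np.vstack([np.asarray(self), point]))

        def distances_to(self, point):
            # Euclidean distance from each point in the set to a given point.
            diff = np.asarray(self) - np.asarray(point, dtype=float)
            return np.sqrt((diff ** 2).sum(axis=1))

        def angles(self):
            # Angle of each 2D point/vector with respect to the first axis.
            return np.arctan2(self[:, 1], self[:, 0])

    pp = PointSet([[0.0, 0.0], [1.0, 0.0]])
    pp = pp.appended([1.0, 1.0])
    print(pp.distances_to([0.0, 0.0]))  # -> 0, 1, sqrt(2)

Internally, functions that now return a plain (N, 2) array of coordinates could return such an object without breaking anyone, since it still is an array.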

I would like to stress that both the Image and PointSet classes do (and should) inherit from np.ndarray, so that they remain completely compatible with existing code in skimage and other packages.
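
To illustrate that compatibility, continuing with the Image sketch above: any function that expects a plain array accepts these objects unchanged, and going back to a base-class array is a zero-copy view:

    import numpy as np
    import scipy.ndimage

    img = Image(np.random.rand(64, 64), sampling=(0.5, 0.5))

    # Existing code that only knows about plain arrays keeps working.
    smoothed = scipy.ndimage.gaussian_filter(img, sigma=2)
    print(np.mean(img))

    # Converting back to a base-class array is a view, not a copy.
    plain = img.view(np.ndarray)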

I am interested to hear whether these ideas could find their way into skimage,
  Almar