In Marianne's case, there is a 3D volumetric image *in addition to* a time axis. Furthermore, if the time resolution along t is sufficient, many nD algorithms can be used along t as well (with suitable parameters, e.g. the sigma for the Gaussian gradient magnitude). For an example, see:

Andres, B., Kroeger, T., Briggman, K. L., Denk, W., Korogod, N., Knott, G., Koethe, U., and Hamprecht, F. A. (2012). Globally optimal closed-surface segmentation for connectomics. ECCV, 778–791.

where they use a 3D segmentation method to do tracking in 2D+t video.
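To make the "suitable parameters" point concrete, here is a rough sketch (the array shape and sigma values below are made up; the point is only that scipy.ndimage accepts one sigma per axis, so t can be smoothed differently from z, y and x):

import numpy as np
from scipy import ndimage as ndi

# Hypothetical 3D+t recording with axes (t, z, y, x); the data here is random
# noise, just to have an array of the right shape.
movie = np.random.rand(10, 30, 128, 128)

# Treat the whole recording as one 4D array and give each axis its own sigma.
# These values are placeholders -- in practice they should reflect the time
# resolution and the voxel spacing.
sigma = (1, 2, 3, 3)  # (t, z, y, x)
edges = ndi.gaussian_gradient_magnitude(movie, sigma=sigma)

# The result responds to changes over time as well as over space.
print(edges.shape)  # (10, 30, 128, 128)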
On Mon, Apr 29, 2013 at 12:04 PM, Ankit Agrawal <aaaagrawal@gmail.com> wrote:

On Mon, Apr 29, 2013 at 7:13 AM, Juan Nunez-Iglesias <jni.soma@gmail.com> wrote:
On Mon, Apr 29, 2013 at 11:06 AM, Ankit Agrawal <aaaagrawal@gmail.com> wrote:
@Josh and Juan, thanks for your explanations.
I may be wrong, but I feel there would be only a limited number of algorithms that are nD-aware and still scale down nicely when given a 2D image. For instance, if we have 3D data of shape (m x n x p), many functions and algorithms that involve a spatial component, e.g. gradient-based edge detectors, won't be applicable when the third dimension represents a series of images; we can't take something like a gradient along that dimension.
@Ankit, actually, edge detectors generalise quite nicely to nD, e.g.:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.filters.ga...
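As a quick illustration (a minimal sketch on synthetic data), the very same call handles a 2D image and a 3D volume:

import numpy as np
from scipy import ndimage as ndi

# A bright square in a 2D image and a bright cube in a 3D volume.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0
volume = np.zeros((64, 64, 64))
volume[16:48, 16:48, 16:48] = 1.0

# gaussian_gradient_magnitude works on the array's actual dimensionality,
# so the same line serves both cases; the response peaks on the edges/faces.
edges_2d = ndi.gaussian_gradient_magnitude(image, sigma=2)
edges_3d = ndi.gaussian_gradient_magnitude(volume, sigma=2)
print(edges_2d.shape, edges_3d.shape)  # (64, 64) (64, 64, 64)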
Yes, edge detectors, filters and interpolation functions generalize to nD quite easily, but my point was something else, and it slipped through because of my ambiguous explanation. Edge detectors will work well if the data is volumetric in nature. On the contrary, if the data is 3D as a series of images, where the third dimension is not the z coordinate but a time instant, as in Marianne's case, then the gradient along the first two dimensions is taken with respect to space, while the gradient along the third dimension is taken with respect to time (as in the Optical Flow algorithm <http://people.csail.mit.edu/bkph/papers/Optical_Flow_OPT.pdf> in Computer Vision); to my mind, those are fundamentally different quantities.
Many other examples of nD algorithms: http://docs.scipy.org/doc/scipy/reference/ndimage.html
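For instance, connected-component labelling there is already dimension-agnostic (a small sketch on a synthetic 3D mask):

import numpy as np
from scipy import ndimage as ndi

# A synthetic 3D binary mask containing two separate blobs.
mask = np.zeros((40, 40, 40), dtype=bool)
mask[5:15, 5:15, 5:15] = True
mask[25:35, 25:35, 25:35] = True

# label() finds 3D connected components; find_objects() returns a bounding
# box per component as a tuple of slices, one slice per axis.
labels, n_blobs = ndi.label(mask)
boxes = ndi.find_objects(labels)
print(n_blobs)   # 2
print(boxes[0])  # three slices, one per axis, around the first blob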
Instead, if our data is a 3D volumetric image, a great percentage of Computer Vision algorithms won't be of any use, since they rely on making sense of the 3D world from 2D data. I would love to hear any comments on this point. Thanks.
Photographs are rarely 3D, but various kinds of microscopy produce truly 3D images, not a sequence of unrelated images. If you give specific algorithms, we might be better able to point out how to generalise to 3D, but the gist is that most algorithms *do* generalise. It is the implementations that are 2D, not the algorithms.
Regards,
Ankit Agrawal,
Communication and Signal Processing,
IIT Bombay.