@Josh and Juan, Thanks for your explanation.

On Sat, Apr 27, 2013 at 11:58 AM, Ankit Agrawal <aaaagrawal@gmail.com> wrote:
Hi all,
I guess I need some clarification on what nD images exactly mean. For 3D, these are the ways I can think of it:

1. An RGB image (n x m x 3): m*n pixels in total.
2. A series of grey-scale images (m x n x p): p images, each with m*n pixels.
3. A grey-scale image (m x n x 2) or an RGB image (m x n x 4) with a depth value at each pixel, like a point cloud (http://pointclouds.org/)?
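In numpy terms these are just differently shaped arrays. A purely illustrative sketch (the sizes for m, n, p are picked arbitrarily, nothing scikit-image specific):

import numpy as np

rgb_image   = np.zeros((512, 512, 3))    # 1. RGB image (m x n x 3)
image_stack = np.zeros((512, 512, 10))   # 2. series of p grey-scale images (m x n x p)
grey_depth  = np.zeros((512, 512, 2))    # 3. grey-scale + depth value per pixel (m x n x 2)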
As others have pointed out, I think (1) and (3) are 2D+c, and (2) is 3D. You can also have (m x n x p x 3), or even (t x m x n x p x c) for arbitrary t and c.
In general, when I said 3D I meant (m x n x p) and (m x n x p x 3), and when I said nD I meant (t x m x n x p x c). But, importantly, what I'm looking for is functions that are nD aware but will degrade nicely if provided with e.g. a 2D image.
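To make "nD aware but degrades nicely" concrete, here is a minimal sketch assuming scipy.ndimage is available; ndi.gaussian_filter already accepts arrays of any dimensionality, so the same code path serves 2D images, 3D volumes, and beyond (the function name smooth and the sigma value are just placeholders):

import numpy as np
from scipy import ndimage as ndi

def smooth(image, sigma=1.0):
    # gaussian_filter works for 2D, 3D, ... nD input alike,
    # so nothing here needs to know the dimensionality.
    return ndi.gaussian_filter(image, sigma=sigma)

smooth(np.random.rand(64, 64))           # plain 2D image
smooth(np.random.rand(64, 64, 16))       # 3D volume (m x n x p)
smooth(np.random.rand(4, 64, 64, 16))    # 4D, e.g. a time series of volumes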
I may be wrong, but I feel there would be a limited number of algorithms that are nD aware yet scale down nicely when given a 2D image. For instance, if we have 3D data of the type (m x n x p) where the third dimension indexes a series of images, many functions and algorithms involving spatial components, e.g. gradient-based edge detectors, won't be applicable, since a gradient along that dimension has no meaning (a rough numpy sketch of this distinction is in the P.S. below). If, instead, our data is a 3D volumetric image, a large fraction of Computer Vision algorithms won't be of any use, since they rely on making sense of the 3D world from 2D data. I would love to hear any comments on this point. Thanks.

Regards,
Ankit Agrawal,
Communication and Signal Processing,
IIT Bombay.
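P.S. A rough numpy sketch of the distinction I have in mind (arrays filled with random placeholder data):

import numpy as np

volume = np.random.rand(64, 64, 32)   # 3D volumetric image (m x n x p)
gx, gy, gz = np.gradient(volume)      # a gradient along every axis is meaningful here

stack = np.random.rand(64, 64, 32)    # p unrelated images stored as (m x n x p)
# only the in-plane gradients make sense, so process slice by slice:
per_slice_gradients = [np.gradient(stack[..., k]) for k in range(stack.shape[-1])]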