Often with these sorts of discussions I end up taking the opposite side, regardless of my actual position, to make sure we examine all of the important aspects critically. Historically I've been very positive on 3D+ support, even before Juan joined the party, but I'm going to look at this from a pragmatic angle.
As someone who works in medical imaging, I am generally very keen on having 3D implementations. (In a perfect world, these would also support anisotropic spacing along at least one dimension: slice thickness is very often not the same as the in-plane spacing, and reslicing a volume to isotropic spacing is expensive and tricky in terms of the interpolation used.) Time isn't a true 4th dimension, and the vast majority of data above 3D is multispectral rather than volumetric in higher dimensions. I think 3D anisotropic implementations would be sufficient for the majority of users; how many people come to us with actual 4D+ data? From an algorithmic standpoint, data needs N spatial dimensions for an N-D abstraction to make real sense, and I think that is actually quite rare above N=3, though I could be wrong (I'm interested in any examples you all are aware of which use 4D+ spatial data). I bring this up because if we hypothetically set the bar at some point, say 3D, we have far more options than for N-D in terms of clean, JIT- and maintainer-friendly code.
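To make the anisotropy point concrete, here is a minimal sketch of why per-axis spacing matters; the spacing values and coordinates are hypothetical (a CT-like 5.0 mm slice thickness with 1.0 mm in-plane spacing), not from any real dataset:

```python
import numpy as np

# Hypothetical geometry: 5.0 mm slice thickness, 1.0 mm in-plane spacing.
spacing = np.array([5.0, 1.0, 1.0])   # (z, y, x) voxel spacing in mm

a = np.array([3, 10, 10])             # voxel indices of two points
b = np.array([5, 13, 14])

naive_mm = np.linalg.norm(b - a)             # treats voxels as unit cubes
true_mm = np.linalg.norm((b - a) * spacing)  # accounts for anisotropy
# Here the physically correct distance is roughly twice the naive one,
# which is why algorithms that ignore spacing (or force a reslice to
# isotropic first) are a real problem for medical volumes.
```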
This is because there is actually a massive difference between an algorithm written with nested loops in 2D and 3D separately and a truly N-D version of the same algorithm. The N-D version is not a quick fix; it usually requires an entirely different approach. That gap dramatically narrows the pool of potential contributors who could bring such an algorithm to us, and asking for N-D up front will generally result in abandoned PRs and, like it or not, the perception that if a potential contributor can't (or doesn't think they can) manage things in N-D, they might as well not try. Do we find this acceptable?
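To illustrate how different the two shapes of code are, here is a hedged sketch using binary dilation as a stand-in (it has the same neighbor-visiting structure as many of our algorithms; this is my own toy code, not anything from scikit-image):

```python
import numpy as np

def dilate_2d(img):
    # 2D binary dilation, 4-connectivity, as plain nested loops.
    # Easy to read, easy to review, and in an ideal shape for a JIT.
    out = img.copy()
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            if img[r, c]:
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        out[rr, cc] = True
    return out

def dilate_nd(img):
    # The same operation for arbitrary ndim: pad by 1, ravel, and walk
    # precomputed flat-index offsets -- a genuinely different algorithm,
    # not a tweak of the nested-loop version.
    padded = np.pad(img, 1, constant_values=False)
    flat = padded.ravel()
    out = flat.copy()
    # Flat-index steps to the 2*ndim orthogonal neighbors.
    strides = np.array(padded.strides) // padded.itemsize
    offsets = np.concatenate([strides, -strides])
    for idx in np.flatnonzero(flat):
        out[idx + offsets] = True
    # Crop the padding back off.
    interior = tuple(slice(1, -1) for _ in range(img.ndim))
    return out.reshape(padded.shape)[interior]
```

The N-D version works for any dimensionality, but notice that it already needs the pad-and-ravel machinery discussed below, and that reviewing its index arithmetic is a very different job from reviewing four explicit loops.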
The performance and future-support implications are also nontrivial. The theoretical nested-loop code I mention above is very easy to understand, for reviewers and future maintainers alike, and is already in an ideal form for Numba to JIT. As Numba continues to improve, nested loops with separate branches for 2D and 3D might well beat a complex N-D implementation written in Cython (which may have no real pathway to JITting). At SciPy I heard Pandas is considering Numba as a hard dependency. We should probably be paying close attention to that.
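As a sketch of the nested-loop style I mean, with explicit 2D and 3D branches: the function names here are mine, and the `njit` decorator is applied only if Numba happens to be installed (falling back to a no-op otherwise), so this is an illustration rather than a benchmark claim:

```python
import numpy as np

try:
    from numba import njit   # if available, the loops below JIT as-is
except ImportError:          # graceful fallback so the sketch still runs
    def njit(func):
        return func

@njit
def neighbor_sum_2d(img):
    # Sum of the 4 orthogonal neighbors at each interior pixel.
    out = np.zeros_like(img)
    rows, cols = img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            out[r, c] = (img[r - 1, c] + img[r + 1, c]
                         + img[r, c - 1] + img[r, c + 1])
    return out

@njit
def neighbor_sum_3d(vol):
    # 3D analog: 6 orthogonal neighbors, one more loop level.
    # Trivial to write and review once the 2D version exists.
    out = np.zeros_like(vol)
    nz, ny, nx = vol.shape
    for z in range(1, nz - 1):
        for y in range(1, ny - 1):
            for x in range(1, nx - 1):
                out[z, y, x] = (vol[z - 1, y, x] + vol[z + 1, y, x]
                                + vol[z, y - 1, x] + vol[z, y + 1, x]
                                + vol[z, y, x - 1] + vol[z, y, x + 1])
    return out

def neighbor_sum(arr):
    # Thin dispatcher: two explicit code paths instead of one N-D path.
    if arr.ndim == 2:
        return neighbor_sum_2d(arr)
    if arr.ndim == 3:
        return neighbor_sum_3d(arr)
    raise ValueError("only 2D and 3D supported")
```

The cost of this style is a duplicated kernel per dimensionality; the payoff is that each kernel is transparent to contributors, reviewers, and the JIT.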
An example from direct experience: I had flood fill written in Cython for 2D over 3 years ago (see
https://gist.github.com/JDWarner/83808ce4c9374a242905). The 3D case is trivial to add as a separate path, but we wanted this to be N-D. I didn't see an ideal approach to do so, thus it went on the back burner until I saw Lars' work on
#3022. Borrowing and building on that framework, years later we got N-D flood fill in
#3245. Along the way there were unintuitive pitfalls to the "ravel approach" as we're calling it in
#4088 which were difficult for me to find, and likely a big challenge for reviewers to catch. The approach also requires the image to be padded by 1, then cropped back down, as a way to know where the borders are, since we're working on the raveled array. Copying the input and then making it even larger is bad practice for large images and may cause memory issues (nested-loop implementations, in contrast, require neither copies nor padding). So yes, flood fill is now N-D with a single code path, but it took over 2 additional years... and would probably be more performant in 2D/3D as nested loops, more maintainable, and better future-proofed (Numba). As nice as it is, I do not think this algorithm as written could be extracted from Cython and handed to Numba. Would it have been better to have robust 2D and 3D flood fills years earlier?
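For anyone who hasn't hit it, here is a minimal demonstration of the kind of pitfall I mean, and why the padding exists. This is toy code of my own, not the actual implementation from the PRs above:

```python
import numpy as np

img = np.arange(12).reshape(3, 4)
flat = img.ravel()

# Flat index of the pixel at row 1, column 0 (a left-border pixel).
idx = np.ravel_multi_index((1, 0), img.shape)

# Stepping "left" by the flat offset -1 does not fall off the image;
# it silently wraps to the previous row's LAST column:
left = flat[idx - 1]
assert left == img[0, 3]   # not a true left neighbor

# Padding by 1 guarantees every such step lands in the pad, which can be
# marked as a border sentinel -- at the cost of copying and enlarging
# the input (here -1 stands in for the sentinel value).
padded = np.pad(img, 1, constant_values=-1)
pidx = np.ravel_multi_index((2, 1), padded.shape)  # same pixel, padded coords
assert padded.ravel()[pidx - 1] == -1              # hits the pad, no wrap
```

Off-by-one bugs of this wrap-around flavor are exactly the ones that are hard to spot in review, because the code looks correct and only misbehaves on border pixels.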
Food for thought.
Josh