Hi Stefan,
Thanks for the suggestions!
Is this related to
http://projects.scipy.org/scipy/scipy/ticket/213
in any way?
As far as I can see, the problems look different, but thanks for the example of how to document this. I did confirm that your example exhibits the same behaviour under numarray, in case that is useful.
Code snippets to illustrate the problem would be welcome.
OK. I have had a go at producing a code snippet. I apologize that this is based on numarray rather than numpy, since I'm using STScI Python, but I think it should be very easy to convert if you have numpy instead.
What I am doing is to transform overlapping input images onto a common, larger grid and co-add them. Although I'm using affine_transform on 3D data from FITS images, the issue can be illustrated using a simple 1D translation of a single 2D test array. The input values are just [4., 3., 2., 1.] in each row. With a translation of -0.1, the values should therefore be something like [X, 3.1, 2.1, 1.1, X, X], where the Xs represent points outside the original data range. What I actually get, however, is roughly [X, 3.1, 2.1, 1.0, 1.9, X]. The 5th value of 1.9 contaminates the co-added data in the final output array. Now that I'm looking at this element by element, I suppose the bad value of 1.9 is just a result of extrapolating in order to preserve the original number of data points, isn't it? Sorry I wasn't clear on that in my original post -- but surely a blank value (as specified by cval) would be better?
I suppose I could work around this by blanking out the extrapolated column after doing the affine_transform. I could calculate which is the column to blank out based on the sense of the offset and the input array dimensions. It seems pretty messy and inefficient though. Another idea is to split the translation into integer and fractional parts, keep the input and output array dimensions the same initially and then copy the output into a larger array with integer offsets. That is messy to keep track of though. Maybe a parameter could instead be added to affine_transform that tells it to shrink the number of elements instead of extrapolating? I'd be a bit out of my depth trying to implement that though, even if the authors agree... (maybe in a few months though).
Can anyone comment on whether this problem should be considered a bug, or whether it's intended behaviour that I should work around?
The code snippet follows below. Thanks for your patience with someone who isn't accustomed to posting questions like this routinely :-).
James.
-----
import numarray as N
import numarray.nd_image as ndi

# Create a 2D test pattern:
I = N.zeros((2,4), N.Float32)
I[:,:] = N.arange(4.0, 0.0, -1.0)

# Transformation parameters for a simple translation in 1D:
trmatrix = N.array([[1,0],[0,1]])
troffset = (0.0, -0.1)

# Apply the offset to the test pattern:
I_off1 = ndi.affine_transform(I, trmatrix, troffset, order=3,
                              mode='constant', cval=-1.0, output_shape=(2,6))
I_off2 = ndi.affine_transform(I, trmatrix, troffset, order=3,
                              mode='constant', cval=-1.0, output_shape=(2,6),
                              prefilter=False)

# Compare the data before and after interpolation:
print I
print I_off1
print I_off2
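[For readers without numarray, here is a rough translation of the snippet to numpy/scipy. This is an editorial sketch, not part of the original thread; note that recent scipy versions have since reworked the boundary handling discussed below, so the output values may differ from the numarray output quoted later in the thread.]

```python
import numpy as np
from scipy import ndimage as ndi

# Create a 2D test pattern: each row is [4., 3., 2., 1.]
I = np.zeros((2, 4), dtype=np.float32)
I[:, :] = np.arange(4.0, 0.0, -1.0)

# Transformation parameters for a simple translation in 1D:
trmatrix = np.array([[1, 0], [0, 1]])
troffset = (0.0, -0.1)

# Apply the offset to the test pattern, with and without prefiltering:
I_off1 = ndi.affine_transform(I, trmatrix, troffset, order=3,
                              mode='constant', cval=-1.0,
                              output_shape=(2, 6))
I_off2 = ndi.affine_transform(I, trmatrix, troffset, order=3,
                              mode='constant', cval=-1.0,
                              output_shape=(2, 6), prefilter=False)

# Compare the data before and after interpolation:
print(I)
print(I_off1)
print(I_off2)
```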
Sorry I accidentally broke this thread in 2 (see thread of March 9).
I tried manually adding the right reference in the mail header to continue the same thread, but obviously got it wrong (I think because I replied to the digest email instead of starting a new one). Not sure whether there is a better way to reply to messages in a digest (I'm using Thunderbird)? Hope it works this time...
James.
James Turner wrote:
Sorry I accidentally broke this thread in 2 (see thread of March 9).
I tried manually adding the right reference in the mail header to continue the same thread, but obviously got it wrong (I think because I replied to the digest email instead of starting a new one). Not sure whether there is a better way to reply to messages in a digest (I'm using Thunderbird)? Hope it works this time...
I see it correctly threaded in Thunderbird. Don't worry about it, regardless. We're smart people; we'll catch on. :-)
Hi James
On Thu, Mar 15, 2007 at 06:01:55PM -0400, James Turner wrote:
Code snippets to illustrate the problem would be welcome.
OK. I have had a go at producing a code snippet. I apologize that this is based on numarray rather than numpy, since I'm using STScI Python, but I think it should be very easy to convert if you have numpy instead.
What I am doing is to transform overlapping input images onto a common, larger grid and co-add them. Although I'm using affine_transform on 3D data from FITS images, the issue can be illustrated using a simple 1D translation of a single 2D test array. The input values are just [4., 3., 2., 1.] in each row. With a translation of -0.1, the values should therefore be something like [X, 3.1, 2.1, 1.1, X, X], where the Xs represent points outside the original data range. What I actually get, however, is roughly [X, 3.1, 2.1, 1.0, 1.9, X]. The 5th value of 1.9 contaminates the co-added data in the final output array. Now that I'm looking at this element by element, I suppose the bad value of 1.9 is just a result of extrapolating in order to preserve the original number of data points, isn't it? Sorry I wasn't clear on that in my original post -- but surely a blank value (as specified by cval) would be better?
I'd like to confirm that you see the same results when running your script:
[[ 4.  3.  2.  1.]
 [ 4.  3.  2.  1.]]
[[-1.         3.12520003 2.11439991 1.01719999 1.87479997 -1.        ]
 [-1.         3.12520003 2.11439991 1.01719999 1.87479997 -1.        ]]
[[-1.         3.0996666  2.0999999  1.34300005 1.90033329 -1.        ]
 [-1.         3.0996666  2.0999999  1.34300005 1.90033329 -1.        ]]
Cheers Stéfan
Hi Stefan,
I'd like to confirm that you see the same results when running your script:
[[ 4.  3.  2.  1.]
 [ 4.  3.  2.  1.]]
[[-1.         3.12520003 2.11439991 1.01719999 1.87479997 -1.        ]
 [-1.         3.12520003 2.11439991 1.01719999 1.87479997 -1.        ]]
[[-1.         3.0996666  2.0999999  1.34300005 1.90033329 -1.        ]
 [-1.         3.0996666  2.0999999  1.34300005 1.90033329 -1.        ]]
Yes, I get exactly the same numbers with numarray on a PC running Linux. I just rounded the numbers off in my explanatory text to simplify the discussion; sorry if that was confusing.
Thanks!
James.
On Fri, Mar 16, 2007 at 11:49:56AM -0400, James Turner wrote:
Hi Stefan,
I'd like to confirm that you see the same results when running your script:
[[ 4.  3.  2.  1.]
 [ 4.  3.  2.  1.]]
[[-1.         3.12520003 2.11439991 1.01719999 1.87479997 -1.        ]
 [-1.         3.12520003 2.11439991 1.01719999 1.87479997 -1.        ]]
[[-1.         3.0996666  2.0999999  1.34300005 1.90033329 -1.        ]
 [-1.         3.0996666  2.0999999  1.34300005 1.90033329 -1.        ]]
Yes, I get exactly the same numbers with numarray on a PC running Linux. I just rounded the numbers off in my explanatory text to simplify the discussion; sorry if that was confusing.
Not at all, just wanted to make sure. I am starting to form an idea of what is happening here. Check out the following result:
In [25]: import numpy as N
In [26]: x = N.array([[4,3,8,1],[4,3,8,1.]])
In [27]: ndi.geometric_transform(x, shift, output_shape=(2,6), prefilter=False, order=0, cval=-1)
Out[27]:
array([[-1.,  3.,  8.,  1.,  8., -1.],
       [-1.,  3.,  8.,  1.,  8., -1.]])
Looks like the spline fit is done on mirrored input data, instead of padding the input with the cval.
Regards Stéfan
Thanks, Stefan.
In [25]: import numpy as N
In [26]: x = N.array([[4,3,8,1],[4,3,8,1.]])
In [27]: ndi.geometric_transform(x, shift, output_shape=(2,6), prefilter=False, order=0, cval=-1)
Out[27]:
array([[-1.,  3.,  8.,  1.,  8., -1.],
       [-1.,  3.,  8.,  1.,  8., -1.]])
Your example seems to be missing a definition for "shift", but I see that shift = lambda (y, x): (y, x-xoff) reproduces your result for any 0. < xoff <= 0.5.
Looks like the spline fit is done on mirrored input data, instead of padding the input with the cval.
Yes, that makes sense. I suppose the idea is to *avoid* creating edge effects due to an artificial sharp cut-off(?). That is fine for calculating the interpolant, but it seems like a mistake to include the mirrored values in the output result, doesn't it?
Do people agree that points outside the bounds of the original data should be blank, such that the correct result in the above case would be [-1., 3., 8., 1., -1., -1.]? If so, I can go ahead and file a bug report (should it be a scipy-ticket, an email to STScI or both?). I would offer to have a go at changing the nd_image code, but I think I might be well out of my depth at this point...
Cheers,
James.
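[Editorial note: Stefan's experiment can be reproduced with a plain function in place of the Python-2 tuple-argument lambda James mentions. A sketch, where xoff is the hypothetical sub-pixel shift; on a modern scipy the out-of-bounds fifth column may come back as cval rather than the mirrored value, since this boundary behaviour was later reworked.]

```python
import numpy as np
from scipy import ndimage as ndi

x = np.array([[4., 3., 8., 1.],
              [4., 3., 8., 1.]])
xoff = 0.1  # hypothetical sub-pixel shift, as in James's lambda

def shift(coords):
    # Python 3 equivalent of: lambda (y, x): (y, x - xoff)
    y, c = coords
    return (y, c - xoff)

out = ndi.geometric_transform(x, shift, output_shape=(2, 6),
                              prefilter=False, order=0, cval=-1)
print(out)
```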
The people at STScI put me in touch with Peter Verveer, the author of nd_image. Unfortunately Peter is currently unable to maintain the code (either in numarray or scipy), but he did send me some comments on the problem discussed in this thread. Here's what he said:
James.
-----
Hi James,
Yes, it could be that you see some mirroring. Let me first explain what the mode options do:
If you try to interpolate a value that falls outside of the boundaries, then that is done by either setting it constant (mode='constant') or by mapping to a position inside the boundaries, i.e. by mirroring, and then interpolating. So the mode argument really deals with extrapolating.
Problem is when you are mapping a value that is inside the boundaries, but very close. Interpolation is done by splines which require that a window is placed around the point you are interpolating, and the result is calculated from the values inside that window. Thus, if you are close to the boundary, part of that window will fall outside the boundaries, and the algorithm must choose how to set the values outside the boundary. Ideally that would be done in the same way as above, i.e. you should have the choice if that is done by a constant value, or by mirroring etc.
Unfortunately, the spline algorithms that I use (references for the algorithm are in the manual) have an intrinsic mirroring assumption built in. Thus, for interpolating values inside the boundaries, a mirror boundary is the only option. I did not figure out a way to modify the interpolation algorithms. Therefore, if you choose a mode different from mirroring, then you end up with small artifacts at the boundaries. Presumably these artifacts will get bigger if the order of spline interpolation you select is larger.
So, it's not really a bug, it's an undesired feature...
Cheers, Peter
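[Editorial sketch: Peter's point about the interpolation window can be seen directly with map_coordinates in a current scipy, where the boundary modes are handled more consistently than in 2007-era ndimage. Near the right-hand edge of a 4-point signal, the result depends on what the algorithm assumes beyond the boundary.]

```python
import numpy as np
from scipy import ndimage as ndi

x = np.array([4., 3., 2., 1.])

# Interpolate at an interior point, a point close to the edge, and the edge:
coords = np.array([[1.5, 2.9, 3.0]])

# The cubic-spline window extends past the last sample for the points near
# the edge, so the assumed boundary values (the mode) affect the result:
for mode in ('mirror', 'nearest', 'constant'):
    vals = ndi.map_coordinates(x, coords, order=3, mode=mode, cval=-1.0)
    print(mode, vals)
```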
On 22/03/07, James Turner jturner@gemini.edu wrote:
So, it's not really a bug, it's an undesired feature...
It is curable, though painful - you can pad the image out, given an estimate of the size of the window. Yes, this sucks.
Anne.
On Thu, Mar 22, 2007 at 02:41:52PM -0400, Anne Archibald wrote:
On 22/03/07, James Turner jturner@gemini.edu wrote:
So, it's not really a bug, it's an undesired feature...
It is curable, though painful - you can pad the image out, given an estimate of the size of the window. Yes, this sucks.
I would rather opt for changing the spline fitting algorithm than for padding with zeros.
Stéfan
Stefan van der Walt wrote:
On Thu, Mar 22, 2007 at 02:41:52PM -0400, Anne Archibald wrote:
On 22/03/07, James Turner jturner@gemini.edu wrote:
So, it's not really a bug, it's an undesired feature...
It is curable, though painful - you can pad the image out, given an estimate of the size of the window. Yes, this sucks.
I would rather opt for changing the spline fitting algorithm than for padding with zeros.
From what I understand, the splines used in ndimage have the implicit mirror-symmetric boundary condition which also allows them to be computed rapidly. There may be ways to adapt other boundary conditions and maintain rapid evaluation, but it is not trivial as far as I know. Standard spline-fitting allows multiple boundary conditions because matrix inversion is used. I think the spline-fitting done in ndimage relies on equal-spacing and mirror-symmetry to allow simple IIR filters to be used to compute the spline coefficients very rapidly.
This assumption would have to be re-visited for any changes to be made.
-Travis
On Thu, Mar 22, 2007 at 04:33:53PM -0700, Travis Oliphant wrote:
I would rather opt for changing the spline fitting algorithm than for padding with zeros.
From what I understand, the splines used in ndimage have the implicit mirror-symmetric boundary condition which also allows them to be computed rapidly. There may be ways to adapt other boundary conditions and maintain rapid evaluation, but it is not trivial as far as I know. Standard spline-fitting allows multiple boundary conditions because matrix inversion is used. I think the spline-fitting done in ndimage relies on equal-spacing and mirror-symmetry to allow simple IIR filters to be used to compute the spline coefficients very rapidly.
Thanks, Travis. I wasn't aware of these restrictions.
Would it be possible to call fitpack to do the spline fitting? I noticed that it doesn't exhibit the same mirror-property:
In [24]: z = scipy.interpolate.splrep([0,1,2,3,4],[0,4,3,2,1])
In [25]: scipy.interpolate.splev([0,1,2,3,4,5], z)
Out[25]:
array([ -1.32724622e-16,   4.00000000e+00,   3.00000000e+00,
         2.00000000e+00,   1.00000000e+00,  -1.25000000e+00])
Regards Stéfan
Stefan van der Walt wrote:
On Thu, Mar 22, 2007 at 04:33:53PM -0700, Travis Oliphant wrote:
I would rather opt for changing the spline fitting algorithm than for padding with zeros.
From what I understand, the splines used in ndimage have the implicit mirror-symmetric boundary condition which also allows them to be computed rapidly. There may be ways to adapt other boundary conditions and maintain rapid evaluation, but it is not trivial as far as I know. Standard spline-fitting allows multiple boundary conditions because matrix inversion is used. I think the spline-fitting done in ndimage relies on equal-spacing and mirror-symmetry to allow simple IIR filters to be used to compute the spline coefficients very rapidly.
Thanks, Travis. I wasn't aware of these restrictions.
Would it be possible to call fitpack to do the spline fitting? I noticed that it doesn't exhibit the same mirror-property:
It doesn't. It uses standard spline-fitting techniques (which are also slower). So, yes you could call fitpack but for a large multi-dimensional array it's going to take longer.
-Travis
Thanks for the explanation, Travis.
It now looks like there are 2 distinct issues getting mixed up here, however. First, there is the issue of the mirror symmetry of the spline algorithm affecting the data towards the edges, as described by Peter and Travis (this is probably also what Anne is referring to). It is certainly useful to know about this, but I think this effect *within* the data boundaries must be relatively subtle compared with what I'm noticing. Second, there is a reflection 1 pixel *outside* the bounds of the original data, instead of the blank value I would expect. I think the code is just calculating one too many rows. I talked to Peter about this a bit more and he now agrees that it might be a real bug. I'll attach the most relevant part of his email below, along with my question (quoted).
This doesn't solve the problem of fixing the code, of course, but I think it's useful to agree what the behaviour should be.
Cheers,
James.
-----
Actually, just going back to your last email, it looks like the mirrored values in the output are *outside* the bounds of the input data (by 1 pixel), not just close enough for the interpolation window to overlap the edge. In the example from my last email, the value of 1.899 in the 5th column corresponds to a data point beyond the final 1.0 of the input data.
You are right, that value is actually interpolated from a point just outside the boundary of the input. Probably it should have been -1, so you could call that a bug. Why it calculates that value (and 2 for order=0) I am not sure...
Hi James,
Would it be possible to ask Peter if he knows anything that could help us resolve scipy ticket 213 (http://projects.scipy.org/scipy/scipy/ticket/213)?
The basic issue is that ndimage.spline_filter seems to introduce nasty ringing artifacts, which make all of the geometric transforms (affine_transform included) perform very badly. Examples of these artifacts can be seen in the attachments to this ticket:
http://projects.scipy.org/scipy/scipy/attachment/ticket/213/rotate_artifacts.jpg
http://projects.scipy.org/scipy/scipy/attachment/ticket/213/spline%20error.png
If Peter has any suggestions at all as to what's happened, that would be very helpful. Hopefully this won't be too much of an imposition, but these problems with the spline prefiltering are really hampering ndimage, so hopefully it will be worth it.
Thanks,
Zach Pincus Program in Biomedical Informatics and Department of Biochemistry Stanford University School of Medicine
On Mar 22, 2007, at 11:34 AM, James Turner wrote:
The people at STScI put me in touch with Peter Verveer, the author of nd_image. Unfortunately Peter is currently unable to maintain the code (either in numarray or scipy), but he did send me some comments on the problem discussed in this thread. Here's what he said:
James.
Hi James,
Yes, it could be that you see some mirroring. Let me first explain what the mode options do:
If you try to interpolate a value that falls outside of the boundaries, then that is done by either setting it constant (mode='constant') or by mapping to a position inside the boundaries, i.e. by mirroring, and then interpolating. So the mode argument really deals with extrapolating.
Problem is when you are mapping a value that is inside the boundaries, but very close. Interpolation is done by splines which require that a window is placed around the point you are interpolating, and the result is calculated from the values inside that window. Thus, if you are close to the boundary, part of that window will fall outside the boundaries, and the algorithm must choose how to set the values outside the boundary. Ideally that would be done in the same way as above, i.e. you should have the choice if that is done by a constant value, or by mirroring etc.
Unfortunately, the spline algorithms that I use (references for the algorithm are in the manual) have an intrinsic mirroring assumption built in. Thus, for interpolating values inside the boundaries, a mirror boundary is the only option. I did not figure out a way to modify the interpolation algorithms. Therefore, if you choose a mode different from mirroring, then you end up with small artifacts at the boundaries. Presumably these artifacts will get bigger if the order of spline interpolation you select is larger.
So, it's not really a bug, it's an undesired feature...
Cheers, Peter
Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Hi Zachary,
OK - I sent Peter the URL for your post...
Cheers,
James.
By the way, ringing at sharp edges is an intrinsic feature of higher-order spline interpolation, right? I believe this kind of interpolant is really intended for smooth (band-limited) data. I'm not sure why the pre-filtering makes a difference though; I don't yet understand well enough what the pre-filter actually does.
I'm not sure what people normally do in computer graphics, since I'm working more with natural band-limited images, but in the case of Stefan's "rotate_artifacts" example, wouldn't it be appropriate to use the 1st- or 0th-order spline instead of the 2nd order? If your real-life data are smooth enough, however, then in theory the ringing with higher orders should go away.
Sorry if I'm stating the obvious and missing the real point! I just wanted to make sure this hasn't been overlooked... Likewise, sorry I didn't mention this before if it does turn out to be relevant. Let me know if you want references to explain what I said above.
James.
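[Editorial sketch checking James's suggestion numerically, using a hypothetical hard-edged test image rather than Stefan's actual data: linear interpolation (order=1) forms convex combinations of neighbouring pixels, so it cannot overshoot at an edge, while the higher-order splines can ring.]

```python
import numpy as np
from scipy import ndimage as ndi

# A hard-edged test pattern, roughly in the spirit of "rotate_artifacts":
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0

lin = ndi.rotate(img, 30, order=1, reshape=False)  # bilinear
cub = ndi.rotate(img, 30, order=3, reshape=False)  # cubic spline

# order=1 stays within the input value range; order=3 may overshoot it:
print(lin.min(), lin.max())
print(cub.min(), cub.max())
```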
Hello all,
By the way, ringing at sharp edges is an intrinsic feature of higher-order spline interpolation, right? I believe this kind of interpolant is really intended for smooth (band-limited) data. I'm not sure why the pre-filtering makes a difference though; I don't yet understand well enough what the pre-filter actually does.
I'm not sure what people normally do in computer graphics, since I'm working more with natural band-limited images, but in the case of Stefan's "rotate_artifacts" example, wouldn't it be appropriate to use the 1st- or 0th-order spline instead of the 2nd order? If your real-life data are smooth enough, however, then in theory the ringing with higher orders should go away.
James is indeed correct. This whole thing is my mistake based on a mis-interpretation of terminology. I've more carefully re-read the paper on which the ndimage spline code is based ('Splines: A Perfect Fit for Signal/Image Processing' by Michael Unser; it's on citeseer). I now (hopefully) understand that the spline "pre-filter", while explicitly analogous to a traditional anti-aliasing pre-filter, is actually computing the spline coefficients via a filtering-type operation. While a traditional anti-aliasing filter should blur an image (as a band-limiting step), the fact that the anti-aliasing pre-filter does not is of no concern since the filtered output isn't a band-limited set of pixels, but a set of coefficients for a band-limited spline.
The actual transform operators then use these coefficients to (properly) compute pixel values at different locations. I just assumed that the "pre-filtering" was broken (even on natural images with smooth variations) because images pre-filtered with the spline filter didn't look like traditional pre-filtered images ought.
IMO, ticket 213 should be closed as PEBCAK (I'll do that forthwith); further I'm sorry to have caused Peter to be further bothered about this non-issue.
Zach
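[Editorial sketch: Zach's reading can be checked numerically, assuming a reasonably recent scipy (spline_filter1d's mode parameter). Prefiltering just computes the B-spline coefficients, so filtering by hand and then interpolating with prefilter=False matches the one-step call; mode='mirror' is used throughout so the boundary assumptions agree.]

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(0)
x = rng.random(16)
coords = np.array([np.linspace(0, 15, 31)])

# One-step: map_coordinates prefilters internally for order > 1.
a = ndi.map_coordinates(x, coords, order=3, mode='mirror')

# Two-step: compute the spline coefficients explicitly, then interpolate
# with prefilter=False so the coefficients are used as-is.
c = ndi.spline_filter1d(x, order=3, mode='mirror')
b = ndi.map_coordinates(c, coords, order=3, prefilter=False, mode='mirror')

print(np.allclose(a, b))
```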
On Mar 22, 2007, at 8:13 PM, James Turner wrote:
By the way, ringing at sharp edges is an intrinsic feature of higher-order spline interpolation, right? I believe this kind of interpolant is really intended for smooth (band-limited) data. I'm not sure why the pre-filtering makes a difference though; I don't yet understand well enough what the pre-filter actually does.
I'm not sure what people normally do in computer graphics, since I'm working more with natural band-limited images, but in the case of Stefan's "rotate_artifacts" example, wouldn't it be appropriate to use the 1st- or 0th-order spline instead of the 2nd order? If your real-life data are smooth enough, however, then in theory the ringing with higher orders should go away.
Sorry if I'm stating the obvious and missing the real point! I just wanted to make sure this hasn't been overlooked... Likewise, sorry I didn't mention this before if it does turn out to be relevant. Let me know if you want references to explain what I said above.
James.
Hi Zach,
further I'm sorry to have caused Peter to be further bothered about this non-issue.
Don't worry -- I have let him know that we've probably figured it out. I hope Stefan agrees.
I now (hopefully) understand that the spline "pre-filter", while explicitly analogous to a traditional anti-aliasing pre-filter, is actually computing the spline coefficients via a filtering-type operation.
Ah, sounds like it's just calculating the weights for the specific sample grids. That makes sense.
So, if you want a smooth result in the rotate_artifacts example, you probably first want to apply a convolution as an anti-aliasing step using one of the nd_image filter functions (eg. convolve or gaussian_filter), then interpolate as before with the higher-order spline. That ought to get rid of the artefacts if nd_image is indeed doing the right thing. I think you figured out that much already. Although not strictly band limited, I think a Gaussian with a sigma of around 1 pixel should reduce the aliasing to just a couple of percent (I did that calculation before in connection with my astronomy stuff).
I just tried adding the following line to the code snippet from ticket 213, before the line beginning "I_rot_1 =" and it looks to me like the artefacts have gone (plays havoc with the colours though!). It's hard to tell the difference between the 1st and 2nd order results, however, even zooming right in.
I = ndi.gaussian_filter(I, 1.0, order=0, mode='constant', cval=0.0)
Hope I'm making sense; it's 4:20am here.
Cheers,
James.
On Thu, Mar 22, 2007 at 11:20:37PM -0700, Zachary Pincus wrote:
The actual transform operators then use these coefficients to (properly) compute pixel values at different locations. I just assumed that the "pre-filtering" was broken (even on natural images with smooth variations) because images pre-filtered with the spline filter didn't look like traditional pre-filtered images ought.
IMO, ticket 213 should be closed as PEBCAK (I'll do that forthwith); further I'm sorry to have caused Peter to be further bothered about this non-issue.
The ticket is more concerned with the usability of ndimage. Currently, the principle of least surprise is violated. If you use the default arguments of ndimage.rotate (except for axes, which should still be fixed to be (1,0) by default) and rotate Lena by 30 degrees, you get something with green splotches on it (attached to the ticket).
IMO, we should either change the default parameters or update the documentation before closing the ticket.
Cheers Stéfan
Hello all,
On Mar 23, 2007, at 3:04 AM, Stefan van der Walt wrote:
On Thu, Mar 22, 2007 at 11:20:37PM -0700, Zachary Pincus wrote:
The actual transform operators then use these coefficients to (properly) compute pixel values at different locations. I just assumed that the "pre-filtering" was broken (even on natural images with smooth variations) because images pre-filtered with the spline filter didn't look like traditional pre-filtered images ought.
IMO, ticket 213 should be closed as PEBCAK (I'll do that forthwith); further I'm sorry to have caused Peter to be further bothered about this non-issue.
The ticket is more concerned with the usability of ndimage. Currently, the principle of least surprise is violated. If you use the default arguments of ndimage.rotate (except for axes, which should still be fixed to be (1,0) by default) and rotate Lena by 30 degrees, you get something with green splotches on it (attached to the ticket).
IMO, we should either change the default parameters or update the documentation before closing the ticket.
Hmm, this is worrisome. There really shouldn't be ringing on continuous-tone images like Lena -- right? (And at no step in an image like that should gaussian filtering be necessary if you're doing spline interpolation -- also right?)
I assumed the problem was a non-issue after some testing with a few natural images didn't introduce any artifacts like those, but now with the Lena example I'm not sure. This is frustrating, to be sure, mostly because of my limited knowledge on the subject -- just enough to be dangerous.
Zach
Hi Zach,
Hmm, this is worrisome. There really shouldn't be ringing on continuous-tone images like Lena -- right? (And at no step in an image like that should gaussian filtering be necessary if you're doing spline interpolation -- also right?)
That's hard to say. Just because it's mainly a continuous-tone image doesn't necessarily mean it is well sampled everywhere. This depends both on the subject and the camera optics. Unlike the data I usually work with, I think everyday digital photographs (probably a photo scan in the case of Lena) do not generally have the detector sampling frequency matched to the optical resolution of the image. If that's true, the presence of aliasing in interpolated images depends on the structure of the subject and whether the scene has edges or high-frequency patterns in it.
Stefan's rotated Lena example is indeed a bit bizarre on zooming in! However, the artefacts are clearly localized to distinct edges, so I suspect this is indeed some kind of aliasing. Moreover, it looks like Lena has been decimated (reduced in size) prior to the rotation. That is definitely a good way to get artefacts, unless an anti-aliasing filter is applied before shrinking the image. My impression is that this image is probably somewhat undersampled (to understand exactly what that means, read up on the Sampling Theorem).
I can't say that the green blobs are *not* caused by a flaw in the algorithm. In particular, I am not used to working with colour images of this kind, so I don't have a good feeling for what aliasing looks like in a case like this. However, I definitely would not rule out intrinsic aliasing effects as the cause of this problem without investigating it further. One experiment might be to blur the original Lena with a Gaussian whose sigma is 1 pixel of the shrunken image before actually shrinking her, then do the rotation.
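[Editorial sketch of the experiment James proposes. The decimation factor and the use of ndimage.zoom/rotate are assumptions for illustration; a sigma of one pixel of the shrunken grid corresponds to `factor` input pixels.]

```python
import numpy as np
from scipy import ndimage as ndi

def decimate_then_rotate(img, factor, angle, presmooth=True):
    """Shrink `img` by `factor`, then rotate by `angle` degrees.

    With presmooth=True, a Gaussian of sigma ~= 1 pixel of the shrunken
    grid (i.e. `factor` input pixels) is applied first as a crude
    anti-aliasing filter, as suggested above."""
    if presmooth:
        img = ndi.gaussian_filter(img, sigma=factor)
    small = ndi.zoom(img, 1.0 / factor, order=3)
    return ndi.rotate(small, angle, order=2, reshape=False)

# One could compare the presmooth=False and presmooth=True results on the
# same (hypothetical) image to see whether the edge artefacts shrink.
```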
The first was on Stefan's artificial data which had sharp edges, and got very nasty ringing artifacts even with 3rd order splines. From your recollection, is this expected behavior based on splines and the nature of Stefan's image, or more likely to be a bug?
Your question was aimed at Travis, so I don't want to discourage him from answering it :-), but looking at this in more detail, I do think the amplitude of the artefacts here is greater than I might expect due to ringing with a quadratic b-spline kernel, which I think has minima with amplitudes <10% of the central peak. There has to be SOME oscillation, but in Stefan's "rotate_artifacts" example it seems to be at the level of ~100%. Also, it is not present on one of the inner edges for some reason. So I do wonder if the algorithm in nd_image is making this worse than it needs to be. These thoughts seem consistent with Travis's comments. Is it possible to transform the same data using the fitpack routines that Stefan mentioned in post 026641 and compare the results? I just tried doing a similar rotation in PyRAF on a monochrome image with a bicubic spline, and see considerably smaller artefacts (just a compact overshoot of probably a few % at the edge).
Cheers,
James.
Hello folks,
Hmm, this is worrisome. There really shouldn't be ringing on continuous-tone images like Lena -- right? (And at no step in an image like that should gaussian filtering be necessary if you're doing spline interpolation -- also right?)
That's hard to say. Just because it's mainly a continuous-tone image doesn't necessarily mean it is well sampled everywhere. This depends both on the subject and the camera optics. Unlike the data I usually work with, I think everyday digital photographs (probably a photo scan in the case of Lena) do not generally have the detector sampling frequency matched to the optical resolution of the image. If that's true, the presence of aliasing in interpolated images depends on the structure of the subject and whether the scene has edges or high-frequency patterns in it.
Based on my reading of the two excellent Unser papers (both the one that ndimage's spline code is based on, and the one that Travis linked to), it seems like a major point of using splines for interpolation is *better* behavior in the case of non-band-limited data than the traditional 'anti-aliasing [e.g. lowpass] filter' + 'resampling' + 'reconstruction [e.g. lowpass] filter' procedure.
That is, based on my (very limited) understanding (I understand the basics of the sampling theorem and can follow the Unser paper's update thereof, but not much more than that), in the spline case the spline fitting step *replaces* the anti-aliasing filter in the above procedure as the method for dealing with non-band-limited data. And the claim is that it in fact works better in many cases.
So it seems that something is wrong if the spline interpolation tools in ndimage only work properly in the band-limited case, or require lowpass filtering.
Stefan's rotated Lena example is indeed a bit bizarre on zooming in! However, the artefacts are clearly localized to distinct edges, so I suspect this is indeed some kind of aliasing. Moreover, it looks like Lena has been decimated (reduced in size) prior to the rotation. That is definitely a good way to get artefacts, unless an anti-aliasing filter is applied before shrinking the image. My impression is that this image is probably somewhat undersampled (to understand exactly what that means, read up on the Sampling Theorem).
Agreed, it looks like aliasing. Nevertheless, any resampling procedure is supposed to deal with this internally, right? Either by lowpass filtering (traditional case), or by spline fitting (spline case as described by Unser and understood by me) -- it shouldn't be letting aliasing bubble through, correct?
The first was on Stefan's artificial data which had sharp edges, and got very nasty ringing artifacts even with 3rd order splines. From your recollection, is this expected behavior based on splines and the nature of Stefan's image, or more likely to be a bug?
Your question was aimed at Travis, so I don't want to discourage him from answering it :-), but looking at this in more detail, I do think the amplitude of the artefacts here is greater than I might expect due to ringing with a quadratic b-spline kernel, which I think has minima with amplitudes <10% of the central peak. There has to be SOME oscillation, but in Stefan's "rotate_artifacts" example it seems to be at the level of ~100%. Also, it is not present on one of the inner edges for some reason. So I do wonder if the algorithm in nd_image is making this worse than it needs to be.
Good observation! Here are cubic spline fits to a step and a delta function, which both have quite small amounts of ringing compared to what's observed -- at least on the color images. Maybe 10% ringing in each color channel adds up perceptually more than it does mathematically?
import numpy, Gnuplot, scipy.interpolate
# step x = numpy.arange(-10, 10) y = numpy.where(x > 0, 1, 0) tck = scipy.interpolate.splrep(x, y, s=0, k=3) nx = numpy.linspace(-10, 9, 200, True) ny = scipy.interpolate.splev(nx, tck) d = Gnuplot.Data(numpy.transpose([x, y]), with='points') nd = Gnuplot.Data(numpy.transpose([nx,ny]), with='lines') g = Gnuplot.Gnuplot() g.plot(d, nd) ny.min() # -0.107490563074 -- largest ringing is ~10% of step size
# delta x = numpy.arange(-10, 10) y = numpy.where(x == 0, 1, 0) tck = scipy.interpolate.splrep(x, y, s=0, k=3) nx = numpy.linspace(-10, 9, 200, True) ny = scipy.interpolate.splev(nx, tck) d = Gnuplot.Data(numpy.transpose([x, y]), with='points') nd = Gnuplot.Data(numpy.transpose([nx,ny]), with='lines') g = Gnuplot.Gnuplot() g.plot(d, nd) ny.min() # -0.136449610107 -- largest ringing is ~13% of impulse size
(sample plots are attached for reference).
Now let's use ndimage to rotate a step function by 45%, or zoom in on it:
import numpy, scipy.ndimage o = numpy.ones((100, 50), dtype=float) z = numpy.zeros((100, 50), dtype=float) a = numpy.concatenate([o, z], axis=1) b = scipy.ndimage.rotate(a, 45, order=3) b.max() # 1.08832325055 b.min() # -0.0883232505527 c = scipy.ndimage.zoom(a, 5, order=3) c.min() # -0.107600148039 c.max() # 1.10760014804
This (very simple) example looks exactly right. (Also, when I manually inspect the images in ImageJ -- which can load 32-bit floating point TIFFs, which is great -- they look correct). So what's going on with Lena and the other color/otherwise more complex images?
Zach
Hello folks,
Hmm, this is worrisome. There really shouldn't be ringing on continuous-tone images like Lena -- right? (And at no step in an image like that should gaussian filtering be necessary if you're doing spline interpolation -- also right?)
That's hard to say. Just because it's mainly a continuous-tone image doesn't necessarily mean it is well sampled everywhere. This depends both on the subject and the camera optics. Unlike the data I usually work with, I think everyday digital photographs (probably a photo scan in the case of Lena) do not generally have the detector sampling frequency matched to the optical resolution of the image. If that's true, the presence of aliasing in interpolated images depends on the structure of the subject and whether the scene has edges or high-frequency patterns in it.
Based on my reading of the two excellent Unser papers (both the one that ndimage's spline code is based on, and the one that Travis linked to), it seems like a major point of using splines for interpolation is *better* behavior in the case of non-band-limited data than the traditional 'anti-aliasing [e.g. lowpass] filter' + 'resampling' + 'reconstruction [e.g. lowpass] filter' procedure.
That is, based on my (very limited) understanding (I understand the basics of the sampling theorem and can follow the Unser paper's update thereof, but not much more than that), in the spline case the spline fitting step *replaces* the anti-aliasing filter in the above procedure as the method for dealing with non-band-limited data. And the claim is that it in fact works better in many cases.
So it seems that something is wrong if the spline interpolation tools in ndimage only work properly in the band-limited case, or require lowpass filtering.
Stefan's rotated Lena example is indeed a bit bizarre on zooming in! However, the artefacts are clearly localized to distinct edges, so I suspect this is indeed some kind of aliasing. Moreover, it looks like Lena has been decimated (reduced in size) prior to the rotation. That is definitely a good way to get artefacts, unless an anti-aliasing filter is applied before shrinking the image. My impression is that this image is probably somewhat undersampled (to understand exactly what that means, read up on the Sampling Theorem).
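James's point about decimation without an anti-aliasing filter can be seen in a tiny 1-D sketch (a toy illustration of my own, not from this thread): a signal at the Nyquist frequency aliases to a constant when naively subsampled, while even a crude 2-tap average prefilter preserves the local mean.

```python
# Nyquist-rate toy signal: the fastest oscillation the grid can hold
signal = [0, 1] * 8

# Decimate by 2 with no prefilter: the oscillation aliases to a constant
naive = signal[::2]
# -> [0, 0, 0, 0, 0, 0, 0, 0], whose mean misrepresents the true mean of 0.5

# Crude 2-tap box prefilter (a stand-in for a proper anti-aliasing filter),
# then decimate: the local mean survives
filtered = [(a + b) / 2.0 for a, b in zip(signal[::2], signal[1::2])]
# -> [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
```

Once the high frequency has aliased away, no later interpolation step can recover it, which is why the filtering has to happen before the shrink.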
Agreed, it looks like aliasing. Nevertheless, any resampling procedure is supposed to deal with this internally, right? Either by lowpass filtering (traditional case), or by spline fitting (spline case as described by Unser and understood by me) -- it shouldn't be letting aliasing bubble through, correct?
The first was on Stefan's artificial data which had sharp edges, and got very nasty ringing artifacts even with 3rd order splines. From your recollection, is this expected behavior based on splines and the nature of Stefan's image, or more likely to be a bug?
Your question was aimed at Travis, so I don't want to discourage him from answering it :-), but looking at this in more detail, I do think the amplitude of the artefacts here is greater than I might expect due to ringing with a quadratic b-spline kernel, which I think has minima with amplitudes <10% of the central peak. There has to be SOME oscillation, but in Stefan's "rotate_artifacts" example it seems to be at the level of ~100%. Also, it is not present on one of the inner edges for some reason. So I do wonder if the algorithm in nd_image is making this worse than it needs to be.
Good observation! Here are cubic spline fits to a step and a delta function, which both have quite small amounts of ringing compared to what's observed -- at least on the color images. Maybe 10% ringing in each color channel adds up perceptually more than it does mathematically?
import numpy, Gnuplot, scipy.interpolate

# step
x = numpy.arange(-10, 10)
y = numpy.where(x > 0, 1, 0)
tck = scipy.interpolate.splrep(x, y, s=0, k=3)
nx = numpy.linspace(-10, 9, 200, True)
ny = scipy.interpolate.splev(nx, tck)
d = Gnuplot.Data(numpy.transpose([x, y]), with_='points')  # 'with_' is the modern Gnuplot.py spelling; old versions used with=
nd = Gnuplot.Data(numpy.transpose([nx, ny]), with_='lines')
g = Gnuplot.Gnuplot()
g.plot(d, nd)
ny.min()  # -0.107490563074 -- largest ringing is ~10% of step size

# delta
x = numpy.arange(-10, 10)
y = numpy.where(x == 0, 1, 0)
tck = scipy.interpolate.splrep(x, y, s=0, k=3)
nx = numpy.linspace(-10, 9, 200, True)
ny = scipy.interpolate.splev(nx, tck)
d = Gnuplot.Data(numpy.transpose([x, y]), with_='points')
nd = Gnuplot.Data(numpy.transpose([nx, ny]), with_='lines')
g = Gnuplot.Gnuplot()
g.plot(d, nd)
ny.min()  # -0.136449610107 -- largest ringing is ~13% of impulse size
Now let's use ndimage to rotate a step function by 45 degrees, or zoom in on it:
import numpy, scipy.ndimage

o = numpy.ones((100, 50), dtype=float)
z = numpy.zeros((100, 50), dtype=float)
a = numpy.concatenate([o, z], axis=1)

b = scipy.ndimage.rotate(a, 45, order=3)
b.max()  # 1.08832325055
b.min()  # -0.0883232505527

c = scipy.ndimage.zoom(a, 5, order=3)
c.min()  # -0.107600148039
c.max()  # 1.10760014804
This (very simple) example looks exactly right. (Also, when I manually inspect the images in ImageJ -- which can load 32-bit floating point TIFFs, which is great -- they look correct). So what's going on with Lena and the other color/otherwise more complex images?
Zach
Hi Zach,
Based on my reading of the two excellent Unser papers (both the one that ndimage's spline code is based on, and the one that Travis linked to), it seems like a major point of using splines for interpolation is *better* behavior in the case of non-band-limited data than the traditional 'anti-aliasing [e.g. lowpass] filter' + 'resampling' + 'reconstruction [e.g. lowpass] filter' procedure.
It's certainly true that intermediate-order spline interpolants will cause less ringing than an "ideal" sinc function. So their behaviour is better for non-band-limited data than applying simplistic formulae derived from the Sampling Theorem. This fact would help you out if you don't use a low-pass filter. However, I wouldn't go as far as to say that splines *replace* some form of low-pass filtering. I haven't read Unser's papers in much detail, though (at least not recently), and my applications are different from yours; it may depend on exactly what you're trying to do.
So it seems that something is wrong if the spline interpolation tools in ndimage only work properly in the band-limited case, or require lowpass filtering.
It depends what you mean by working properly -- but in this case it does look like something is wrong and that you figured it out in your next post :-).
James.
PS... (sorry for all the posts, for anyone who isn't interested...)
Agreed, it looks like aliasing. Nevertheless, any resampling procedure is supposed to deal with this internally, right? Either by lowpass filtering (traditional case), or by spline fitting (spline case as described by Unser and understood by me) -- it shouldn't be letting aliasing bubble through, correct?
In the general case, I don't think it is appropriate for the resampling procedure to use low-pass filtering internally to avoid artefacts, except perhaps when downsampling. It probably makes sense for computer graphics work, but there are cases where the input data are band limited to begin with and any degradation in resolution is unacceptable. Where needed, I think low-pass filtering should either be the responsibility of the main program or an option. It's not even possible for the resampling procedure to prevent artefacts in every case, since the aliasing in a badly undersampled image cannot be removed post factum (this is for undersampled photos rather than artificial graphics, which I think are fundamentally different because everything is defined on the grid, although I haven't sat down and proved it mathematically). I'm also not sure how the procedure could decide on the level of smoothing needed for a given dataset without external information.
Of course intermediate-order splines will probably keep everyone happy, being reasonably robust against ringing effects without causing much smoothing or interpolation error :-).
By the way, I think you and Stefan might be interested in a medical imaging paper by Lehmann et al. (1999), which gives a very nice overview of the properties of different interpolation kernels:
http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/42/17698/00816070.pdf?arnumber=816070
For what it's worth, I'd agree with both of you that the numeric overflow should be documented if not fixed. It sounds like Stefan has figured out a solution for it though. If you make sense of the code in "ni_interpolation.c", Stefan, I'd be very interested in how to make it calculate one less value at the edges :-).
Cheers,
James.
Thanks for the information and the paper link, James. I certainly appreciate the perspective, and now see why the anti-aliasing and reconstruction filtering might best be left to clients of a resampling procedure.
Hopefully at least some of the kinks in the spline interpolation (to date: obligate mirror boundary conditions, extra edge values, integer overflow) can be smoothed out. I can't guarantee that I'll have the time or expertise to deal with ni_interpolation.c, but I'll try to give it a shot some time here.
Zach
On Mar 26, 2007, at 1:16 AM, James Turner wrote:
Hi,
I notice now that we've been having this discussion on the wrong list -- oops! We're nearly done, though.
On Mon, Mar 26, 2007 at 04:16:51AM -0400, James Turner wrote:
For what it's worth, I'd agree with both of you that the numeric overflow should be documented if not fixed. It sounds like Stefan has figured out a solution for it though. If you make sense of the code in "ni_interpolation.c", Stefan, I'd be very interested in how to make it calculate one less value at the edges :-).
I fixed the overflow issue in SVN. I'd appreciate it if you could test and let me know whether everything works as expected. The output values are now simply clipped to the bounds of the numeric type, i.e. UInt8 to [0,255] etc.
As for the values at the edges, I'm still working on it.
Since we have a fundamental problem with the spline filter approach at the borders, we may also want to look at subdivision interpolation schemes, like the one described in this article:
http://citeseer.ist.psu.edu/devilliers03dubucdeslauriers.html
Regards Stéfan
On Wed, Mar 28, 2007 at 05:14:59PM +0200, Stefan van der Walt wrote:
As for the values at the edges, I'm still working on it.
OK, that was a one-line patch. Please test to see if there are any subtle conditions on the border that I may have missed. I know of one already, but I'd be glad if you can find any others :)
Cheers Stéfan
Very cool -- Thanks for digging into the code and making these fixes, Stéfan! The ndimage C code is non-trivial for sure.
I'll test things out in the next couple of days.
Thanks again,
Zach
On Mar 28, 2007, at 10:25 AM, Stefan van der Walt wrote:
OK, that was a one-line patch. Please test to see if there are any subtle conditions on the border that I may have missed. I know of one already, but I'd be glad if you can find any others :)
Thanks, Stefan! That looks much better.
Today I finally had time to figure out the basics of SVN, make a patch and apply your changes to my numarray version of nd_image (I'll switch to numpy as soon as STScI does a full numpy-based release...). Your integer clipping changes wouldn't compile under numarray unmodified, but my data are floating point anyway, so I just applied and tested the array indexing changes.
It looks like there may still be some edge effects due to the mirroring properties of the spline algorithm for higher orders, but the gross problem of extrapolating 1 pixel into the mirrored data has gone away :-). I think that's good enough for nd_image to be useful for me, but if I can find time later it would be good to look into improving the behaviour further.
For my real data, mode="constant" now seems to work well, but I also tested some simple examples (like in this thread) using "reflect" and "wrap". I'm not 100% sure from the numarray manual what their correct behaviour is supposed to be, but I noticed some things that seem anomalous. For example:
-----
import numarray as N
import numarray.nd_image as ndi

I = N.zeros((2,4), N.Float32)
I[:,:] = N.arange(4.0, 0.0, -1.0)

trmatrix = N.array([[1,0],[0,1]])
troffset1 = (0.0, -0.1)

I_off1 = ndi.affine_transform(I, trmatrix, troffset1, order=1,
                              mode='reflect', output_shape=(2,6))

print I
print I_off1
-----
produces
[[ 4.  3.  2.  1.]
 [ 4.  3.  2.  1.]]
[[ 3.0999999   3.0999999   2.0999999   1.10000002  1.89999998  1.89999998]
 [ 3.0999999   3.0999999   2.0999999   1.10000002  1.89999998  1.89999998]]
It looks like the last output value is produced by reflecting the input and then interpolating, but presumably then the first value should be 3.9, for consistency, not 3.1? Does that make sense?
Cheers,
James.
It looks like the last output value is produced by reflecting the input and then interpolating, but presumably then the first value should be 3.9, for consistency, not 3.1? Does that make sense?
Aargh. I think I see what's happening now. The input is supposed to be interpolated and then reflected like this:
[4 3 2 1] -> [3.1 3.1 2.1 1.1 1.1]
The problem is that there is still one value too many being "interpolated" here before the reflection takes place. Do the sections beginning at lines 168 & 178 need changing in a similar way to their counterparts at lines 129 & 139? I started looking into this, but I don't understand the code well enough to be sure I'm making the right changes...
Thanks,
James.
Hi James
On Wed, Apr 04, 2007 at 08:29:50PM -0400, James Turner wrote:
It looks like the last output value is produced by reflecting the input and then interpolating, but presumably then the first value should be 3.9, for consistency, not 3.1? Does that make sense?
Aargh. I think I see what's happening now. The input is supposed to be interpolated and then reflected like this:
[4 3 2 1] -> [3.1 3.1 2.1 1.1 1.1]
The problem is that there is still one value too many being "interpolated" here before the reflection takes place. Do the sections beginning at lines 168 & 178 need changing in a similar way to their counterparts at lines 129 & 139? I started looking into this, but I don't understand the code well enough to be sure I'm making the right changes...
Thanks for spotting that. When I fix those lines, I see:
[[ 3.9000001   3.0999999   2.0999999   1.10000002  1.89999998  2.9000001 ]
 [ 3.9000001   3.0999999   2.0999999   1.10000002  1.89999998  2.9000001 ]]
I'll submit to SVN later today. Note that I also enabled 'mirror' mode, which works almost the same way as reflect:
Reflect: 1 2 3 4 -> 1 2 3 4 4 3 2 1
Mirror:  1 2 3 4 -> 1 2 3 4 3 2 1
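For reference, the two padding modes can be written as index maps into the original array; this is a sketch of the boundary conventions only (my own code), not ndimage's actual C implementation:

```python
def reflect_index(i, n):
    # half-sample symmetry: ... 2 1 | 1 2 3 4 | 4 3 2 1 ...
    period = 2 * n
    i = i % period
    return i if i < n else period - 1 - i

def mirror_index(i, n):
    # whole-sample symmetry: ... 3 2 | 1 2 3 4 | 3 2 1 ...
    if n == 1:
        return 0
    period = 2 * (n - 1)
    i = i % period
    return i if i < n else period - i

data = [1, 2, 3, 4]
[data[reflect_index(i, 4)] for i in range(8)]  # -> [1, 2, 3, 4, 4, 3, 2, 1]
[data[mirror_index(i, 4)] for i in range(7)]   # -> [1, 2, 3, 4, 3, 2, 1]
```

Note that under mirror mode every half-integer position between two mapped samples lies between two known values, whereas reflect mode creates a gap between the two boundary copies of the edge sample, which is where the extrapolation trouble arises.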
Cheers Stéfan
Hi Stefan,
Sorry for the slow reply to this.
Thanks for spotting that. When I fix those lines, I see:
[[ 3.9000001   3.0999999   2.0999999   1.10000002  1.89999998  2.9000001 ]
 [ 3.9000001   3.0999999   2.0999999   1.10000002  1.89999998  2.9000001 ]]
Actually, I think I made a mistake in my first post of 4 April, when I suggested the first value should be 3.9. I was thinking that the "reflect" mode just mirrors the input (like the "mirror" mode) and then interpolates the resulting values, in which case the first value would be 3.9 and the last would be 1.9 and 2.9. That seems to be what is now happening in your example above. However, I later realized that it is probably supposed to interpolate only within the bounds of the input data and *then* pad the output by "reflecting" the *interpolated* values in the way you describe below. What confused me was the extra 1.9 in the 2nd-last output column, but I think that is just the same problem (calculating one point too many) that I reported and you fixed for the "constant" mode, isn't it?
If this is true, I think the (rounded) result should in fact be
[[ 3.1  3.1  2.1  1.1  1.1  2.1]
 [ 3.1  3.1  2.1  1.1  1.1  2.1]]
Does that make sense? Sorry if I confused the issue previously.
So that's my impression of the intended behaviour, but whether it makes the most sense or not is a different matter; I don't personally have a use for padding with reflected values at the moment. I notice, however, that what I'm describing here can preserve symmetry when doubling the original array size (eg. [ 3.1 3.1 2.1 1.1 1.1 2.1 3.1 3.1 ]), if shifted appropriately, which may be the idea of "reflect" as opposed to "mirror" (otherwise the latter makes more sense to me). Unfortunately, interpolating breaks this symmetry for more general multiples, by unavoidably reducing the number of known values by one.
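To make that intended behaviour concrete, here is a small pure-Python sketch (my own interpretation of the semantics discussed above, not ndimage code): interpolate linearly only within the bounds of the input, then pad the out-of-range output positions by reflecting the *interpolated* values:

```python
def lerp(data, p):
    # linear interpolation strictly within the input's bounds
    i = int(p)
    f = p - i
    if f == 0.0:
        return float(data[i])
    return data[i] * (1.0 - f) + data[i + 1] * f

def shift_reflect(data, offset, out_len):
    n = len(data)
    pos = [x + offset for x in range(out_len)]
    # 1. interpolate only where x + offset falls inside the data
    inside = [lerp(data, p) for p in pos if 0.0 <= p <= n - 1]
    # 2. pad the missing edge values by reflecting the interpolated ones
    n_left = sum(1 for p in pos if p < 0.0)
    out = list(reversed(inside[:n_left])) + inside
    out += list(reversed(inside))[:out_len - len(out)]
    return out

shift_reflect([4.0, 3.0, 2.0, 1.0], -0.1, 6)
# -> approximately [3.1, 3.1, 2.1, 1.1, 1.1, 2.1]
```

This reproduces the row James proposes above, with no value ever computed from extrapolated data.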
I'll submit to SVN later today. Note that I also enabled 'mirror' mode, which works almost the same way as reflect:
Reflect: 1 2 3 4 -> 1 2 3 4 4 3 2 1
Mirror:  1 2 3 4 -> 1 2 3 4 3 2 1
Yes, I noticed the mirror mode. It does seem good to have both options. I may have used the words "mirror" and "reflect" interchangeably in previous emails though.
Cheers,
James.
On Wed, Apr 11, 2007 at 08:10:24PM -0400, James Turner wrote:
Actually, I think I made a mistake in my first post of 4 April, when I suggested the first value should be 3.9. I was thinking that the "reflect" mode just mirrors the input (like the "mirror" mode) and then interpolates the resulting values, in which case the first value would be 3.9 and the last would be 1.9 and 2.9. That seems to be what is now happening in your example above. However, I later realized that it is probably supposed to interpolate only within the bounds of the input data and *then* pad the output by "reflecting" the *interpolated* values in the way you describe below. What confused me was the extra 1.9 in the 2nd-last output column, but I think that is just the same problem (calculating one point too many) that I reported and you fixed for the "constant" mode, isn't it?
Yes, there is a fundamental problem with using "reflect" mode for interpolation. You have data points
4 3 2 1
| | | |
so you can interpolate between the 4 and the 1. Now, when you use "reflect" mode, the data becomes:
1 2 3 4 4 3 2 1 1 2 3 4
        | | | |
This is where things become problematic. If we try to interpolate a point between the two 4's, we are going to get strange results because of the spline-fitting routine (you remember the problems with extrapolation we've had before). So, the easiest way to fix this is simply not to use reflect-mode, but to use mirror-mode instead.
1 2 3 4 3 2 1 2 3 4
      | | | |
This causes no problem, since, no matter where you interpolate, no extrapolation is done.
I've enabled mirror-mode, and put in a warning whenever the user tries to interpolate using reflect-mode. There might still be a few minor indexing issues, but fixing them unfortunately won't alleviate this bigger design issue.
To address this problem would entail extending the input dataset by the necessary number of elements (which depends on the order of interpolation), using the applicable mode (i.e. mirror, reflect etc.). Then, all indexing operations need to be adjusted accordingly. Since everything works at the moment when using mirror and constant mode, I'm not sure all that effort is warranted.
Regards Stéfan
On Sat, Mar 24, 2007 at 01:41:21AM -0400, James Turner wrote:
That's hard to say. Just because it's mainly a continuous-tone image doesn't necessarily mean it is well sampled everywhere. This depends both on the subject and the camera optics. Unlike the data I usually work with, I think everyday digital photographs (probably a photo scan in the case of Lena) do not generally have the detector sampling frequency matched to the optical resolution of the image. If that's true, the presence of aliasing in interpolated images depends on the structure of the subject and whether the scene has edges or high-frequency patterns in it.
Agreed, but aliasing isn't the problem here, as it would be visible in the input image as well. I'd expect a third-order spline interpolation to be smoother than a first-order interpolant, but in the resulting images this isn't the case.
See
http://mentat.za.net/results/lena_small.png
http://mentat.za.net/results/img_rot_30_1.png (1st order spline)
http://mentat.za.net/results/img_rot_30_3.png (3rd order spline)
Lena has been decimated (reduced in size) prior to the rotation. That is definitely a good way to get artefacts, unless an anti-aliasing filter is applied before shrinking the image. My impression is that this image is probably somewhat undersampled (to understand exactly what that means, read up on the Sampling Theorem).
The artefacts aren't visible in the source image (url above). The image definitely is a scaled-down version of the original Lena -- very interesting, btw, see
http://www.cs.cmu.edu/%7Echuck/lennapg/lenna.shtml
It would be worth investigating this further. One experiment might be to blur the original Lena with a Gaussian whose sigma is 1 pixel of the shrunken image before actually shrinking her, then do the rotation.
A rotation should take place without significant shifts in colour. This almost looks like a value overflow problem.
So I do wonder if the algorithm in nd_image is making this worse than it needs to be.
That is my suspicion, too.
compare the results? I just tried doing a similar rotation in PyRAF on a monochrome image with a bicubic spline, and see considerably smaller artefacts (just a compact overshoot of probably a few % at the edge).
Could you apply the PyRAF rotation on the Lena given above and post the result?
I always thought we could simply revert to using bilinear and bicubic polynomial interpolation (instead of spline interpolation), but now I read on wikipedia:
""" In the mathematical subfield of numerical analysis, spline interpolation is a form of interpolation where the interpolant is a special type of piecewise polynomial called a spline. Spline interpolation is preferred over polynomial interpolation because the interpolation error can be made small even when using low degree polynomials for the spline. Thus, spline interpolation avoids the problem of Runge's phenomenon which occurs when using high degree polynomials. """
http://en.wikipedia.org/wiki/Spline_interpolation
also take a look at
http://en.wikipedia.org/wiki/Runge%27s_phenomenon
So much for side-stepping Runge's phenomenon :)
Cheers Stéfan
Hello all,
Mystery solved, I think.
I downloaded Stéfan's Lena images and tried rotating them myself. As far as I can tell, the artifacts are caused by numerical overflow, as Lena is an unsigned 8-bit image.
If Lena is converted to floating-point before the rotation is applied, and then the intensity range is clipped to [0,255] and converted back to uint8 before saving, everything looks fine.
So, the "problem" was indeed the ringing that spline interpolation introduces. Despite the fact that this ringing was on the order of a few percent (as shown earlier), that few percent made a big difference when it caused intensity values to ring over the top or under the bottom of the numerical type's range. Hence the bizarre wrap-around artifacts in certain edge regions only.
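The wrap-around failure mode can be reproduced without ndimage at all. The sketch below is a toy example of my own, using a Catmull-Rom cubic (not ndimage's B-splines) to generate a few percent of overshoot at an edge, then comparing a naive unsigned-byte cast with clip-then-cast:

```python
def catmull_rom(p0, p1, p2, p3, t):
    # cubic interpolation between p1 and p2 (0 <= t <= 1);
    # like any cubic kernel it can overshoot near a sharp edge
    return 0.5 * (2.0 * p1
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)

v = catmull_rom(0, 255, 255, 255, 0.5)  # just past a dark-to-bright edge
# v == 270.9375: a few percent of overshoot above the uint8 maximum

wrapped = int(v) % 256                     # naive uint8 cast: wraps to 14 (near black!)
clipped = min(255, max(0, int(round(v))))  # clip before casting: stays at 255
```

A ~6% overshoot is visually negligible in floating point, but after the modular wrap it becomes a ~95% error in the byte image, which matches the dramatic dark fringes seen at bright edges in the rotated Lena.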
When using floating-point images throughout, the spline interpolation looks great even on the step-function images, up to order 4 and 5 where the ringing gets (slightly) noticeable.
So, is this a bug? Well, I still think so. Given that small ringing is going to happen on all but the very smoothest images, and given that ndimage is going to be used on non-floating-point types, it would be good if there were some explicit internal clipping to the data type's range. Otherwise, the ndimage resampling tools are unfit for use on non-floating-point data that resides near the edges of the range of the data type.
Though I'm not quite sure how one would structure the calculations so that it would be possible to tell when over/underflow happened... it might not be possible. In which case, either the tools should use floating-point math at some of the steps internally (as few as possible) before clipping and converting to the required data type, or explicit warnings should be added to the documentation.
Zach
On Mar 24, 2007, at 1:58 PM, Stefan van der Walt wrote:
On Sat, Mar 24, 2007 at 01:41:21AM -0400, James Turner wrote:
That's hard to say. Just because it's mainly a continuous-tone image doesn't necessarily mean it is well sampled everywhere. This depends both on the subject and the camera optics. Unlike the data I usually work with, I think everyday digital photographs (probably a photo scan in the case of Lena) do not generally have the detector sampling frequency matched to the optical resolution of the image. If that's true, the presence of aliasing in interpolated images depends on the structure of the subject and whether the scene has edges or high- frequency patterns in it.
Agreed, but the aliasing effects isn't not the problem here, as it should be visible in the input image as well. I'd expect a third-order spline interpolation to be more smooth than a first-order interpolant, but in the resulting images this isn't the case.
See
http://mentat.za.net/results/lena_small.png http://mentat.za.net/results/img_rot_30_1.png (1st order spline) http://mentat.za.net/results/img_rot_30_3.png (3rd order spline)
Lena has been decimated (reduced in size) prior to the rotation. That is definitely a good way to get artefacts, unless an anti-aliasing filter is applied before shrinking the image. My impression is that this image is probably somewhat undersampled (to understand exactly what that means, read up on the Sampling Theorem).
The artefacts arn't visible in the source image (url above). The image definately is a scaled down version of the original Lena -- very interesting, btw, see
http://www.cs.cmu.edu/%7Echuck/lennapg/lenna.shtml
investigating it further. One experiment might be to blur the original Lena with a Gaussian whose sigma is 1 pixel of the shrunken image before actually shrinking her, then do the rotation.
A rotation should take place without significant shifts in colour. This almost looks like a value overflow problem.
So I do wonder if the algorithm in nd_image is making this worse than it needs to be.
That is my suspicion, too.
compare the results? I just tried doing a similar rotation in PyRAF on a monochrome image with a bicubic spline, and see considerably smaller artefacts (just a compact overshoot of probably a few % at the edge).
Could you apply the PyRAF rotation on the Lena given above and post the result?
I always thought we could simply revert to using bilinear and bicubic polygon interpolation (instead of spline interpolation), but now I read on wikipedia:
""" In the mathematical subfield of numerical analysis, spline interpolation is a form of interpolation where the interpolant is a special type of piecewise polynomial called a spline. Spline interpolation is preferred over polynomial interpolation because the interpolation error can be made small even when using low degree polynomials for the spline. Thus, spline interpolation avoids the problem of Runge's phenomenon which occurs when using high degree polynomials. """
http://en.wikipedia.org/wiki/Spline_interpolation
also take a look at
http://en.wikipedia.org/wiki/Runge%27s_phenomenon
So much for side-stepping Runge's phenomenon :)
Cheers
Stéfan
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
On Sat, Mar 24, 2007 at 03:25:38PM -0700, Zachary Pincus wrote:
If Lena is converted to floating-point before the rotation is applied, and then the intensity range is clipped to [0,255] and converted back to uint8 before saving, everything looks fine.
Thanks, Zachary! I can confirm that.
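For the record, Zachary's workaround can be sketched like this (a minimal sketch using modern scipy.ndimage rather than numarray's nd_image, with a hypothetical square test image rather than Lena):

```python
import numpy as np
from scipy import ndimage

# Hypothetical test image: a bright square at the top of the uint8 range,
# so spline overshoot at its edges would exceed 255.
img = np.zeros((64, 64), dtype=np.uint8)
img[16:48, 16:48] = 255

# Do the rotation in floating point so overshoot cannot wrap around...
rotated = ndimage.rotate(img.astype(np.float64), 30, reshape=False, order=3)

# ...then clip to the valid range before converting back to uint8.
safe = np.clip(rotated, 0, 255).astype(np.uint8)
```

The float intermediate plus explicit clip is exactly the step that ndimage does not currently perform internally for integer inputs.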
So, is this a bug? Well, I still think so. Given that small ringing is going to happen on all but the very smoothest images, and given that ndimage is going to be used on non-floating-point types, it would be good if there were some explicit internal clipping to the data type's range. Otherwise, the ndimage resampling tools are unfit for use on non-floating-point data that resides near the edges of the range of the data type.
I agree.
Though I'm not quite sure how one would structure the calculations so that it would be possible to tell when over/underflow happened... it might not be possible. In which case, either the tools should use floating-point math at some of the steps internally (as few as possible) before clipping and converting to the required data type, or explicit warnings should be added to the documentation.
I think the spline interpolation already uses floating point math? Looks like we are seeing a type conversion without range checking:
In [47]: x = N.array([1.2,200,255,255.6,270])
In [48]: x.astype(N.uint8)
Out[48]: array([  1, 200, 255, 255,  14], dtype=uint8)
I'll have to go and take a look at the code, but this shouldn't be hard to fix -- clipping is fairly fast (don't we have a fast clipping method by David C now?), even for extremely large images.
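The fix being suggested is just a clip before the cast. In numpy terms (same example as above, with `np` for numpy):

```python
import numpy as np

x = np.array([1.2, 200, 255, 255.6, 270])

# Naive conversion wraps out-of-range values modulo 256: 270 -> 14.
wrapped = x.astype(np.uint8)

# Clipping first pins out-of-range values to the ends of the range instead.
clipped = x.clip(0, 255).astype(np.uint8)
```

Doing this inside ndimage before the final type conversion would avoid the colour shifts entirely.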
Regards Stéfan
Hi Stéfan,
Agreed, but the aliasing effect isn't the problem here, as it should be visible in the input image as well.
It's a bit academic now that Zach seems to have found the answer, but I don't think this is true. Aliasing *is* present in the input image, but it simply manifests as blockiness (due to the larger pixel size relative to the scene). It is only once you interpolate the data in a more general way that the aliasing turns into other artefacts. The reduced image contains too few sample points to determine the frequency content of the underlying scene uniquely, but the values themselves still *have* to be meaningful, because all you did was average over contiguous areas; the trouble only appears once you resample in some other way.
The image definitely is a scaled-down version of the original Lena -- very interesting, btw, see
Cool. I wondered where the picture came from (I suppose I could have looked that up in wikipedia too). The full frame is also nice :-).
A rotation should take place without significant shifts in colour. This almost looks like a value overflow problem.
Sounds like this idea was correct. I was also just starting to form some suspicion about the integers getting maxed out...
Could you apply the PyRAF rotation on the Lena given above and post the result?
I'd be happy to, but is the problem solved now? I can't handle colour images directly in IRAF/PyRAF, but I could split the 3 channels into separate images, rotate each of them and put them back together. Do I need to use something like PIL to read your PNG into numarray? I usually use FITS files... Let me know if you still want to try this.
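If it's still wanted, the channel splitting and restacking itself is straightforward in numpy; only the filenames and the PNG reader (e.g. PIL's Image.open, which I haven't used myself) are assumptions here. A small synthetic array stands in for the decoded PNG:

```python
import numpy as np

# Stand-in for the decoded PNG; in practice something like
# np.asarray(Image.open("lena_small.png")) would produce this array.
rgb = np.zeros((8, 8, 3), dtype=np.uint8)
rgb[..., 0] = 200  # red plane

# Split into three monochrome planes, process each separately
# (e.g. rotate in IRAF/PyRAF), then restack into an RGB cube.
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
recombined = np.dstack([r, g, b])
```

Each plane could be written to a FITS file for PyRAF and read back before the final dstack.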
I always thought we could simply revert to using bilinear and bicubic polygon interpolation (instead of spline interpolation)
Interesting. In my brief look at your paper and Web page (so far), I had missed that your polygon interpolation can be bicubic, rather than just linear like Drizzle. I'll have to figure out what this means and whether it might be useful for what I'm doing.
Cheers,
James.
James Turner wrote:
By the way, ringing at sharp edges is an intrinsic feature of higher-order spline interpolation, right? I believe this kind of interpolant is really intended for smooth (band-limited) data. I'm not sure why the pre-filtering makes a difference though; I don't yet understand well enough what the pre-filter actually does.
Yes, ringing at edges is an intrinsic feature of higher-order spline interpolation. Eventually, the spline interpolant becomes a band-limited sinc-interpolator which will assume that edges are really points sampled from a sinc. So, if you re-sample at different points you get the "ringing" effect. But, you shouldn't see a lot of ringing below order 7.
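The effect is easy to see in one dimension. This sketch (modern scipy.ndimage, not the numarray API from the thread) resamples a sharp step half-way between the original samples with a cubic spline; the interpolant overshoots above 1 and undershoots below 0 near the edge:

```python
import numpy as np
from scipy import ndimage

# A sharp step edge: the worst case for spline interpolation.
step = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])

# Re-sample at points half-way between the originals, order-3 spline.
shifted = ndimage.shift(step, 0.5, order=3)
```

On a uint8 image the same overshoot is what wraps around to produce the colour artefacts discussed above.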
The pre-filter obtains the spline-interpolation coefficients. The spline assumes the continuous function represented by the samples at x_i is
f(x, y) = sum_ij c_ij * beta^o(x - x_i) * beta^o(y - y_j)
The "pre-filter" is computing the coefficients c_ij. You then evaluate at any point you like using the continuous function implied by the spline. The function beta^o is a spline function and depends on the order of the spline.
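The two-step structure Travis describes (solve for coefficients, then evaluate the continuous spline) is exposed directly in scipy.ndimage, which splits it into spline_filter and map_coordinates. A sketch, assuming modern scipy (the thread itself predates some of these keyword arguments):

```python
import numpy as np
from scipy import ndimage

data = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])  # samples of f(x) = x**2

# Step 1: the "pre-filter" solves for the spline coefficients c_i.
coeffs = ndimage.spline_filter(data, order=3, mode='mirror')

# Step 2: evaluate the implied continuous function at any point;
# prefilter=False because the coefficients are already computed.
pts = np.array([[2.5]])
val = ndimage.map_coordinates(coeffs, pts, order=3, prefilter=False,
                              mode='mirror')

# The one-shot call performs both steps internally and agrees with this.
direct = ndimage.map_coordinates(data, pts, order=3, mode='mirror')
```

Pre-computing the coefficients once is the point of the split: the same coeffs array can then be evaluated at many different coordinate sets cheaply.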
I'm not sure what people normally do in computer graphics, since I'm working more with natural band-limited images, but in the case of Stefan's "rotate_artifacts" example, wouldn't it be appropriate to use the 1st- or 0th-order spline instead of the 2nd order? If your real-life data are smooth enough, however, then in theory the ringing with higher orders should go away.
Yes, that is true. But you really shouldn't see much ringing with a 3rd-order spline.
I studied these splines for a while, but my memory can fail me. You can look at the papers of M. Unser for much more information. Here is a link to a good review paper.
http://bigwww.epfl.ch/publications/unser0001.pdf
-Travis
Hello again,
On Mar 23, 2007, at 7:56 AM, Travis Oliphant wrote:
I'm not sure what people normally do in computer graphics, since I'm working more with natural band-limited images, but in the case of Stefan's "rotate_artifacts" example, wouldn't it be appropriate to use the 1st- or 0th-order spline instead of the 2nd order? If your real-life data are smooth enough, however, then in theory the ringing with higher orders should go away.
Yes, that is true. But you really shouldn't see much ringing with a 3rd-order spline.
There are a couple of different examples of ringing that we've seen (specifically, the first and third attachments to scipy ticket 213): http://projects.scipy.org/scipy/scipy/ticket/213
The first was on Stefan's artificial data which had sharp edges, and got very nasty ringing artifacts even with 3rd order splines. From your recollection, is this expected behavior based on splines and the nature of Stefan's image, or more likely to be a bug?
The second example is a rotated version of Lena -- an image with natural and much smoother edges -- where 3rd-order spline interpolation still introduced strong ringing (or something nasty). This looks more like a bug, but (as per this discussion) my insight is clearly limited in this case.
Zach