Hankel transforms, again

Hi developers,
This is a repost of a message from December 2008 which gave no useful answers. Since then, I’ve had 4-5 requests for the code from people who had a need for it. It’s not a massive demand, but enough that perhaps you’ll consider my offer again.
Since the previous posting, I’ve also included alternative filters thanks to Fan-Nian Kong that are shorter and more accurate when the function makes significant changes in more limited intervals. I’m not including the code (since it is mostly thousands of lines of tables), but I will provide the files to anyone who’s interested.
Cheers, Tom
—— original message below ———
When I recently needed a Hankel transform I was unable to find one in Scipy. I found one in MATLAB though [1], written by Prof. Brian Borchers at New Mexico Tech. The code is based on a previous FORTRAN implementation by W. L. Anderson [2], and the MATLAB code is not marked with any copyright statements. Hankel transforms of orders 0 and 1 can be computed through digital filtering.
I have rewritten the code in Python/numpy, complete with tests and acknowledgements of the origins, and my employer has agreed that I can donate the code to Scipy. I believe this should be of interest. Hankel transforms arise often in acoustics and other systems with cylinder symmetry. If you want it, I would like a suggestion where it belongs (with other integral transforms, probably) and how I should shape the tests to make them suitable for inclusion.
The tests I currently have use Hankel transforms of five different functions with known analytical transforms and compare the transformed values to the numerically evaluated analytical expressions. Currently plots are generated, but for automated testing I suppose something else would be better. Pointing me at an example is sufficient.
[1] http://infohost.nmt.edu/~borchers/hankel.html
[2] Anderson, W. L., 1979, Computer Program Numerical Integration of Related Hankel Transforms of Orders 0 and 1 by Adaptive Digital Filtering. Geophysics, 44(7):1287-1305.
Best regards,
-- Tom Grydeland <Tom.Grydeland@(norut.no|gmail.com)>

On Fri, Feb 14, 2014 at 9:45 AM, Tom Grydeland <tom.grydeland@gmail.com> wrote:
Hi developers,
This is a repost of a message from December 2008 which gave no useful answers. Since then, I’ve had 4-5 requests for the code from people who had a need for it. It’s not a massive demand, but enough that perhaps you’ll consider my offer again.
Since the previous posting, I’ve also included alternative filters thanks to Fan-Nian Kong that are shorter and more accurate when the function makes significant changes in more limited intervals. I’m not including the code (since it is mostly thousands of lines of tables), but I will provide the files to anyone who’s interested.
Yes, I think we'd be interested. Please do make a PR. Before you do, please double-check the licensing on the new code that you have added. It does look like Anderson's original code is in the public domain (the paper being published as part of Anderson's work at the USGS as a federal employee), so that part is in the clear. Just so we are clear, the lack of copyright statements (work by US federal employees aside) usually means that you have *no license* to redistribute the work, not that there are no restrictions on redistribution. Thanks!
-- Robert Kern

On 2014-02-14, at 13:15, Robert Kern <robert.kern@gmail.com> wrote:
Yes, I think we'd be interested. Please do make a PR. Before you do, please double-check the licensing on the new code that you have added. It does look like Anderson's original code is in the public domain (the paper being published as part of Anderson's work at the USGS as a federal employee), so that part is in the clear. Just so we are clear, the lack of copyright statements (work by US federal employees aside) usually means that you have *no license* to redistribute the work, not that there are no restrictions on redistribution.
Hello again,
To the last point first: I agree that Anderson’s work is in the public domain.
I contacted Fan-Nian Kong regarding terms for his filters, whether he would be willing to put them in the public domain or release them under a BSD-style license. I explained that in either case others were free to use them, even for profit, without any compensation. I got the following reply:
——————————
Copy right things are complicated things to me. Please treat those material as published results from an open journal. So, as long as the journal paper is quoted as the reference, everybody can use the journal results.
——————————
Frankly, I don’t know if that is enough that we can include these filters or not. Opinions?
To the first point: I’ll require a few hints and pointers.
If the original function is f and its transform F, then F(b) = ([f(y/b)]^t * w)/b, where y and w are vectors of a certain length (801 for Anderson; 61, 121 or 241 for Kong — these are the tables I mentioned previously). In other words, each transformed point requires a certain number of function evaluations. Orders 0 and 1 transforms differ only in the weighting vectors w, so if both are needed, much is saved by computing both at once on the same grid. (A short code sketch follows below.)
In my application I would typically transform a number of functions to the same offsets b, so I would call one method on a transform object to set up a ‘k’ grid ( = y/b), evaluate my function(s) on that grid, and then call a ‘transform’ method with these function evaluations to obtain the transformed quantities F(b). This is sufficiently different from what one would use for other types of integral transforms that I’m open to other suggestions when it comes to interfaces.
Also, I don’t see an obvious place where this should live. I’m thinking SciPy rather than NumPy, but it is not obviously a fit for any of the existing top-level namespaces. The closest thing is fftpack, but this isn’t part of fftpack. Arguments could be made for ndimage or signal also, but not very convincingly.
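To make the formula above concrete, here is a minimal sketch of the filter evaluation at a single point b, assuming the abscissae ``y`` and weights ``w`` have already been loaded from one of the published tables (``y``, ``w`` and the name ``hankel_filter_point`` are placeholders for this illustration; the actual tables are not reproduced here):

    import numpy as np

    def hankel_filter_point(f, b, y, w):
        """Digital-filter approximation F(b) = (f(y/b) . w) / b.

        f : callable accepting a NumPy array, evaluated elementwise.
        b : float, the point at which the transform is wanted.
        y, w : filter abscissae and weights from a published table
               (801 values for Anderson; 61, 121 or 241 for Kong).
        """
        k = y / b                   # the 'k' grid described above
        return np.dot(f(k), w) / b  # one dot product per transformed point

Orders 0 and 1 share the same ``k`` grid and differ only in the weight vector ``w``, which is why evaluating both at once saves work.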
Cheers, Tom Grydeland

I'm no lawyer, but in my experience it is enough to cite the paper describing the algorithm in your own implementation of it. I've seen this done thousands of times. It gets a little bit strange when software patents are concerned, but as far as copyrights go you should be fine.
On Mon, Feb 17, 2014 at 3:45 PM, Tom Grydeland <tom.grydeland@gmail.com> wrote:
Frankly, I don’t know if that is enough that we can include these filters or not. Opinions?

On Mon, Feb 17, 2014 at 2:45 PM, Tom Grydeland <tom.grydeland@gmail.com> wrote:
Hello again,
To the last point first: I agree that Anderson’s work is in the public domain.
I contacted Fan-Nian Kong regarding terms for his filters, whether he would be willing to put them in the public domain or release them under a BSD-style license. I explained that in either case others were free to use them, even for profit, without any compensation.
I got the following reply:
—————————— Copy right things are complicated things to me. Please treat those material as published results from an open journal. So, as long as the journal paper is quoted as the reference, everybody can use the journal results. ——————————
Frankly, I don’t know if that is enough that we can include these filters or not. Opinions?
Short answer: no, that's too vague of a statement, and he might be wanting more restrictions than we are prepared to place on code in scipy. The BSD license that we use for scipy does not require anyone to cite the journal article. We will include a citation in our code, certainly, but we can make no guarantee that any downstream users of these functions will include that citation in their code or papers that use this code. We do include *encouragement* for users of such functions to cite the journal article when the original author wishes it. The BSD license does require that the copyright notice remain attached to the code, though, and that may be enough for him.
We would need a positive statement from him that we can redistribute his code under a BSD license. If he does not take the time to read and understand the consequences of the BSD license, I would not be comfortable taking that statement of his as assenting to it.
That said, the bulk of this code appears to just be tables of constants with a few fairly trivial function calls. Under US law, these aren't really copyrightable, but under EU law, the tables might be (it seems Fan-Nian Kong works in Norway). Can these tables be recreated somehow besides copying the data files on his website? Is there an algorithm for doing so? If you can rewrite the code from just the description of the algorithm in the paper and not by translating the MATLAB code from his site, then we are in the clear, IMO. (IANAL. TINLA.)
Since he does seem to want citations, we should include the citation in the docstrings and encourage users of those functions to cite it too.
To the first point: I’ll require a few hints and pointers.
If the original function is f and its transform F, then F(b) = ([f(y/b)]^t * w)/b, where y and w are vectors of a certain length (801 for Anderson; 61, 121 or 241 for Kong — these are the tables I mentioned previously). In other words, each transformed point requires a certain number of function evaluations. Orders 0 and 1 transforms differ only in the weighting vectors w, so if both are needed, much is saved by computing both at once on the same grid.
In my application I would typically transform a number of functions to the same offsets b, so I would call one method on a transform object to set up a ‘k’ grid ( = y/b), evaluate my function(s) on that grid, and then call a ‘transform’ method with these function evaluations to obtain the transformed quantities F(b). This is sufficiently different from what one would use for other types of integral transforms that I’m open to other suggestions when it comes to interfaces.
Also, I don’t see an obvious place where this should live. I’m thinking SciPy rather than NumPy, but it is not obviously a fit for any of the existing top-level namespaces. The closest thing is fftpack, but this isn’t part of fftpack. Arguments could be made for ndimage or signal also, but not very convincingly.
I would opt for scipy.signal. The interface can be whatever you find the most useful. If we can build on top of that an interface similar to other transforms, albeit inefficiently, so much the better, but let your actual use cases drive the interface.
-- Robert Kern

On 2014-02-17, at 16:54, Robert Kern <robert.kern@gmail.com> wrote:
Short answer: no, that's too vague of a statement, and he might be wanting more restrictions than we are prepared to place on code in scipy. The BSD license that we use for scipy does not require anyone to cite the journal article. We will include a citation in our code, certainly, but we can make no guarantee that any downstream users of these functions will include that citation in their code or papers that use this code. We do include *encouragement* for users of such functions to cite the journal article when the original author wishes it. The BSD license does require that the copyright notice remain attached to the code, though, and that may be enough for him.
Okay, I can ask again with these clarifications.
That said, the bulk of this code appears to just be tables of constants with a few fairly trivial function calls. Under US law, these aren't really copyrightable, but under EU law, the tables might be (it seems Fan-Nian Kong works in Norway). Can these tables be recreated somehow besides copying the data files on his website? Is there an algorithm for doing so?
The algorithm for doing so is the one described in the paper. (The paper is here: DOI:10.1111/j.1365-2478.2006.00585.x and I have a PDF that I won’t spam the mailing list with.)
If you can rewrite the code from just the description of the algorithm in the paper and not by translating the MATLAB code from his site, then we are in the clear, IMO. (IANAL. TINLA.)
As you say, the mathematical machinery of the code is simple enough, once you have the tables. My Python code (apart from the tables) does not even vaguely resemble the Matlab code. I called it an “adaptation”, not a “translation”, but even “adaptation” may be too specific.
Since he does seem to want citations, we should include the citation in the docstrings and encourage users of those functions to cite it too.
Absolutely.
Also, I don’t see an obvious place where this should live. I’m thinking SciPy rather than NumPy, but it is not obviously a fit for any of the existing top-level namespaces. The closest thing is fftpack, but this isn’t part of fftpack. Arguments could be made for ndimage or signal also, but not very convincingly.
I would opt for scipy.signal. The interface can be whatever you find the most useful. If we can build on top of that an interface similar to other transforms, albeit inefficiently, so much the better, but let your actual use cases drive the interface.
Fine.
Thank you for your feedback. I’ll ask Dr. Kong again, and proceed with a PR using Anderson’s filters (with provisions for adding Kong’s later).
—T

On 2014-02-14, at 13:15, Robert Kern <robert.kern@gmail.com> wrote:
Yes, I think we'd be interested. Please do make a PR.
Sorry this has taken a while; I got bogged down with some other stuff.
The changes are, I believe, here: https://github.com/togry/scipy/compare/signal-hankel-transform
(I’m completely unfamiliar with Git, so bear with me if this should be done differently.)
Before you do, please double-check the licensing on the new code that you have added. It does look like Anderson's original code is in the public domain (the paper being published as part of Anderson's work at the USGS as a federal employee), so that part is in the clear. Just so we are clear, the lack of copyright statements (work by US federal employees aside) usually means that you have *no license* to redistribute the work, not that there are no restrictions on redistribution.
I couldn’t get a clearer statement from Fan-Nian Kong, so I’ve only included the Anderson filters. There’s a reference to Kong’s paper in the docstrings, however, so adding the filters from whatever sources should be simple.
Thanks!
I hope others find this useful also
—Tom Grydeland

On Tue, Mar 11, 2014 at 12:19 AM, Tom Grydeland <tom.grydeland@gmail.com> wrote:
On 2014-02-14, at 13:15, Robert Kern <robert.kern@gmail.com> wrote:
Yes, I think we'd be interested. Please do make a PR.
Sorry this has taken a while, I got bogged down with some other stuff.
The changes are, I believe, here:
https://github.com/togry/scipy/compare/signal-hankel-transform
(I'm completely unfamiliar with Git, so bear with me if this should be done differently)
Hi Tom. The commit looks fine. I suggest you send this as a pull request, so it's easier to review.
A few comments already:
- the API looks slightly awkward, instantiating the class is basically a do-nothing operation. You'd normally do this with a plain function that has a ``method='anderson'`` keyword.
- the hankel0, hankel1 and hankel01 methods look unnecessary.
- The file names of all the Python files you add in scipy/signal/ should start with an underscore, so it's clear that they are private.
- The docstrings could use an example and should be formatted according to https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
- the ``try: iter(B)`` blocks can be written simply as ``B = np.asarray(B)``. The way it's now, B as a list will raise an error later on.
- The for-loop over all values of B looks quite inefficient.
Cheers,
Ralf

On Mon, Mar 17, 2014 at 12:12 AM, Ralf Gommers <ralf.gommers@gmail.com> wrote:
Hi Tom. The commit looks fine. I suggest you send this as a pull request, so it's easier to review.
Will do.
A few comments already:
- the API looks slightly awkward, instantiating the class is basically a do-nothing operation. You'd normally do this with a plain function that has a ``method='anderson'`` keyword.
Agreed -- will change. (I had more stuff in the class previously, such as keeping track of scaled function arguments for various arguments 'B' of the transform, but the resulting interface was sufficiently hard to explain that I decided to cut it down to its minimum; I didn't realise there was nothing left :D)
- the hankel0, hankel1 and hankel01 methods look unnecessary.
Here, I disagree. They are merely convenience functions, but it is much more obvious to say "hfunc = hankel0(func, bvals)" than "hfunc = hankel(func, bvals, order=[0])[0]", and yet I wanted the actual worker routine to be factored out, since the function evaluations can be reused for both transforms. I could agree that hankel01 is redundant, if the default order argument to 'hankel' becomes '[0, 1]'.
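For concreteness, here is a hypothetical sketch of the interface shape described above: one worker routine that reuses the function evaluations for both orders, plus thin convenience wrappers. The filter tables are passed in explicitly because they are not reproduced here, so the signatures are illustrative only, not the code in the branch:

    import numpy as np

    def hankel(f, bvals, y, w0, w1, order=(0, 1)):
        """Sketch of the worker: one result array per requested order.

        `y` is the shared filter abscissae; `w0` and `w1` are the order-0 and
        order-1 weight vectors from a published table (not included here).
        """
        bvals = np.atleast_1d(np.asarray(bvals, dtype=float))
        weights = {0: w0, 1: w1}
        out = {nu: np.empty(bvals.shape) for nu in order}
        for i, b in enumerate(bvals):
            fk = f(y / b)                     # evaluated once, shared by both orders
            for nu in order:
                out[nu][i] = np.dot(fk, weights[nu]) / b
        return [out[nu] for nu in order]

    def hankel0(f, bvals, y, w0, w1):
        """Convenience wrapper discussed above: order-0 transform only."""
        return hankel(f, bvals, y, w0, w1, order=(0,))[0]

    def hankel1(f, bvals, y, w0, w1):
        """Convenience wrapper discussed above: order-1 transform only."""
        return hankel(f, bvals, y, w0, w1, order=(1,))[0]

With a default of order=(0, 1), a separate hankel01 wrapper becomes redundant, as noted above.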
- The file names of all the Python files you add in scipy/signal/ should start with an underscore, so it's clear that they are private.
Okay, will change. How do I expose the callable functions then?
- The docstrings could use an example and should be formatted according to https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
Okay, will change. Will the unit tests be suitable examples?
- the ``try: iter(B)`` blocks can be written simply as ``B = np.asarray(B)``. The way it's now, B as a list will raise an error later on.
Okay, will change.
- The for-loop over all values of B looks quite inefficient.
It is.
It is possible to arrange your sequence of 'B' values such that you can reuse a large portion of function evaluations for each additional transformed point, but that requires the caller to know in some detail the properties of the grid being used, and I have not implemented this.
Various tricks are imaginable to overcome this, e.g. creating a grid covering all the desired B values and using some sort of interpolation on the results. It is also possible, when you want to transform several functions that are related in some way, to keep intermediate evaluations to perform all transforms at once. The interfaces quickly become muddled, however. (See the vectorised sketch below.)
For an initial inclusion, I'd like to make sure the interface is simple and usable, and the results predictable. An interface for better efficiency can come as a result of usage and experience.
There are also certain ranges of inputs where the transform is better performed using contour integration techniques. I have not implemented that either.
Thank you, and best regards,
-- Tom Grydeland <Tom.Grydeland@(gmail.com)>
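As a rough illustration of the efficiency point above, the per-b dot products can be collapsed into a single matrix-vector product over an outer-product grid. This is only a sketch under the same assumptions as the earlier one (filter abscissae ``y`` and weights ``w`` from a published table, and ``f`` accepting 2-D arrays elementwise); it is not the code in the branch:

    import numpy as np

    def hankel_all(f, bvals, y, w):
        """Vectorised sketch: transform f at every point of `bvals` at once.

        Builds the grid k[i, j] = y[i] / b[j], evaluates f elementwise on it,
        and reduces with one matrix-vector product instead of a Python loop,
        at the cost of holding len(y) * len(bvals) evaluations in memory.
        """
        b = np.asarray(bvals, dtype=float)
        k = y[:, np.newaxis] / b       # shape (len(y), len(b))
        return (w @ f(k)) / b          # F(b_j) = sum_i w_i * f(y_i / b_j) / b_j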

On Mon, Mar 17, 2014 at 11:48 AM, Tom Grydeland <tom.grydeland@gmail.com> wrote:
On Mon, Mar 17, 2014 at 12:12 AM, Ralf Gommers <ralf.gommers@gmail.com> wrote:
Hi Tom. The commit looks fine. I suggest you send this as a pull request, so it's easier to review.
Will do.
A few comments already:
- the API looks slightly awkward, instantiating the class is basically a do-nothing operation. You'd normally do this with a plain function that has a ``method='anderson'`` keyword.
Agreed -- will change
(I have had more stuff in the class previously, such as keeping track of scaled function arguments for various arguments 'B' of the transform, but the resulting interface was sufficiently hard to explain that I decided to cut it down to its minimum, I didn't realise there was nothing left :D )
- the hankel0, hankel1 and hankel01 methods look unnecessary.
Here, I disagree. They are merely convenience functions, but it is much more obvious to say "hfunc = hankel0(func, bvals)" than "hfunc = hankel(func, bvals, order=[0])[0]", and yet I wanted the actual worker routine to be factored out, since the function evaluations can be reused for both transforms. I could agree that hankel01 is redundant, if the default order argument to 'hankel' becomes '[0, 1]'.
It still doesn't look right to me. Either you have three functions, or one function which takes an `order` keyword. Not both. Also the keywords `method, ybase, wt0, wt1` are all doing nothing right now. hankel() is simple enough that there's little value in providing those for the few souls that take the effort to track down alternative coefficients. After looking at this in some more detail, it looks to me like `hankel` is too generic a name - that's what I would expect as the name for `f(x, nu)` instead of `(f(x, 0), f(x, 1))`.
- The file names of all the Python files you add in scipy/signal/ should start with an underscore, so it's clear that they are private.
Okay, will change. How do I expose the callable functions then?
- $ git mv hankel.py _hankel.py
- in signal/__init__.py: from _hankel import hankel0, ....
- The docstrings could use an example and should be formatted according to https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
Okay, will change. Will the unit tests be suitable examples?
I would actually define a normal function instead of a lambda, one which has a closed-form solution like f(x) = x**(-0.5) * exp(-x), then plot that solution together with the numerical evaluation of it.
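One possible shape for such an example, using a pair whose closed form is standard (the order-0 transform of exp(-a*x), rather than the integrand suggested above, and assuming the integral_0^inf f(x) J0(x*b) dx convention of the Anderson-style filters) and checking it against direct quadrature so that it does not depend on the eventual hankel0 implementation:

    import numpy as np
    from scipy import integrate, special

    def f(x, a=1.0):
        """Example input with a known order-0 transform:
        integral_0^inf exp(-a*x) * J0(x*b) dx = 1 / sqrt(a**2 + b**2)."""
        return np.exp(-a * x)

    b = 1.0
    closed_form = 1.0 / np.hypot(1.0, b)
    numeric, _ = integrate.quad(lambda x: f(x) * special.j0(x * b), 0, np.inf)
    print(closed_form, numeric)  # the two agree to quadrature accuracy

    # A docstring example could then compare hankel0(f, [b]), or plot it over
    # a range of b, against the same closed form.

The hankel0 call in the final comment assumes the signature discussed earlier in the thread.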
- the ``try: iter(B)`` blocks can be written simply as ``B = np.asarray(B)``. The way it's now, B as a list will raise an error later on.
Okay, will change.
- The for-loop over all values of B looks quite inefficient.
It is.
It is possible to arrange your sequence of 'B' values such that you can reuse a large portion of function evaluations for each additional transformed point, but that requires the caller to know in some detail the properties of the grid being used, and I have not implemented this.
Various tricks are imaginable to overcome this, e.g. creating a grid covering all the desired B values and using some sort of interpolation on the results. It is also possible, when you want to transform several functions that are related in some way, to keep intermediate evaluations to perform all transforms at once. The interfaces quickly become muddled, however.
That indeed doesn't sound good.
For an initial inclusion, I'd like to make sure the interface is simple and usable, and the results predictable. An interface for better efficiency can come as a result of usage and experience.
Hmm, aiming for first time right would be better.
There are also certain ranges of inputs where the transform is better performed using contour integration techniques. I have not implemented that either.
There's actually a quite large body of literature on numerical evaluation of the Hankel transform (see http://www.sciencedirect.com/science/article/pii/0898122193900816). Some digging turned up, for example, an improved algorithm from Anderson himself (http://library.seg.org/doi/abs/10.1190/1.1442650) as well as http://dl.acm.org/citation.cfm?id=317284, which the author's website says is public domain (I think, see http://homepages.tu-darmstadt.de/~wieder/hankel/hankel.html). Some motivation for why the original Anderson algorithm is still suitable to add to Scipy would be useful.
Ralf

On 2014-03-21, at 00:05, Ralf Gommers <ralf.gommers@gmail.com> wrote:
Some motivation for why the original Anderson algorithm is still suitable to add to Scipy would be useful.
When I have to start _motivating_ my contributions, I think my work is done. Until this has been decided, I guess there’s no point in addressing your other comments. --T

On Fri, Mar 21, 2014 at 8:48 AM, Tom Grydeland <tom.grydeland@gmail.com> wrote:
On 2014-03-21, at 00:05, Ralf Gommers <ralf.gommers@gmail.com> wrote:
Some motivation for why the original Anderson algorithm is still suitable to add to Scipy would be useful.
When I have to start _motivating_ my contributions, I think my work is done.
Until this has been decided, I guess there's no point in addressing your other comments.
I'm sorry, but that's a perfectly valid question for any algorithms we add. I spent quite a bit of effort looking at your code and checking the literature. If we're going to have to deprecate these functions again later because they're not accurate enough while there are improved ones around, we do our users a disservice.
If there's another dev who happens to be an expert to whom it's obvious that this is a good way to add hankel0/1 (Robert?), then fine. Otherwise you're going to have to help me a bit.
Ralf

On 2014-03-21, at 23:10, Ralf Gommers <ralf.gommers@gmail.com> wrote:
On 2014-03-21, at 00:05, Ralf Gommers <ralf.gommers@gmail.com> wrote:
Some motivation for why the original Anderson algorithm is still suitable to add to Scipy would be useful.
When I have to start _motivating_ my contributions, I think my work is done.
I'm sorry, but that's a perfectly valid question for any algorithms we add. I spent quite a bit of effort looking at your code and checking the literature. If we're going to have to deprecate these functions again later because they're not accurate enough while there are improved ones around, we do our users a disservice.
As long as the interface is kept, we can simply replace the implementation. Deprecation isn’t (shouldn’t be) necessary.
If there's another dev who happens to be an expert to whom it's obvious that this is a good way to add hankel0/1 (Robert?), then fine. Otherwise you're going to have to help me a bit.
I’m not an expert. My company needed Hankel transforms at some point, and the precision obtained with these routines was sufficient for our purposes. We needed something which would work satisfactorily for inputs (coordinate in the transformed domain) over several orders of magnitude, for which Anderson’s coefficients worked better than Kong’s. For the analytically known transforms (e.g. those used in the unit test), it is simple enough to plot relative and absolute errors over several decades, and typical values can be documented.
I’m not using these transforms for my current work, so I am not in a position to volunteer significant development effort along the lines of Anderson’s later papers. I will happily work on the appearance, interfaces, and documentation of the current routines (as demonstrated already), but if they’re not going to be included anyway, I would rather spend my time elsewhere. Hence this remark:
Until this has been decided, I guess there’s no point in addressing your other comments.
Best regards, Tom Grydeland
Participants (4): Aaron Webster, Ralf Gommers, Robert Kern, Tom Grydeland