Joe Harrington wrote:
My main concern is not the coding labor but the numerical accuracy. There are enough subtle differences between Python and IDL that hand-converting large amounts of numerical code would be error-prone. For example, the end index of a Python slice is one element beyond the end of the equivalent IDL slice. An automatic converter would simply add one to every slice's ending index; a human wouldn't necessarily remember to do that every time, producing subtle bugs that could be very hard to find in some cases.
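The slice-endpoint difference above can be sketched as follows. This is a minimal illustration, assuming NumPy as the Python array library; the "+ 1" adjustment is exactly the mechanical fix an automatic converter would apply:

```python
import numpy as np

a = np.arange(10)

# IDL:    a[2:5] is end-INCLUSIVE  -> elements 2, 3, 4, 5 (four elements)
# Python: a[2:5] is end-EXCLUSIVE  -> elements 2, 3, 4    (three elements)
idl_end = 5
py_slice = a[2:idl_end + 1]  # add one to the IDL ending index

assert py_slice.tolist() == [2, 3, 4, 5]
```

Forgetting the `+ 1` in even one place silently drops the last element of the slice, which is precisely the kind of off-by-one bug that is hard to notice in a large numerical code.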
This is my concern too. In the case of IDL, most algorithms are implemented in single-precision floating point. A Python implementation would use double precision by default, unless we explicitly direct it to do otherwise. This problem alone can cause much grief, because the IDL version is presumed to be the correct one until demonstrated otherwise. (I know this from personal experience.) So in addition to the language being crappy, do we want to propagate crappy (i.e. unstable) algorithms as well?

 -- Paul

--
Paul Barrett, PhD          Space Telescope Science Institute
Phone: 410-338-4475        ESS/Science Software Branch
FAX:   410-338-4767        Baltimore, MD 21218