Just a few comments.

Joe Harrington wrote:
> In the case of IDL, most algorithms are implemented in single precision floating point. The Python implementation by default would use double precision, unless we explicitly direct it to do otherwise.

I personally vote for single-precision algorithms to stay single precision if possible. The Fortran side of the numerical computation community has always felt that C's promotion of everything to double is one of its most ill-considered design decisions. Given that NumPy has single and double arrays, and that Numeric 24 (like numarray) doesn't promote a single array to a double when it interacts with a double scalar, I'd think we could keep single precision in many contexts.
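As an aside, the promotion behavior under discussion is easy to see in a few lines of NumPy. (A hedge: scalar-promotion rules have shifted over NumPy's history, most recently with NEP 50 in NumPy 2.0, so the exact behavior with NumPy scalar types is version-dependent; the plain-Python-scalar case below holds in current NumPy.)

```python
import numpy as np

# A single-precision array, as a port of IDL code might produce.
a = np.ones(5, dtype=np.float32)

# Multiplying by a plain Python float does NOT promote the array
# to double precision: the result stays float32.
b = a * 2.5
print(b.dtype)  # float32

# Combining with an explicit double-precision array, however,
# does promote the result to float64.
c = a + np.zeros(5, dtype=np.float64)
print(c.dtype)  # float64
```

So a computation can be kept in single precision as long as the arrays themselves are created as float32 and only mixed with scalars, not with double arrays.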
Joe Harrington also wrote:
> I doubt [single vs. double] will make a big difference for most codes, as astronomical data tends to be uncertain no later than the fourth decimal place, and frequently in the first.
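To put a rough number on that point: single precision carries about seven significant decimal digits, comfortably beyond data that are uncertain in the fourth decimal place. A quick sketch (the 1e-4 uncertainty figure is taken from the remark above, not a measured quantity):

```python
import numpy as np

# Machine epsilon for single precision: about 1.2e-7,
# i.e. roughly seven significant decimal digits.
eps32 = np.finfo(np.float32).eps

# Fractional uncertainty of data known only to the fourth
# decimal place, the case described above.
data_uncertainty = 1e-4

# Single-precision roundoff is roughly a thousand times smaller
# than the measurement uncertainty, so it is not the limiting factor.
print(eps32 / data_uncertainty)
```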
pi**2 = 10 for most astronomers. I second the rest of Joe's comments. (That sounds like an impressive spectrum-extraction code, by the way, Joe.)

Still, I'd hope that a more feature-ful language would help the conversion as well. No one wrote a Fortran-to-C translator, at least not one that produced human-readable C, but I think more scientists are writing C than Fortran nowadays, and the reason is probably the big problems in Fortran 77. Fortran 90 fixed most of those problems but came along too late. (Those of us at universities also found that our students were no longer being taught Fortran by our colleagues over in computer science.)

So maybe one effective approach would be to get some IDL and MATLAB users to list their biggest annoyances with those languages, and then make sure they don't face the same ones in SciPy. Now if we could just get the lawyers out of our way so we could actually do some work.