There is a considerable number of warnings present for 64-bit builds on Windows. You can see them in Visual Studio 2005 even if you don't have the x64 compilers installed, by turning on "Detect 64 bit portability issues" in the General tab for pythoncore.

Now, some of those just need straightforward upgrades of loop counters and so on to Py_ssize_t. Others probably require more judgement. E.g., do we want to change the signature of PyEval_EvalCodeEx() to accept Py_ssize_t counts rather than int? And if not, should we then use Py_SAFE_DOWNCAST() or just a regular (int) cast?

Note that on x64 there is rarely any performance cost associated with using 64-bit variables for function calls, since most of the time arguments are passed in registers. I.e., it is mostly structs that we want to keep unchanged, imo.

Any thoughts?
Kristján
> Any thoughts?
These should be fixed on a case-by-case basis. Please submit patches to SF, and assign them to me. Changes should only go into 2.6.

As a principle, values that could exceed 2Gi in a hand-crafted Python program should be Py_ssize_t. Values that can never exceed the int range (because of other constraints, such as limitations of the byte code) should be safe-downcast to int (or smaller).

In the particular case of PyEval_EvalCodeEx, I think most values can't grow beyond 2**31 because the byte code format wouldn't allow such indexes. There should be documentation on what the valid ranges are for argcount, kwcount, locals, and what the rationale for these limitations is, and then they should get a consistent datatype.

Regards,
Martin
participants (2)
-
"Martin v. Löwis"
-
Kristján Valur Jónsson