[Python-Dev] Deprecation warning on integer shifts and such
Tue, 13 Aug 2002 02:15:06 -0400
On Tue, Aug 13, 2002 at 01:57:17AM +0200, Martin v. Loewis wrote:
> Jack Jansen <Jack.Jansen@oratrix.com> writes:
> > Yes, but due to the way the parser works everything works fine for
> > me. In the constant definition file it says "foo = 0xff000000". The
> > parser turns this into a negative integer. Fine with me, the bits
> > are the same, and PyArg_Parse doesn't complain.
> Please notice that this will stop working some day: 0xff000000 will be
> a positive number, and the "i" parser will raise an OverflowError.
The problem is that many programmers have 0xFFFFFFFF pretty much hard-wired
into their brains as -1. How about treating plain hexadecimal literals as
signed 32 bits regardless of platform integer size? I think that this will
produce the smallest number of incompatibilities for existing code and
maintain compatibility with C header files on 32 bit platforms. In this case
0xff000000 will always be interpreted as -16777216 and the 'i' parser will
happily convert it to either 0xFF000000 or 0xFFFFFFFFFF000000, depending on
the native platform word size - which is probably what the programmer intended.
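A small sketch of the interpretation being proposed (the helper `as_signed32` is hypothetical, written here in present-day Python just to illustrate the arithmetic):

```python
def as_signed32(n):
    """Reinterpret the low 32 bits of n as a signed 32-bit integer.

    Hypothetical helper: this is the proposed reading of a plain hex
    literal, not anything Python actually does.
    """
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

val = as_signed32(0xFF000000)
print(val)                              # -16777216
print(hex(val & 0xFFFFFFFF))            # 0xff000000 - the 32-bit pattern
print(hex(val & 0xFFFFFFFFFFFFFFFF))    # 0xffffffffff000000 - sign-extended
                                        # to a 64-bit word
```

The same bit pattern comes out on both word sizes, which is the whole point.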
I don't think that interpreting 0xFF000000 as 4278190080 will really help
anyone. This includes users of 64 bit platforms - I don't think many of them
actually think of 0xFF000000 as 4278190080. To them it probably means "Danger,
Will Robinson! Unportable code!". So what's the point of having Python
interpret it as 4278190080? If what I really meant was 4278190080 I can
represent it portably as 0xFF000000L, and in this case the 'i' parser will
complain on 32 bit platforms - with good reason.
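You can see the same overflow behavior today through the struct module, whose 'i' format is a native signed int just like the 'i' in PyArg_Parse (a sketch, not the parser itself):

```python
import struct

# -16777216 fits in a signed 32-bit int, so it packs cleanly:
print(struct.pack('<i', -16777216).hex())   # 000000ff

# 4278190080 (the unsigned reading of 0xFF000000) does not fit,
# and struct raises - analogous to the OverflowError the 'i'
# parser would raise on a 32 bit platform.
try:
    struct.pack('<i', 4278190080)
except struct.error as e:
    print("overflow:", e)
```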
To support header files that can be included from Python and C and produce
unambiguous results on both 32 and 64 bit platforms it is possible to add
support for the C suffixes UL/LU and ULL/LLU.
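Even without parser support, a shared constant file could be handled with a few lines of Python that strip the C suffixes before converting (hypothetical sketch; the regex and `parse_c_int` name are mine):

```python
import re

# Accepts a decimal or hex C integer literal with an optional
# U/L/UL/LL/ULL suffix in either order and any case.
_C_INT = re.compile(
    r'^(0[xX][0-9a-fA-F]+|\d+)'        # the digits
    r'([uU]?[lL]{0,2}|[lL]{1,2}[uU]?)$'  # optional C suffix
)

def parse_c_int(tok):
    """Parse a C integer literal, ignoring its suffix."""
    m = _C_INT.match(tok)
    if m is None:
        raise ValueError("not a C integer literal: %r" % tok)
    return int(m.group(1), 0)   # base 0: honors the 0x prefix

print(parse_c_int("0xFF000000UL"))   # 4278190080
print(parse_c_int("255U"))           # 255
```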