[Python-3000] int-long unification

Guido van Rossum guido at python.org
Sat Aug 19 17:09:53 CEST 2006


I've thought about it more, and I think it's fine to use a single
type. It will surely simplify many things, and that alone might help
us win back some of the inefficiency this introduces. And it is best
for Python-level users.

Are you interested in doing this at the Google sprint next week?

Here's how I would approach it:

0. Benchmark. (Py3k is slower than 2.5 at the moment, I don't know
why.) I would pick the benchmark that showed the biggest sensitivity
in your recent comparisons.
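
A microbenchmark in the spirit of step 0 can be run with the stdlib timeit module. The workload below (summing a range of small ints) is only an illustrative choice, not the specific benchmark referred to above; it simply exercises int allocation and arithmetic, the paths unification touches.

```python
import timeit

# Time an integer-heavy workload: sum(range(...)) stresses int
# allocation and addition, so it is sensitive to int/long overhead.
elapsed = timeit.timeit("sum(range(1000))", number=1000)
print(f"1000 iterations: {elapsed:.4f}s")
```

Running the same snippet before and after a change gives a rough relative comparison, which is all steps 4 and 6 require.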

1. Completely gut intobject.[ch], making all PyInt APIs equivalent to
the corresponding PyLong APIs (through macros if possible). The PyInt
macros become functions. I'm not sure whether it would be better for
PyInt_Check() to always return False or to always return True. In
bltinmodule, export "int" as an alias for "long".
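
At the Python level, exporting "int" as an alias for "long" means a single type covers all magnitudes. A sketch of what user code would observe after step 1 (this is, with hindsight, how Python 3 behaves):

```python
# With one unified integer type, small values and values far beyond
# the old machine-word maxint are instances of the same type.
small = 7
big = 10 ** 100  # would have been a long before unification

assert type(small) is type(big)
assert isinstance(big, int)
assert big + 1 - 1 == 10 ** 100  # arbitrary-precision arithmetic
```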

2. Bang on the rest of the code until it compiles and passes all unit
tests (except the 5 that I haven't managed to fix yet -- test_class,
test_descr, test_minidom, and the two etree tests). (Right now many
more are broken due to the elimination of has_key; I'll fix these over
the weekend.)

3. Go over much of the C code where it special-cases PyInt and PyLong
separately, and change this to only use the PyLong calls. Keep the
unittests working.

4. Benchmark.

5. Introduce some optimizations into longobject.c, e.g. a cache for
small ints (like we had in intobject.c), and perhaps a special
representation for values less than maxint (or for anything that fits
in a long long). Or anything else you can think of.
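
Once a small-int cache like the one in step 5 is in place, it can be observed from Python: constructing a small value repeatedly yields the same object, while larger values are allocated fresh. A demonstration under CPython (the cached range, -5 through 256 in current CPython, is an implementation detail, not a language guarantee):

```python
# int(str) forces construction at runtime, sidestepping compile-time
# constant folding that would reuse objects within one code object.
a = int("7")
b = int("7")
print(a is b)   # CPython: True -- small ints come from the cache

c = int("1000")
d = int("1000")
print(c is d)   # CPython: False -- large ints are allocated fresh
```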

6. Benchmark.

7. Repeat from 5 until satisfied.

At this point I wouldn't rip out the PyInt APIs; leaving them in
aliased to PyLong APIs for a while will let us put off the work on
some of the more obscure extension modules.

What do you think?

--Guido van Rossum (home page: http://www.python.org/~guido/)
