[Python-Dev] Changing ob_size to [s]size_t

Martin v. Löwis loewis@informatik.hu-berlin.de
06 Jun 2002 20:31:10 +0200


Guido van Rossum <guido@python.org> writes:

> What about my other objections?

Besides "breaks binary compatibility", the only other objection was:

> Also could cause lots of compilation warnings when user code stores
> the result into an int.

True; this would be a migration issue. To be safe, we would probably
define a Py_size_t (or rather Py_ssize_t) typedef. People on 32-bit
platforms would not notice any problems; people on 64-bit platforms
would soon provide patches to use Py_ssize_t throughout the core.
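
To sketch what I mean (the names and details below are only
illustrative, nothing is decided):

    /* On platforms that provide the POSIX ssize_t, the new type
       could simply alias it: */
    #include <sys/types.h>          /* ssize_t */
    typedef ssize_t Py_ssize_t;

    /* The migration issue Guido mentions: if, say, PyString_Size()
       were changed to return Py_ssize_t instead of int, existing
       user code like

           int n = PyString_Size(op);

       would draw a truncation warning on 64-bit platforms until the
       int is changed to Py_ssize_t.  On 32-bit platforms the two
       types have the same width, so nothing changes there. */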

That is a lot of work, so it requires careful planning, but I believe
it needs to be done sooner or later. Given MAL's response and yours,
I have already accepted that it will likely be done later rather than
sooner.

I don't agree with MAL's objection:

> Wouldn't it be easier to solve this particular problem in
> the type used for mmapping files ?

Sure, that would be faster and easier, but it is the dark side of the
force. One day people will also find that they cannot have string
objects larger than 2 GiB, and, perhaps somewhat later, that they
cannot have more than two billion items in a list.
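
To make the limit concrete, it comes from ob_size itself, which
object.h declares (simplified) roughly like this:

    /* Simplified from today's Include/object.h: the item count of
       every variable-size object is declared as a plain int, */
    #define PyObject_VAR_HEAD       \
        PyObject_HEAD               \
        int ob_size;    /* number of items in variable part */
    /* so strings, lists and tuples are capped at 2**31 - 1 items no
       matter how much memory is available.  The proposal is simply
       to declare ob_size as Py_ssize_t instead. */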

The problem is unlikely to go away, so at some point all of these
limits will become pressing. It is perfectly reasonable to defer the
binary breakage until then, except that probably more users will be
affected in the future than would be affected now (since 64-bit
Python installations are still rare).

Regards,
Martin