[Python-Dev] [Python-checkins] r41972 - python/branches/ssize_t/Objects/funcobject.c

Neal Norwitz nnorwitz at gmail.com
Mon Jan 9 19:40:44 CET 2006


On 1/9/06, Tim Peters <tim.peters at gmail.com> wrote:
> ...
>
> [Tim]
> >> That's no more or less painful than using C99's huge pile of PRId8,
> >> PRId16, PRId32 (etc, etc) macros, defined there for similar purposes.
>
> [Martin]
> > Right - and I consider them just as painful.
> >
> > If you believe that this is really what we should be doing, then, well,
> > let's do it.
>
> I think it's clear that in a 64-bit world, we absolutely need a way to
> format 64-bit integers.  I'm dubious that Neal's specific line of code
> needed Py_ssize_t to begin with, but increasing amounts of other code
> will.

I wasn't always sure which choice to make.  I generally tried to leave
the warnings in place when I wasn't sure.  Like you and Martin, I agree
this is a problem, though I don't have any good ideas for how to fix
it.  All the options seem kinda painful.
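
For concreteness, the C99-style approach would presumably mean a
format macro along these lines.  This is just a sketch: PY_SSIZE_T_FMT
is an invented name, and it assumes the usual Python.h / pyconfig.h
context (SIZEOF_SIZE_T and SIZEOF_LONG come from pyconfig.h):

  #include <stdio.h>

  /* Hypothetical format macro in the spirit of C99's PRId32/PRId64;
     PY_SSIZE_T_FMT is invented here purely for illustration. */
  #if SIZEOF_SIZE_T == SIZEOF_LONG
  #define PY_SSIZE_T_FMT "ld"
  #else
  #define PY_SSIZE_T_FMT "lld"
  #endif

  Py_ssize_t len = PyTuple_Size(co_consts);
  printf("co_consts has %" PY_SSIZE_T_FMT " entries\n", len);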

I often chose Py_ssize_t rather than int when a variable captured the
result of a sequence length.  But Py_ssize_t is often wider than what
will actually be allowed (as in this case).  The flip side is that if
we leave things as ints, we end up with casts everywhere.

Stuff like this:
  len = (int) PyTuple_Size(co_consts);

How do we know whether a cast like that is safe?
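
One way to make that kind of cast defensible is an explicit range
check before narrowing.  A sketch (assuming a call site that can set
an exception and return NULL):

  #include <limits.h>

  int len;
  Py_ssize_t n = PyTuple_Size(co_consts);
  if (n > INT_MAX) {
      /* Too many items to fit in an int; fail loudly rather than
         silently truncate. */
      PyErr_SetString(PyExc_OverflowError,
                      "too many constants in code object");
      return NULL;
  }
  len = (int)n;

That's verbose to repeat at every call site, but at least the
truncation becomes explicit instead of silent.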

Also, maybe it would be nice to have defined maximum sizes, enforced
by the compiler, for lots of the fields in code objects, etc.: say
32k for the number of local variables, parameters, free variables,
and so on.  But when we add those counts together to size a frame's
stack space (the "extras"), we would still need at least 32 bits, and
there's still the possibility of overflow when we add the pieces
together.
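
The place where that bites is arithmetic like the frame's "extras"
size.  A sketch only, with field names borrowed from PyCodeObject but
the check itself purely illustrative:

  #include <limits.h>

  /* Each count may be individually bounded (say <= 32k), but the sum
     should still be computed in a wider type and range-checked. */
  long extras = (long)co->co_nlocals
              + PyTuple_GET_SIZE(co->co_cellvars)
              + PyTuple_GET_SIZE(co->co_freevars)
              + co->co_stacksize;
  if (extras > INT_MAX) {
      PyErr_SetString(PyExc_OverflowError, "code object too large");
      return NULL;
  }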

n

