[Numpy-discussion] change default integer from int32 to int64 on win64?

Lars Buitinck larsmans at gmail.com
Thu Jul 24 05:39:30 EDT 2014


Wed, 23 Jul 2014 22:13:33 +0100  Nathaniel Smith <njs at pobox.com>:
> On Wed, Jul 23, 2014 at 9:57 PM, Robert Kern <robert.kern at gmail.com> wrote:
>> That's perhaps what you want, but numpy has never claimed to do this.

... except in np.where, which promises to return indices but actually
returns arrays of C longs, and thus doesn't work with large arrays
(beyond 2**31 - 1 elements) on 64-bit Windows.

I know this is a bug that can be fixed without changing the size of
np.int, but it goes to show that even core functionality in NumPy gets
it wrong.
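
To make the failure mode concrete, here's a quick sketch (not taken from
the original report; the exact dtype you see depends on your NumPy
version and platform):

    import numpy as np

    # On win64 the C long is 32 bits, so index arrays typed as long
    # top out at 2**31 - 1 elements.
    a = np.zeros(10, dtype=np.uint8)
    idx, = np.where(a == 0)

    print(idx.dtype)          # the complaint: 32-bit (C long) on win64
    print(np.dtype(np.intp))  # 64-bit on any 64-bit platform -- what indices need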

> This is true, but it's not very compelling on its own -- "big as a
> pointer" is a much much more useful property than "big as a long". The
> only real reason this made sense in the first place is the equivalence
> between Python int and C long, but even that is gone now with Python
> 3. IMO at this point backcompat is really the only serious reason for
> keeping int32 as the default integer type in win64. But of course this
> is a pretty serious concern...

Hear, hear.

The C type long is only useful as an "at least 32-bit" integer, but on
the platforms that NumPy targets, int is also at least that large. The
only real benefit of long is that it makes porting more interesting
</sarcasm>.
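
For reference, the sizes in question (a quick ctypes check; the numbers
in the comments are what you'd expect to see, per platform):

    import ctypes

    # win64:              int = 4, long = 4, pointer = 8
    # 64-bit Linux/macOS: int = 4, long = 8, pointer = 8
    print(ctypes.sizeof(ctypes.c_int))
    print(ctypes.sizeof(ctypes.c_long))
    print(ctypes.sizeof(ctypes.c_void_p))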

If you have intp and a bunch of explicitly-sized integer types, you
don't need an additional type that behaves like a long *except* for
backward compat.
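
In NumPy terms that means reaching for something like the following
instead of the default (a sketch of the style, not a proposal for new
API):

    import numpy as np

    # Explicitly-sized type: the same width on every platform.
    counts = np.zeros(100, dtype=np.int64)

    # Pointer-sized type: always wide enough to index any in-memory array.
    indices = np.arange(100, dtype=np.intp)

    # The platform-dependent default is the odd one out: np.int_ maps to
    # C long, i.e. 32 bits on win64 but 64 bits on 64-bit Linux.
    print(np.dtype(np.int_).itemsize, np.dtype(np.intp).itemsize)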

The Go people got this right; they only have explicitly-sized integer
types and an int type the size of a pointer [1].

[1] http://golang.org/doc/go1.1#int


