[Numpy-discussion] Int bitsize in python and c
robert.kern at gmail.com
Thu Mar 18 11:39:16 EDT 2010
On Thu, Mar 18, 2010 at 08:33, Martin Raspaud <martin.raspaud at smhi.se> wrote:
> I work on a 64-bit machine running a 64-bit-enabled Fedora.
> I just discovered that numpy.int on the Python side is a 64-bit int, while
> npy_int in the C API is a 32-bit int.
Note that np.int is just Python's builtin int type (kept only for
historical reasons). It corresponds to a C long. npy_int corresponds
to a C int.
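A quick way to check these correspondences on your own machine is to inspect the dtype itemsizes. This is a small sketch, not from the original thread; note that on NumPy 1.24+ the deprecated np.int alias has been removed, so plain Python int is used below in its place:

```python
import numpy as np

# Plain Python int maps to numpy's default integer type
# (historically a C long, so the width is platform dependent).
print("default int:", np.dtype(int), np.dtype(int).itemsize * 8, "bits")

# np.intc corresponds to a C int -- 32 bits on all common platforms.
print("np.intc:    ", np.dtype(np.intc), np.dtype(np.intc).itemsize * 8, "bits")

# Fixed-width types are unambiguous everywhere.
print("np.int32:   ", np.dtype(np.int32).itemsize * 8, "bits")
print("np.int64:   ", np.dtype(np.int64).itemsize * 8, "bits")
```

On a 64-bit Linux box the default integer prints as 64 bits while np.intc prints as 32, which is exactly the mismatch described above.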
> I can live with it, but it seems to be different on 32-bit machines, so I
> wonder what the right way is to retrieve an array from Python in C.
> Here is what I use now:
> data_pyarray = (PyArrayObject *)PyArray_ContiguousFromObject(data_list,
> PyArray_INT, 1, 2);
> but that implies that I send np.int32 arrays to the C part.
> Should I use longs instead?
Not necessarily; C longs are the cause of your problem. On some
platforms they are 64-bit, on others 32-bit. Technically speaking,
the size of a C int can also vary from platform to platform, but it
is 32 bits on essentially all modern platforms running numpy.
Of course, numpy defaults to using a C long for its integer arrays,
just as Python does for its int type, so perhaps using a C long would
work best for you. It's platform dependent, but it matches the
platform-dependent behavior of numpy. It depends on what your needs
are: if you need a consistent size (for example, because you are
writing bytes out to a file), then always use the explicit int32 or
int64 types.
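For the file-writing case, forcing an explicit width keeps the byte layout identical across machines. A minimal sketch of the difference (using tobytes() rather than an actual file, to keep it self-contained):

```python
import numpy as np

data = [1, 2, 3, 4]

# Platform dependent: the default integer width follows the platform's
# native integer type, so the serialized size can differ across machines.
native = np.array(data)
print("native itemsize:", native.dtype.itemsize)

# Fixed width: always 4 bytes per element, on every platform.
fixed = np.array(data, dtype=np.int32)
raw = fixed.tobytes()
print("fixed bytes:", len(raw))  # 4 elements * 4 bytes = 16
```

Reading the data back with np.frombuffer(raw, dtype=np.int32) then gives the same values on any machine, which is the consistency the reply is recommending.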
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco