[Numpy-discussion] Proposal: Deprecate np.int, np.float, etc.?
nevion at gmail.com
Fri Jul 31 20:46:02 EDT 2015
On Fri, Jul 31, 2015 at 5:19 PM, Nick Papior <nickpapior at gmail.com> wrote:
> Kind regards Nick Papior
> On 31 Jul 2015 17:53, "Chris Barker" <chris.barker at noaa.gov> wrote:
> > On Thu, Jul 30, 2015 at 11:24 PM, Jason Newton <nevion at gmail.com> wrote:
> >> This really needs changing though. Scientific researchers don't catch
> this subtlety and expect it to be just like the C and MATLAB types they
> know a little about.
> > well, C types are a %&$ nightmare as well! In fact, one of the biggest
> issues comes from CPython's use of a C "long" for an integer -- whose width
> is not clearly defined. If you are writing code that needs any kind of
> binary compatibility or cross-platform compatibility, and particularly if
> you want to be able to distribute pre-compiled binaries of extensions,
> etc., then you'd better use well-defined types.
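[Editor's note: the platform dependence of C "long" mentioned above can be seen directly from Python; a minimal sketch using only the standard ctypes module, with no NumPy required:]

```python
import ctypes

# C "long" is 8 bytes on 64-bit Linux/macOS (LP64 data model) but only
# 4 bytes on 64-bit Windows (LLP64) -- so code relying on its width
# behaves differently across platforms.
print(ctypes.sizeof(ctypes.c_long))  # platform-dependent: 4 or 8

# Fixed-width types, by contrast, are the same size everywhere.
assert ctypes.sizeof(ctypes.c_int32) == 4
assert ctypes.sizeof(ctypes.c_int64) == 8
```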
There was some truth to this, but if you, like the majority of scientific
researchers, only produce code for x86 or x86_64 on Windows and Linux, then
as long as you aren't treating pointers as ints, everything behaves in
accordance with general expectations. The standards did and still do allow
for a bit of flux, but things like OpenCL made this really strict, so we
can stop writing ifdefs to deal with varying bit widths and just implement
the algorithms - which is typically a researcher's top priority.
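[Editor's note: the strictness being described is that OpenCL guarantees its scalar widths (cl_int is always 32 bits, cl_float always 32 bits); NumPy's fixed-width dtypes give the same guarantee, as this small illustration shows:]

```python
import numpy as np

# Fixed-width NumPy dtypes have the same itemsize on every platform,
# so the array layout is portable with no ifdef-style special-casing.
a = np.zeros(4, dtype=np.int32)
b = np.zeros(4, dtype=np.float32)
assert a.itemsize == 4 and b.itemsize == 4
assert np.dtype(np.int64).itemsize == 8
```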
I'd say I use the strongly defined types (e.g. int32/float32) whenever
doing protocol or communications work - it makes complete sense there. But
often for computation, especially when interfacing with C extensions, it
makes more sense for the developer to use types/type names that ought to
match 1:1 with C in every case.
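[Editor's note: NumPy already provides C-named aliases that track the platform compiler's types rather than fixed widths; a quick check, assuming only the standard ctypes module, that they line up 1:1 with C:]

```python
import ctypes
import numpy as np

# NumPy's C-named scalar types follow whatever the platform's C compiler
# uses, so a struct shared with a C extension lines up field-for-field.
assert np.dtype(np.intc).itemsize == ctypes.sizeof(ctypes.c_int)
assert np.dtype(np.short).itemsize == ctypes.sizeof(ctypes.c_short)
assert np.dtype(np.longlong).itemsize == ctypes.sizeof(ctypes.c_longlong)
assert np.dtype(np.double).itemsize == ctypes.sizeof(ctypes.c_double)
```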