Re: [Numpy-discussion] first impressions with numpy
Sebastian Haase wrote:
Thanks Tim, that's OK - I got the idea... BTW, is there a (policy) reason that you sent the first email just to me and not the mailing list !?
No. Just clumsy fingers. Probably the same reason the functions got all garbled!
I would really be more interested in comments to my first point ;-) I think it's important that numpy will not be too cryptic and only for "hackers", but nice to look at ... (hope you get what I mean ;-)
Well, I think it's probably a good idea and it sounds like Travis likes the idea "for some of the builtin types". I suspect that's code for "not types for which it doesn't make sense, like recarrays".
I have developed an "image analysis algorithm development" platform (based on wxPython + PyShell) that more and more people in our lab are using (instead of Matlab!), and I changed the default sys.displayhook to print str(...) instead of repr(...) mainly to get .3 instead of .2999999999998, but seeing int32 instead of
I agree that str should display something nicer. Repr should probably stay the same though.
Thanks for your great work ..
Oh, I'm not doing much in the way of great work. I'm mostly just causing Travis headaches. -tim
Sebastian
Tim Hochberg wrote:
Tim Hochberg wrote:
Sebastian Haase wrote:
Hi, I'm a long time user of numarray. Now I downloaded numpy for the first time - and am quite excited to maybe soon being able to use things like weave ! Thanks for all the good work !
[SNIP]
2) This is probably more numarray related: Why does numarray.asarray( numpy.array([1]) ) return a numpy array, not a numarray ??? This is even true for numarray.array( numpy.array([1]) ) !!
I expect that numarray is grabbing the output of __array__() and using that for asarray. What's happening in the second case is hard to know, but I bet it's some snafu with assuming that anything that isn't a numarray, but implements __array__, must be returning a new numarray. That's just a guess.

FWIW, I'm using the following two functions to go back and forth between numarray and numpy. So far they seem to work and they're faster (or actually work, in the case of asnumarray). You may want to rename them to suit your personal preferences. If you use them, please let me know if you find any problems.
Regards,
I see this got thoroughly mangled somehow. If you want a copy, let me know and I'll just send you it as a file.
-tim
Tim Hochberg wrote:
Well, I think it's probably a good idea and it sounds like Travis likes the idea "for some of the builtin types". I suspect that's code for "not types for which it doesn't make sense, like recarrays".
Tim, Could you elaborate on this please? Surely, it would be good for all functions and methods to have meaningful parameter lists and good doc strings. Colin W.
Colin J. Williams wrote:
Tim,
Could you elaborate on this please? Surely, it would be good for all functions and methods to have meaningful parameter lists and good doc strings.
This isn't really about parameter lists and docstrings, it's about
__str__ and possibly __repr__. The basic issue is that the way dtypes
are displayed is powerful, but unfriendly. If I create an array of integers:
>>> a = arange(4)
>>> print repr(a.dtype), str(a.dtype)
dtype('<i4') int32
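The session above was cut off in the archive; a modern equivalent of the friendly-str versus cryptic-repr split Tim describes looks like this (output shown is from a recent NumPy on a little-endian build, so the exact codes and the default integer width may differ from the 2006 builds discussed here):

```python
import numpy as np

# str() of a simple dtype is friendly...
a = np.arange(4)
print(str(a.dtype))  # 'int64' on most 64-bit platforms, 'int32' elsewhere

# ...while repr() of a structured dtype falls back to byte-order +
# kind + size codes ('<f8', '<c16') rather than readable names.
dt = np.dtype([('x', np.float64), ('z', np.complex128)])
print(repr(dt))      # dtype([('x', '<f8'), ('z', '<c16')]) on little-endian
```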
Tim Hochberg wrote: <snip>
This would work fine if repr were instead:
dtype([('x', float64), ('z', complex128)])
Anyway, this all seems reasonable to me at first glance. That said, I don't plan to work on this, I've got other fish to fry at the moment.
A new point: Please remind me (and probably others): when did it get decided to introduce 'complex128' to mean numarray's complex64, and 'complex64' to mean numarray's complex32? I do understand the logic that 128 is really the bit-size of one (complex) element - but I also liked the old way, because:
1. e.g. in fft transforms, float32 would "go with" complex32 and float64 with complex64
2. complex128 is one character longer and also (alphabetically) now sorts before(!) complex64
These might just be my personal (idiotic ;-) comments - but I would appreciate some feedback/comments. Also: is it now too late to (re-)start a discussion on this!? Thanks - Sebastian Haase
Sebastian Haase wrote:
Tim Hochberg wrote: <snip>
This would work fine if repr were instead:
dtype([('x', float64), ('z', complex128)])
Anyway, this all seems reasonable to me at first glance. That said, I don't plan to work on this, I've got other fish to fry at the moment.
A new point: Please remind me (and probably others): when did it get decided to introduce 'complex128' to mean numarray's complex64 and the 'complex64' to mean numarray's complex32 ?
It was last February (i.e. 2005) when I first started posting regarding the new NumPy. I claimed it was more consistent to use actual bit-widths. A few people, including Perry, indicated they weren't opposed to the change and so I went ahead with it. You can read relevant posts by searching on numpy-discussion@lists.sourceforge.net Discussions are always welcome. I suppose it's not too late to change something like this --- but it's getting there... -Travis
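Travis's bit-width rule is easy to verify at the prompt: the number in the name counts the storage of the whole element, so each complex128 holds two float64 components (checked here against a current NumPy, not the 2006 build under discussion):

```python
import numpy as np

z = np.zeros(3, dtype=np.complex128)

# 16 bytes per element * 8 = 128 bits: the name counts total storage.
print(z.itemsize * 8)   # 128

# The components are float64, which is what pairs with complex128
# in practice (e.g. the real part of an FFT of float64 data).
print(z.real.dtype)     # float64
```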
Hi, Could we start another poll on this!? I think I would vote +1 for complex32 & complex64, mostly just because "that's what I'm used to". But I'm curious to hear what others "know to be in use" - e.g. Matlab or IDL! - Thanks, Sebastian Haase
Sebastian Haase wrote:
Hi, Could we start another poll on this !?
Please, let's leave voting as a method of last resort.
I think I would vote +1 for complex32 & complex64 mostly just because of "that's what I'm used to"
But I'm curious to hear what others "know to be in use" - e.g. Matlab or IDL !
On the merits of the issue, I like the new scheme better. For whatever reason, I tend to remember it when coding. With Numeric, I would frequently second-guess myself and go to the prompt and tab-complete to look at all of the options and reason out the one I wanted. -- Robert Kern robert.kern@gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
Robert Kern wrote:
On the merits of the issue, I like the new scheme better. For whatever reason, I tend to remember it when coding. With Numeric, I would frequently second-guess myself and go to the prompt and tab-complete to look at all of the options and reason out the one I wanted.
I can't bring myself to care. I almost always use dtype=complex, and on the rare times I don't, I can never remember what the scheme is regardless of which scheme it is / was / will be. On the other hand, if the scheme was Complex32x2 and Complex64x2, I could probably decipher what that was without looking it up. It is a little ugly and weird, I admit, but that probably wouldn't bother me. Regards, -tim
On the other hand, if the scheme was Complex32x2 and Complex64x2, I could probably decipher what that was without looking it up. It is a little ugly and weird, I admit, but that probably wouldn't bother me.
On consideration, I'm +1 on Tim's suggestion here, if any change is going to be made. At least it has the virtue of being relatively clear, if a bit ugly. Zach
On Tue, 4 Apr 2006, Robert Kern wrote:
In order to get an opinion on the subject: how would one presently find out about the meaning of complex64 and complex128?
The following attempt does not help:
In [1]:import numpy
In [2]:numpy.complex64?
Type:           type
Base Class:     <type 'type'>
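Arnd's question can also be answered from the prompt without the docstring: the dtype object itself carries the element size, and the real component shows the matching float precision. A sketch against a current NumPy (the 2006 interface may have differed):

```python
import numpy as np

for t in (np.complex64, np.complex128):
    dt = np.dtype(t)
    # Each complex element is two real components of half the total width.
    component = np.zeros(1, dtype=t).real.dtype
    # e.g. "complex64 = 64 bits total, 2 x float32"
    print(f"{dt.name} = {dt.itemsize * 8} bits total, 2 x {component}")
```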
On Tuesday 04 April 2006 07:40, Robert Kern wrote:
On the merits of the issue, I like the new scheme better. For whatever reason, I tend to remember it when coding. With Numeric, I would frequently second-guess myself and go to the prompt and tab-complete to look at all of the options and reason out the one I wanted.
I agree with Robert. From the very beginning NumPy's design has been very consistent with the typeEXTENT_IN_BITS mapping (even for unicode), and if we go back to the numarray (complex32/complex64) convention, this would be the only exception to this rule. Perhaps I'm a bit biased by being a developer more interested in type 'sizes' than in 'precision' issues, but I'd definitely prefer a completely consistent approach for this matter. So +1 for complex64 & complex128. Cheers, --
Francesc Altet | Cárabos Coop. V. | http://www.carabos.com/ | "Enjoy Data"
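The consistency Francesc describes - typeEXTENT_IN_BITS - can be written as a single invariant over the sized numeric names (a sketch, checked against a current NumPy):

```python
import numpy as np

# For every sized numeric name, the trailing number equals itemsize * 8.
for name in ("int8", "int16", "int32", "int64",
             "float32", "float64", "complex64", "complex128"):
    bits = int("".join(c for c in name if c.isdigit()))
    assert np.dtype(name).itemsize * 8 == bits, name
print("typeEXTENT_IN_BITS holds for all names checked")
```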
Sebastian Haase wrote:
Hi, Could we start another poll on this !?
I think I would vote +1 for complex32 & complex64 mostly just because of "that's what I'm used to"
+1 Most people look to the number to give a clue as to the precision of the value. Colin W.
------------------------------------------------------- This SF.Net email is sponsored by xPML, a groundbreaking scripting language that extends applications into web and mobile media. Attend the live webcast and join the prime developer group breaking into this new coding territory! http://sel.as-us.falkag.net/sel?cmd=lnk&kid=110944&bid=241720&dat=121642 _______________________________________________ Numpy-discussion mailing list Numpy-discussion@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion
I can't get worked up over this one way or the other: complex128 makes sense
if I count bits, complex64 makes sense if I note precision; I just have to
remember the numpy convention. One could argue that complex64 is the more
conventional choice and so has the virtue of least surprise, but I don't
think it is terribly difficult to become accustomed to using complex128 in
its place. I suppose this is one of those programmer's vs user's point of
view thingees. For the guy writing general low level numpy code what matters
is the length of the type, how many bytes have to be moved and so on, and
from the other point of view what counts is the precision of the arithmetic.
Chuck
On Tuesday 04 April 2006 08:09, Charles R Harris wrote:
I kind of like your comparison of programmer vs user ;-) And so I was "hoping" that numpy (and scipy!!) is intended for the users - like supposedly IDL and Matlab are... No one likes my "backwards compatibility" argument!? Thanks - Sebastian Haase
PS: I understand that voting is only a last resort - some people always use na.Complex and na.Float and don't care - BUT I use single precision all the time because my image data is already getting too large. So I have to look at this every day, and as Travis pointed out, now is about the last chance to possibly change complex128 to complex64 ...
Tim Hochberg wrote: <snip>
This would work fine if repr were instead:
dtype([('x', float64), ('z', complex128)])
Anyway, this all seems reasonable to me at first glance. That said, I don't plan to work on this, I've got other fish to fry at the moment.
A new point: Please remind me (and probably others): when did it get decided to introduce 'complex128' to mean numarray's complex64, and 'complex64' to mean numarray's complex32? I do understand the logic that 128 is really the bit-size of one (complex) element - but I also liked the old way, because:
1. e.g. in fft transforms, float32 would "go with" complex32 and float64 with complex64
2. complex128 is one character longer and also (alphabetically) now sorts before(!) complex64
3. Mostly of course: this new naming will confuse all my code and introduce hard-to-find bugs - when I see complex64 I will "think" the old way for quite some time ...
These might just be my personal (idiotic ;-) comments - but I would appreciate some feedback/comments. Also: is it now too late to (re-)start a discussion on this!? Thanks - Sebastian Haase
participants (9)
- Arnd Baecker
- Charles R Harris
- Colin J. Williams
- Francesc Altet
- Robert Kern
- Sebastian Haase
- Tim Hochberg
- Travis Oliphant
- Zachary Pincus