
On 26.10.2011 12:51, Neil Yager wrote:
>> I do not think that using np.arange(n) is a reasonable/common thing to do, by the way. What is the expected behavior?
> I've seen it used for demo/testing for quickly creating an array with a range of values (it is being used in a unit test). In the context of this discussion, it is just an example of a way a user may end up with an array of int32s without really thinking about it, thereby getting themselves into trouble. So what do you suggest on the int front? The core issue is to make sure that users know the assumed range for floats.
You're right. Maybe there is no way to avoid having the user create arrays with unexpected types. I think if we check the ranges and throw errors, the users should get the idea (a sketch of such a check follows this message).
Cheers, Andy
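
A minimal sketch of the range check proposed above, assuming float images are expected in [0, 1]; the helper name check_float_range and the error wording are hypothetical, not existing scikit API:

    import numpy as np

    def check_float_range(image):
        # Hypothetical guard: float images are assumed to lie in [0, 1].
        if np.issubdtype(image.dtype, np.floating):
            lo, hi = image.min(), image.max()
            if lo < 0.0 or hi > 1.0:
                raise ValueError("Float image values must be in [0, 1], "
                                 "got range [%g, %g]." % (lo, hi))

    # np.arange silently produces an integer array (platform-dependent
    # int32/int64), which is how a user ends up with an unexpected dtype
    # without really thinking about it:
    a = np.arange(10)
    print(a.dtype)                       # int32 or int64, not float
    check_float_range(a / 10.0)          # floats in [0, 0.9]: passes
    # check_float_range(np.arange(10.0)) # floats in [0, 9]: would raise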

On Wed, Oct 26, 2011 at 11:19, Andreas Mueller <amueller@ais.uni-bonn.de> wrote:
> You're right. Maybe there is no way to avoid having the user create arrays with unexpected types. I think if we check the ranges and throw errors, the users should get the idea.
Where, besides file input and output, does the scikit have algorithmic assumptions about images having a particular data format or range? How many of these can be wrapped in such a way that there are no assumptions about input range (i.e., by prescaling min/max to [0, 1] and postscaling back to the original range)?

This has been an ongoing problem for us in CellProfiler. We usually assume that images are [0, 1], but try to avoid making it a hard assumption. There are times when images are completely outside this range, such as when we compute an illumination correction image, which we put into the range [1, X] so that when a [0, 1] image is divided by the illumination correction, it remains in [0, 1].

Ray Jones
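
A minimal sketch of the pre/post-scaling wrapper Ray describes, together with the [1, X] illumination-correction arithmetic; the decorator name rescaled is hypothetical and assumes the wrapped function preserves relative intensities:

    import numpy as np

    def rescaled(func):
        # Hypothetical wrapper: map the input to [0, 1], run the algorithm,
        # then map the result back to the original range, so the algorithm
        # itself carries no assumption about the input range.
        def wrapper(image, *args, **kwargs):
            lo, hi = float(image.min()), float(image.max())
            scale = (hi - lo) or 1.0     # flat image: avoid division by zero
            result = func((image - lo) / scale, *args, **kwargs)
            return result * scale + lo
        return wrapper

    # The illumination-correction trick: a correction image kept in [1, X]
    # guarantees that a [0, 1] image divided by it stays in [0, 1].
    img = np.random.rand(4, 4)                     # values in [0, 1]
    correction = 1.0 + 2.0 * np.random.rand(4, 4)  # values in [1, 3]
    corrected = img / correction
    assert 0.0 <= corrected.min() and corrected.max() <= 1.0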