On Jan 26, 2008 12:39 AM, Tom Johnson <tjhnson@gmail.com> wrote:
NumPy uses the standard C functions, so it can't represent very large integers exactly. However, given the precision of the log function, it might be reasonable to truncate the digits and write the Python long as a float before conversion. That's what Python does:

```
In [6]: import math

In [7]: math.log(10454852688145851272L)
Out[7]: 43.793597916587672

In [8]: float(10454852688145851272L)
Out[8]: 1.045485268814585e+19
```

Chuck
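Chuck's point about truncation can be checked directly: the relative error introduced by rounding an integer of this size to a double is on the order of machine epsilon, and since log(x·(1+e)) = log(x) + log(1+e) ≈ log(x) + e, the logarithm is essentially unaffected. A minimal sketch in plain Python (Python 3, so the `L` suffix from the session above is dropped):

```python
import math

big = 10454852688145851272  # the long from the session above

# Rounding to a double loses the low digits...
approx = float(big)
rel_err = abs(approx - big) / big
print(rel_err)  # on the order of 1e-17, below double-precision eps

# ...but the log barely notices the truncation:
print(math.log(big))
print(math.log(approx))
```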
Tom Johnson wrote:
The problem is that the latter long integer is too big to fit into an int64 (long long), so it is converted to an object array. The default behavior of log on an object array is to look for a method named `log` on each element of the array and call it. Your best bet is to convert to double before calling log:

```
log(float(10454852688145851272L))
```

-Travis O.
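Travis's suggestion can be written out as a short sketch. In modern Python there is no `L` suffix, and `numpy.log` on a float input stays in float64; the variable names below are just illustrative:

```python
import math
import numpy as np

big = 10454852688145851272
# Larger than int64 max (2**63 - 1), hence the object-array fallback:
assert big > 2**63 - 1

# Convert to double first, then take the log with numpy:
result = np.log(float(big))
print(result)

# Agrees with pure-Python math.log to within double precision:
print(math.log(big))
```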
On Sat, Jan 26, 2008 at 5:49 AM, Travis E. Oliphant <oliphant@enthought.com> wrote:
Related: I understand the problem that occurs below...
A couple of points:

1) With log we get an AttributeError; with multiplication we get a TypeError. I know the mechanism that causes the problem is different in each case, but the fundamental problem (longs that are too large) is the same. Can this be improved upon?

2) The extra digits from Python floats are nice... can numpy have these as well?

3) I think it is safe to say that many people cannot know ahead of time whether their longs will be larger than 64 bits. This whole situation seems unstable to me: code that seems to be working will work, and then when the (Python) longs get too large we get a variety of different exceptions.

So, I wonder aloud: is this being handled in the nicest/preferred way? I'd be happy if my extremely large longs were automatically converted to numpy.float64, even if we don't have as many significant digits as the equivalent pure-Python result. At least with this method, my code would not be "randomly" breaking. Either that, or am I required to be extremely careful about mixing types?
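One defensive pattern for point 3 can be sketched with a hypothetical `safe_log` helper (not a NumPy API): demote Python ints that overflow int64 to float before they reach the ufunc, trading low-order digits for predictable behavior:

```python
import numpy as np

INT64_MAX = 2**63 - 1  # largest value an int64 can hold

def safe_log(x):
    # Hypothetical helper, not part of NumPy: coerce oversized Python
    # ints to float64 so np.log never falls back to an object array.
    if isinstance(x, int) and not (-INT64_MAX - 1 <= x <= INT64_MAX):
        x = float(x)
    return np.log(x)

print(safe_log(7))                     # small int: the usual int64 path
print(safe_log(10454852688145851272))  # too big for int64: coerced to float
```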
participants (3)
- Charles R Harris
- Tom Johnson
- Travis E. Oliphant