
On Tue, Jun 5, 2012 at 8:34 AM, Nathaniel Smith njs@pobox.com wrote:
> I don't think that would work, because looking more closely, I don't think they're actually doing anything like what __array_interface__/PEP3118 are designed for. They just have some custom class ("sage.rings.real_mpfr.RealLiteral", I guess an arbitrary precision floating point of some sort?), and they want instances that are passed to np.array() to be automatically coerced to another type (float64) by default. But there's no buffer sharing or anything like that going on at all. Mike, does that sound right?
Yes, there's no buffer sharing going on at all.
> This automagic coercion seems... in very dubious taste to me. (Why does creating an array object imply that you want to throw away precision?
The __array_interface__ attribute is a property which depends on the precision of the ring. If the floats have enough precision, you just get floats; otherwise you get objects.
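A minimal sketch of that pattern, with hypothetical names (the real code lives in sage.rings.real_mpfr):

import numpy as np

class RealLiteralSketch:
    """Hypothetical stand-in for sage.rings.real_mpfr.RealLiteral."""

    def __init__(self, value, prec=53):
        self.value = value
        self.prec = prec          # precision of the parent ring, in bits

    def __float__(self):
        return float(self.value)  # lossy conversion to a C double

    @property
    def __array_interface__(self):
        # 53 bits or less fits in a C double, so advertise float64;
        # otherwise advertise object dtype so no precision is lost.
        # No buffer is shared in either case.
        if self.prec <= 53:
            return {'typestr': '=f8'}
        return {'typestr': '|O'}

Under 1.5.x, np.array() picked the float64 dtype up from the typestr; the dtype-discovery changes in 1.6.x are what the examples below run into.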
> You can already throw away precision explicitly by doing np.array(f, dtype=float).) But if this automatic coercion feature is useful, then wouldn't it be better to have a different interface instead of kluging it into __array_interface__, like we should check for an attribute called __numpy_preferred_dtype__ or something?
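For concreteness, a consumer of such a hook might look like this (entirely hypothetical, since no __numpy_preferred_dtype__ exists in numpy):

import numpy as np

def asarray_preferred(obj):
    # Use the proposed attribute, if present on the type, as the default
    # dtype, and fall back to numpy's normal inference otherwise.
    dtype = getattr(type(obj), '__numpy_preferred_dtype__', None)
    return np.array(obj, dtype=dtype)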
It isn't just array() calls that run into problems. For example, in 1.5.x:
sage: f = 10; type(f)
<type 'sage.rings.integer.Integer'>
sage: numpy.arange(f)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])   # int64

while in 1.6.x:

sage: numpy.arange(f)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=object)
We also see problems with calls like
sage: scipy.stats.uniform(0,15).ppf([0.5,0.7])
array([ 7.5, 10.5])
which work in 1.5.x but fail in 1.6.x with the traceback "TypeError: array cannot be safely cast to required type".
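The error itself is numpy's generic unsafe-cast refusal: once Sage integers come through as dtype=object, the float coercion scipy does internally is no longer a "safe" cast. A minimal illustration in plain numpy (assuming only that scipy requests safe casting internally):

import numpy as np

# object -> float64 is not a "safe" cast, so any code path that
# insists on safe casting will refuse the converted input:
print(np.can_cast(np.dtype(object), np.dtype(np.float64)))  # False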
--Mike