Perry Greenfield wrote:
On Jan 18, 2006, at 6:21 PM, Fernando Perez wrote:
Really :-). I remember that conversation and wondered if it had something to do with that. (And I remember Paul Dubois talking to me about similar ideas.) I think it is worth trying (and has been, I see, though I would have expected perhaps an even greater speed improvement; somehow I think it should not take a lot of time if you don't need all the type, shape, and striding flexibility). It just needs someone to do it.
Maybe putting David's code into the sandbox would be a good starting point.
So the idea isn't new then either. I have to believe that if you allowed only Float64 (and perhaps a complex variant) and applied other restrictions, then it would be much faster for small arrays. One would think it would be much easier to implement than Numeric/numarray/numpy... I've always thought that those looking for really fast small-array performance would be better served by something like this. But you'd really have to fight off feature creep. ("This almost meets my needs. If it could only do xxx")
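[For illustration only, here is a rough Python sketch of the kind of restricted interface being described: one element type, one dimension, no broadcasting, no striding. The class name and methods are hypothetical, not anything proposed in the thread, and the real speed win would of course come from a C implementation with far less dispatch overhead than a general ndarray, not from a pure-Python wrapper.]

```python
from array import array   # compact C doubles, no ndarray machinery

class SmallFloat64Array:
    """Illustrative only: Float64, 1-D, fixed length, no broadcasting,
    no striding -- the whole point is minimal per-operation overhead."""
    __slots__ = ('_data',)

    def __init__(self, values):
        self._data = array('d', values)     # always double precision

    def __add__(self, other):
        # elementwise add of same-length arrays only; no type promotion
        return SmallFloat64Array([x + y for x, y in
                                  zip(self._data, other._data)])

    def __mul__(self, scalar):
        # scalar multiply only; anything fancier belongs in a subclass
        return SmallFloat64Array([x * scalar for x in self._data])

    def __len__(self):
        return len(self._data)

    def __repr__(self):
        return 'SmallFloat64Array(%r)' % (list(self._data),)

v = SmallFloat64Array([1.0, 2.0, 3.0])
w = SmallFloat64Array([0.5, 0.5, 0.5])
print(v + w)      # SmallFloat64Array([1.5, 2.5, 3.5])
print(v * 2.0)    # SmallFloat64Array([2.0, 4.0, 6.0])
```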
Couldn't that last issue be dealt with by the fact that today's numpy is fairly subclassing-friendly? (Which, if I remember correctly, wasn't quite the case with old Numeric, at least.)
Does that help? You aren't talking about the fast array subclassing numpy, are you? I'm not sure what you mean here.
What I meant was that by having good subclassing functionality, it's easier to ward off requests for every feature under the sun. It's much easier to say 'this basic object provides a very small, core set of array features where the focus is on raw speed rather than fancy features; if you need extra features, subclass it and add them yourself' when the subclassing is actually reasonably easy. Note that I haven't actually used array subclassing myself (haven't needed it), so I may be mistaken in my comments here; it's just an intuition.

Cheers,

f
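[As a concrete, hypothetical example of the "subclass it and add the feature yourself" approach discussed above: a minimal ndarray subclass that carries one extra attribute. `UnitArray` and its `unit` label are made up for illustration, but the `__new__`/`__array_finalize__` pair is the standard numpy subclassing hook.]

```python
import numpy as np

class UnitArray(np.ndarray):
    """Hypothetical subclass: the core array stays lean, and the extra
    feature (a 'unit' label) is layered on by the user who needs it."""

    def __new__(cls, input_array, unit=None):
        obj = np.asarray(input_array).view(cls)   # reuse the base machinery
        obj.unit = unit
        return obj

    def __array_finalize__(self, obj):
        # called for views, slices and ufunc results; keep the attribute
        if obj is None:
            return
        self.unit = getattr(obj, 'unit', None)

a = UnitArray([1.0, 2.0, 3.0], unit='m')
print(type(a * 2).__name__)   # UnitArray -- ufunc results keep the subclass
print(a[1:].unit)             # m -- the added attribute survives slicing
```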