
On Mon, Jun 25, 2012 at 9:50 PM, Travis Oliphant <travis@continuum.io> wrote:
On Jun 25, 2012, at 7:53 PM, josef.pktd@gmail.com wrote:
On Mon, Jun 25, 2012 at 8:25 PM, <josef.pktd@gmail.com> wrote:
On Mon, Jun 25, 2012 at 8:10 PM, Travis Oliphant <travis@continuum.io> wrote:
You are still missing the point that there was already a choice that was made in the previous class --- made in Numeric actually.
You made a change to that. It is the change that is 'gratuitous'. The pain and unnecessary overhead of having two competing standards is the problem --- not whether one is 'right' or not. That is a different discussion entirely.
I remember there was a discussion on the mailing list about the order of the coefficients, and IIRC everyone was in favor of the new order. I cannot find the thread, but I know I was in favor.
At least I'm switching pretty much entirely to the new polynomial classes, and I don't really care about the inherited choice any more.
So I'm pretty much in favor of updating, if the new choices are more convenient and more familiar to new users.
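(For readers who don't have both conventions memorized, a minimal illustration, assuming only stock numpy: the old poly1d, inherited from Numeric, takes the highest-degree coefficient first, while the new Polynomial class takes the lowest-degree coefficient first.)

    import numpy as np

    # old convention (Numeric/poly1d): highest degree first
    p_old = np.poly1d([2, 3, 4])                     # 2*x**2 + 3*x + 4
    # new convention (np.polynomial): lowest degree first
    p_new = np.polynomial.Polynomial([4, 3, 2])      # 4 + 3*x + 2*x**2
    print(p_old(1.0), p_new(1.0))                    # 9.0 9.0 -- same polynomial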
Just to add a bit more information: given the existence of both poly implementations, nobody had to rewrite existing code. The order flipping in scipy.signal.residuez stays as it is:

    b, a = map(asarray, (b, a))
    gain = a[0]
    brev, arev = b[::-1], a[::-1]
    krev, brev = polydiv(brev, arev)
    if krev == []:
        k = []
    else:
        k = krev[::-1]
    b = brev[::-1]
while my arma_process class can, at the same time, start with:

    def __init__(self, ar, ma, nobs=None):
        self.ar = np.asarray(ar)
        self.ma = np.asarray(ma)
        self.arpoly = np.polynomial.Polynomial(self.ar)
        self.mapoly = np.polynomial.Polynomial(self.ma)
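(A quick sketch of the payoff, with made-up AR/MA coefficients and only stock numpy assumed: once the coefficients live in a Polynomial, the usual lag-polynomial arithmetic needs no [::-1] anywhere.)

    import numpy as np

    arpoly = np.polynomial.Polynomial([1.0, -0.5])   # hypothetical AR coefficients
    mapoly = np.polynomial.Polynomial([1.0, 0.4])    # hypothetical MA coefficients
    # multiply and evaluate -- no reversals needed at any step
    print(arpoly * mapoly)
    print(arpoly(0.9))                               # 1 - 0.5*0.9 = 0.55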
That's a nice argument for a different convention, really it is. It's not enough for changing a convention that already exists. Now, the polynomial object could store coefficients in this order, but allow construction with the coefficients in the standard convention order. That would have been a fine compromise from my perspective.
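(For concreteness, a minimal sketch of what such a compromise might have looked like; make_poly and its order keyword are hypothetical, not anything in numpy.)

    import numpy as np

    def make_poly(coef, order='high'):
        # hypothetical helper: accept the standard high-to-low input order,
        # but store the coefficients in the new low-to-high convention
        if order == 'high':
            coef = list(coef)[::-1]
        return np.polynomial.Polynomial(coef)

    p = make_poly([2, 3, 4])        # reads as 2*x**2 + 3*x + 4
    print(p(1.0))                   # 9.0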
I'm much happier with the current solution. As long as I stick with the np.polynomial classes, I don't have to *think* about coefficient order. With a hybrid I would always have to worry about whether this animal is facing front or back. I wouldn't mind if the old order were eventually deprecated and dropped. (Another example: the NIST polynomials follow the new order; see the second section of http://jpktd.blogspot.ca/2012/03/numerical-accuracy-in-linear-least.html, where there is no [::-1] in the second version.)
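(The blog post's point compressed into a few lines, with made-up data and only stock numpy assumed: np.polyfit hands coefficients back highest degree first, while the polynomial-package fit returns them lowest degree first, so no flip is needed downstream.)

    import numpy as np

    x = np.linspace(0, 1, 10)                            # hypothetical data
    y = 1.0 + 2.0 * x + 3.0 * x ** 2

    c_old = np.polyfit(x, y, 2)                          # [3., 2., 1.]
    c_new = np.polynomial.polynomial.polyfit(x, y, 2)    # [1., 2., 3.]
    assert np.allclose(c_old[::-1], c_new)               # same fit, flipped order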
As a downstream user of numpy and an observer of the mailing list for a few years, I think the gradual improvements have gone down pretty well. At least I haven't seen any major complaints on the mailing list.
You are an *active* user of NumPy. Your perspective is valuable, but it is one of many in the user community. What is missing in this discussion is the hundreds of thousands of NumPy users who never comment on this mailing list and never will. Many have not yet moved from 1.5.1. I hope your optimism about how difficult the upgrade will be for them is justified. As long as I hold any influence at all on the NumPy project, I will argue and fight on behalf of those users, as best I can understand their perspective.
oops, my working version:

    >>> np.__version__
    '1.5.1'
I'm testing and maintaining statsmodels compatibility from numpy 1.4.1 and scipy 0.7.2 up to the current released versions (with a compat directory). statsmodels dropped numpy 1.3 support because I didn't want to give up using numpy.polynomial. Most of the hundreds of thousands of numpy users who never show up on the mailing list won't worry much about most changes, because package managers, binary builders, and developers of application packages take care of most of it. When I use matplotlib, I don't care whether it uses masked arrays or other array types internally (and I rely on Benjamin and others to represent matplotlib usage/users). Wes is recommending that users use the pandas API to insulate them from changes in numpy's datetimes.
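(A compat directory mostly boils down to shims like the following minimal sketch; the fallback wrapper here is made up for illustration and is not statsmodels' actual code.)

    # try the new API first, fall back to a stand-in on old numpy
    try:
        from numpy.polynomial import Polynomial      # numpy >= 1.4
    except ImportError:
        import numpy as np

        def Polynomial(coef):
            # hypothetical minimal stand-in: poly1d wants highest degree first
            return np.poly1d(np.asarray(coef)[::-1])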
For me, the big problem was numpy 1.4.0, where several packages were not available because of binary incompatibility. NaNs didn't concern me much; the incomplete transition to the new MinGW and gcc is currently a bit of a problem.
It is *much*, *much* easier to create binaries of downstream packages than to rewrite APIs. I still think we would be better off removing the promise of ABI compatibility in every .X release (perhaps we hold ABI compatibility for two releases). However, we should preserve API compatibility in every release.
Freeze the API wherever it ended up by "historical accident"?
Purely as an observer, my impression was also that the internal NumPy C source cleanup, started by David C., I guess, didn't cause any big problems that would have generated lots of complaints on the numpy mailing list.
David C spent a lot of time ensuring his changes did not alter the compiling experience or the run-time experience of users of NumPy. This was greatly appreciated. Lack of complaints on the mailing list is not the metric we should be using. Most users will never comment on this list --- especially given how hard we've made it for people to feel like they will be listened to.
I think for some things, questions and complaints on the mailing list or Stack Overflow are a very good metric. My appreciation of David's work is reflected in the fact that installation issues on Windows have disappeared from the mailing list. I just easy_installed numpy into a virtualenv without any problems at all (it just worked), which was the last Windows issue that I know of (last seen on Stack Overflow). easy_installing scipy into a virtualenv almost worked (it needed some help).
We have to think about the implications of our changes on existing users.
Yes, Josef
-Travis
Josef
Josef
-- Travis Oliphant (on a mobile) 512-826-7480
On Jun 25, 2012, at 7:01 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
On Mon, Jun 25, 2012 at 4:21 PM, Perry Greenfield <perry@stsci.edu> wrote:
On Jun 25, 2012, at 3:25 PM, Charles R Harris wrote:
On Mon, Jun 25, 2012 at 11:56 AM, Perry Greenfield <perry@stsci.edu> wrote:
It's hard to generalize that much here. There are some areas in which what you say is true, particularly where whole industries rely on libraries that took a long time to develop and are particularly hard to break away from. But there are plenty of other areas where it isn't that hard.
I'd characterize the process a bit differently. I would agree that it is pretty hard to get someone who has been using matlab or IDL for many years to transition. That doesn't happen very often (when it does, it's usually because everyone else they work with uses a different tool and they are forced to switch). I think we are targeting the younger people, those who do not have a lot of experience tied up in matlab or IDL. For example, IDL is very well established in astronomy, and we've seen few people make the switch if they have already been using IDL for a while. But we are seeing many more younger astronomers choose Python over IDL these days.
I didn't bring up the astronomy experience, but I think that is a special case: it is a fairly small field, and to some extent you had the advantage of a supported center, STScI, maintaining some of the software. There are also a lot of amateurs who can appreciate the low cost and simplicity of Python.
The software that engineers use tends to be set early, in college or in their first jobs. I suspect that these days professional astronomers spend a number of years in graduate school where they have time to experiment a bit. That is a nice luxury to have.
Sure. But it's not unusual for an invasive technology (that's us) to take root in certain niches before spreading more widely.
Another way of looking at such things is: is what we are seeking to replace that much worse? If the gains are marginal, then it is very hard to displace. But if there are significant advantages, eventually they will win through. I tend to think Python and the scientific stack does offer the potential for great advantages over IDL or matlab. But that doesn't make it easy.
I didn't say we couldn't make inroads. The original proposition was that we needed a polynomial class compatible with Matlab. I didn't think compatibility with Matlab mattered so much in that case because not many people switch, as you have agreed is the case, and those who start fresh, or are the adventurous sort, can adapt without a problem. In other words, IMHO, it wasn't a pressing issue and could be decided on the merits of the interface, which I thought of in terms of series approximation. In particular, it wasn't a 'gratuitous' choice as I had good reasons to do things the way I did.
Chuck