On Fri, Sep 20, 2019 at 11:44:17PM -0000, Richard Higginbotham wrote:
> Let me expand on the reasons for my post here. It seems like motivation has turned into a bit of a bike-shed.
I don't think that anyone is questioning your motivation. (Although one of your recent replies to Andrew seems to be casting some pretty mean-spirited aspersions on *his* motivation: "[Andrew] would go through any hurdle to thwart a consideration of my proposal".)

And honestly, your personal motivation isn't really important. If you are doing it for the fame and money, you'll probably be disappointed. If you are doing it to give back to the Python community, that's great, but the community and the core devs have no obligation to accept it into the standard library. You can always put it on PyPI; you don't need to ask anyone's permission, just go ahead and do it.

The core devs have a duty to curate the standard library. They cannot accept everything proposed, regardless of the author's feelings or motivations or how useful it is to them. They also have their own personal, selfish motivation: they are over-worked and under-appreciated as it is, without taking on the burden of *yet another* library to maintain if it isn't justified.

Part of the process for justifying that library or set of algorithms is to run the gauntlet of Python-Ideas, and for good or ill, most of the people here tend to be pretty conservative in technology matters, possibly even more conservative than the core devs. The people here on Python-Ideas are a subset of the Python community; if you can't convince us that your idea is good, you probably aren't going to get much interest from the rest of the community either.

Don't take it personally, it's just the nature of the Python community. If you want a community more open to change, you might consider the Node.js or PHP communities, or so I'm led to believe.

[...]
> It used to require containers with 100k or more elements to see a speed improvement over some other unnamed methods. These days it's closer to 50 million, with the only other contender being set (hash-based) operations, at least that I know of.
Am I misunderstanding you? You seem to be saying that your algorithm has become *slower*. In the past, your algorithm beat other unnamed algorithms for moderately sized lists of 100K elements, but now your algorithm doesn't beat them unless the lists are large, 50 million elements or so.

So in relative terms, you seem to be saying that your algorithm is now 500 times slower relative to the alternatives than it used to be. I don't think that's what you mean, so can you clarify?
> I think there is a reasonable chance that if it were converted to C it would be comparable to, if not better than, the set operations for people who want lists for whatever reason, even with much smaller data sets.
Possibly. As we have seen from this thread, the benefits of locality can sometimes make up for a less sophisticated algorithm. If I've learned anything from this thread, it is that relying on Big Oh to predict performance is even less useful today than it used to be. [...]
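To make the locality point concrete, here is a minimal sketch (my own illustration, not Richard's actual implementation) pitting a linear merge-walk intersection of two sorted lists against the hash-based set intersection. On sorted inputs the merge walk is O(n + m) with sequential, cache-friendly access, so it can be competitive with sets despite doing explicit comparisons; absolute timings will vary by machine and input size:

```python
import timeit

def merge_intersection(a, b):
    """Intersection of two sorted lists via a single linear merge walk.

    A stand-in for the kind of merge-based algorithm discussed in this
    thread; it touches both lists sequentially, which is where the
    locality benefit comes from.
    """
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:
            result.append(a[i])
            i += 1
            j += 1
    return result

# Moderately sized sorted inputs (every 2nd and every 3rd integer,
# so the intersection is the multiples of 6).
a = list(range(0, 200_000, 2))
b = list(range(0, 200_000, 3))

merge_time = timeit.timeit(lambda: merge_intersection(a, b), number=10)
set_time = timeit.timeit(lambda: sorted(set(a) & set(b)), number=10)
print(f"merge walk: {merge_time:.3f}s   set-based: {set_time:.3f}s")
```

Note that the set version shown also pays to re-sort its result; whether that is a fair comparison depends on whether the caller needs sorted output, which is exactly the kind of use-case question this thread keeps circling.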
> I'm most concerned with getting feedback from the list to determine if it would fit any other use cases, and if there is some kind of need for it as a general function.
You're on the wrong forum for those questions. Here, we are most concerned with whether or not the proposal is a good fit for the language and stdlib, and the person making the proposal is expected to have done their homework and determined other use-cases *first*, or at least as part of the process.

If you want to maximise the number of people seeing your proposal, which will in turn increase your chances of getting feedback on potential use-cases, you might try posting it somewhere like Reddit first.

-- 
Steven