
On Mon, Jul 24, 2023 at 10:04 AM Chris Angelico <rosuav@gmail.com> wrote:
> can you tell me what this vetting procedure proves that isn't already proven by mere popularity itself?
I think that's what this thread is trying to discuss. Do I have the exact, perfect implementation? No. But I imagine it to be akin to peer review: I can't prove this, but I think it adds a signal that complements popularity.
> And are you also saying that packages should be *removed* from this curated list?
Yes, absolutely. If packages fall out of maintenance, are deprecated, or reach end-of-life, they should no longer be on the curated list. I imagine the mechanism would be a series of snapshots of the list's state at particular points in time. What is the mechanism for determining when to remove a package? No clue right now. There are a lot of problems, most of them social, that need to be sorted out to make a *useful* curated package list a reality. I don't claim to have the answers, but I'm willing to participate in the discussion to find them.
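To make that concrete, here is a minimal, untested sketch of what a snapshot record might look like (all names here are invented for illustration, not a proposal for the actual schema):

    # Hypothetical sketch: each snapshot is an immutable record of the
    # list's state at a point in time, so a removal is just an absence
    # in the next snapshot rather than a destructive edit.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass(frozen=True)
    class CuratedSnapshot:
        taken_on: date
        packages: frozenset[str] = field(default_factory=frozenset)

        def removed_since(self, older: "CuratedSnapshot") -> frozenset[str]:
            # Packages in the older snapshot that are absent here,
            # i.e. the ones that fell out of curation in between.
            return older.packages - self.packages

    # Example: "somepkg" drops out between two snapshots.
    jan = CuratedSnapshot(date(2023, 1, 1), frozenset({"numpy", "somepkg"}))
    jul = CuratedSnapshot(date(2023, 7, 1), frozenset({"numpy"}))
    assert jul.removed_since(jan) == {"somepkg"}

Keeping every snapshot around also gives you an audit trail for free, which seems useful for exactly the social questions above.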
> More robust in what way?
It's curated by people I trust more than the average bear.
> If not, how is it different from "yet another collection"?
We are discussing the details here instead of just posting what we think belongs on the list to GitHub right now; I'm putting a bit of trust in this group to find a way to do that. I do think that separating out the opinions of experts (however we choose to define that group, though I hope you agree it should be possible to find *some* reasonable definition) as an auxiliary re-ranking on top of popularity is a good differentiator.
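As a purely illustrative sketch of what "auxiliary re-ranking" could mean (the weighting below is arbitrary, and every name is made up):

    # Hypothetical sketch: expert endorsements act as a multiplier on
    # top of log-scaled popularity, so curation boosts the ranking
    # without replacing the popularity signal.
    import math

    def rerank(downloads: dict[str, int], endorsements: dict[str, int]) -> list[str]:
        def score(name: str) -> float:
            # Log-scale popularity so a 10x download gap can't
            # completely drown out expert opinion.
            return math.log10(downloads[name] + 1) * (1 + endorsements.get(name, 0))
        return sorted(downloads, key=score, reverse=True)

    stats = {"popular-but-unloved": 5_000_000, "well-reviewed": 800_000}
    votes = {"well-reviewed": 3}
    print(rerank(stats, votes))  # ['well-reviewed', 'popular-but-unloved']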
On Tue, Jul 25, 2023 at 1:31 AM Stephen J. Turnbull <turnbull.stephen.fw@u.tsukuba.ac.jp> wrote:

George Fischhof writes:
[For heaven's sake, trim! You expressed your ideas very clearly; the quote adds little to them.]
> It occurred to me that even just grouping similar packages (ones with the same goal) could help the current situation.
This is a good idea. I doubt it reduces the problem much compared to the review site or the curation, though: some poor rodent(s) still gotta put the dinger on the feline.
However, in designing those pages, we could explicitly ask for names of similar packages and recommendations for use cases where an alternative package might be preferred, and provide links to the review pages for those packages that are mentioned in the response. We can also provide suggestions based on comparisons other users have made. (Hopefully there won't be too many comparisons like "this package is the numpy of its category" -- that's hard to parse!)
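A rough sketch of how a review that names alternatives could be turned into those cross-links (the record fields and URL scheme are invented for illustration):

    # Hypothetical sketch: reviewers explicitly name similar packages,
    # and the site turns each mention into a link to that package's
    # own review page.
    from dataclasses import dataclass

    @dataclass
    class Review:
        package: str
        text: str
        similar: list[str]  # alternatives the reviewer chose to name

    def cross_links(review: Review, base: str = "https://reviews.example/pkg") -> list[str]:
        return [f"{base}/{alt}" for alt in review.similar]

    r = Review("httpx", "Async-friendly HTTP client.", similar=["requests", "aiohttp"])
    print(cross_links(r))
    # ['https://reviews.example/pkg/requests', 'https://reviews.example/pkg/aiohttp']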
> Additionally, perhaps the users could give relative valuations,
Not sure asking for rankings is a great idea: globally valid rankings are rare -- ask any heavy numpy user who occasionally uses the sum builtin on lists.
> for example, there are similar packages A, B, C, and D; users could say "I tried out A and B, and found that A is better than B." There could also be some valuation categories: simple, easy, powerful, etc. This would show, for example, that package A is simple but B is more powerful.
These tags would be useful. I think the explanation needs to be considered carefully, because absolutes don't really exist, and if a tag is relative to the category, you want to know which packages the reviewer is comparing against. I'm not sure many users would go to the trouble of providing full rankings, even for the packages they've mentioned. Worth a try, though!
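For what it's worth, even sparse pairwise reports could be aggregated without asking anyone for a full ranking. A toy sketch with invented data:

    # Hypothetical sketch: aggregate sparse "X beats Y on <category>"
    # reports into per-category win counts; no reviewer ever has to
    # rank more than the packages they actually tried.
    from collections import Counter, defaultdict

    reports = [  # (winner, loser, category) triples as users might submit them
        ("A", "B", "simple"),
        ("A", "C", "simple"),
        ("B", "A", "powerful"),
        ("D", "B", "powerful"),
    ]

    wins: dict[str, Counter] = defaultdict(Counter)
    for winner, loser, category in reports:
        wins[category][winner] += 1

    for category, counter in wins.items():
        print(category, counter.most_common())
    # simple [('A', 2)]
    # powerful [('B', 1), ('D', 1)]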
Steve
--
-Dr. Jon Crall (him)