George Fischhof writes: [For heaven's sake, trim! You expressed your ideas very clearly, the quote adds little to them.]
> It occurred to me that even just grouping similar / same-goal packages could help the current situation.
This is a good idea. I doubt it reduces the problem very much compared to the review site or the curation: some poor rodent(s) still gotta put the dinger on the feline. However, in designing those pages, we could explicitly ask for the names of similar packages and for recommendations of use cases where an alternative package might be preferred, and provide links to the review pages of any packages mentioned in the response. We could also offer suggestions based on comparisons other users have made. (Hopefully there won't be too many comparisons like "this package is the numpy of its category" -- those are hard to parse!)
> Additionally, perhaps the users could give a relative valuation,
I'm not sure asking for rankings is a great idea; globally valid rankings are rare -- ask any heavy numpy user who occasionally uses the sum builtin on lists.
> for example, there are A, B, C, D similar packages; users could say: I tried out A and B, and found that A is better than B. There could also be some valuation categories: simple, easy, powerful etc. This would show, for example, that package A is simple, but B is more powerful.
These tags would be useful. I think the explanation of the tags needs to be considered carefully, because absolutes don't really exist, and if a tag is relative to the rest of the class, you want to know which packages the reviewer is comparing against. I'm not sure many users would go to the trouble of providing full rankings, even for the packages they've mentioned. Worth a try, though!

Steve
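
P.S. To make the pairwise idea concrete, here is a rough Python sketch of how per-category comparisons could be recorded and tallied. The names and structure are placeholders of my own, not a proposal for the site's actual schema:

from collections import Counter
from dataclasses import dataclass

@dataclass
class Comparison:
    """One reviewer's judgement: `winner` beat `loser` in `category`."""
    winner: str
    loser: str
    category: str  # e.g. "simple", "easy", "powerful"

def tally(comparisons):
    """Count, per category, how often each package came out on top."""
    by_category = {}
    for c in comparisons:
        by_category.setdefault(c.category, Counter())[c.winner] += 1
    return by_category

# Example: two reviewers who each tried A and B.
votes = [
    Comparison("A", "B", "simple"),
    Comparison("A", "B", "simple"),
    Comparison("B", "A", "powerful"),
]
print(tally(votes))
# {'simple': Counter({'A': 2}), 'powerful': Counter({'B': 1})}

Something even this simple would let a review page say "A is usually rated simpler, B more powerful" without ever asking anyone for a full ranking.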