On Aug 29, 2019, at 00:58, Paul Moore wrote:

> If you assume everything should be handled by general mechanisms, you end up at the Lisp/Haskell end of the spectrum. If you decide that the language defines the limits, you are at the C end.
And if you don’t make either assumption, but instead judge each case on its own merits, you end up with a language that is better than the languages at either extreme. There are plenty of cases where Python generalizes further than most languages (how many languages use the same feature for async functions and sequence iteration? or get metaclasses for free by having only one “kind” of class, and then defining both construction and class definition as calls to type?), plenty where it doesn’t generalize as much as most languages, and its best features are found all across that spectrum.

You can’t avoid tradeoffs by trying to come up with a rule that makes language decisions automatically. (If you could, why would this list even exist?) The closest thing you can get to that is the vague, self-contradictory, and facetious but still useful Zen. If you really did try to zealously pick one side or the other, always avoiding general solutions whenever a hardcoded solution is simpler no matter what, the best-case scenario would be something like Go, where a big ecosystem of codegen tools defeats your attempt to be zealous and makes your language actually usable despite your own efforts, until soon you start using those tools even in the stdlib.

Also, I’m not sure the spectrum is nearly as well defined as you imply in the first place. It’s hard to find a large C project that doesn’t use the hell out of preprocessor macros to effectively create custom syntax for things like error handling and looping over collections (not to mention M4 macros to autoconf the code so it’s actually portable instead of just theoretically portable), and meanwhile Haskell’s syntax is chock full of special-purpose features you couldn’t build yourself (would anyone even use the language without, say, do blocks?).
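To make the async/iteration parenthetical concrete, here’s a minimal sketch (the Ready class and main coroutine are hypothetical names, invented for illustration): a for-loop drives a generator through the iterator protocol, and awaiting reuses the same machinery — `__await__` returns an iterator, and a result travels in `StopIteration.value`.

```python
# Iteration: a for-loop drives a generator via the iterator protocol.
def countdown(n):
    while n:
        yield n
        n -= 1

assert list(countdown(3)) == [3, 2, 1]

# Awaiting reuses the same machinery: __await__ must return an iterator,
# and the awaited result is carried in StopIteration.value.
class Ready:
    def __init__(self, value):
        self.value = value

    def __await__(self):
        return self.value   # in a generator, return raises StopIteration(value)
        yield               # unreachable; marks this function as a generator

async def main():
    return await Ready(42)

# Drive the coroutine by hand, exactly the way you'd exhaust a generator:
coro = main()
try:
    coro.send(None)
except StopIteration as exc:
    result = exc.value

assert result == 42
```

No event loop needed: `send` and `StopIteration` are the whole protocol, which is the sense in which async functions and sequence iteration share one feature.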