Feature request: String-inferred names
steve at REMOVE-THIS-cybersource.com.au
Sun Nov 29 02:23:18 CET 2009
On Sat, 28 Nov 2009 03:38:38 -0800, The Music Guy wrote:
> Please listen. In all the time I've spent in the coding community
> (that's at least 7 years) and especially since I started paying
> attention to the Python community (2 years), I have noticed a trend:
> When one coder does something that another cannot understand, frequently
> the other will assume the former is not only doing things wrong, but is
> doing them _blatantly_ wrong.
That's because most of the time that assumption is correct.
50% of all coders are worse than average -- and in fact since the
distribution of coding skill is unlikely to be normally distributed, it
may very well be true that *more* than 50% of all coders are worse than
average. Which means that, when faced with a coder you know nothing about
except that he is trying to do something you can't understand, merely by
playing the odds you have at least a 50:50 chance that he's doing it
wrong.
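The mean-versus-median point can be checked with a toy skewed sample. The "skill scores" below are invented purely to illustrate the arithmetic, not measured from anywhere:

```python
# Toy illustration: in a right-skewed distribution, most values
# sit below the mean. These "skill scores" are made up.
skills = [1, 2, 2, 3, 3, 3, 4, 4, 5, 30]  # one outstanding outlier

mean = sum(skills) / len(skills)           # 57 / 10 = 5.7
below = sum(1 for s in skills if s < mean)

print(mean)   # 5.7
print(below)  # 9 -- nine out of ten are "worse than average"
```

One big outlier drags the mean above nine of the ten values, so far more than half the sample is below average.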
The coders who hang around forums casting judgement are more likely to
be better than average -- they're not just code monkeys putting in their
eight-hour days cranking out boilerplate code. They usually know what
they're doing. They are skillful and knowledgeable -- those who aren't
don't last long: they either leave, or they learn and become
knowledgeable. So when people on forums suggest something is a bad idea,
statistics is on their side, to say nothing of reasoned judgement.
Most new ideas are bad ideas. Pure statistics again -- if an idea is any
good, somebody has probably already had it, and it has become a standard
tool that everyone uses. Breaking code up into subroutines is such a good
idea, and nearly every programming language has some way of breaking code
up into subroutines. But bad ideas keep coming up over and over again, as
they're thought of, then rejected, then somebody else comes along and
thinks of it again. The same old bad ideas keep being raised again and
again.
> I have caught myself making that very
> assumption many times in the past, and I've tried hard to build up an
> immunity against the impulse to make that assumption.
Doing so reduces your chances of making uncommon false negatives at the
cost of increasing your chances of making common false positives.
Personally, I think it's a bad strategy, but that depends on just how
much effort you put into giving every new idea a fair shake.
> At this point, I
> don't even believe in such a thing as a universal "wrong way" and a
> "right way" to code that applies to every circumstance.
Well duh :)
> The way to solve a problem depends on the problem.
Well duh again :D
But there are certain "standard", common, types of problems. Most
problems are just variations on other problems. Very few problems are
genuinely new.
> When it comes to coding, there is not
> an absolute "right" way or "wrong" way--unless we're talking about, say,
> stealing closed source code without permission, or deliberately coding
> in a way that will cause problems for the end user (like causing memory
> clogs or buffer overflows and whatnot).
There is no need for "deliberately" in that sentence. If you cause
problems like buffer overflows, you're doing it wrong. It's wrong whether
you did so maliciously or through ignorance. Even if you document that it
is subject to this failure mode ("don't do this"), it's still wrong.
Sometimes we do the wrong thing because sometimes it's more important to
get a 99% solution today than a 100% solution next month, but it's still
wrong.
> All of this can be determined through common sense. And yet I continue
> to see the attitude of "my solution is the ONLY solution to your
> problem, and it doesn't matter if I don't even actually understand the
> problem." Not everyone does this, but it is a frequent enough
> occurrence to be worth noting. If I had to pull a number out of my
> magic bag, I would say 4 out of 10 responses have at least a hint of
> this attitude,
> and 2.5/10 where it is very obvious.
You say that as if it's a bad thing. I think it's a good thing -- most
solutions are tried and tested, and conservatively sticking to solutions
that are known to work is a successful strategy:
* if you write code which took a lot of effort to understand, it will
take a lot of effort to maintain;
* if you write code which only the creator understands, then if something
happens to the creator, nobody will understand the code;
* debugging is harder than writing code in the first place, so if you
write the cleverest code you possibly can, then you aren't clever enough
to debug it.
> ...and I have one last thing to say. I feel very strongly that
> metaclassing is a fiercely underestimated and largely untapped source of
> good coding solutions.
There's a reason for that -- metaclasses are *clever*. Good solutions
should be dumb enough that even the 50% of below-average coders can use
them. Metaclasses are hard, and because they're hard, they're
underutilized, and consequently people are unfamiliar with them. Because
they're unfamiliar, that makes them even harder to understand, and so
people avoid metaclasses -- a vicious circle, as you pointed out.
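For readers who haven't met them, here is a minimal sketch of the kind of thing metaclasses are typically used for -- automatically registering classes as they are defined (Python 3 syntax; the names PluginMeta, Plugin, and registry are invented for this sketch):

```python
# A metaclass that records every class created with it in a registry.
# All names here (registry, PluginMeta, Plugin, JSONPlugin) are
# invented for illustration.
registry = {}

class PluginMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        registry[name] = cls   # side effect at class-creation time
        return cls

class Plugin(metaclass=PluginMeta):
    pass

class JSONPlugin(Plugin):
    pass

print(sorted(registry))  # ['JSONPlugin', 'Plugin']
```

The point of the example is that the registration happens at class-definition time, with no per-class boilerplate -- which is exactly the sort of cleverness that is powerful but opaque to readers who don't know the machinery.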