tough-to-explain Python
Steven D'Aprano
steve at REMOVE-THIS-cybersource.com.au
Thu Jul 9 14:10:16 EDT 2009
On Wed, 08 Jul 2009 22:05:57 -0700, Simon Forman wrote:
>> persistent idea "out there" that programming is a very accessible
>> skill, like cooking or gardening, anyone can do it, and even profit
>> from it, monetarily or otherwise, etc., and to some extent I am
>
> Programming is not like any other human activity.
In practice? In principle? Programming in principle is not the same
thing as programming as it is done in practice.
But in either case, programming requires both the logical reasoning of
mathematics and the creativity of the arts. Funnily enough,
mathematicians will tell you that mathematics requires the same, and so
will the best artists. I think mathematicians, engineers, artists, even
great chefs, will pour scorn on your claim that programming is not like
any other human activity.
[...]
> He talks about how "when all is said and done, the only thing computers
> can do for us is to manipulate symbols and produce results of such
> manipulations" and he emphasises the "uninterpreted" nature of
> mechanical symbol manipulation, i.e. that the machine is doing it
> mindlessly.
"Manipulate symbols" is so abstract as to be pointless. By that
reasoning, I can build a "computer" consisting of a box open at the top.
I represent a symbol by an object (say, a helium-filled balloon, or a
stone), instead of a pattern of bits. I manipulate the symbol by holding
the object over the box and letting go. If it flies up into the sky, that
represents the symbol "Love is War", if it falls into the box, it
represents the symbol "Strength is Blue", and if it just floats there, it
represents "Cheddar Cheese". This is a deterministic, analog computer
which manipulates symbols. Great.
And utterly, utterly useless. So what is my computer lacking that real
computers have? When you have answered that question, you'll see why
Dijkstra's claim is under-specified.
> Dijkstra[1]: "It is true that the student that has never manipulated
> uninterpreted formulae quickly realizes that he is confronted with
> something totally unlike anything he has ever seen before. But
> fortunately, the rules of manipulation are in this case so few and
> simple that very soon thereafter he makes the exciting discovery that he
> is beginning to master the use of a tool that, in all its simplicity,
> gives him a power that far surpasses his wildest dreams." [1]
What is an uninterpreted formula? If you don't interpret it, how can you
distinguish it from random noise?
>> but maybe this idea is not entirely true... Maybe, to get past the
>> most amateurish level, one has to, one way or another, come
>> face-to-face with bits, compilers, algorithms, and all the rest that
>> real computer scientists learn about in their formal training...
>>
>> kj
>
> If you're never exposed to that constellation of concepts that underpins
> "mechanical symbol manipulation" you are adrift in a sea ("c", ha ha) of
> abstractions.
>
> However, if you /are/ exposed to the "so few and simple" rules of
> manipulation the gates (no pun intended) to the kingdom are thrown wide.
What nonsense. The keys to the kingdom are the abstractions.
Here's an exercise for you, if you dare. It's not that difficult to
remove the abstraction from integer addition, to explain it in terms of
bit manipulation. If you prefer, you can write a Turing Machine instead.
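Something like this, say -- a quick sketch in Python, non-negative
ints only, and the names are mine:

    def add(x, y):
        # Addition from nothing but bitwise operations: XOR gives the
        # sum without the carries, AND-then-shift gives the carries;
        # repeat until no carries remain.
        while y:
            carry = x & y
            x = x ^ y
            y = carry << 1
        return x

    print(add(19, 23))  # 42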
Now try doing the same thing for Quicksort. Don't worry, we'll wait.
Once you've done that, let's see you implement a practical, scalable, non-
toy webserver as a Turing Machine.
No, it's not the fundamental operations that open the doors to the
kingdom. It's the abstractions. The history of computing is to gain more
and more power by leveraging better and better abstractions.
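To see just how much the abstractions buy you, here is Quicksort at
the level of abstraction Python hands you -- a teaching sketch, not a
production sort:

    def quicksort(items):
        # Pick a pivot, partition into smaller and not-smaller,
        # recurse.  Lists, comparisons and list comprehensions do
        # all the heavy lifting.
        if len(items) <= 1:
            return items
        pivot, rest = items[0], items[1:]
        return (quicksort([x for x in rest if x < pivot])
                + [pivot]
                + quicksort([x for x in rest if x >= pivot]))

Now imagine expressing that, recursion and all, as raw bit
manipulation.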
> On Jul 8, 9:10 am, Steven D'Aprano <st... at REMOVE-THIS-
> cybersource.com.au> wrote:
>> There is some evidence that 30-60% of people simply cannot learn to
>> program, no matter how you teach them:
>>
>> http://www.codinghorror.com/blog/archives/000635.html
>> http://www.cs.mdx.ac.uk/research/PhDArea/saeed/
...
> I don't buy it:
I'm not entirely convinced myself. It's plausible -- not everyone has got
what it takes to be musically talented, or a great athlete, or a skilled
surgeon, or a charismatic leader. Why should programming be the one high-
level intellectual skill that everybody is good at? Seems mighty
implausible to me.
But I'm not sure that the distribution of skill is *fundamentally* bi-
modal. Maybe it is, maybe it isn't.
> I believe strongly that any normal person can learn to
> program, to manipulate symbols to create formulae that guide the machine
> in its uninterpreted symbol manipulation.
Most people don't manipulate abstract symbols *at all* -- even something
as simple as:
"if x is to z as half y is to twice z, then x is the same as ____ y"
(fill in the blank)
will perplex most people.
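(Working it through: x/z = (y/2)/(2z), so x = y/4 -- the blank is "a
quarter of".)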
> I think that "first hump" people can become "second hump" people but
> that it requires teaching them the foundations first, not confronting
> them with such incredible novelties as "a = b" and saying in effect,
> "here you go buddy, sink or swim."
The weakness of the paper is not that "a=b" is a novelty, but that it's
dangerously familiar. All their students will have already seen "a=b" in
the context of many years of mathematics. But that's not what it means in
this context. So the risk is, they will be interpreting the symbols "a=b"
in terms of mathematical equality, and getting confused.
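A few lines are enough to show how treacherous that familiarity is
(Python syntax, but the point is language-independent):

    a = 1
    b = 2
    a = b       # rebinds the name a; it asserts nothing about b
    b = 3
    print(a)    # prints 2, not 3 -- a surprise if you read "a = b"
                # as the mathematician's symmetric, timeless equality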
> Quoting Dijkstra again [1]: "Before we part, I would like to invite you
> to consider the following way of doing justice to computing's radical
> novelty in an introductory programming course.
>
> "On the one hand, we teach what looks like the predicate calculus, but
> we do it very differently from the philosophers. In order to train the
> novice programmer in the manipulation of uninterpreted formulae, we
> teach it more as boolean algebra, familiarizing the student with all
> algebraic properties of the logical connectives. To further sever the
> links to intuition, we rename the values {true, false} of the boolean
> domain as {black, white}.
This is supposed to be a good thing?
Oh my. It must be nice in that ivory tower, never having to deal with
anyone who hasn't already been winnowed by years of study and self-
selection before taking on a comp sci course.
I don't think Dijkstra would have taken that suggestion seriously if he
had to teach school kids instead of college adults.
...
> (Did you read that last paragraph and think, "Well how the heck else are
> you supposed to program a computer if not in a computer language?"? If
> so, well, that is kind of my point.)
No. What I thought was, how the hell are you supposed to manipulate
symbols without a language to describe what sort of manipulations you can
do?
>> > Maybe, to
>> > get past the most amateurish level, one has to, one way or another,
>> > come face-to-face with bits, compilers, algorithms, and all the rest
>> > that real computer scientists learn about in their formal training...
>>
>> The "No True Scotsman" fallacy.
>>
>> There's nothing amateurish about building software applications that
>> work, with well-designed interfaces and a minimum of bugs, even if
>> you've never heard of Turing Machines.
>>
>> --
>> Steven
>
> I beg to differ. I recall a conversation with a co-worker who had
> "learned" to program using PHP. Another co-worker and I were trying to
> convince him that there was a good reason to differentiate between hash
> tables and arrays. He didn't even know that they were different
> "things".
I shall manfully resist the urge to make sarcastic comments about
ignorant PHP developers, and instead refer you to my earlier reply to you
about cognitive biases. For extra points, can you identify all the
fallacious reasoning in that paragraph of yours?
> I remember telling him, "between the metal and the desktop there is
> nothing but layers of abstraction. We use different names precisely
> because of different behaviours."
But those abstractions just get in the way, according to you. He should
be programming on the bare metal, doing pure symbol manipulation without
language.
> He made "well-designed interfaces", but "amateurish" is about the nicest
> thing I would have called him.
Well, perhaps this is a failure of his profession, in that there isn't
enough specialisation. If he is skilled with making interfaces, and
unskilled with programming the backend, then he should work where his
strength lies, instead of being insulted by somebody who has an entirely
unjustified opinion about what "real programming" is.
If programming is symbol manipulation, then you should remember that the
user interface is also symbol manipulation, and it is a MUCH harder
problem than databases, sorting, searching, and all the other problems
you learn about in academia. The user interface has to communicate over a
rich but noisy channel using multiple under-specified protocols to a
couple of pounds of meat which processes information using buggy
heuristics evolved over millions of years to find the ripe fruit, avoid
being eaten, and have sex. If you think getting XML right was hard, that's
*nothing* compared to user interfaces.
The fact that even bad UIs work at all is a credit to the heuristics,
bugs and all, in the meat.
> As for "a minimum of bugs"... sigh. The "minimum of bugs" is zero, if
> you derive your "uninterpreted formulae" /correctly/.
And the secret of getting rich on the stock market is to buy low, sell
high. Very true, and very pointless. How do you derive the formula
correctly?
Do you have an algorithm to do so? If so, how do you know that algorithm
is correct?
> Deriving provably
> correct "programs" should be what computer science and computer
> education are all about
That's impossible. Not just impractical, or hard, or subject to hardware
failures, but impossible.
I really shouldn't need to raise this to anyone with pretensions of being
a Computer Scientist and not just a programmer, but... try proving an
arbitrary program will *halt*.
If you can't do that, then what makes you think you can prove all useful,
necessary programs are correct?
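For anyone who hasn't met it, Turing's diagonal argument fits in a
few lines of Python. The halts() function below is hypothetical --
that no correct, total version of it can exist is exactly the point:

    def halts(prog, arg):
        # Hypothetical oracle: return True if prog(arg) eventually
        # halts, False if it loops forever.  Cannot actually be
        # written.
        raise NotImplementedError

    def contrary(prog):
        # If halts() worked, contrary(contrary) would halt exactly
        # when halts() says it doesn't -- a contradiction either way.
        if halts(prog, prog):
            while True:
                pass
        return "halted"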
> Again with Dijkstra[3]: "The prime paradigma of the pragmatic designer
> is known as "poor man's induction", i.e. he believes in his design as
> long as "it works", i.e. until faced with evidence to the contrary. (He
> will then "fix the design".)
Yes, that is the only way it can be.
> The scientific designer, however, believes
> in his design because he understands why it will work under all
> circumstances.
Really? Under *all* circumstances?
Memory failures? Stray cosmic radiation flipping bits? Noise in the power
supply?
What's that you say? "That's cheating, the software designer can't be
expected to deal with the actual reality of computers!!! We work in an
abstract realm where computers never run out of memory, swapping never
occurs, and the floating point unit is always correct!"
Fair enough. I don't want to make it too hard for you.
Okay, so let's compare the mere "pragmatic designer" with his provisional
expectation that his code is correct, and Dijkstra's "scientific
designer". He understands his code is correct. Wonderful!
Er, *how exactly* does he reach that understanding?
He reasons about the logic of the program. Great! I can do that. See,
here's a proof that binary search is correct. How easy this is.
Now try it for some non-trivial piece of code. Say, the Linux kernel.
This may take a while...
What guarantee do we have that the scientific designer's reasoning was
correct? I mean, sure he is convinced he has reasoned correctly, but he
would, wouldn't he? If he was unsure, he'd keep reasoning (or ask for
help), until he was sure. But being sure doesn't make him correct. It
just makes him sure.
So how do you verify the verification?
And speaking of binary search:
[quote]
I was shocked to learn that the binary search program that Bentley PROVED
CORRECT and SUBSEQUENTLY TESTED [emphasis added] in Chapter 5 of
"Programming Pearls" contains a bug. Once I tell you what the it is, you
will understand why it escaped detection for two decades.
[end quote]
http://northernplanets.blogspot.com/2006/07/nearly-all-binary-searches-are-broken.html
or http://tinyurl.com/nco6yv
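(The bug, for the record, is the midpoint calculation: in a language
with fixed-width integers, mid = (low + high) / 2 overflows once
low + high exceeds the integer maximum. Python's ints don't overflow,
but the safe form costs nothing and reads just as well -- a sketch:)

    def binary_search(items, key):
        # Textbook binary search over a sorted list.  The midpoint is
        # written as low + (high - low) // 2: equivalent to
        # (low + high) // 2, but immune to the overflow that bit
        # Bentley's proved-correct version in fixed-width languages.
        low, high = 0, len(items) - 1
        while low <= high:
            mid = low + (high - low) // 2
            if items[mid] < key:
                low = mid + 1
            elif items[mid] > key:
                high = mid - 1
            else:
                return mid
        return -1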
Perhaps the "scientific designer" should be a little less arrogant, and
take note of the lesson of real science: all knowledge is provisional and
subject to correction and falsification.
Which really makes the "scientific designer" just a slightly more clever
and effective "pragmatic designer", doesn't it?
> The transition from pragmatic to scientific design would
> indeed be a drastic change within the computer industry."
Given Dijkstra's definition of "scientific design", it certainly would.
It would be as drastic a change as the discovery of Immovable Objects and
Irresistible Forces would be to engineering, or the invention of a
Universal Solvent for chemistry, or any other impossibility.
> "Obviously no errors" is the goal to strive for, and I am comfortable
> calling anyone an amateur who prefers "no obvious errors."
It's not a matter of preferring no obvious errors, but of understanding the
limitations of reality. You simply can't do better than no obvious errors
(although of course you can do worse) -- the only question is what you
call "obvious".
--
Steven