How coding in Python is bad for you
rosuav at gmail.com
Mon Jan 23 21:38:46 EST 2017
On Tue, Jan 24, 2017 at 12:47 PM, BartC <bc at freeuk.com> wrote:
> On 24/01/2017 00:56, Chris Angelico wrote:
>> On Tue, Jan 24, 2017 at 11:44 AM, BartC <bc at freeuk.com> wrote:
>>> With C++ or Java, it's possible to tell the indentation is wrong (because
>>> of the extra redundancy of having the indentation /and/ block delimiters).
>>> That's a bit harder in Python, making source more fragile.
>> With C++ or Java, it's possible to tell that the braces are misplaced
>> (because of the extra redundancy). When that happens, the compiler
>> won't tell you; it'll just silently do the wrong thing. In Python,
>> that can't happen. Python is therefore more robust.
>> What's to say which is correct?
> Take the same code with block
> delimiters, and take out that same indent:
> if 0 then
> print ("one")
> print ("two")
> endif
> print ("three")
> It still compiles, it still runs, and still shows the correct "three" as
> the output.
My point is that you *assume* that showing just "three" is the correct
behaviour. Why? Why do you automatically assume that the indentation
is wrong and the endif is correct? All you have is that the two
disagree with each other.
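To make the "in Python, that can't happen" claim concrete: when the
indentation contradicts itself, CPython rejects the code at compile
time rather than silently picking one interpretation. A minimal
sketch, using the built-in compile() on a deliberately broken snippet:

```python
# A block whose indentation contradicts itself: Python refuses to guess.
source = (
    "if 0:\n"
    "    print('one')\n"
    "  print('two')\n"   # unindent matches no enclosing block level
    "print('three')\n"
)
try:
    compile(source, "<example>", "exec")
except IndentationError as err:
    # CPython raises IndentationError (a subclass of SyntaxError)
    print("rejected:", err.msg)
```

The point isn't that Python knows which indentation you *meant*, only
that it never silently runs code whose structure is ambiguous.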
That's the sort of (presumably unintentional) bias that comes from
your particular history of programming languages. Some languages teach
you that "foo" and "Foo" are the same name. Others teach you that the
integer 1 and the string "1" aren't materially different. Still others
leave you believing that bytes and characters are perfectly
equivalent. All those assumptions are proven wrong when you switch to
a language that isn't like that, and indentation is the same.
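All three of those assumptions fail in Python specifically, which is
easy to demonstrate (the variable names here are just for
illustration):

```python
# Python treats each of these pairs as distinct.
foo = "lower"
Foo = "upper"          # a different name entirely; case matters
print(foo == Foo)      # False: names and strings are case-sensitive
print(1 == "1")        # False: an int never equals a str
print(b"a" == "a")     # False: bytes and str are distinct types
```

A programmer coming from, say, a case-insensitive or weakly-typed
language carries the opposite expectations, which is exactly the kind
of bias being described.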
So what's to say which is more important? Surely it's nothing more
than your preconceived ideas, and could easily be different if you had
a different background.