Perl is worse!

Martijn Faassen m.faassen at vet.uu.nl
Thu Jul 27 16:24:38 EDT 2000


Steve Lamb <grey at despair.rpglink.com> wrote:
> On 27 Jul 2000 17:56:55 GMT, Martijn Faassen <m.faassen at vet.uu.nl> wrote:
>>Hmm. You seem to expect that strings and other things will be automatically
>>coerced to integers whenever a function expects an integer. I think that's
>>pretty scary; when I pass in the wrong thing to a function I *want* it to
>>complain, as it's very likely I did it by accident. 

>     The problem there is you're thinking in terms of separate data types.  I
> prefer the freedom of thinking in /DATA/.  Not types, just data.  1.  Is that
> a string?  Is it an integer?  Is it a floating point number?  Yes, it is all
> three.

In the end, if you want to add your data together, you need to know what
the result will be. If you want to subtract your data, you also need
to know. What if your data is "foo" and you add two of those together?
What if the user gave some input and you don't *know* what the user entered?

I think in your case you usually need to remember what type your data
is *anyway*, and what's worse you need to know what will happen to your
data for various cases:

"1" + "2" (3 or "3") 
"2" + 2  # 4 I presume (or "4")
"foo" + "bar" # ? (I tried; it gives you 0)
"foo" + "3" # ? (gives you 3..)

This may become quite non-obvious in various cases, and may happen by
accident, right?
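For contrast, here is what the Python side of that table looks like (a
sketch in modern Python syntax): the unambiguous cases work, and the mixed
case is refused rather than guessed at.

```python
# Python refuses to guess: strings concatenate, numbers add, and the
# mixed case raises TypeError instead of being silently coerced to 0.
assert "1" + "2" == "12"   # string concatenation, not arithmetic
assert 2 + 2 == 4          # plain integer addition

try:
    "foo" + 3              # no silent coercion here
    mixed_ok = True
except TypeError:
    mixed_ok = False
assert not mixed_ok
```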

Though I understand the case for genericity, I don't really see it here.
 
[snip]
>>I don't want things to work by accident. Perhaps you're missing that this
>>'problem' is also a 'feature'; you get less 'weird' bugs due to the fact that
>>your program keeps running long after your data got coerce-mangled to death.
>>Python would tend to throw exceptions as soon as it went wrong.

>     I've never gotten 'weird' bugs from Perl taking the data type from the
> context of the operation at hand.  Not even in my early days of Perl.  I won't
> say there weren't got'chas that got me, but it was quite liberating not having
> to worry about thinking in terms of how the computer sees the data and,
> instead, having the computer see the data as /I/ see it.  

But what if that makes no sense? What if I entered 'foo' in the config
file where I needed to enter a number? Won't you get a weird bug now?

>     Furthermore, I have already gotten my first 'wierd' Python bug and it is
> wierder, by far, than anything I hit in Perl.  Even when I was debugging the
> code (love the debugger!) I couldn't understand what was going on.  What's
> more, in the context of the script, it shouldn't have happened.

> def foo():
>   print "foo"
> foo

This script won't do anything visible.

> >>> def foo():
> ...   print "foo"
> ... 
> >>> foo
> <function foo at 806ef00>

>     Sure, works nice in interactive mode.  You'd think when I left that in my
> script that Python would have tossed it's cookies.

I can see where this could be confusing, but your 'foo' there is just like
saying:

2

or 

"foo"

That won't do anything either. 'foo' is a function object.

I imagine the reason Python doesn't complain here is that it can treat
expressions as statements, and that sometimes this makes sense, such
as when there's a function call:

foo() # is an expression too

> No.  All I saw was a
> statement the parser took and the debugger was showing no call being made even
> when stepping through the code.

That's because nothing's being called, right? The expression is being
evaluated. Interactive mode gives you the string representation of the
result of the expression, but running a script won't.
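You can see that nothing gets called with a small experiment (sketch in
modern Python syntax):

```python
calls = []

def foo():
    calls.append("called")

foo        # an expression statement: evaluates to the function object, discarded
assert calls == []           # nothing was called

foo()      # the parentheses are what make it a call
assert calls == ["called"]   # now it ran
```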

>  Newbie mistake, sure, but certainly one of
> those that if we want to be strict about what goes where should have been
> caught.

I'm not claiming we want to be strict about things at all times. If
I thought that I wouldn't be using Python. I was claiming that 'guessing'
what a line of code is supposed to mean can be bad. I mean, really:

"foo1" + 1 # 1
"1foo" + 1 # 2
"1" + 1 # 2
" 1" + 1 # 2
"\n1" + 1 # 2
"1*2" + 1 # 2

Are you sure this never gives you any odd output?

>>So perhaps you're missing the important tradeoff here.

>     What, strict checking versus TMTOWTDI?

No, giving up in the face of ambiguity versus guessing. Ambiguity tends
to arise when I'm doing something wrong; if I did it right, it wouldn't
be ambiguous code. Besides, I wouldn't want to read such code even
if it did the right thing; it'd be confusing.

In Python there are tons of ways to do it too. :)

>  If we're going to have the strict
> checking why, then, don't we have typing up front instead of the half-typing
> we have now?

Because I wasn't arguing for 'strict checking'; I was arguing for not
checking until the *last moment*, and, if the operation makes no sense
then, giving up.
(unless you catch the exception, of course)
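In practice that 'check at the last moment, then give up or catch' style
looks something like this (a sketch in modern Python syntax; the config
value and the fallback default are hypothetical):

```python
def read_port(raw):
    # Convert at the last possible moment; if the text makes no sense as
    # a number, int() gives up with ValueError, which we catch here.
    try:
        return int(raw)
    except ValueError:
        return 8080  # hypothetical fallback default

assert read_port("42") == 42
assert read_port("foo") == 8080  # 'foo' in the config: no weird bug, just the default
```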

>  Python gets the initial type from context but won't switch types
> based on context?  Might as well force people back into declaring variables
> in certain types since that is implicitly what we have now.

Not in OO code:

class Foo:
    def doit(self):
        print "foo"

class Bar:
    def doit(self):
        print "bar"

foo = Foo(); bar = Bar()
foo.doit()
bar.doit()

That works, because the check at the last moment could figure it out: the
method was there, so it got executed. Now if 'doit()' were absent and the
system started guessing, we'd be in trouble, right?
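And indeed, when the method is absent, Python stops loudly rather than
guessing (sketch in modern Python syntax; Baz is a made-up class for the
failure case):

```python
class Foo:
    def doit(self):
        return "foo"

class Baz:
    pass  # no doit() method at all

assert Foo().doit() == "foo"  # the last-moment check finds the method

try:
    Baz().doit()              # no such method: Python gives up loudly
    outcome = "guessed"
except AttributeError:
    outcome = "stopped"
assert outcome == "stopped"
```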

Also not in generic code:

def add(a, b):
    return a + b

Works for integers as well as for strings, unless you pass in a string
and an integer. If you wanted to support that, you'd need to say so
explicitly:

def add(a, b):
    return int(a) + int(b)

Though this does something different when two numeric strings are passed
in (you get a number, not a concatenated string). I'd make the conversion
to integers the responsibility of the caller, myself.
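Spelled out (sketch in modern Python syntax; 'add_nums' is a name of my
own for the explicit variant):

```python
def add(a, b):
    return a + b

assert add(1, 2) == 3          # integers: arithmetic
assert add("1", "2") == "12"   # strings: concatenation

try:
    add("1", 2)                # mixed: refused, not guessed
    mixed = "guessed"
except TypeError:
    mixed = "refused"
assert mixed == "refused"

def add_nums(a, b):
    # Forcing numeric meaning: now even two strings give a number.
    return int(a) + int(b)

assert add_nums("1", 2) == 3
assert add_nums("1", "2") == 3   # 3, not "12"
```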

Also not even between numbers:

1 + 1.0   # 2.0
1L + 1    # 2L
1.0 + 1L  # 2.0

>>All right, for this particular case I suppose one could get used to
>>this kind of coercing. But in general? I'd need to *remember* it all,
>>right?

>     Nope.  You stuff a number into a scaler it is a number but can be used as
> a string.

So I don't need to know about what happens when I do "1foo" + 1?
Or "foo1" + 1? Or "foo1" - 1? Presumably I'd need to check my user
input *anyway*, right?

>  Not that hard to remember.  Much easier, in fact, that having to
> remember to declare everything from the onset or declare it each time you
> need to use it in a particular context much less having to put in checks
> in case you're trying to declare something into something which it cannot
> be morphed into.

But in Python you don't need to declare at all. You just need to explicitly
say sometimes you want something changed into something else. If you
want an integer, change your string into an integer. Do it early on if
you don't want to do it everywhere and work with integers from then on.
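Converting early, at the boundary, is the usual shape (a sketch; the
config dictionary here is hypothetical):

```python
# Hypothetical raw input: values read from a file all start life as strings.
raw_config = {"port": "8080", "retries": "3"}

# Convert once, at the boundary; everything downstream sees real integers.
config = {key: int(value) for key, value in raw_config.items()}

assert config["port"] + 1 == 8081   # plain integer arithmetic from here on
assert config["retries"] * 2 == 6
```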

>     Simply stated, main reason I don't do C, don't do Java, and dropped Pascal
> so long ago, the frustration of not being able to take data of one "type" and
> use it in a different context without jumping through 20 different loops
> depending on the particular case.  To me it isn't char, int, longint, unsigned
> or signed, strings or whatever, it is data.  Lists and hashes (and in Python,
> tuples) are just structures of data.  What "type" that data is is completely
> subjective and based on the context of the operation.  To me, a language that
> understands that has the feature, not the language that refuses to.

The funny thing is that in Python this happens, and there I'd agree with
you, but it tends to happen more for class instances than for the
built-in primitives. Instances don't have a type in the sense the
built-ins do, and they continue working as long as they support the
operations called on them. The thing is, they don't support all
operations by default, and if an operation is not supported, Python
stops. For built-in types, Python does not support adding an integer to
a string (but it could; you could make a class for this, though it'd
be a pain and overkill in this case).
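Such a class is easy enough to sketch, overkill or not (modern Python
syntax; this is a rough approximation of Perl's behaviour, whose rules for
things like "1foo" are looser than int() allows):

```python
class Scalar:
    # A toy Perl-ish scalar: '+' coerces both operands to numbers, and
    # text that doesn't look like a number silently counts as 0.
    def __init__(self, value):
        self.value = value

    @staticmethod
    def _as_number(value):
        try:
            return int(value)
        except (TypeError, ValueError):
            return 0

    def __add__(self, other):
        return self._as_number(self.value) + self._as_number(other)

assert Scalar("2") + 2 == 4     # "2" + 2, Perl-style
assert Scalar("foo") + 3 == 3   # "foo" quietly becomes 0
```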

We aren't so far apart in that sense; we just disagree about how many
operations the basic datatypes should support. I personally never had
trouble with having to call int() excessively, for instance. If you find
yourself doing that a lot, something may be odd about the design of your
code.

Note that Python does do this type of automatic conversion in some cases.
'print' can for instance print just about anything.
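In modern syntax, for instance, print renders each of its arguments as
text automatically:

```python
# print converts each of its arguments to a string automatically...
print(1, 1.5, [1, 2], {"a": 3})

# ...because str() is defined for essentially every object.
assert str(1) == "1"
assert str([1, 2]) == "[1, 2]"
```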

Note also that in your case, the only thing you're complaining about is
that Python treats numbers and strings differently; Python doesn't care much
whether your number is an integer or a float or a long integer, for instance.
It doesn't distinguish between characters and strings either. The only
addition to your type pantheon would seem to be the string/number
dichotomy and the list/tuple dichotomy, nothing else. :)

Regards,

Martijn
-- 
History of the 20th Century: WW1, WW2, WW3?
No, WWW -- Could we be going in the right direction?


