On Wed, 23 Sep 2009 12:25:32 am Masklinn wrote:
> On 22 Sep 2009, at 15:16, Steven D'Aprano wrote:
>> On Tue, 22 Sep 2009 01:05:41 pm Mathias Panzenböck wrote:
>>> I don't think this is a valid test to determine how a language is typed. Ok, C++ is more or less weakly typed (for other reasons), but I think you could write something similar in C#, too. And C# is strongly typed.
>> Weak and strong typing are a matter of degree -- there's no definitive test for "weak" vs "strong" typing, only degrees of weakness. The classic test is whether you can add strings and ints together, but of course that's only one possible test out of many.
> And it's a pretty bad one to boot: both Java and C# allow adding strings and ints (whether it's `3 + "5"` or `"5" + 3`) (in fact they allow adding *anything* to a string), but the operation is well defined: convert any non-string involved to a string (via #toString()/.ToString()) and concatenate.
I don't see why you say it's a bad test. To me, it's a good test, and Java and C# pass it. If your only criterion is that an operation is well-defined, then "weak typing" becomes meaningless: I could define addition of dicts and strings to be the sum of the dict's length and the number of hyphens in the string, and declare that {'foo':'a', 'bar':'b'} + "12-34-56-78-90" returns 6, and by your criterion my language would be strongly typed. I think that makes a mockery of the whole concept. [...]
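To make that concrete, here's a toy sketch in Python (purely illustrative -- the WeirdDict name and its behaviour are made up, no real language defines this) where the operation is perfectly well-defined and yet the typing could hardly be weaker:

class WeirdDict(dict):
    """Toy dict: adding a string to it is well-defined but absurd."""
    def __add__(self, other):
        if isinstance(other, str):
            # The "sum" is the number of keys plus the number of
            # hyphens in the string -- well-defined, but meaningless.
            return len(self) + other.count('-')
        return NotImplemented

print(WeirdDict({'foo': 'a', 'bar': 'b'}) + "12-34-56-78-90")  # prints 6

Well-definedness alone clearly isn't enough to make an operation strongly typed.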
> See above: C#, much like Java, also allows concatenating anything to strings, and implicitly converts non-strings to strings.
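Contrast that with what Python itself does at the interactive prompt (quoting from memory, but I believe this matches CPython's behaviour):

>>> "5" + 3
Traceback (most recent call last):
  ...
TypeError: cannot concatenate 'str' and 'int' objects
>>> "5" + str(3)   # explicit conversion: you must say what you mean
'53'
>>> 3 + 3.0        # but implicit numeric coercion is allowed
6.0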
That behaviour hints that (at least when it comes to strings) C# and Java are weakly typed, and that people consider them "strongly typed" is a triumph of marketing over reality.

For any typed language, you ask how the language deals with operations on mixed types. If you have to explicitly convert the values to a common type, the language is strongly typed. If the language does the conversion for you implicitly, it is weakly typed. If, as is usually the case, the language does some conversions automatically and requires you to do others explicitly, then the language combines some strong typing with some weak typing, and whether you call it strongly or weakly typed becomes a matter of degree.

If we insist on sorting every language into a binary "weak" or "strong" bucket, then we need a heuristic to decide. The heuristic in common use is to allow implicit numeric coercions in strongly typed languages, but usually not implicit string-to-int conversions.

This heuristic is not arbitrary. Automatically converting ints to floats is mathematically reasonable, because we consider e.g. 3 and 3.0 to be the same number. But automatically converting strings to ints is somewhat dubious. Although the intent of "1"+1 may be obvious, what is the intent of "foo"+1? Should "foo" convert to 0, or should "foo" be treated as equivalent to a 24-bit integer, or what? There's no objective reason for preferring one behaviour over another. The same applies to "1"&123 -- should we treat the int 123 as the string "123", or as the string "\01\02\03", or what? Of course a language designer is free to arbitrarily choose one behaviour over the other, but that is weak typing.

"Strong typing" carries connotations of strength and security, and "weak typing" sounds soft and risky, but neither connotation is necessarily correct. Perl is weakly typed but it's very hard to make it crash, while C++ is strongly typed but easy to make dump core. Neither strategy is objectively better than the other in *all* cases, although in my opinion strong typing is probably *usually* better.

-- 
Steven D'Aprano