
Mike Meyer writes:
> On Thu, 24 Sep 2009 20:06:22 -0400 Gerald Britton <gerald.britton@gmail.com> wrote:
>> I think that the idea that there is a continuum from weak typing to strong typing is useful.
> I think it's fundamentally broken, at least as badly as the notion of a political spectrum from liberal to conservative. The problem with both of those is that there's more than one axis involved.
The notion that you can't order multidimensional sets (where each dimension is ordered) is simply wrong. You do it every day when you decide to have "a Big Mac with coffee" instead of "a Quarter Pounder with a vanilla shake". It is always possible to approximately reduce a multidimensional set to a one-dimensional "spectrum" by use of a mechanical procedure called principal components analysis (the "direction" of the spectrum is the principal eigenvector, in fact). This procedure also provides measures of the quality of the approximation (e.g., the ratio of the principal eigenvalue to the second eigenvalue). The question here is then simply "what is the quality of the approximation, and are there structural shifts to account for?"
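For concreteness, here is a minimal sketch of that procedure in Python with NumPy; the feature scores are invented for illustration, not measurements of real languages:

    import numpy as np

    # Invented scores for illustration only (rows: languages, columns:
    # typing-related feature axes such as those listed below).
    X = np.array([[0.9, 0.8, 0.7],
                  [0.2, 0.3, 0.1],
                  [0.6, 0.5, 0.9],
                  [0.1, 0.2, 0.4]])

    Xc = X - X.mean(axis=0)                 # center the data
    cov = np.cov(Xc, rowvar=False)          # covariance of the axes
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

    spectrum = Xc @ eigvecs[:, -1]       # project onto the principal eigenvector
    quality = eigvals[-1] / eigvals[-2]  # >> 1 means a 1-D scale fits well

    print(spectrum)  # each language's position on the one-dimensional scale
    print(quality)   # how much better the first axis is than the second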
> Just as people can have a liberal position on one issue while having a conservative position on another, languages can have some features that give them "weak typing" and others that give them "strong typing".
They can take such positions, but historically the correlations were generally high. What has happened in politics in many countries is that there has been a structural realignment such that the component axis traditionally labeled "liberal to conservative" is no longer so much stronger than other components of variation. That doesn't mean that the traditional axis was never useful, nor that a new principal axis hasn't been established (although I don't think it has been established yet in American politics).
> Axes so far: declarations: yes/no/optional. Variables have types: yes/no/optional. Implicit conversion: yes/no, with a different answer possible for every operator and tuple of operand types in the language.
My personal resolution of strong vs. weak typing is that it's useful to help explain which languages I like (strongly typed ones) vs. those I don't. For this purpose, only the implicit-conversion axis matters much. Whether variables have types, and whether declarations are needed, are implementation details related to when type checking takes place (and thus to compile-time vs. run-time efficiency) and to the tradeoff between translator complexity and the burden on the developer to specify things. There are also issues of discoverability and readability which may make it desirable to be somewhat explicit even though a high degree of translator complexity is acceptable to me.
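A quick illustration of that axis in Python: the answer really is different for different operators and operand types (the examples are mine):

    print(1 + 2.5)    # 3.5 -- int embeds naturally into float
    print(True + 1)   # 2   -- bool is a subclass of int, another embedding
    print("ab" * 3)   # 'ababab' -- str * int is defined, by deliberate choice
    try:
        "1" + 1       # but there is no implicit str/int conversion...
    except TypeError as e:
        print(e)      # ...Python refuses to guess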
>> It's probably possible to devise some sort of metric to be able to place a given language on the weak-strong scale.
> I don't think it's possible, because your scale is really a space.
It's always possible. Proving that is why Georg Cantor is so famous. The question is whether it's compatible with "what people think", and the answer is "if you must cover all corner cases, no" (ditto, Kenneth Arrow). Can you achieve something usable? I don't know, which is why I'd like to see Mathias's professor's slide!
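In the spirit of "it's always possible": any multidimensional feature space can be given a total order mechanically, e.g. lexicographically; whether that order matches intuition is the real question. A toy sketch (the scores are made up):

    # Hypothetical (declarations, typed variables, implicit conversion) scores.
    languages = {"A": (1, 1, 0.2),
                 "B": (0, 1, 0.9),
                 "C": (1, 0, 0.5)}

    # Lexicographic order on the tuples is a total order on the "space";
    # nothing guarantees it agrees with anyone's sense of weak vs. strong.
    for name, axes in sorted(languages.items(), key=lambda kv: kv[1]):
        print(name, axes)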
>> Where would Python fall? Probably towards the weak end. Is that bad? No way!
> Like I said, it's usually considered to be near the strong end, because it does few implicit conversions.
+1. Furthermore, Python's built-in implicit conversions (with the exception of the nearly universal conversion to Boolean, which is a special case anyway) are mostly natural embeddings. Even in cases like "range(10.0)", Python refuses to guess.
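Both cases are easy to demonstrate at the interpreter (my examples):

    print(bool([]), bool([0]))  # False True -- conversion to bool is universal
    try:
        range(10.0)             # no silent float -> int truncation
    except TypeError as e:
        print(e)                # Python refuses to guess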