[Python-Dev] re: syntax - "Aren't tuples redundant?"

Tim Peters tim_one@email.msn.com
Sat, 5 Feb 2000 03:01:34 -0500


[Greg Wilson]
> (Hope no-one minds me keeping this thread alive --- as I said in my first
> reply to Tim Peters, there's either something very fundamental here, or a
> "just-so" story...)

Fundamental but not obvious, and possibly a matter of taste.  I can only
repeat myself at this point, but I'll try minor rewording <wink>:  I write
my code deliberately to follow the guidelines I mentioned (tuples for fixed
heterogeneous products, lists for indefinite homogeneous sequences).  Perhaps
I see it that way because I love Haskell too, where those "guidelines" are
absolute requirements (btw, is Haskell being silly here too in your view?).
In Python, I find that following them voluntarily is a truly effective aid
to both reasoning and clarity.  Give it a try!
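For concreteness, a toy example of the style I mean (the data here is
invented, of course):

    # A tuple as a fixed heterogeneous product:  each position has its
    # own meaning, like a record with anonymous fields.
    record = ("Guido", 1956, "Amsterdam")   # (name, birth_year, city)
    name, year, city = record               # unpack by position

    # A list as an indefinite homogeneous sequence:  any number of
    # items, all alike, iterated over rather than unpacked.
    years = [1956, 1969, 1991]
    years.append(2000)
    for y in years:
        print y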

The distinction between ints and floats is much more a "just so" story to
me:  your students never questioned it because their previous languages
(Fortran and C++ and ...) told them the same story.  Now they suck on it for
comfort <wink>.  But, e.g., Perl got along fine for years without a distinct
"int" type, and added one (well, added a funky "use int" pragma) purely for
optimization.  At the language level there's really little sense to this
distinction -- it's "play nice with the guts of the machine" cruft.

Now given Python's use to script various C interfaces more or less directly,
I'd actually be loath to see Python give up the distinction entirely.  But,
if you think about it *hard* (accept my "start from ground zero"
invitation), I expect you'll find there's far less justification for it than
you may currently believe.  Heck, floating point is even faster than ints on
some platforms <wink>.

> ...
>   If tuples didn't already exist, would anyone ask for them
>   to be added to the language today?

I probably would, because I grew to like the distinction so much in Haskell,
and would *expect* the Haskell benefits to carry over to Python as well.

Note that I've never made the "dict key" argument here, because I don't
think it's fundamental.  However, if you hate tuples, you're going to have to
come up with a reasonable alternative (if that's the deepest use you can see
for them now, fine, but then at least address it for real *at* that level ...).
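For the record, the argument goes like this (a toy example):

    # Tuples are immutable, hence hashable, hence usable as dict keys;
    # lists are mutable and so can't be hashed.
    board = {}
    board[(0, 0)] = "X"        # fine:  a tuple key
    try:
        board[[0, 0]] = "O"    # a list key raises TypeError
    except TypeError:
        print "lists can't be dict keys"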

> Show them [1, "two"] and they (a) understand it, and (b) think
> it's cool; show them (1, "two") as well and they become confused.

So don't show people [1, "two"] at first <0.5 wink>.

> ...
> I've never had any trouble explaining int vs. float to students at any
> level; I've also never had any trouble explaining int vs. long (memory
> vs. accuracy).

That last tradeoff is an artifact of the current implementation; there's no
fundamental reason for this tension.  Python already has different concrete
implementations of a single "integer" interface, and essentially the only
things needed to integrate int and long fully are changing the literal
parsers to ignore "L", and changing the guts of the "if (overflow) {}" bits
of intobject.c to return a long instead of raising an exception (a nice
refinement would also be to change the guts of longobject.c to return an int
for "small" longs).  Note that, e.g., high-end HP calculators use about a
dozen(!) different internal representations for their one visible "number"
type (to save precious space), and users aren't even aware of this.  It's an
old implementation trick.
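
To make that concrete, here's a rough Python-level sketch of the intended
overflow behavior; the real change would live in intobject.c, and the
function name here is invented purely for illustration:

    import sys

    def int_add(x, y):
        # Stand-in for the "if (overflow) {}" branch in intobject.c:
        # on overflow, promote to long instead of raising.
        result = long(x) + long(y)
        if -sys.maxint - 1 <= result <= sys.maxint:
            return int(result)     # still fits in a machine int (the
                                   # longobject.c "refinement" above)
        return result              # silently promote to long

    # Today sys.maxint + 1 raises OverflowError; here it promotes:
    print int_add(sys.maxint, 1)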

and-a-good-one-ly y'rs  - tim