RE: [Python-Dev] re: syntax - "Aren't tuples redundant?"

I'm fwd'ing this back to the list, because disagreement is more valuable to share than agreement.

The C++, Fortran, Pascal or Java programmer can't *spell* the list [1, "two"] in their languages without playing casting tricks. Of course the elements are of different types, and for that very reason it's better to use a tuple here instead (especially if it's always of length two!). "Different data structures for different purposes" is as Pythonic as "different syntax for different purposes", and paying attention to this can improve your (that's the generic "your") Python programming life. I don't care if it's not immediately obvious to new users (or even to you <wink>).

Start from ground zero and try to explain why Python has both ints and floats: there is no *obvious* reason (even less so for having both ints and longs, btw). Python wouldn't fall apart without tuples, but I'd miss them a lot. E.g., in another vein, in an imperative language, using immutable objects when possible can greatly aid reasoning about code ("OK, they pass a tuple here, so I don't have to worry at all about the callee mutating it").

Python itself violates my "guidelines" in using a tuple to catch a variable number of arguments, & I believe that's a minor design flaw (it should use a list for this instead -- although you can't satisfy both "homogeneous" & "fixed length" at the same time here).

WRT Scheme, I don't believe it's a reasonable choice to teach newbies unless they're CompSci majors; it would certainly be better if they had a language that didn't need to be "layered". DrScheme effectively *did* "re-design their language" by introducing subsets. Whether this is successful for Matthias you'll have to argue with him; I don't see cause to believe SP/k is relevant (nor is the "subset" form of Fortran 77 -- it was aiming at an entirely different thing).
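A minimal sketch of the guideline Tim describes (the example data is invented for illustration):

    # tuple: fixed-length, heterogeneous "record" -- position carries meaning
    birthday = (1956, 1, 31)            # (year, month, day): always length 3
    year, month, day = birthday         # unpacked as a unit

    # list: indefinite-length, homogeneous sequence -- every element plays
    # the same role, and growing it is natural
    temperatures = [17.4, 18.1, 16.9]
    temperatures.append(19.2)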

(Hope no-one minds me keeping this thread alive --- as I said in my first reply to Tim Peters, there's either something very fundamental here, or a "just-so" story...)
Greg Wilson wrote: The fact that their current language doesn't allow this is irrelevant to the argument. Show them [1, "two"] and they (a) understand it, and (b) think it's cool; show them (1, "two") as well and they become confused.
Greg Wilson wrote: But *why* is it better? Or to put it another way: If tuples didn't already exist, would anyone ask for them to be added to the language today?
Greg Wilson wrote: Analogic reasoning makes me nervous, as it is most often used to transfuse legitimacy from the defensible to the suspect.
Greg Wilson wrote: I've never had any trouble explaining int vs. float to students at any level; I've also never had any trouble explaining int vs. long (memory vs. accuracy). Thanks for your reply, Greg

gvwilson@nevex.com wrote:
(Hope no-one minds me keeping this thread alive [...])
Ditto.

GV> Show them [1, "two"] and they (a) understand it, and (b) think
GV> it's cool; show them (1, "two") as well and they become confused.

Because they mean the same thing, I suppose?

GV> If tuples didn't already exist, would anyone ask for them
GV> to be added to the language today?

Why indeed? They are more space-efficient, and they are immutable, but those are both purely technical reasons. The first reason is likely to become less important (silicon gets faster); the second one *could* be solved by forcing keys to have a refcount of 1 -- this means copying when needed, both on creation and on access. That's pretty awkward; copy-on-write would help a lot, but I doubt that Python can be made to do that (tuples neatly prevent circular immutable structures, btw).

GV> I've never had any trouble explaining int vs. float to students at

Because ints and floats differ in meaning?

GV> any level; I've also never had any trouble explaining int vs. long
GV> (memory vs. accuracy).

That's interesting. Tuples vs. lists are a similar tradeoff, though both memory savings and immutability are CS-type issues, whereas non-programmers are more likely to consider accuracy a meaningful tradeoff?

-- Jean-Claude
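To make the immutability point concrete (an illustration, not part of the original exchange): dictionaries already depend on keys that can never change after being hashed, which is exactly what the refcount-1/copy-on-write machinery above would have to simulate for lists:

    d = {}
    d[(1, "two")] = "ok"      # a tuple hashes once and stays valid forever
    d[[1, "two"]] = "boom"    # raises TypeError: a list could mutate after
                              # hashing and corrupt the table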

Hi, Jean-Claude; thanks for your mail.
Redundancy seems to confuse people (did someone say "Perl" or "PL/I"?)
Agreed --- I could understand having tuples as an internal data structure, but do not understand why they are exposed to users. If Python had been a type-checked, type-inferred language from the beginning, I guess I could see it... As for the immutability:
GV> I've never had any trouble explaining int vs. float to students at

Because ints and floats differ in meaning?
People are taught "whole numbers" vs. "fractions" at an early age.
I just show them the range of values that can be represented in 8, 16, 32, or 64 bits (for ints); 'float' vs. 'double' follows naturally from that. Again, I've never had any trouble with this one... Interestingly, I've also never had trouble with strings being immutable. I point out to people that the number 123 isn't mutable, and that supposedly-mutable strings in other languages are really allocating/deallocating memory behind your back; everybody nods, and we carry on. Greg
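Greg's classroom demonstration, in code (the last line raises; the rest run fine):

    n = 123            # nobody asks to change 123 in place
    s = "abc"
    t = s + "def"      # "modifying" a string really allocates a new one
    s[0] = "x"         # raises TypeError: strings don't support item
                       # assignment, any more than 123 does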

Greg Wilson wrote:
Because ints and floats differ in meaning? People are taught "whole numbers" vs. "fractions" at an early age.
sure, but are they taught that what looks like fractions are treated as integers, and that small integers don't automatically become large integers when necessary? ;-)
interesting indeed. seems to me as if the best way to avoid list/tuple confusion is to start by introducing the basic types (numbers, strings, tuples, which are all immutable), *before* explaining that python also supports mutable container types (lists, dictionaries, instances, etc)? </F>

[Greg Wilson]
Fundamental but not obvious, and possibly a matter of taste. I can only repeat myself at this point, but I'll try minor rewording <wink>: I write my code deliberately to follow the guidelines I mentioned (tuples for fixed heterogeneous products, lists for indefinite homogeneous sequences). Perhaps I see it that way because I love Haskell too, where those "guidelines" are absolute requirements (btw, is Haskell being silly here too in your view?). In Python, I find that following them voluntarily is a truly effective aid to both reasoning and clarity. Give it a try!

The distinction between ints and floats is much more a "just so" story to me: your students never questioned it because their previous languages (Fortran and C++ and ...) told them the same story. Now they suck on it for comfort <wink>. But, e.g., Perl got along fine for years without a distinct "int" type, and added one (well, added a funky "use int" pragma) purely for optimization. At the language level there's really little sense to this distinction -- it's "play nice with the guts of the machine" cruft.

Now, given Python's use to script various C interfaces more or less directly, I'd actually be loath to see Python give up the distinction entirely. But if you think about it *hard* (accept my "start from ground zero" invitation), I expect you'll find there's far less justification for it than you may currently believe. Heck, floating point is even faster than ints on some platforms <wink>.
I probably would, because I grew to like the distinction so much in Haskell, and would *expect* the Haskell benefits to carry over to Python as well. Note that I've never made the "dict key" argument here, because I don't think it's fundamental. However, if you hate tuples you're going to have to come up with a reasonable alternative (if that's the deepest use you can see for them now, fine, then at least address it for real *at* that level ...).
Show them [1, "two"] and they (a) understand it, and (b) think it's cool; show them (1, "two") as well and they become confused.
So don't show people [1, "two"] at first <0.5 wink>.
That last tradeoff is an artifact of the current implementation; there's no fundamental reason for this tension. Python already has different concrete implementations of a single "integer" interface, and essentially the only things needed to integrate int and long fully are changing the literal parsers to ignore "L", and changing the guts of the "if (overflow) {}" bits of intobject.c to return a long instead of raising an exception (a nice refinement would be also to change the guts of longobject.c to return an int for "small" longs).

Note that, e.g., high-end HP calculators use about a dozen(!) different internal representations for their one visible "number" type (to save precious space), and users aren't even aware of this. It's an old implementation trick. and-a-good-one-ly y'rs - tim
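A Python-level model of the semantics Tim sketches (the real change lives in intobject.c and longobject.c; the function name here is invented, and long()/sys.maxint are 1.x-era spellings):

    import sys

    def promoting_mul(a, b):
        # the "if (overflow) ..." path returns a long instead of raising
        exact = long(a) * long(b)        # compute exactly as unbounded longs
        if -sys.maxint - 1 <= exact <= sys.maxint:
            return int(exact)            # the refinement: small results come
                                         # back as plain machine ints
        return exact                     # too big for a machine int: a long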

[Question for Guido at the bottom of the message...]
I'm a big fan of Haskell; if Python enforced the distinction you've made, I would probably never have questioned it. However:

1) As long as it's just a convention that only a handful of people strictly conform to, it's a pedagogic wart --- every Python book or tutorial I've read spends at least a paragraph or two justifying tuples' existence.

2) I tried in my first class at LANL to say "tuples are like records". One guy put his hand up and said, "Records that you access using integer indices instead of names? *laugh* Well, it's good to see that Fortran-66 is alive and well!" *general laughter* The point is a serious one --- Pascal taught us to give meaningful names to the fields in structures, and then tuples take us back to "oh, I think the day of the month is in the fourth field --- or is it the fifth?"
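One sketch of a workaround for the field-naming complaint (the names are invented; Python offers no record syntax here):

    YEAR, MONTH, DAY = 0, 1, 2        # give the positions Pascal-style names
    date = (1999, 12, 31)
    day_of_month = date[DAY]          # reads as intent, not "field four or five?"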
That's part of it --- but again, I think the Logo community found that novice non-programmers understood "whole numbers vs. fractions" without any trouble. Don't remember if rounding (assigning float to int) was a problem or not; I'll ask Brian H.
If tuples didn't already exist, would anyone ask for them to be added to the language today?
Given enforced typing (fixed-length heterogeneous vs. variable-length homogeneous), I'd agree. Guido, if you're still on this thread, can you please tell us about the history here --- were lists and tuples both put into the language with this kind of distinction in mind? Thanks for your patience, Greg

Greg Wilson wrote:
When this comes up from newbies on the list (which is *much* less often than a number of other so-called warts), I explain the difference, and then say, "If you don't know which to use, use a list. One day it will become obvious."

Now, experience (not a priori reasoning) tells me that this is safe:

    x, y = tpl

and this is not:

    x, y = lst

There's not much use in arguing about it, because both require trust in the programmer. It's just that in the first case you *can* trust the programmer, and in the second you *can't*. Even when the programmer is yourself.

The fact that you don't like "zen" arguments doesn't mean you have to make them. Don't defend it at all. Just point out that most Python programmers consider tuples very valuable and move on. In general, it's very hard to "defend" Python on theoretical grounds. The newsgroup is littered with posts from OO cultists berating Python for its primitive object model. They either move on, or shut up once they realize that Python's object model is a lot cleaner *in practice* than the theoretically correct crap they cut their teeth on. (What astounds me is the number of functional programmers who are sure that Python is modeled after a functional language.)

- Gordon
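Gordon's trust argument, made concrete (variable names invented):

    point = (3, 4)        # fixed size by construction
    x, y = point          # always safe: a 2-tuple stays a 2-tuple

    values = [3, 4]
    values.append(5)      # any code holding a reference may have grown it
    x, y = values         # raises ValueError: the list is no longer a pair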

Gordon McMillan <gmcm@hypernet.com>:
Speaking as a functional programmer, it's always been quite clear to me that Python was *not* modeled after a functional language. OTOH, it resembles one sometimes because there are certain functional notions that *must* be covered by any language with a sufficiently broad expressive range -- and Guido was certainly trying for broad expressive range. -- <a href="http://www.tuxedo.org/~esr">Eric S. Raymond</a> "A system of licensing and registration is the perfect device to deny gun ownership to the bourgeoisie." -- Vladimir Ilyich Lenin

From: Gordon McMillan <gmcm@hypernet.com>
Doesn't really work with argumentative, opinionated and poorly informed students, who are the majority in the computer field =).
In general, it's very hard to "defend" Python on theoretical grounds.
The point is not theoretical grounds but, at least in my case, backing up the claim that Python has an elegant, spare design. Tuples show up as challenging that claim, as do some of the other warts on AMK's pages. I expect some of those to go away naturally; e.g.,

    apply(Base.__init__, (self,) + args, kw)

will mutate naturally to:

    Base.__init__(self, *args, **kw)

but the tuples will stay. I'm just looking for a better pedagogical trick, not arguing against them on theoretical grounds. --david
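The migration David anticipates, as a minimal runnable sketch (class and argument names invented; 1.x-era print statement):

    class Base:
        def __init__(self, *args, **kw):
            print "Base got", args, kw

    class Derived(Base):
        def __init__(self, *args, **kw):
            # old spelling, via the apply() builtin:
            apply(Base.__init__, (self,) + args, kw)
            # future spelling, same effect, once extended call syntax lands:
            # Base.__init__(self, *args, **kw)

    Derived(1, "two", colour="red")   # prints: Base got (1, 'two') {'colour': 'red'}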

Tim Peters writes:
Not quite *that* simple; you'd also have to change various bits of the core that currently do PyInt_Check(whatever) to also accept longs. I just sent off a patch to make long*sequence legal, but there's still slicing and indexing to take care of: list[0L:5L] isn't currently legal. The Solaris large-file patch that makes .tell() return a long is probably going to turn up more errors like this.

Making ints and long ints interchangeable would be an *excellent* idea. Possible for 1.6? Maybe, if GvR indicates it should be a priority. 'grep -l PyInt_Check' over the Python source code lists lots of Modules, most of the files in Objects/, and 5 files in Python/ (ceval, marshal, pythonrun, structmember, and traceback). -- A.M. Kuchling http://starship.python.net/crew/amk/ "Doctor, we did good, didn't we?" "Perhaps. Time will tell. Always does." -- Ace and the Doctor, in Ben Aaronovitch's _Remembrance of the Daleks_

On Sat, 5 Feb 2000, Andrew Kuchling wrote:
Priority or not, it won't happen if the patch is not available :-) As we say in Apache-land, "+1 on concept" Cheers, -g ps. Python-land translation: I agree with the concept ... go for it -- Greg Stein, http://www.lyra.org/

[Tim, blissfully minimizing the difficulties of erasing the int/long distinction] [Andrew Kuchling]
Not quite *that* simple; you'd also have to change various bits of the core that currently do PyInt_Check(whatever) to also accept longs.
I had in mind a vague scheme to cheat. I have since recovered from the delusions.
Note that MS has already decided to leave sizeof(long) == 4 in 64-bit Windows, but sizeof(void*) will jump to 8. Python is remarkably free of dubious assumptions here, but, as you point out for Solaris, "large" files are likely going to cause problems on an increasing # of platforms.
The idea is very old, and has come up several times, but I don't recall Guido ever saying anything about it. So it's safe to conclude it *hasn't* been a priority for him. I can't channel him on this issue. I'm personally in favor of merging them, but along with Konrad (Hinsen) am also in favor of doing a lot more "numeric merging" in Python 3000. It's really unclear to me whether the distinction in Python1 can be erased without breaking programs -- but can't make time to think about it now, either. Sorry!

I'm not sure I want to make this a priority given the accelerated 1.6 schedule, but I certainly think this is the way of the future, and I don't expect many backwards compatibility problems... --Guido van Rossum (home page: http://www.python.org/~guido/)

[Guido]
I think more than one issue is on the table here:

1. Whether internal implementation code that currently relies on PyInt_Check should be liberalized to allow "int-sized longs" too.

2. Whether Python language semantics should be changed, so that e.g. int * int never overflows, but returns a long when appropriate.

I was mostly talking about #2, but I think Andrew's enthusiastic agreement was really wrt #1. You may also believe I was talking about #1. Regardless, *just* tackling #1 at this time would be a good foundation for later decisions about #2, and has real value on its own -- with, I agree, few backward-compatibility implications, and likely none serious (people would no longer get exceptions on stuff like [42]*42L). Besides, I'm sure I heard Andrew volunteer to complete all the work by Wednesday <wink>.
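The two issues, distinguished by example (1.x-era long literals; the behavior described is the proposal, not the then-current interpreter):

    # Issue 1 -- C-level liberalization: let "int-sized" longs pass the
    # PyInt_Check-style gates, so expressions like this stop raising:
    [42] * 42L                 # 42L fits comfortably in a machine int

    # Issue 2 -- language-level change: arithmetic that overflows a machine
    # int returns a long instead of raising OverflowError:
    import sys
    big = sys.maxint + 1       # would quietly yield a long under issue 2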

participants (10)
- Andrew Kuchling
- David Ascher
- Eric S. Raymond
- Fredrik Lundh
- Gordon McMillan
- Greg Stein
- Guido van Rossum
- gvwilson@nevex.com
- Jean-Claude Wippler
- Tim Peters