Hi.
[Mark Hammond]
> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
>
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live in the bleeding edge and take some pain that will
> cause.
>
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I imagined, explicit syntax did not catch on and would require a
lot of discussion.]
[GvR]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> >
> > <frag>
> > y = 3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> >
> > print f()
> > </frag>
> >
> > # prints 3.
> >
> > Is that confusing for users? Maybe they will more naturally expect 2
> > as the outcome (given nested scopes).
>
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
>
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
>
Yes, this may be easy to implement, but more confusing situations can arise:
<frag>
y = 3
def f():
    y = 9
    exec "y=2"
    def g():
        return y
    return y, g()

print f()
</frag>
What should this print? Unlike class def scopes, this situation admits
no canonical solution.
or
<frag>
def f():
    from foo import *
    def g():
        return y
    return g()

print f()
</frag>
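[For comparison, in today's Python exec is a function and cannot rebind an
enclosing function's locals, so the first ambiguity is resolved by the exec'd
assignment simply not sticking. A sketch of the modern behavior, not of what
2.x did:]

```python
y = 3

def f():
    y = 9
    exec("y = 2")   # runs against a snapshot of f's locals; the rebinding is lost
    def g():
        return y    # closes over f's local y, not the module-level y
    return y, g()

print(f())  # (9, 9)
```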
[Mark Hammond]
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
>
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
> appears worse than the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
>
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I won't say anything about pulling nested scopes (I don't think my opinion
can change things in this respect),
but I must insist that, without explicit syntax, raising the bar IMO
carries too high an implementation cost (both performance and complexity)
or creates confusion.
[Andrew Kuchling]
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
>
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
>
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issue warnings today and implement
nested scopes issuing errors tomorrow. But this is simply a statement
of principle and of the impression it gives.
IMO 'import *' in an inner scope should end up being an error;
I'm not sure about 'exec'.
We will need a final BDFL statement.
regards, Samuele Pedroni.
Aahz:
> It's not particularly convenient for me to try out 2.2a1, so I'm just
> going by what's written. One little hole that I don't see an answer to
> is what happens when you do this:
>
> class C(object):
>     x = 0
>     def foo(cls):
>         cls.x += 1
>     foo = classmethod(foo)
>
> C.foo()
Okay, after thinking about this a bit, I think that if the above code
requires __dynamic__=1 to work, then the default for __dynamic__ should
be changed. I don't find the arguments about changing __class__ to be
particularly persuasive, but I think the above code *is* closely related
to standard Python idioms that should work by default.
+1 on changing __dynamic__ or at least enabling some kind of class
variable mutability by default.
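[For reference, the idiom under discussion, written with the decorator
syntax that arrived later, works by default in today's Python. A minimal
sketch:]

```python
class C(object):
    x = 0

    @classmethod
    def foo(cls):
        # Rebinding through cls mutates the class attribute.
        cls.x += 1

C.foo()
C.foo()
print(C.x)  # 2
```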
--
--- Aahz (@pobox.com)
Hugs and backrubs -- I break Rule 6 <*> http://www.rahul.net/aahz/
Androgynous poly kinky vanilla queer het Pythonista
I don't really mind a person having the last whine, but I do mind someone
else having the last self-righteous whine.
I just came back from teaching Python on a cruise ship attended by
mostly Perl folks. It was an interesting experience teaching Python
with Randal in the audience =).
One issue that came up is the lack of uniformity in what things
are statements, methods, and built-in functions. In the long-term
version of the type/class unification work, things like int() become
class constructors, which makes beautiful sense. The fact that int()
calls __int__ methods fits nicely with type conversion mechanisms.
However, there are a few things which still seem oddballish:
copy.copy(), copy.deepcopy(), len()
These basically call magic methods of their arguments (whether tp_slots
or __methods__), and many languages implement them strictly as object
methods.
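[To illustrate the dispatch being described -- the class and names here are
invented for the example:]

```python
import copy

class Bag(object):
    def __init__(self, items):
        self.items = items

    def __len__(self):
        # len(bag) dispatches here.
        return len(self.items)

    def __copy__(self):
        # copy.copy(bag) dispatches here.
        return Bag(list(self.items))

b = Bag([1, 2, 3])
print(len(b))   # 3
c = copy.copy(b)
assert c is not b and c.items == b.items
```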
str() and repr() are a little weird -- I'm not sure which one will gain
'class constructor' status when the type/class unification work is done
-- from the definition I'd say repr() should win, but the name is quite
unfortunate given its new role... Guido, thoughts?
Summary: Should copy, deepcopy and len be added as object methods? And
if yes, how? Not all objects are copyable or measurable. Interfaces
seem the right way to do this, but interfaces aren't in the plans so far
that I know...
What about a stringification method?
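[The str()/repr() split can be shown with the usual pair of magic methods --
a toy class, assumed for the example:]

```python
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        # repr() dispatches here; conventionally unambiguous.
        return "Point(%r, %r)" % (self.x, self.y)

    def __str__(self):
        # str() dispatches here; conventionally readable.
        return "(%s, %s)" % (self.x, self.y)

p = Point(1, 2)
print(repr(p))  # Point(1, 2)
print(str(p))   # (1, 2)
```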
--david
Thomas --
have you been following the saga of patch #449054, part of the PEP 250
implementation? If I understand things correctly, this patch only
partly implements PEP 250 -- the rest is waiting on a change to
bdist_wininst. What's the status of that change? Is it going to make it in?
Please see
http://sourceforge.net/tracker/?func=detail&atid=305470&aid=449054&group_id…
if you haven't already...
Greg
--
Greg Ward - geek-at-large gward(a)python.net
http://starship.python.net/~gward/
Just because you're paranoid doesn't mean they *aren't* out to get you.
Barry & I are planning to release Python 2.2a2 next Wednesday (August
19), according to the (updated) schedule in PEP 251.
Tomorrow (Saturday, August 15) I plan to fork off a short-lived
"release branch", from which the release will be done. Nobody but
Barry & I should be checking things in on the release branch. We'll
selectively merge the trunk into the branch if needed. We'll merge
the branch back to the trunk after the release.
Anybody who wants something to show up in 2.2a2 would be wise to check
it in today or before dawn tomorrow. Please mail me if you need me to
wait for something specific.
PS Martin: I currently have two failing tests!
- test_b1 crashes (in the "float" subtest):
Traceback (most recent call last):
File "../Lib/test/test_b1.py", line 259, in ?
if float(unicode(" \u0663.\u0661\u0664 ")) != 3.14:
ValueError: invalid literal for float(): \u0663.\u0661\u0664
- test_format.py fails:
u'abc %\\\u3000' % 1 works? ... no
Unexpected exceptions.ValueError : "unsupported format character '\\' (0x5c) at index 5"
I expect these are casualties of the --disable-unicode checkins.
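[For reference, the failing subtest exercises float() on Unicode decimal
digits -- Arabic-Indic digits spelling 3.14. In a Unicode-enabled build the
conversion succeeds:]

```python
s = u" \u0663.\u0661\u0664 "   # Arabic-Indic digits for 3.14, with whitespace
# float() maps Unicode decimal digits to their ASCII equivalents first.
print(float(s))  # 3.14
```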
--Guido van Rossum (home page: http://www.python.org/~guido/)
[Guido]
> + + A new command line option, -D<arg>, is added to control run-time
> + warnings for the use of classic division.
> ...
> + Using -Dwarn issues a run-time warning about all uses of classic
> + division for int, long, float and complex arguments.
I'm unclear on why we warn about classic division when a float or complex is
involved:
C:\Code\python\PCbuild>python -Dwarn
Python 2.2a2+ (#22, Aug 31 2001, 14:36:57) [MSC 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 2./3.
__main__:1: DeprecationWarning: classic float division
0.66666666666666663
>>>
Given that this is going to return the same result even in Python 3.0, what
is it warning me about? That is, as an end user, I'm being warned about
behavior I can't do anything about, and behavior that Python isn't going to
change anyway.
Different issue: I would like to add OverflowError when coercion of long to
float yields a senseless result. PEP 238 mentions this as a possibility,
and
>>> from __future__ import division
>>> x = 1L << 3000
>>> x/(2*x)
-1.#IND
>>>
really sucks. For that matter,
>>> float(1L << 3000)
1.#INF
>>>
has always sucked; future division just makes it suck harder <wink>.
Any objections to raising OverflowError here?
Note this is a bit painful for a bogus reason: PyFloat_AsDouble returns a
double, and nothing in the codebase now checks it for a -1.0 error return
(despite that it can already produce one). So half the effort would be
repairing code currently ignoring the possibility of error.
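[This is in fact the behavior Tim proposes, and what current CPython does:
converting an integer outside double range raises OverflowError instead of
yielding an infinity:]

```python
huge = 1 << 3000   # far beyond the ~1.8e308 range of a C double

try:
    float(huge)
except OverflowError as e:
    print("OverflowError:", e)
```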
Third issue: I don't see a *good* reason for the future-division
x/(2*x)
above not to return 0.5. long_true_divide could be made smart enough to do
that (more generally, to return a good float approximation whenever the true
result is representable as a float).
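[The smarter long_true_divide Tim sketches can be approximated by shifting
both operands down by a common power of two until each fits in a double;
current CPython's int true division does essentially this, with proper
rounding. A rough sketch, not the real algorithm:]

```python
def approx_true_div(a, b):
    # Shift both operands right by the same amount so each fits
    # comfortably in a double's 53-bit significand; the ratio is
    # (approximately) preserved.  Truncation can cost precision in
    # the general case -- this is only a sketch of the idea.
    excess = max(a.bit_length(), b.bit_length()) - 53
    if excess > 0:
        a >>= excess
        b >>= excess
    return a / b

x = 1 << 3000
print(approx_true_div(x, 2 * x))  # 0.5
```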
When I was updating the NEWS file this morning, I realized that we've
already added a truckload of nifty stuff in the 9 days since 2.2a2 was
released: int overflows return longs, classic division warnings,
subclassing builtins, super, getset, two fixes to float literals, the
new GC API, and PyString_FromFormat[V].
I expect that I'll be concentrating on documentation for the
type/class unification next. While writing the first piece of
documentation, I realized that, sadly, several more advanced things
aren't available in 2.2a2 yet.
The next alpha is planned for Sept. 19, almost three weeks off still.
Does anybody object to an extra release, 2.2a3,
around Sept. 5? We could do 2.2a4 on Sept. 19, or a week later if the
schedule gets too crowded.
--Guido van Rossum (home page: http://www.python.org/~guido/)
I would be very much in favor of a 2.2a3 release next week, it would
save me a lot of explaining differences between 2.2a2 on unix/win and
on the Mac.
--
Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen(a)oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.cwi.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm
sjoerd wrote:
>
> Modified Files:
> _sre.c
> Log Message:
> Removed unreachable return to silence SGI compiler.
>
> ! /* shouldn't end up here */
> ! return SRE_ERROR_ILLEGAL;
> }
>
> --- 1141,1145 ----
> }
>
> ! /* can't end up here */
> }
I hate stuff like this: that line was there to make sure *I* don't
mess up when developing SRE, not to deal with potentially broken
compilers or misfired electrons.
isn't there any way to tell the SGI compiler to stop whining about
this?
</F>
Given
>>> a = [1, 2, 3, 4]
because
>>> a[-2]
3
I expected that a.insert(-2, 0) would yield [1, 2, 0, 3, 4]. It was a
rude shock to discover that
>>> a
[0, 1, 2, 3, 4]
In fact I think this may be the nastiest surprise Python has handed me since
I started using it.
The reference manual says "same as s[i:i] = [x] if i >= 0" which of course
doesn't cover the i < 0 case. David Beazley's reference says "Inserts x
at index i" which sounds like the behavior I was expecting but didn't get.
Is this a deliberate design choice, an oversight, or a plain bug? If it's
a choice, it's damn poorly documented -- this deserves at least a footnote
in the list methods table. If it's an oversight or bug, I volunteer to fix it.
--
<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>
"America is at that awkward stage. It's too late to work within the system,
but too early to shoot the bastards."
-- Claire Wolfe