> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live in the bleeding edge and take some pain that will
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I imagined, the explicit-syntax idea did not catch on and would
have required a lot of discussion.]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> > <frag>
> > y = 3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> > print f()
> > </frag>
> > # prints 3.
> > Is that confusing for users? Maybe they will more naturally expect 2
> > as the outcome (given nested scopes).
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
Yes, this would be easy to implement, but more confusing situations can arise.
What should this print? The situation does not admit a canonical solution
the way class-definition scopes do:
from foo import *
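For reference, a minimal sketch (in modern Python 3 syntax) of the nested-scope lookup that makes exec and import * inside functions ambiguous - the inner function's free variables are resolved statically against the enclosing scope:

```python
y = 3

def f():
    y = 2          # statically visible binding: the compiler creates a cell for y
    def g():
        return y   # under nested scopes, this resolves to f's y, not the global
    return g()

# An assignment hidden inside an exec string (or names injected by
# "from foo import *") is invisible to the compiler, so g would fall
# back to the global y instead - hence the ambiguity discussed above.
print(f())  # prints 2 under nested scopes
```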
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
appears worse than the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I won't say anything about pulling nested scopes (I don't think my opinion
can change things in this respect),
but I must insist that, without explicit syntax, IMO raising the bar
has too high an implementation cost (both performance and complexity) or creates
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issue warnings today and implement
nested scopes issuing errors tomorrow. But this is simply a statement
of principle and of the impression it raises.
IMO import * in an inner scope should end up being an error,
not sure about 'exec's.
We will need a final BDFL statement.
regards, Samuele Pedroni.
From: Samuele Pedroni [mailto:email@example.com]
> the first candidate would be a generalization of 'class'
> (although that make it redundant with 'class' and meta-classes)
> so that
> KEYW-TO-BE kind name [ '(' expr,... ')' ] [ maybe  extended syntax ]:
> would be equivalent to
> name = kind(name-as-string,(expr,...),dict-populated-executing-suite)
[fixed up to exclude the docstring, as per the followup message]
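For concreteness, today's class statement already follows this equivalence, with type() playing the role of "kind" - a minimal, runnable sketch (Greeter and Greeter2 are made-up names for illustration):

```python
# Ordinary class statement:
class Greeter(object):
    prefix = "Hello, "
    def greet(self, who):
        return self.prefix + who

# Equivalent explicit construction:
#   name = kind(name-as-string, (expr, ...), dict-populated-executing-suite)
def greet(self, who):
    return self.prefix + who

Greeter2 = type("Greeter2", (object,), {"prefix": "Hello, ", "greet": greet})

assert Greeter().greet("world") == Greeter2().greet("world") == "Hello, world"
```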
I like this - it's completely general, and easy to understand. Then again,
I always like constructs defined in terms of code equivalence, it seems to
be a good way to make the semantics completely explicit.
The nice thing, to me, is that it solves the immediate problem (modulo a
suitable "kind" to work for properties), as well as being extensible to
allow it to be used in more general contexts.
The downside may be that it's *too* general - I've no feel for how it would
look if overused - it might feel like people end up defining their own
> the remaining problem would be to pick a suitable KEYW-TO-BE
Someone, I believe, suggested reusing "def" - this might be nice, but IIRC
it won't work because of the grammar's strict lookahead limits. (If it does
work, then "def" looks good to me).
If def won't work, how about "define"? The construct is sort of an extended
form of def. Or is that too cute?
By the way, can I just say that I am +1 on Michael Hudson's original patch
for [...] on definitions. Even though it doesn't solve the issue of
properties, I think it's a nice solution for classmethod and staticmethod,
and again I like the generality.
On Wednesday, January 22, 2003, at 08:22 AM, Guido van Rossum wrote:
>>>> My personal belief would be to include Gadfly in Python:
>>>> - Provides a reason for the DB API docs to be merged into the
>>>> Python library reference
>>>> - Gives Python relational DB stuff out of the box ala Java,
>>>> but with a working RDBMS as well ala nothing else I'm aware
>>>> - Makes including GadflyDA in Zope 3 a trivial decision, since
>>>> its size would be negligible and the DA code itself is
>>>> already ZPL.
>>> Would you be willing to find out (from c.l.py) how much interest
>>> there is in this?
>> A fairly positive response from the DB SIG. The trick will be to fix
>> the outstanding bugs or disable those features (losing the 'group
>> by' and 'unique' SQL clauses), and to confirm and fix any departures
>> from the DB-API 2.0 standard, as this would become a reference
>> implementation of sorts.
>> There is no permanent maintainer, as Richard Jones is in more of a
>> caretaker role with the code. I'll volunteer to try and get the code
>> into a Python release though.
>> If fixes, documentation and tests can be organized by the end of
>> January for alpha2, will this go out with Python 2.3 (assuming a
>> signoff on quality by python-dev and the DB-SIG)? If not, Jim is
>> back to deciding if he should include Gadfly with Zope3.
> Sorry for not responding before. I'm open for doing this, but you
> should probably probe python-dev next before you start a big coding
> project. How much C code is involved in Gadfly? If it's a lot, I'm a
> lot more reluctant, because C code usually requires much more
> maintenance (rare is the C source file that doesn't have some hidden
> platform dependency).
Gadfly comes with kjbuckets, which is written in C. The rest is Python.
Gadfly uses the included kjbuckets for storage if it is available, but
happily runs without it, with a performance hit. So Jython gets an
RDBMS implementation too.
Stuart Bishop <zen(a)shangri-la.dropbear.id.au>
Hisao SUZUKI has just recently uploaded a patch to SF which
includes codecs for the Japanese encodings EUC-JP, Shift_JIS and
ISO-2022-JP and wants to contribute the code to the PSF.
The advantage of his codecs over the ones written by Tamito
lies in the fact that Hisao's codecs are small (88kB) and
written in pure Python. This makes it much easier to adapt
the codecs to special needs or to correct errors.
Provided Hisao volunteers to maintain these codecs, I'd like
to suggest adding them to Python's encodings package and making
them the default implementations for the above encodings.
It would be ideal if we could get Hisao and Tamito to team up
to support these codecs (I have put him on CC).
Adding the codecs to the distribution would give Python a very
good argument in the Japanese world and also help people working
with XML or HTML targeting these locales.
CEO eGenix.com Software GmbH
eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,...
Python Consulting: http://www.egenix.com/
Python Software: http://www.egenix.com/files/python/
A while ago there was a proposal floating around to add an optional
part to function/method definitions, that would replace the current
clumsy classmethod etc. notation, and could be used for other purposes
too. I think the final proposal looked like this:
def name(arg, ...) [expr, ...]:
Does anyone remember or know where to find the thread where this
proposal was discussed? It ought to be turned into a PEP.
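For reference, the "clumsy" notation the proposal would replace is runnable today; Adder is a made-up example, and under the proposed syntax its methods would presumably be written as "def add(cls, x, y) [classmethod]:":

```python
class Adder(object):
    def add(cls, x, y):
        return x + y
    add = classmethod(add)        # the post-hoc rebinding the proposal would eliminate

    def identity(x):
        return x
    identity = staticmethod(identity)

assert Adder.add(2, 3) == 5
assert Adder.identity(7) == 7
```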
--Guido van Rossum (home page: http://www.python.org/~guido/)
Greg Ewing <greg(a)cosc.canterbury.ac.nz> wrote:
>> one glaring weakness is the absence of a high-performance native
>> code compiler. Are there any plans to develop this?
> This is asked fairly frequently, and the usual answer is "No, but
> you're welcome to volunteer." :-)
Really? It surprises me that after 10 years, this isn't something that
has been given more priority. Is the problem simply too difficult?
> There are some projects attacking parts of the problem, however,
> e.g. Psyco.
I've looked at Psyco, but it seems to be treated like an ad-hoc red-headed
stepchild. Why isn't something like this made part of Python?
It would be nice to use Python for more serious projects, but it isn't
fast enough currently.
When working on the sets module, a bug was found
where trapping an exception (a TypeError for a mutable
argument passed to a dictionary) resulted in masking
other errors that should have been passed through
(potential TypeErrors in the called iterator for example).
Now, Walter is working on a bug for map(), zip(), and
reduce() where errors in the getiter() call are being
trapped, reported as TypeError (for non-iterability),
but potentially masking other real errors in
a __iter__ routine. The currently proposed solution is
to remove the PyErr_Format call so that the underlying
error message gets propagated up in its original form.
The downside of that approach is that it loses information
about which argument caused the error.
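A minimal Python-level illustration of the masking described above (Broken and getiter_masking are hypothetical stand-ins for the C-level behavior):

```python
class Broken(object):
    def __iter__(self):
        raise ValueError("real bug inside __iter__")

def getiter_masking(obj):
    # Mimics the old getiter behavior: any failure while obtaining the
    # iterator is reported as non-iterability, hiding the real error.
    try:
        return iter(obj)
    except Exception:
        raise TypeError("argument is not iterable")

try:
    getiter_masking(Broken())
except TypeError as e:
    # The ValueError detail from __iter__ has been lost.
    assert str(e) == "argument is not iterable"
```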
So, here's the bright idea. Add a function,
PyErr_FormatAppend, that leaves the original message
intact but allows additional information to be added
(like the name of the called function, identification
of which argument triggered the error, a clue as
to how many iterations had passed, or anything else
that makes the traceback more informative).
Python code has a number of cases where a higher-level
routine traps an exception and reraises it with
new information, losing the lower-level error
detail in the process.
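The appending behavior can be mimicked at the Python level; getiter_with_context is a hypothetical sketch of what the proposed PyErr_FormatAppend would enable in C:

```python
def getiter_with_context(obj, argname):
    try:
        return iter(obj)
    except TypeError as e:
        # Keep the original message and append identifying context,
        # rather than replacing the message outright.
        raise TypeError("%s (while getting iterator for argument %r)"
                        % (e, argname))

try:
    getiter_with_context(42, "seq1")
except TypeError as e:
    # The original "not iterable" detail survives, plus the new context.
    assert "not iterable" in str(e)
    assert "argument 'seq1'" in str(e)
```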
I'm about to start working on this one and wanted
to check here first to make sure there is still a
demand for it and to get ideas on the best implementation.
I'm thinking of summing all of the tp_basicsize slots
while recursing through tp_traverse.
Martin> * Doesn't solve the original problem: many processors writing to
Martin> the same file system (unless you manage to set an environment
Martin> variable differently on each node).
export PYCROOT=/tmp/`hostname --fqdn`
I just checked in a new version of PEP 304. It should be available at
within a few hours (look for version >= 1.10).
There is also a patch against CVS which implements most of the PEP for
Unix-ish systems. That's referenced in the PEP but also available directly
* Not all regression tests pass yet, mostly (I think) because some tests
expect to find auxiliary files in the same directory as mod.__file__.
* There is no support yet for Windows paths, but this shouldn't be hard
to add. I just can't build on Windows.
* You can't delete a source file after generating a .pyc file because
the .pyc file won't be in the same directory. I think I will have to
modify the search for .pyc files to include the bytecode base
directory as well.
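As a sketch of that search-order change, the .pyc location under a bytecode base might be computed like this (pyc_path and the exact mirroring rule are illustrative assumptions, not the PEP's specification):

```python
import os

def pyc_path(source_path, bytecode_base):
    # One plausible mapping: mirror the source file's absolute
    # directory structure underneath the bytecode base directory.
    abs_src = os.path.abspath(source_path)
    return os.path.join(bytecode_base, abs_src.lstrip(os.sep)) + "c"

# e.g. pyc_path("/home/user/pkg/mod.py", "/tmp/pycache")
#      -> "/tmp/pycache/home/user/pkg/mod.pyc"
```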
Feedback welcome. Windows C programmers even more welcome. ;-)