> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live in the bleeding edge and take some pain that will
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I imagined, the explicit-syntax idea did not catch on, and would
require a lot of discussion.]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> > <frag>
> > y = 3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> > print f()
> > </frag>
> > # print 3.
> > Is that confusing for users? Maybe they will more naturally expect 2
> > as the outcome (given nested scopes).
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
Yes, this can be easy to implement, but more confusing situations can arise:
what should such code print? Unlike class def scopes, the situation has no
canonical solution. The same problem arises with 'from foo import *'.
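(For reference, here is a runnable version of the disputed fragment under
Python 3 semantics, where a bare exec cannot rebind a function's locals; the
inner function then sees the module-level binding, matching the
pre-nested-scopes answer of 3. This is a sketch of the semantics question
only, not of any implementation discussed here.)

```python
y = 3

def f():
    # In Python 3, exec() without an explicit namespace cannot rebind
    # f's locals, so this assignment is invisible to g below.
    exec("y = 2")

    def g():
        # y is never assigned in f's visible scope, so g resolves it
        # to the module-level binding.
        return y

    return g()

print(f())  # -> 3, the pre-nested-scopes answer from the thread
```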
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
appears worse than the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I won't say anything about pulling nested scopes (I don't think my opinion
can change things in this respect), but I must insist that without explicit
syntax, IMO, raising the bar has too high an implementation cost (both
performance and complexity) or creates
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issue warnings today and implement
nested scopes issuing errors tomorrow. But this is simply a statement of
principle and of the impression raised.
IMO import * in an inner scope should end up being an error,
not sure about 'exec's.
We will need a final BDFL statement.
regards, Samuele Pedroni.
In the discussion on my request for an ("O@", typeobject,
void **) format for PyArg_Parse and Py_BuildValue MAL suggested
that I could get the same functionality by creating a type
WrapperTypeObject, which would be a subtype of TypeObject with
extra fields pointing to the _New() and _Convert() routines to
convert Python objects from/to C pointers. This would be good
enough for me, because then types wanting to participate in the
wrapper protocol would subtype WrapperTypeObject instead of
TypeObject, and two global routines could return the _New and
_Convert routines given the type object, and we wouldn't need
yet another PyArg_Parse format specifier.
However, after digging high and low I haven't been able to
deduce how I would then use this WrapperType in C as the type
for my extension module objects. Are there any examples? If not,
could someone who understands the new inheritance scheme give me
some clues as to how to do this?
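(The Python-level analogue of MAL's suggestion can be sketched as follows;
the names WrapperType, _new, _convert and get_new are illustrative, not an
existing API. At the C level the corresponding trick would presumably be to
point the static type object's ob_type at the new metatype, which is exactly
the part the question above is about.)

```python
class WrapperType(type):
    """A subtype of type with extra per-type fields: hooks that
    would convert Python objects from/to C pointers."""
    _new = None
    _convert = None

def get_new(tp):
    # One of the two proposed "global routines": fetch the hook
    # given a type object, if that type participates.
    return tp._new if isinstance(tp, WrapperType) else None

class CPtr(metaclass=WrapperType):
    pass

# Illustrative hook: real code would wrap an actual C pointer.
CPtr._new = lambda ptr: ("CPtr", ptr)

print(get_new(CPtr)(42))  # -> ('CPtr', 42)
print(get_new(int))       # -> None (plain types don't participate)
```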
- Jack Jansen <Jack.Jansen(a)oratrix.com>
- If I can't dance I don't want to be part of your revolution --
Emma Goldman -
> The keyword module has an undocumented data object kwlist which is a
> list of keywords. Perhaps this should be documented and made part of
> the public API? I'd want to change the list to a tuple, but that
> seems harmless since it isn't already part of the API.
Why make it a tuple? Out of fear someone changes it? Let them change
it, and learn about sharing of object references!
Agree it should be documented of course.
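(Both points can be seen with the keyword module as it exists today;
iskeyword is already public, kwlist is the data object under discussion.)

```python
import keyword

# kwlist backs iskeyword(): the reserved words the compiler knows.
print(keyword.iskeyword("lambda"))  # -> True
print(keyword.iskeyword("spam"))    # -> False

# Guido's point about sharing object references: kwlist is one
# list object shared by every importer, so mutate a copy, not it.
mine = list(keyword.kwlist)
mine.append("spam")
print("spam" in keyword.kwlist)     # -> False
```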
--Guido van Rossum (home page: http://www.python.org/~guido/)
If I could "cvs up" I would submit a patch, but in the meantime, is there
any good reason that distutils shouldn't write its output to stderr? I'm
using PyInline to execute a little bit of C code that returns some
information about the system to the calling Python code. This code then
sends some output to stdout.
I've patched my local directory tree so that distutils sends its output to
sys.stderr. Is there some overriding reason distutils messages should go to
stdout?
BTW, Python + PyInline makes it a hell of a lot easier to understand configure
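(Until distutils itself changes, one non-invasive workaround is to redirect
stdout to stderr around the build step, leaving stdout free for the
program's real output. A sketch using the stdlib; contextlib.redirect_stdout
is a later addition than the messages above.)

```python
import contextlib
import sys

def quiet_stdout():
    """Context manager: route stdout prints to stderr instead."""
    return contextlib.redirect_stdout(sys.stderr)

# Anything the build machinery prints inside the block lands on
# stderr, so the calling code's own stdout stays clean.
with quiet_stdout():
    print("simulated build chatter")  # goes to stderr

print("real program output")          # goes to stdout
```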
I've made some simple measurements of how long opcodes take to execute
and how long it takes to go around the mainloop, using the Pentium
timestamp counter, which measures processor cycles.
The results aren't particularly surprising, but they provide some
empirical validation of what we've believed all along. I don't have
time to go into all the gory details here, though I plan to at
Spam 10 developers day next week.
I put together a few Web pages that summarize the data I've collected
on some simple benchmarks:
Comments and questions are welcome. I've got a little time to do more
measurement and analysis before devday.
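(Those numbers came from the Pentium cycle counter; the same flavor of
micro-measurement can be sketched today with the stdlib timeit module. The
statements timed below are illustrative choices, not the benchmarks from
the pages mentioned above.)

```python
import timeit

# Rough per-construct cost: a cheap opcode-level operation versus
# a function call, which carries far more mainloop overhead.
n = 200_000
t_add = timeit.timeit("x + 1", setup="x = 1", number=n)
t_call = timeit.timeit("f()", setup="def f(): pass", number=n)
print(f"x + 1 : {t_add:.4f}s over {n} passes")
print(f"f()   : {t_call:.4f}s over {n} passes")
```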
I recall having the discussion but I don't quite recall the
resolution: Is NeXT support now officially dropped from the
I have a revised dynamic loading module that strips out all
of the dead branches ( as well as better error reporting ):
I was going to call it dynload_darwin.c and add support to
configure, but grepping thru configure I only saw darwin
as triggering dynload_next.c -- it *looks* like NeXT support
has been dropped.
Should we rename the file anyway ? ( to make it easier for
folks to know where to look. )
There has also been some discussion on the pythonmac-sig list
about dynamic loading. There are some other problems that
this module doesn't fix yet. If someone wants to submit a
better one, that's fine by me, but we REALLY need to get
the better error reporting in there so we can at least
find the problem.
The other thing that's been discussed is adding configure
support to build with the dlopen compatibility libs if
that is available. ( doing config with --without-dyld
doesn't seem to change anything. )
This discussion started on pythonmac-SIG, but someone suggested
that it isn't really a MacPython-specific issue (even though the
implementation will be different for MacPython from unix-Python).
Begin forwarded message:
> From: Martin Miller <mmiller(a)adobe.com>
> Date: Wed Jan 30, 2002 08:14:13 PM Europe/Amsterdam
> To: pythonmac-sig(a)python.org
> Subject: Re: [Pythonmac-SIG] sys.exit() functionality
> On Wed, 30 Jan 2002 15:29:21 +0100, Jack Jansen wrote:
>> On Tuesday, January 29, 2002, at 08:54 , Jon Bradley wrote:
>>> hey all,
>>> In embedded Python - why does sys.exit() quit out of the application
>>> embedding the interpreter? Is there any way to trap or
>>> disregard this?
>>> If a user creates an application with Python and runs it through the
>>> embedded interpreter, calling quit or exit on the Python application
>>> is more than ok, but allowing it to force out of the parent is not.
>> Sounds reasonable. How about a routine PyMac_SetExitFunc() that you
>> could call to set your own exit function, (similar to
>> PyMac_SetConsoleHandler())? MacPython would then do all its normal
>> cleanup, but at the very end call your routine instead of exit().
> With an approach like the above, wouldn't it be better to have a
> platform-independent way of defining a custom exit function,
> rather than
> calling a Mac-only system function -- or is this whole thing only an
> issue with MacPython embedding?
> Pythonmac-SIG maillist - Pythonmac-SIG(a)python.org
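(The mechanism behind the question: sys.exit() simply raises SystemExit. A
host that runs the embedded code can trap that exception itself rather than
letting the interpreter's cleanup call exit(); a C embedding would check
PyErr_ExceptionMatches(PyExc_SystemExit) analogously. A Python-level
sketch, with run_app as an illustrative name:)

```python
def run_app(source):
    """Run embedded app code, trapping sys.exit() so the host survives."""
    try:
        exec(source, {"__name__": "__main__"})
    except SystemExit as exc:
        # The host keeps running; report the app's requested status.
        return exc.code
    return None

print(run_app("import sys; sys.exit(3)"))  # -> 3
print(run_app("x = 1 + 1"))                # -> None
```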
- Jack Jansen <Jack.Jansen(a)oratrix.com>
- If I can't dance I don't want to be part of your revolution --
Emma Goldman -
You will soon notice (if you haven't already) that your list admin
passwords on mail.python.org are broken. This happened due to an
upgrade of the version of Python running on that system. The old list
passwords can't be recovered, so they have to be reset.
List administrators can contact me to get this done. If you know the
old password, send it to me and I'll reset the list to it. Otherwise,
let me know and I'll generate a new password for you.
Sorry for the inconvenience,
I'm still a little ignorant about real threads.
In order to do the implementation of hard-wired microthreads
right, I tried to understand how real threads work.
My question, which I could not easily answer by reading
the source is:
What happens when the main thread ends? Do all threads run
until they are ready too, or are they just killed off?
And if they are killed, are they just removed, or do
they all get an exception for cleanup?
I would guess the latter, but I'm not sure.
When a thread ends, it may contain several levels of other
C calls which might need to finalize, so I thought of
a special exception for this, but didn't find one.
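(In today's threading module the answer is both, selectable per thread:
when the main thread ends, non-daemon threads are waited for, while daemon
threads are killed abruptly at interpreter shutdown, with no exception
delivered and no chance to clean up. A sketch:)

```python
import threading
import time

done = []

def worker():
    time.sleep(0.05)
    done.append("finished")

# Non-daemon (the default): the interpreter waits for it at exit,
# so its work always completes.
t = threading.Thread(target=worker)
t.start()
t.join()
print(done)       # -> ['finished']

# A daemon thread is instead killed at shutdown: no exception is
# raised in it for cleanup; it simply stops running.
d = threading.Thread(target=worker, daemon=True)
print(d.daemon)   # -> True
```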
Many thanks and sorry about my ignorance - chris
Christian Tismer :^) <mailto:email@example.com>
Mission Impossible 5oftware : Have a break! Take a ride on Python's
Kaunstr. 26 : *Starship* http://starship.python.net/
14163 Berlin : PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF
where do you want to jump today? http://www.stackless.com/
It's taken longer than I'd hoped; however, they're finally up for review.
The updated bits have been attached to the previous patch entries in the
435381: distutils changes
450265: build files - self contained subdirectory in PC/
450266: library changes - 3 patch files covering:-
- Lib/ (included os2emxpath.py as previously discussed here)
- Lib/plat-os2emx/ (new subdirectory)
- Lib/test/ (cope with 2 EMX limitations)
450267: core changes - 4 patch files covering:-
- Modules/ (lots of changes; see below for more info)
- Objects/ (see below for more info)
I hope that I got the patch links right...
Particular notes wrt #450267:
- the patch to Modules/import.c supports VACPP in addition to EMX.
Michael Muller has trialled this patch with a VACPP build successfully.
It is messy, but OS/2 isn't going to lose the 8.3 naming limit on DLLs
anytime soon :-( Although truncating the DLL (PYD) name to 8 characters
increases the chances of a name clash, the case-sensitive import support
in the same patch alleviates it somewhat, and the fact that the
"init<module>" entrypoint is maintained will result in an import failure
when there is an actual name clash.
- Modules/unicodedata.c is affected by a name clash between the internally
defined _getname() and an EMX routine of the same name defined in
<stdlib.h>. The patch renames the internal routine to _getucname() to
avoid this, but this change may not be acceptable - advice please.
- Objects/stringobject.c and Objects/unicodeobject.c contain changes to
handle the EMX runtime library returning "0x" as the prefix for output
formatted with a "%X" format.
I have tried to keep the changes in these patches to the minimum needed
for the port to function, i.e. I've tried to eradicate the cosmetic
changes in the earlier patches, and to avoid picking up unwanted files
(such as Modules/Setup). Please let me know if you find any such changes
I've missed.
The patches uploaded apply cleanly to a copy of an anonymously checked
out CVS tree as of 0527 AEST this morning (Jan 27), and have been built
and regression tested on both OS/2 EMX and FreeBSD 4.4R with no unexpected
failures.
If there are no unresolvable objections, and approval to apply these
patches is granted, I propose that the patches be applied as follows:-
Stage 1: the build patch (creates+populates PC/os2emx/)
Stage 2: the Lib/plat-os2emx/ patch
Stage 3: the Lib/ and Lib/test/ patches
Stage 4: the distutils patch
Stage 5: the Include/, Objects/ and Python/ patches
Stage 6: the Modules/ patch
I would expect to allow at least 48 hours between stages.
Comments/advice on this proposal also appreciated.
Andrew I MacIntyre "These thoughts are mine alone..."
E-mail: andymac(a)bullseye.apana.org.au | Snail: PO Box 370
andymac(a)pcug.org.au | Belconnen ACT 2616
Web: http://www.andymac.org/ | Australia