> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live in the bleeding edge and take some pain that will come with that.
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I imagined, explicit syntax did not catch on and would require
a lot of discussion.]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> > <frag>
> > y = 3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> > print f()
> > </frag>
> > # prints 3
> > Is that confusing for users? Maybe they will more naturally expect 2
> > as the outcome (given nested scopes).
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
Yes, this would be easy to implement, but more confusing situations can
arise, e.g. with
from foo import *
in an inner scope: what should such code print? Unlike for class def
scopes, the situation does not lead to a canonical solution.
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
> appears worse then the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I won't say anything about pulling nested scopes (I don't think my opinion
can change things in this respect),
but I must insist that, without explicit syntax, IMO raising the bar
has too high an implementation cost (both performance and complexity) or creates
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issue warnings today and implement
nested scopes issuing errors tomorrow. But this is simply a statement
about principles and the impressions raised.
IMO import * in an inner scope should end up being an error,
not sure about 'exec's.
We will need a final BDFL statement.
regards, Samuele Pedroni.
I recently came up with a fix for thread support in Python
under Cygwin. Jason Tishler and Norman Vine are looking it
over, but I'm pretty sure something similar should be used
for the Cygwin Python port.
This is easily done--simply add a few lines to thread.c
and create a new thread_cygwin.h (context diff and new file attached).
But there is a larger issue:
The thread interface code in thread_pthread.h uses mutexes
and condition variables to emulate semaphores, which are
then used to provide Python "lock" and "sema" services.
I know this is a common practice since those two thread
synchronization primitives are defined in "pthread.h". But
it comes with quite a bit of overhead. (And in the case of
Cygwin causes race conditions, but that's another matter.)
POSIX does define semaphores, though. (In fact, it's in
the standard just before Mutexes and Condition Variables.)
According to POSIX, they are found in <semaphore.h> and
_POSIX_SEMAPHORES should be defined if they work as POSIX specifies.
If they are available, it seems like providing direct
semaphore services would be preferable to emulating them
using condition variables and mutexes.
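The cost of that emulation is easy to see even at the Python level. Here is a rough sketch of the mutex-plus-condition-variable scheme that thread_pthread.h uses, translated into Python's threading primitives (the class name and structure are illustrative only, not Python's actual C code):

```python
import threading

class EmulatedSemaphore:
    """A counting semaphore built from a lock plus a condition
    variable, mirroring what thread_pthread.h does with pthreads."""

    def __init__(self, value=1):
        self._cond = threading.Condition(threading.Lock())
        self._value = value

    def acquire(self):
        with self._cond:              # lock the mutex
            while self._value == 0:   # sleep until someone releases
                self._cond.wait()
            self._value -= 1

    def release(self):
        with self._cond:
            self._value += 1
            self._cond.notify()       # wake one waiter
```

A native sem_wait()/sem_post() pair collapses the lock/wait/notify dance into a single primitive, which is where the savings would come from.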
thread_posix.h.diff-c is a context diff that can be used
to convert thread_pthread.h into a more general POSIX
version that will use semaphores if available.
thread_cygwin.h would no longer be needed then, since all
it does is use POSIX semaphores directly rather than
mutexes/condition vars. Changing the interface to POSIX
threads should bring a performance improvement to any
POSIX platform that supports semaphores directly.
Does this sound like a good idea? Should I create a
more thorough set of patch files and submit them?
(I haven't been accepted to the python-dev list yet, so
please CC me. Thanks.)
-O Gerald S. Williams, 22Y-103GA : mailto:firstname.lastname@example.org O-
-O AGERE SYSTEMS, 555 UNION BLVD : office:610-712-8661 O-
-O ALLENTOWN, PA, USA 18109-3286 : mobile:908-672-7592 O-
[Jeremy on python-checkins list, PEP 283: Python 2.3 release schedule]
> Planned features for 2.3
> Here are a few PEPs that I know to be under consideration.
> S 273 Import Modules from Zip Archives Ahlstrom
I haven't participated in the discussion of PEP 273,
IIRC it was mostly about implementation details...
Wouldn't now be the right time, instead of complicating
the builtin import mechanism further, to simplify the builtin
import code and use it as the foundation of a Python-coded
implementation - imputil, or better, Gordon's iu.py, or whatever?
I would appreciate any comments you might have on this proposal for adding a
logging system to the Python Standard Library. This PEP is still an early
draft so please forward your comments just to me directly for now.
Title: A Logging System
Version: $Revision: 1.1 $
Last-Modified: $Date: 2002/02/15 04:09:17 $
Author: trentm(a)activestate.com (Trent Mick)
Type: Standards Track
This PEP describes a proposed logging package for Python's
standard library.
Basically the system involves the user creating one or more
logging objects on which methods are called to log debugging
notes/general information/warnings/errors/etc. Different logging
'levels' can be used to distinguish important messages from
unimportant ones.
A registry of named singleton logger objects is maintained so that
1) different logical logging streams (or 'channels') exist
(say, one for 'zope.zodb' stuff and another for
'mywebsite' stuff), and
2) one does not have to pass logger object references around.
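A registry of named singletons could be as simple as the following sketch (the Logger stub and the function body are assumptions for illustration, not the proposed implementation):

```python
class Logger:
    """Stub logger; the real proposal adds levels, handlers, etc."""
    def __init__(self, name):
        self.name = name

_loggers = {}  # the registry of named singleton loggers

def getLogger(name):
    # Return the one Logger for this channel, creating it on first
    # use, so callers never need to pass logger references around.
    if name not in _loggers:
        _loggers[name] = Logger(name)
    return _loggers[name]
```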
The system is configurable at runtime. This configuration
mechanism allows one to tune the level and type of logging done
while not touching the application itself.
If a single logging mechanism is enshrined in the standard
library, 1) logging is more likely to be done 'well', and 2)
multiple libraries will be able to be integrated into larger
applications which can be logged reasonably coherently.
This proposal was put together after having somewhat studied the
following logging packages:
o java.util.logging in JDK 1.4 (a.k.a. JSR047) 
o log4j 
These two systems are *very* similar.
o the Syslog package from the Protomatter project 
o MAL's mx.Log package 
This proposal will basically look like java.util.logging with a
smattering of log4j.
This shows a very simple example of how the logging package can be
used to generate simple logging output on stdout.
--------- mymodule.py -------------------------------
import logging
log = logging.getLogger("MyModule")

def doStuff():
    log.debug("doin' stuff")
    # do stuff ...
    raise TypeError("bogus error, for testing")
--------- myapp.py ----------------------------------
import mymodule, logging

log = logging.getLogger("MyApp")
log.info("start my app")
try:
    mymodule.doStuff()
except Exception, e:
    log.error("There was a problem doin' stuff.")
log.info("end my app")
> python myapp.py
0 [myapp.py:4] INFO MyApp - start my app
36 [mymodule.py:5] DEBUG MyModule - doin' stuff
51 [myapp.py:9] INFO MyApp - end my app
^^ ^^^^^^^^^^^^ ^^^^ ^^^^^ ^^^^^^^^^^
| | | | `-- message
| | | `-- logging name/channel
| | `-- level
| `-- location
NOTE: Not sure exactly what the default format will look like yet.
[Note: excerpts from Java Logging Overview. ]
Applications make logging calls on *Logger* objects. Loggers are
organized in a hierarchical namespace and child Loggers may
inherit some logging properties from their parents in the namespace.
Notes on namespace: Logger names fit into a "dotted name"
namespace, with dots (periods) indicating sub-namespaces. The
namespace of logger objects therefore corresponds to a single tree:
"" is the root of the namespace
"Zope" would be a child node of the root
"Zope.ZODB" would be a child node of "Zope"
These Logger objects allocate *LogRecord* objects which are passed
to *Handler* objects for publication. Both Loggers and Handlers
may use logging *levels* and (optionally) *Filters* to decide if
they are interested in a particular LogRecord. When it is
necessary to publish a LogRecord externally, a Handler can
(optionally) use a *Formatter* to localize and format the message
before publishing it to an I/O stream.
Each Logger keeps track of a set of output Handlers. By default
all Loggers also send their output to their parent Logger. But
Loggers may also be configured to ignore Handlers higher up the tree.
The APIs are structured so that calls on the Logger APIs can be
cheap when logging is disabled. If logging is disabled for a
given log level, then the Logger can make a cheap comparison test
and return. If logging is enabled for a given log level, the
Logger is still careful to minimize costs before passing the
LogRecord into the Handlers. In particular, localization and
formatting (which are relatively expensive) are deferred until the
Handler requests them.
The logging levels, in increasing order of importance, are:
- DEBUG
- INFO
- WARN
- ERROR
- FATAL
This is consistent with log4j and Protomatter's Syslog and not
with JSR047 which has a few more levels and some different names.
Implementation-wise: these are just integer constants, to allow
simple comparison of importance. See "What Logging Levels?" below
for a debate on what standard levels should be defined.
Each Logger object keeps track of a log level (or threshold) that
it is interested in, and discards log requests below that level.
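Concretely, the levels could be plain module-level integer constants, making the threshold test a single comparison (the numeric values and helper below are assumptions for illustration):

```python
# Increasing numeric value == increasing importance.
DEBUG, INFO, WARN, ERROR, FATAL = 10, 20, 30, 40, 50

def is_enabled(requested_level, logger_level):
    # A logger discards requests below its own threshold.
    return requested_level >= logger_level
```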
The *LogManager* maintains a hierarchical namespace of named
Logger objects. Generations are denoted with dot-separated names:
Logger "foo" is the parent of Loggers "foo.bar" and "foo.baz".
The main logging method is:
    def log(self, level, msg, *args):
        """Log 'msg % args' at logging level 'level'."""
however convenience functions are defined for each logging level:
def debug(self, msg, *args): ...
def info(self, msg, *args): ...
def warn(self, msg, *args): ...
def error(self, msg, *args): ...
def fatal(self, msg, *args): ...
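The convenience methods could simply delegate to log(), which also preserves the cheap early-out when a level is disabled; a self-contained sketch (the level values and the record-list storage are assumptions for illustration):

```python
DEBUG, INFO, WARN, ERROR, FATAL = 10, 20, 30, 40, 50

class Logger:
    def __init__(self, name, level=DEBUG):
        self.name = name
        self.level = level
        self.records = []

    def log(self, level, msg, *args):
        """Log 'msg % args' at logging level 'level'."""
        if level < self.level:
            return                # cheap comparison when disabled
        # Formatting is deferred until we know the record is kept.
        self.records.append(msg % args if args else msg)

    def debug(self, msg, *args): self.log(DEBUG, msg, *args)
    def info(self, msg, *args): self.log(INFO, msg, *args)
    def warn(self, msg, *args): self.log(WARN, msg, *args)
    def error(self, msg, *args): self.log(ERROR, msg, *args)
    def fatal(self, msg, *args): self.log(FATAL, msg, *args)
```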
XXX How to define a nice convenience function for logging an exception?
mx.Log has something like this, doesn't it?
XXX What about a .raising() convenience function? How about:
def raising(self, exception, level=ERROR): ...
It would create a log message describing an exception that is
about to be raised. I don't like that 'level' is not first
when it *is* first for .log().
Handlers are responsible for doing something useful with a given
LogRecord. The following core Handlers will be implemented:
- StreamHandler: A handler for writing to a file-like object.
- FileHandler: A handler for writing to a single file or set
of rotating files.
More standard Handlers may be implemented if deemed desirable and
feasible. Other interesting candidates:
- SocketHandler: A handler for writing to remote TCP ports.
- CreosoteHandler: A handler for writing UDP packets, for
low-cost logging. Jeff Bauer already had such a system.
- MemoryHandler: A handler that buffers log records in memory
- SMTPHandler: Akin to log4j's SMTPAppender.
- SyslogHandler: Akin to log4j's SyslogAppender.
- NTEventLogHandler: Akin to log4j's NTEventLogAppender.
A Formatter is responsible for converting a LogRecord to a string
representation. A Handler may call its Formatter before writing a
record. The following core Formatters will be implemented:
- Formatter: Provide printf-like formatting, perhaps akin to
Other possible candidates for implementation:
- XMLFormatter: Serialize a LogRecord according to a specific
schema. Could copy the schema from JSR047's XMLFormatter or
- HTMLFormatter: Provide a simple HTML output of log
information. (See log4j's HTMLAppender.)
A Filter can be called by a Logger or Handler to decide if a
LogRecord should be logged.
JSR047 and log4j have slightly different filtering interfaces. The
former is simpler -- a single predicate method on the Filter:

    def filter(self, record):
        """Return a boolean."""
The latter is modeled after Linux's ipchains (where Filter's can
be chained with each filter either 'DENY'ing, 'ACCEPT'ing, or
being 'NEUTRAL' on each check). I would probably favor the former
because it is simpler and I don't immediately see the need for the
latter's chaining model.
No filter implementations are currently proposed (other than the
do-nothing base class) because I don't have enough experience to
know what kinds of filters would be common. Users can always
subclass Filter for their own purposes. Log4j includes a few
filters that might be interesting.
Note: Configuration for the proposed logging system is currently
an open issue.
The main benefit of a logging system like this is that one can
control how much and what logging output one gets from an
application without changing that application's source code.
Log4j and Syslog provide for configuration via an external XML
file. Log4j and JSR047 provide for configuration via Java
properties (similar to -D #define's to a C/C++ compiler). All
three provide for configuration via API calls.
Configuration includes the following:
- What logging level a logger should be interested in.
- What handlers should be attached to which loggers.
- What filters should be attached to which handlers and loggers.
- Specifying attributes specific to certain Handlers and Filters.
- Defining the default configuration.
- XXX Add others.
In general each application will have its own requirements for how
a user may configure logging output. One application
(e.g. distutils) may want to control logging levels via
'-q,--quiet,-v,--verbose' options to setup.py. Zope may want to
configure logging via certain environment variables
(e.g. 'STUPID_LOG_FILE' :). Komodo may want to configure logging
via its preferences system.
This PEP proposes to clearly document the API for configuring each
of the above listed configurable elements and to define a
reasonable default configuration. This PEP does not propose to
define a general XML or .ini file configuration schema and the
backend to parse it.
It might, however, be worthwhile to define an abstraction of the
configuration API to allow the expressiveness of Syslog
configuration. Greg Wilson made this argument:
In Protomatter [Syslog], you configure by saying "give me
everything that matches these channel+level combinations",
such as "server.error" and "database.*". The log4j "configure
by inheritance" model, on the other hand, is very clever, but
hard for non-programmers to manage without a GUI that
essentially reduces it to Protomatter's.
This section presents a few usage scenarios which will be used to
help decide how best to specify the logging API.
(1) A short simple script.
This script does not have many lines. It does not heavily use
any third party modules (i.e. the only code doing any logging
would be the main script). Only one logging channel is really
needed and thus, the channel name is unnecessary. The user
doesn't want to bother with logging system configuration much.
(2) Medium sized app with C extension module.
Includes a few Python modules and a main script. Employs,
perhaps, a few logging channels. Includes a C extension
module which might want to make logging calls as well.
(3) Distutils setup script.
A large number of Python packages/modules. Perhaps (but not
necessarily) a number of logging channels are used.
Specifically needs to facilitate controlling verbosity
levels via simple command line options to 'setup.py'.
(4) Large, possibly multi-language, app. E.g. Zope or (my
employer's) Komodo.
(I don't expect this logging system to deal with any
cross-language issues but it is something to think about.)
Many channels are used. Many developers involved. People
providing user support are possibly not the same people who
developed the application. Users should be able to generate
log files (i.e. configure logging) while reproducing a bug to
send back to developers.
XXX Details to follow consensus that this proposal is a good idea.
What Logging Levels?
The following are the logging levels defined by the systems I looked at:
- log4j: DEBUG, INFO, WARN, ERROR, FATAL
- syslog: DEBUG, INFO, WARNING, ERROR, FATAL
- JSR047: FINEST, FINER, FINE, CONFIG, INFO, WARNING, SEVERE
- zLOG (used by Zope):
TRACE=-300 -- Trace messages
DEBUG=-200 -- Debugging messages
BLATHER=-100 -- Somebody shut this app up.
INFO=0 -- For things like startup and shutdown.
PROBLEM=100 -- This isn't causing any immediate problems, but
deserves attention.
WARNING=100 -- A wishy-washy alias for PROBLEM.
ERROR=200 -- This is going to have adverse effects.
PANIC=300 -- We're dead!
The current proposal is to copy log4j. XXX I suppose I could see
adding zLOG's "TRACE" level, but I am not sure of its usefulness.
Static Logging Methods (as per Syslog)?
Both zLOG and Syslog provide module-level logging functions rather
than (or in addition to) logging methods on a created Logger object.
XXX Is this something that is deemed worth including?
Pros:
- It would make the simplest case shorter:
      logging.error("Something is wrong")
  instead of:
      log = logging.getLogger("")
      log.error("Something is wrong")
Cons:
- It provides more than one way to do it.
- It encourages logging without a channel name, because this
mechanism would likely be implemented by implicitly logging
on the root (and nameless) logger of the hierarchy.
 log4j: a Java logging package
 Protomatter's Syslog
 MAL mentions his mx.Log logging module:
 Jeff Bauer's Mr. Creosote
This document has been placed in the public domain.
We had a brief jam on date/time objects at Zope Corp. HQ today. I
won't get to writing up the full proposal that came out of this, but
I'd like to give at least a summary. (Those who were there: my
thoughts have advanced a bit since this afternoon.)
My plan is to create a standard timestamp object in C that can be
subclassed. The internal representation will favor extraction of
broken-out time fields (year etc.) in local time. It will support
comparison, basic time computations, and effbot's minimal API, as well
as conversions to and from the two currently most popular time
representations used by the time module: posix timestamps in UTC and
9-tuples in local time. There will be a C API.
Proposal for internal representation (also the basis for an efficient
pickle format):
year 2 bytes, big-endian, unsigned (0 .. 65535)
month 1 byte
day 1 byte
hour 1 byte
minute 1 byte
second 1 byte
usecond 3 bytes, big-endian
tzoffset 2 bytes, big-endian, signed (in minutes, -1439 .. 1439)
total 12 bytes
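The layout can be sketched with the struct module (the function name is mine; field order and widths are as in the table above):

```python
import struct

def pack_timestamp(year, month, day, hour, minute, second,
                   usecond, tzoffset):
    """Pack the broken-out fields into the 12-byte big-endian layout.
    Big-endian ordering means byte-wise comparison (memcmp in C)
    sorts timestamps chronologically when tzoffsets are equal."""
    return (struct.pack(">H", year) +                        # 2 bytes
            struct.pack(">5B", month, day, hour, minute, second) +
            usecond.to_bytes(3, "big") +                     # 3 bytes
            struct.pack(">h", tzoffset))                     # 2 bytes, signed
```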
Things this will not address (but which you may address through
subclassing):
- leap seconds
- alternate calendars
- years far in the future or BC
- precision of timepoints (e.g. a separate Date type)
- DST flags (DST is accounted for by the tzoffset field)
- Why store a broken-out local time rather than seconds (or
microseconds) relative to an epoch in UTC? There are two kinds of
operations on times: accessing the broken-out fields (probably in
local time), and time computations. The chosen representation
favors accessing broken-out fields, which I expect to be more common
than time computations.
- Why a big-endian internal representation? So that comparison can be
done using a single memcmp() call as long as the tzoffset fields are
equal.
- Why not pack the fields closer to save a few bytes? To make the
pack and unpack operations more efficient; the object footprint
isn't going to make much of a difference.
- Why is the year unsigned? So memcmp() will do the right thing for
comparing dates (in the same timezone).
- What's the magic number 1439? One less than 24 * 60. Timezone
offsets may be up to 24 hours. (The C99 standard does it this way.)
I'll try to turn this into a proper PEP ASAP.
(Stephan: do I need to CC you or are you reading python-dev?)
--Guido van Rossum (home page: http://www.python.org/~guido/)
I propose adding a basic time type (or time base type ;-) to the standard
library, which can be subclassed by more elaborate date/time/timestamp
implementations, such as mxDateTime, custom types provided by DB-API
drivers, etc.
The goal is to make it easy to extract the year, month, day, hour, minute,
and second from any given time object.
Or to put it another way, I want the following to work for any time object,
including mxDateTime objects, any date/timestamp returned by a DB-API
driver, and weird date/time-like types I've developed myself:
    if isinstance(t, basetime):
        # yay! it's a timestamp
The goal is not to standardize any behaviour beyond this; anything else
should be provided by subtypes.
More details here:
I can produce PEP and patch if necessary.
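A Python-level sketch of the idea (the real type would be implemented in C; the subclass and helper here are hypothetical):

```python
class basetime:
    """Minimal base type: only the broken-out fields are guaranteed."""
    __slots__ = ("year", "month", "day", "hour", "minute", "second")

    def __init__(self, year, month, day, hour=0, minute=0, second=0):
        self.year, self.month, self.day = year, month, day
        self.hour, self.minute, self.second = hour, minute, second

class FancyTimestamp(basetime):
    """Stand-in for an elaborate subclass (an mxDateTime-style type)."""
    def iso_date(self):
        return "%04d-%02d-%02d" % (self.year, self.month, self.day)

def describe(t):
    # Code that needs only the six fields works for any subclass.
    if isinstance(t, basetime):
        return (t.year, t.month, t.day)
    raise TypeError("not a time object")
```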
[Kevin Jacobs wrote me in private to ask my position on __slots__.
I'm posting my reply here, quoting his full message -- I see no reason
to carry this on as a private conversation. Sorry, Kevin, if this
wasn't your intention.]
> Hi Guido;
> Now that you are back from your travels, I'll start bugging you, as
> gently as possible, for some insight into your intent wrt slots and
> metaclasses. As you can read from the python-dev archives, I've
> instigated a fair amount of discussion on the topic, though the
> conversation is almost meaningless without your input.
Hi Kevin, you got me to finally browse the thread "Meta-reflections".
My first response was: "you've got it all wrong." My second response
was a bit more nuanced: "that's not how I intended it to be at all!"
OK, let me elaborate. :-)
You want to be able to find out which instance attributes are defined
by __slots__, so that (by combining this with the instance's __dict__)
you can obtain the full set of attribute values. But this defeats the
purpose of unifying built-in types and user-defined classes.
A new-style class, with or without __slots__, should be considered no
different from a new-style built-in type, except that all of the
methods happen to be defined in Python (except maybe for inherited
ones).
In order to find all attributes, you should *never* look at __slots__.
You should search the __dict__ of the class and its base classes, in
MRO order, looking for descriptors, and *then* add the keys of the
__dict__ as a special case. This is how PEP 252 wants it to be.
If the descriptors don't tell you everything you need, too bad -- some
types just are like that. For example, if you're deriving from a list
or tuple, there's no attribute that leads to the items: you have to
use __len__ and __getitem__ to find out about these, and you have to
"know" that that's how you get at them (although the presence of
__getitem__ should be a clue).
Why do I reject your suggestion of making __slots__ (more) usable for
introspection? Because it would create another split between built-in
types and user-defined classes: built-in types don't have __slots__,
so any strategy based on __slots__ will only work for user-defined
types. And that's exactly what I'm trying to avoid!
You may complain that there are so many things to be found in a
class's __dict__, it's hard to tell which things are descriptors.
Actually, it's easy: if it has a __get__ (method) attribute, it's a
descriptor; if it also has a __set__ attribute, it's a data attribute,
otherwise it's a method. (Note that read-only data attributes have a
descriptor that has a __set__ method that always raises TypeError or
AttributeError.)
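That recipe can be written down directly; a sketch (the names are mine, and it deliberately never consults __slots__):

```python
def enumerate_attrs(obj):
    """Walk the MRO collecting descriptors, then add the keys of
    the instance __dict__ as a special case (PEP 252 style)."""
    attrs = {}
    for klass in reversed(type(obj).__mro__):
        for name, value in vars(klass).items():
            if hasattr(value, "__get__"):       # it's a descriptor
                attrs[name] = ("data descriptor"
                               if hasattr(value, "__set__")
                               else "method/non-data")
    for name in getattr(obj, "__dict__", {}):
        attrs[name] = "instance dict"
    return attrs

class Point(object):
    __slots__ = ("x", "y")   # slot descriptors land in Point.__dict__
    def move(self): pass
```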
Given this viewpoint, you won't be surprised that I have little desire
to implement your other proposals, in particular, I reject all these:
- Proxy the instance __dict__ with something that makes the slots
visible
- Flatten slot lists and make them immutable
- Alter vars(obj) to return a dict of all attrs
- Flatten slot inheritance (see below)
- Change descriptors to fall back on class variables for unfilled
slots
I'll be the first to admit that some details are broken in 2.2.
In particular, the fact that instances of classes with __slots__
appear picklable but lose all their slot values is a bug -- these
should either not be picklable unless you add a __reduce__ method, or
they should be pickled properly. This is a bug of the same kind as
the problem with pickling time.localtime() (SF bug #496873), so I'm
glad this problem has now been entered in the SF database (as
#520644). I haven't made up my mind on how to fix this -- it would be
nice if __slots__ would automatically be pickled, but it's tricky
(although I think it's doable -- without ever referencing the
__slots__ variable :-).
I'm not so sure that the fact that you can "override" or "hide" slots
defined in a base class should be classified as a bug. I see it more
as a "don't do that" issue: If you're deriving a class that overrides
a base class slot, you haven't done your homework. PyChecker could
warn about this though.
I think you're mostly right with your proposal "Update standard
library to use new reflection API". Insofar as there are standard
support classes that use introspection to provide generic services for
classic classes, it would be nice if these could work correctly for
new-style classes even if they use slots or are derived from
non-trivial built-in types like dict or list. This is a big job, and
I'd love some help. Adding the right things to the inspect module
(without breaking pydoc :-) would probably be a first priority.
Now let me get to the rest of your letter.
> So I've been sitting on my hands and waiting for you to dive in and
> set us all straight. Actually, that is not entirely true; I picked
> up a copy of 'Putting Metaclasses to Work' and read it cover to
> cover.
Wow. That's more than I've ever managed (due to what I hope can still
be called a mild case of ADD :-). But I think I studied all the
important parts. (I should ask the authors for a percentage -- I
think they've made quite some sales because of my frequent quoting of
their book. :-)
> Many things you've done in Python 2.2 are much clearer now,
> though new questions have emerged. I would greatly appreciate it if
> you would answer a few of them at a time. In return, I will
> synthesize your ideas with my own and compile a document that
> clearly defines and justifies the new Python object model and
> metaclass protocol.
Maybe you can formulate it as a set of tentative clarifying patches to
PEPs 252, 253, and 254?
> To start, there are some fairly broad and overlapping questions to get
> us started:
> 1) How much of IBM's SOMobject MetaClass Protocol (SOMMCP) do you
> want to adapt to Python? For now (Python 2.2/2.3/2.4 time
> frame)? And in the future (Python 3.0/3000)?
Not much more than what I've done so far. A lot of what they describe
is awfully C++ specific anyway; a lot of the things they struggle with
(such as the redispatch hacks and requestFirstCooperativeMethodCall)
can be done so much simpler in a dynamic language like Python that I
doubt we should follow their examples literally.
> 2) In Python 2.2, what intentional deviations have you chosen from the
> SOMMCP and what differences are incidental or accidental?
Hard to say, unless you specifically list all the things that you
consider part of the SOMMCP. Here are some things I know:
- In descrintro.html, I describe a slightly different algorithm for
calculating the MRO than they use. But my implementation is theirs
-- I didn't realize the two were different until it was too late,
and it only matters in uninteresting corner cases.
- I currently don't complain when there are serious order
disagreements. I haven't decided yet whether to make these an error
(then I'd have to implement an overridable way of defining
"serious") or whether it's more Pythonic to leave this up to the
user.
- I don't enforce any of their rules about cooperative methods. This
is Pythonic: you can be cooperative but you don't have to be. It
would also be too incompatible with current practice (I expect few
people will adopt super().)
- I don't automatically derive a new metaclass if multiple base
classes have different metaclasses. Instead, I see if any of the
metaclasses of the bases is usable (i.e. I don't need to derive one
anyway), and then use that; instead of deriving a new metaclass, I
raise an exception. To fix this, the user can derive a metaclass
and provide it in the __metaclass__ variable in the class statement.
I'm not sure whether I should automatically derive metaclasses; I
haven't got enough experience with this stuff to get a good feel for
when it's needed. Since I expect that non-trivial metaclasses are
often implemented in C, I'm not so comfortable with automatically
merging multiple metaclasses -- I can't prove to myself that it's
safe.
- I don't check that a base class doesn't override instance
variables. As I stated above, I don't think I should, but I'm not
sure.
> 3) Do you intend to enforce monotonicity for all methods and slots?
> (Clearly, this is not desirable for instance __dict__ attributes.)
If I understand the concept of monotonicity, no. Python traditionally
allows you to override methods in ways that are incompatible with the
contract of the base class method, and I don't intend to forbid this.
It would be good if PyChecker checked for accidental mistakes in this
area, and maybe there should be a way to declare that you do want this
enforced; I don't know how though.
There's also the issue that (again, if I remember the concepts right)
there are some semantic requirements that would be really hard to
check at compile time for Python.
> 4) Should descriptors work cooperatively? i.e., allowing a
> 'super' call within __get__ and __set__.
I don't think so, but I haven't thought through all the consequences
(I'm not sure why you're asking this, and whether it's still a
relevant question after my responses above). You can do this for
Thanks for the dialogue!
--Guido van Rossum (home page: http://www.python.org/~guido/)
A quick grep-find through the Python-2.2 sources reveals the following:
Include/dictobject.h:49: long aligner;
Include/objimpl.h:275: double dummy; /* force worst-case alignment */
Modules/addrinfo.h:162: LONG_LONG __ss_align; /* force desired structure
storage alignment */
Modules/addrinfo.h:164: double __ss_align; /* force desired structure
storage alignment */
At first glance, there appear to be different assumptions at work here about
what constitutes maximal alignment on any given platform. I've been using a
little C++ metaprogram to find a type which will properly align any other
given type. Because of limitations of one compiler, I had to disable the
computation and instead used the objimpl.h assumption that double was
maximally aligned, but also added a compile-time assertion to check that the
alignment is always greater than or equal to that of the target type. Well,
it failed today on Tru64 Unix with the latest Compaq CXX 6.5 prerelease
compiler; it appears that the alignment of long double is greater than that
of double on that platform.
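The discrepancy is observable from Python itself via ctypes (offered here purely as an illustration of the report, not part of the original patch discussion):

```python
import ctypes

# Alignment requirements of candidate "maximally aligned" types.
for ctype in (ctypes.c_long, ctypes.c_double, ctypes.c_longdouble):
    print(ctype.__name__, ctypes.alignment(ctype))

# On some platforms long double is aligned more strictly than double,
# so assuming double is maximally aligned is unsafe.
assert ctypes.alignment(ctypes.c_longdouble) >= ctypes.alignment(ctypes.c_double)
```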
I thought someone might want to know,
C++ Booster (http://www.boost.org) O__ ==
Pythonista (http://www.python.org) c/ /'_ ==
resume: http://users.rcn.com/abrahams/resume.html (*) \(*) ==
----- Original Message -----
From: "Gordon McMillan" <gmcm(a)hypernet.com>
> That's not even part of import. import is done when it
> has [name1, name2, name3]. It's ceval.c that
> does the binding.
Yep, so I discovered.
> Sounds to me like you want to override __setitem__
> on the module's __dict__.
Not necessarily, though that might be one approach. I might want to treat
explicit setting of attributes differently from an import.
> Tricky, 'cause a module
> is hardly in charge of its own __dict__.
> But if you see value in it, you'd better pursue it
> now, because Jeremy's plans for optimization of
> module __dict__ will likely make things harder.
I thought this /was/ pursuing it. What did you have in mind?