stoopid question: why the heck is xmllib using
"RuntimeError" to flag XML syntax errors?
raise RuntimeError, 'Syntax error at line %d: %s' % (self.lineno, message)
what's wrong with "SyntaxError"?
An HTML version of the attached can be viewed at
This will be adopted for 2.0 unless there's an uproar. Note that it *does*
have potential for breaking existing code -- although no real-life instance
of incompatibility has yet been reported. This is explained in detail in
the PEP; check your code now.
although-if-i-were-you-i-wouldn't-bother<0.5-wink>-ly y'rs - tim
Title: Change the Meaning of \x Escapes
Version: $Revision: 1.4 $
Author: tpeters(a)beopen.com (Tim Peters)
Type: Standards Track
Change \x escapes, in both 8-bit and Unicode strings, to consume
exactly the two hex digits following. The proposal views this as
correcting an original design flaw, leading to clearer expression
in all flavors of string, a cleaner Unicode story, better
compatibility with Perl regular expressions, and with minimal risk
to existing code.
The syntax of \x escapes, in all flavors of non-raw strings, becomes

    \xhh

where h is a hex digit (0-9, a-f, A-F). The exact syntax in 1.5.2 is
not clearly specified in the Reference Manual; it says

    \xhh...

implying "two or more" hex digits, but one-digit forms are also
accepted by the 1.5.2 compiler, and a plain \x is "expanded" to
itself (i.e., a backslash followed by the letter x). It's unclear
whether the Reference Manual intended either of the 1-digit or
0-digit behaviors.
In an 8-bit non-raw string,

    \xij

expands to the character

    chr(int(ij, 16))

Note that this is the same as in 1.6 and before.

In a Unicode string,

    \xij

acts the same as

    \u00ij

i.e. it expands to the obvious Latin-1 character from the initial
segment of the Unicode space.
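In modern Python, where all strings are Unicode, this Latin-1 equivalence can be checked directly (a minimal sanity check, not part of the PEP itself):

```python
# \xhh in a string literal denotes the same character as \u00hh,
# i.e. Latin-1 maps onto the initial segment of the Unicode space.
assert "\x41" == "\u0041" == "A"
assert "\xff" == "\u00ff"
assert ord("\xff") == 0xFF
```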
An \x not followed by at least two hex digits is a compile-time error,
specifically ValueError in 8-bit strings, and UnicodeError (a subclass
of ValueError) in Unicode strings. Note that if an \x is followed by
more than two hex digits, only the first two are "consumed". In 1.6
and before all but the *last* two were silently ignored.
    >>> "\x123465"   # in 1.6 and before: same as "\x65"
    >>> "\x123465"   # in 2.0: \x12 -> \022, "3465" left alone
    >>> "\x1"
    [ValueError is raised]
    >>> "\x\x"
    [ValueError is raised]
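Modern Python implements the proposed rule, so the 2.0 behavior sketched above can be verified directly (note that CPython today reports the short-escape error as a SyntaxError rather than the ValueError the PEP named):

```python
# \x consumes exactly the two hex digits following it.
s = "\x123465"
assert len(s) == 5          # \x12 became one character
assert s[0] == chr(0x12)
assert s[1:] == "3465"      # the trailing digits are left alone

# Fewer than two hex digits after \x is a compile-time error.
try:
    eval(r'"\x1"')
except (SyntaxError, ValueError):
    pass
else:
    raise AssertionError("short \\x escape was accepted")
```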
History and Rationale
\x escapes were introduced in C as a way to specify variable-width
character encodings. Exactly which encodings those were, and how many
hex digits they required, was left up to each implementation. The
language simply stated that \x "consumed" *all* hex digits following,
and left the meaning up to each implementation. So, in effect, \x in C
is a standard hook to supply platform-defined behavior.
Because Python explicitly aims at platform independence, the \x escape
in Python (up to and including 1.6) has been treated the same way
across all platforms: all *except* the last two hex digits were
silently ignored. So the only actual use for \x escapes in Python was
to specify a single byte using hex notation.
Larry Wall appears to have realized that this was the only real use for
\x escapes in a platform-independent language, as the proposed rule for
Python 2.0 is in fact what Perl has done from the start (although you
need to run in Perl -w mode to get warned about \x escapes with fewer
than 2 hex digits following -- it's clearly more Pythonic to insist on
2 all the time).
When Unicode strings were introduced to Python, \x was generalized so
as to ignore all but the last *four* hex digits in Unicode strings.
This caused a technical difficulty for the new regular expression engine.
SRE tries very hard to allow mixing 8-bit and Unicode patterns and
strings in intuitive ways, and it no longer had any way to guess what,
for example, r"\x123456" should mean as a pattern: is it asking to match
the 8-bit character \x56 or the Unicode character \u3456?
There are hacky ways to guess, but it doesn't end there. The ISO C99
standard also introduces 8-digit \U12345678 escapes to cover the entire
ISO 10646 character space, and it's also desired that Python 2 support
that from the start. But then what are \x escapes supposed to mean?
Do they ignore all but the last *eight* hex digits then? And if less
than 8 following in a Unicode string, all but the last 4? And if less
than 4, all but the last 2?
This was getting messier by the minute, and the proposal cuts the
Gordian knot by making \x simpler instead of more complicated. Note
that the 4-digit generalization to \xijkl in Unicode strings was also
redundant, because it meant exactly the same thing as \uijkl in Unicode
strings. It's more Pythonic to have just one obvious way to specify a
Unicode character via hex notation.
Development and Discussion
The proposal was worked out among Guido van Rossum, Fredrik Lundh and
Tim Peters in email. It was subsequently explained and discussed on
Python-Dev under subject "Go \x yourself", starting 2000-08-03.
Response was overwhelmingly positive; no objections were raised.
Changing the meaning of \x escapes does carry risk of breaking existing
code, although no instances of incompatibility have yet been discovered.
The risk is believed to be minimal.
Tim Peters verified that, except for pieces of the standard test suite
deliberately provoking end cases, there are no instances of \xabcdef...
with fewer or more than 2 hex digits following, in either the Python
CVS development tree, or in assorted Python packages sitting on his machine.
It's unlikely there are any with fewer than 2, because the Reference
Manual implied they weren't legal (although this is debatable!). If
there are any with more than 2, Guido is ready to argue they were buggy
anyway <0.9 wink>.
Guido reported that the O'Reilly Python books *already* document that
Python works the proposed way, likely due to their Perl editing
heritage (as above, Perl worked (very close to) the proposed way from the start).
Finn Bock reported that what JPython does with \x escapes is
unpredictable today. This proposal gives a clear meaning that can be
consistently and easily implemented across all Python implementations.
Effects on Other Tools
Believed to be none. The candidates for breakage would mostly be
parsing tools, but the author knows of none that worry about the
internal structure of Python strings beyond the approximation "when
there's a backslash, swallow the next character". Tim Peters checked
python-mode.el, the std tokenize.py and pyclbr.py, and the IDLE syntax
coloring subsystem, and believes there's no need to change any of
them. Tools like tabnanny.py and checkappend.py inherit their immunity
from tokenize.py.
The code changes are so simple that a separate patch will not be produced.
Fredrik Lundh is writing the code, is an expert in the area, and will
simply check the changes in before 2.0b1 is released.
Yes, ValueError, not SyntaxError. "Problems with literal interpretations
traditionally raise 'runtime' exceptions rather than syntax errors."
This document has been placed in the public domain.
> dict[key] = 1
> if key in dict: ...
> for key in dict: ...
> No chance of a time-machine escape, but I *can* say that I agree that
> Ping's proposal makes a lot of sense. This is a reversal of my
> previous opinion on this matter. (Take note -- those don't happen
> very often! :-)
> First to submit a working patch gets a free copy of 2.1a2 and
> subsequent releases,
Thomas since submitted a patch to do the "if key in dict" part (which I
reviewed and accepted, pending resolution of doc issues).
It does not do the "for key in dict" part. It's not entirely clear whether
you intended to approve that part too (I've simplified away many layers of
quoting in the above <wink>). In any case, nobody is working on that part.
WRT that part, Ping produced some stats in:
> How often do you write 'dict.has_key(x)'? (std lib says: 206)
> How often do you write 'for x in dict.keys()'? (std lib says: 49)
> How often do you write 'x in dict.values()'? (std lib says: 0)
> How often do you write 'for x in dict.values()'? (std lib says: 3)
However, he did not report on occurrences of
for k, v in dict.items()
I'm not clear exactly which files he examined in the above, or how the
counts were obtained. So I don't know how this compares: I counted 188
instances of the string ".items(" in 122 .py files, under the dist/ portion
of current CVS. A number of those were assignment and return stmts, others
were dict.items() in an arglist, and at least one was in a comment. After
weeding those out, I was left with 153 legit "for" loops iterating over
x.items(). In all:
153 iterating over x.items()
118 " over x.keys()
17 " over x.values()
So I conclude that iterating over x.items() is significantly more common
than iterating over x.keys().
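For reference, the semantics eventually adopted for these proposals (and present in modern Python) look like this:

```python
d = {"a": 1, "b": 2}
assert "a" in d                       # "key in dict": membership tests keys
assert sorted(d) == ["a", "b"]        # "for x in dict" iterates over keys
assert sorted(d.items()) == [("a", 1), ("b", 2)]
for k, v in d.items():                # the loop style counted above
    assert d[k] == v
```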
On c.l.py about an hour ago, Thomas complained that two (out of two) of his
coworkers guessed wrong about what
for x in dict:
would do, but didn't say what they *did* think it would do. Since Thomas
doesn't work with idiots, I'm guessing they *didn't* guess it would iterate
over either values or the lines of a freshly-opened file named "dict".
So if you did intend to approve "for x in dict" iterating over dict.keys(),
maybe you want to call me out on that "approval post" I forged under your name.
Way to go, Christian!
--Guido van Rossum (home page: http://www.python.org/~guido/)
------- Forwarded Message
Date: Wed, 31 Jan 2001 22:46:06 +0900
From: "Changjune Kim" <junaftnoon(a)yahoo.com>
Subject: The 2nd Korea Python Users Seminar
Dear Mr. Guido van Rossum,
First of all, I can't thank you enough for your great contribution in creating
Python. It is not a mere computer programming language but a whole
culture, I think.
I am proud to tell you that we are having the 2nd Korea Python Users Seminar
which is wide open to the public. There are already more than 400 people who
registered ahead, and we expect a few more at the site. The seminar will be
held in Seoul, South Korea on Feb 2.
With the effort of Korea Python Users Group, there has been quite a boom or
phenomenon for Python among developers in Korea. Several magazines are
_competitively_ carrying regular articles about Python -- I'm one of the
authors -- and there was even an article in a _normal_ newspaper, one of the
four major newspapers in Korea, which described the sprouting of Python in
Korea and pointed out how extremely easy it is to learn. (Moreover, it's the
year of the snake in the 12 zodiac animals.)
The seminar is mainly about:
Python 2.0, intro for newbies, Python coding style, ZOPE, internationalization
of Zope for Korean, GUIs such as wxPython, PyQt, Internet programming in
Python, Python with UML, Python C/API, XML with Python, and Stackless Python.
Christian Tismer is coming for SPC presentation with me, and Hostway CEO Lucas
Roh will give a talk about how they are using Python, and one of the Python
evangelists, Brian Lee, CTO of Linuxkorea will give a brief intro to Python
and Python C/API.
I'm so excited and happy to tell you this great news. If there is any message
you want to give to Korea Python Users Group and the audience, it'd be
great -- I could translate it and post it at the site for all the audience.
Thank you again for your wonderful snake.
June from Korea.
------- End of Forwarded Message
I have been researching the question of how to ask a file descriptor how much
data it has waiting for the next sequential read, with a view to discovering
what cross-platform behavior we could count on for a hypothetical `waiting'
method in Python's built-in file class.
1: Why bother?
I have these main applications in mind:
1. Detecting EOF on a static plain file.
2. Non-blocking poll of a socket opened in non-blocking mode.
3. Non-blocking poll of a FIFO opened in non-blocking mode.
4. Non-blocking poll of a terminal device opened in non-blocking mode.
These are all frequently requested capabilities on C newsgroups -- how
often have *you* seen the "how do I detect an individual keypress"
question from beginning programmers? I believe having these
capabilities would substantially enhance Python's appeal.
2: What would be under the hood?
Summary: We can do this portably, and we can do it with only one (1)
new #ifdef. Our tools for this purpose will be the fstat(2) st_size
field and the FIONREAD ioctl(2) call. They are complementary.
In all supposedly POSIX-conformant environments I know of, the st_size
field has a documented meaning for plain files (S_IFREG) and may or
may not give a meaningful number for FIFOs, sockets, and tty devices.
The Single Unix Specification is silent on the meaning of st_size for
file types other than regular files (S_IFREG). I have filed a defect
report about this with OpenGroup and am discussing appropriate language
with them.
(The last sentence of the Inferno operating system's language on
stat(2) is interesting: "If the file resides on permanent storage and
is not a directory, the length returned by stat is the number of bytes
in the file. For directories, the length returned is zero. Some
devices report a length that is the number of bytes that may be read
from the device without blocking.")
The FIONREAD ioctl(2) call, on the other hand, returns bytes waiting
on character devices such as FIFOs, sockets, or ttys -- but does not
return a useful value for files or directories or block devices. The
FIONREAD ioctl was supported in both SVr4 and 4.2BSD. It's present in
all the open-source Unixes, SunOS, Solaris, and AIX. Via Google
search I have discovered that it's also supported in the Windows
Sockets API and the GUSI POSIX libraries for the Macintosh. Thus, it
can be considered portable for Python's purposes even though it's
rather sparsely documented.
I was able to obtain confirming information on Linux from Linus
Torvalds himself. My information on Windows and the Mac is from
Gavriel State, formerly a lead developer on Corel's WINE team and a
programmer with extensive cross-platform experience. Gavriel reported
on the MSCRT POSIX environment, on the Metrowerks Standard Library
POSIX implementation for the Mac, and on the GUSI POSIX implementation
for the Mac.
2.1: Plain files
Torvalds and State confirm that for plain files (S_IFREG) the st_size
field is reliable on all three platforms. On the Mac it gives the
file's data fork size.
One apparent difficulty with the plain-file case is that POSIX does
not guarantee anything about seek_t quantities such as lseek(2)
returns and the st_size field except that they can be compared for
equality. Thus, under the strict letter of POSIX law, `waiting' can
be used to detect EOF but not to get a reliable read-size return in
any other file position.
Fortunately, this is less an issue than it appears. The weakness of
the POSIX language was a 1980s-era concession to a generation of
mainframe operating systems with record-oriented file structures --
all of which are now either thoroughly obsolete or (in the case of IBM
VM/CMS) have become Linux emulators :-). On modern operating systems
under which files have character granularity, stat(2) emulations can
be and are written to give the right result.
2.2: Directories and block devices
The directory case (S_IFDIR) is a complete loss. Under Unixes,
including Linux, the fstat(2) size field gives the allocated size of
the directory as if it were a plain file. Under MSCRT POSIX the
meaning is undocumented and unclear. Metrowerks returns garbage.
GUSI POSIX returns the number of files in the directory! FIONREAD
cannot be used on directories.
Block devices (S_IFBLK) are a mess again. Linus points out that a
system with removable or unmountable volumes *cannot* return a useful
st_size field -- what happens when the device is dismounted?
2.3: Character devices
Pipes and FIFOs (S_IFIFO) look better. On MSCRT the fstat(2) size
field returns the number of bytes waiting to be read. This is also
true under current Linuxes, though Torvalds says it is "an
implementation detail" and recommends polling with the FIONREAD ioctl
instead. Fortunately, FIONREAD is available under Unix, Windows, and the Mac.
Sockets (S_IFSOCK) look better too. Under Linux, the fstat(2) size
field gives number of bytes waiting. Torvalds again says this is "an
implementation detail" and recommends polling with the FIONREAD ioctl.
Neither MSCRT POSIX nor Metrowerks has direct support for sockets.
GUSI POSIX returns 1 (!) in the st_size field. But FIONREAD is
available under Unix, Windows, and the GUSI POSIX libraries on the Mac.
Character devices (S_IFCHR) can be polled with FIONREAD. This technique
has a long history of use with tty devices under Unix. I don't know whether
it will work with the equivalents of terminal devices for Windows and the Mac.
Fortunately this is not a very important question, as those are GUI
environments in which terminal devices are rarely if ever used.
3: How does this turn into Python?
The upshot of our portability analysis is that by using FIONREAD and
fstat(2), we can get useful results for plain files, pipes, and
sockets on all three platforms. Directories and block devices are a
complete loss. Character devices (in particular, ttys) we can poll
reliably under Unix. What we'll get polling the equivalents of tty or
character devices under Windows and the Mac is presently unknown, but
testing should tell us.
My proposed semantics for a Python `waiting' method is that it reports
the amount of data that would be returned by a read() call at the time
of the waiting-method invocation. The interpreter throws OSError if
such a report is impossible or forbidden.
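A present-day Python sketch of those semantics (Unix only; the method name and the OSError choice follow the proposal, everything else here is illustrative, not the enclosed patch):

```python
import fcntl
import os
import stat
import struct
import termios


def waiting(fd):
    """Bytes a read() on fd could return right now, per the proposal:
    fstat() st_size for plain files, the FIONREAD ioctl for character-type
    descriptors; OSError where no sensible answer exists."""
    st = os.fstat(fd)
    if stat.S_ISREG(st.st_mode):
        # Plain file: size minus the current seek position.
        pos = os.lseek(fd, 0, os.SEEK_CUR)
        return max(st.st_size - pos, 0)
    if stat.S_ISDIR(st.st_mode) or stat.S_ISBLK(st.st_mode):
        # Directories and block devices are a complete loss (see above).
        raise OSError("waiting() is meaningless for this file type")
    # FIFOs, sockets, ttys: ask the driver how much is buffered.
    raw = fcntl.ioctl(fd, termios.FIONREAD, struct.pack("i", 0))
    return struct.unpack("i", raw)[0]
```

For example, after writing five bytes into a pipe, `waiting()` on the read end reports 5 without consuming anything.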
I have enclosed a patch against the current CVS sources, including
documentation. This patch is tested and working against plain files,
sockets, and FIFOs under Linux. I have also attached the
Python test program I used under Linux.
I would appreciate it if those of you on Windows and Macintosh
machines would test the waiting method. The test program will take
some porting, because it needs to write to a FIFO in background.
Under Linux I do it this way:
(echo -n '%s' >testfifo; echo 'Data written to FIFO.') &
I don't know how to do the equivalent under Windows or Mac.
When you run this program, it will try to mail me your test results.
<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>
Sometimes it is said that man cannot be trusted with the government
of himself. Can he, then, be trusted with the government of others?
-- Thomas Jefferson, in his 1801 inaugural address
"M.-A. Lemburg" <mal(a)lemburg.com> writes:
> > My conclusion? Python 2.1 is slower than Python 2.0, but not by
> > enough to care about.
> What compiler did you use and on which platform ?
Argh, sorry; I meant to put this in!
$ uname -a
Linux atrus.jesus.cam.ac.uk 2.2.14-1.1.0 #1 Thu Jan 6 05:12:58 EST 2000 i686 unknown
$ gcc --version
It's a Dell Dimension XPS D233 (a 233MHz PII) with a reasonably fast
hard drive (two year old 10G IBM 7200rpm thingy) and quite a lot of RAM.
> AFAIR, Vladimir's malloc implementation favours small objects.
> All number objects (except longs) fall into this category.
Well, longs & complex numbers don't do any free list handling (like
floats and ints do), so I see two conclusions:

1) Don't add obmalloc to the core, but do simple free list stuff for
   longs (might be tricky) and complex numbers (this should be
   straightforward).

2) Integrate obmalloc - then maybe we can ditch all of that icky
   type-specific free list code.
> Perhaps we should think about adding his lib to the core ?!
Strikes me as the better solution. Can anyone try this on Windows?
Seeing as windows malloc reputedly sucks, maybe the differences would
be even more pronounced.
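As an aside, the free-list trick that ints and floats use can be sketched in pure Python (a toy illustration of the idea only; CPython's real version is C, and all names here are made up):

```python
# Freed nodes are kept on a singly linked list so a later allocation
# can usually recycle one instead of calling the allocator again.
class Node:
    __slots__ = ("value", "next")

_free = None  # head of the free list

def alloc(value):
    global _free
    if _free is not None:
        node, _free = _free, _free.next   # pop a recycled node
    else:
        node = Node()                     # free list empty: really allocate
    node.value = value
    node.next = None
    return node

def free(node):
    global _free
    node.next = _free                     # push onto the free list
    _free = node
```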
Our lecture theatre has just crashed. It will currently only
silently display an unexplained line-drawing of a large dog
accompanied by spookily flickering lights.
-- Dan Sheppard, ucam.chat (from Owen Dunn's summary of the year)
With the current CVS version, running python setup.py as part of the
build process fails with a syntax error:
Traceback (most recent call last):
File "../setup.py", line 12, in ?
from distutils.core import Extension, setup
File "/usr/people/sjoerd/src/python/Lib/distutils/core.py", line 20, in ?
from distutils.cmd import Command
File "/usr/people/sjoerd/src/python/Lib/distutils/cmd.py", line 15, in ?
from distutils import util, dir_util, file_util, archive_util, dep_util
SyntaxError: 'from ... import *' may only occur in a module scope
The fix is to change the from ... import * that the compiler complains about:
RCS file: /cvsroot/python/python/dist/src/Lib/distutils/file_util.py,v
retrieving revision 1.7
diff -u -c -r1.7 file_util.py
*** file_util.py 2000/09/30 17:29:35 1.7
--- file_util.py 2001/01/31 20:01:56
*** 106,112 ****
# changing it (ie. it's not already a hard/soft link to src OR
# (not update) and (src newer than dst).
! from stat import *
from distutils.dep_util import newer
if not os.path.isfile(src):
--- 106,112 ----
# changing it (ie. it's not already a hard/soft link to src OR
# (not update) and (src newer than dst).
! from stat import ST_ATIME, ST_MTIME, ST_MODE, S_IMODE
from distutils.dep_util import newer
if not os.path.isfile(src):
I didn't check this in because distutils is Greg Ward's baby.
-- Sjoerd Mullender <sjoerd.mullender(a)oratrix.com>
In the interest of generating some numbers (and filling up my hard
drive), last night I wrote a script to build lots & lots of versions
of python (many of which turned out to be redundant - eg. -O6 didn't
seem to do anything different to -O3 and pybench doesn't work with
1.5.2), and then run pybench with them. Summarised results below;
first a key:
src-n: this morning's CVS (with Jeremy's f_localsplus optimisation)
(only built this with -O3)
src: CVS from yesterday afternoon
src-obmalloc: CVS from yesterday afternoon with Vladimir's obmalloc
patch applied. More on this later...
Python-2.0: you can guess what this is.
All runs are compared against Python-2.0-O2:
Benchmark: src-n-O3 (rounds=10, warp=20)
Average round time: 49029.00 ms -0.86%
Benchmark: src (rounds=10, warp=20)
Average round time: 67141.00 ms +35.76%
Benchmark: src-O (rounds=10, warp=20)
Average round time: 50167.00 ms +1.44%
Benchmark: src-O2 (rounds=10, warp=20)
Average round time: 49641.00 ms +0.37%
Benchmark: src-O3 (rounds=10, warp=20)
Average round time: 49104.00 ms -0.71%
Benchmark: src-O6 (rounds=10, warp=20)
Average round time: 49131.00 ms -0.66%
Benchmark: src-obmalloc (rounds=10, warp=20)
Average round time: 63276.00 ms +27.94%
Benchmark: src-obmalloc-O (rounds=10, warp=20)
Average round time: 46927.00 ms -5.11%
Benchmark: src-obmalloc-O2 (rounds=10, warp=20)
Average round time: 46146.00 ms -6.69%
Benchmark: src-obmalloc-O3 (rounds=10, warp=20)
Average round time: 46456.00 ms -6.07%
Benchmark: src-obmalloc-O6 (rounds=10, warp=20)
Average round time: 46450.00 ms -6.08%
Benchmark: Python-2.0 (rounds=10, warp=20)
Average round time: 68933.00 ms +39.38%
Benchmark: Python-2.0-O (rounds=10, warp=20)
Average round time: 49542.00 ms +0.17%
Benchmark: Python-2.0-O3 (rounds=10, warp=20)
Average round time: 48262.00 ms -2.41%
Benchmark: Python-2.0-O6 (rounds=10, warp=20)
Average round time: 48273.00 ms -2.39%
My conclusion? Python 2.1 is slower than Python 2.0, but not by
enough to care about.
Interestingly, adding obmalloc speeds things up. Let's take a closer look:
$ python pybench.py -c src-obmalloc-O3 -s src-O3
Benchmark: src-O3 (rounds=10, warp=20)
Tests: per run per oper. diff *
BuiltinFunctionCalls: 843.35 ms 6.61 us +2.93%
BuiltinMethodLookup: 878.70 ms 1.67 us +0.56%
ConcatStrings: 1068.80 ms 7.13 us -1.22%
ConcatUnicode: 1373.70 ms 9.16 us -1.24%
CreateInstances: 1433.55 ms 34.13 us +9.06%
CreateStringsWithConcat: 1031.75 ms 5.16 us +10.95%
CreateUnicodeWithConcat: 1277.85 ms 6.39 us +3.14%
DictCreation: 1275.80 ms 8.51 us +44.22%
ForLoops: 1415.90 ms 141.59 us -0.64%
IfThenElse: 1152.70 ms 1.71 us -0.15%
ListSlicing: 397.40 ms 113.54 us -0.53%
NestedForLoops: 789.75 ms 2.26 us -0.37%
NormalClassAttribute: 935.15 ms 1.56 us -0.41%
NormalInstanceAttribute: 961.15 ms 1.60 us -0.60%
PythonFunctionCalls: 1079.65 ms 6.54 us -1.00%
PythonMethodCalls: 908.05 ms 12.11 us -0.88%
Recursion: 838.50 ms 67.08 us -0.00%
SecondImport: 741.20 ms 29.65 us +25.57%
SecondPackageImport: 744.25 ms 29.77 us +18.66%
SecondSubmoduleImport: 947.05 ms 37.88 us +25.60%
SimpleComplexArithmetic: 1129.40 ms 5.13 us +114.92%
SimpleDictManipulation: 1048.55 ms 3.50 us -0.00%
SimpleFloatArithmetic: 746.05 ms 1.36 us -2.75%
SimpleIntFloatArithmetic: 823.35 ms 1.25 us -0.37%
SimpleIntegerArithmetic: 823.40 ms 1.25 us -0.37%
SimpleListManipulation: 1004.70 ms 3.72 us +0.01%
SimpleLongArithmetic: 865.30 ms 5.24 us +100.65%
SmallLists: 1657.65 ms 6.50 us +6.63%
SmallTuples: 1143.95 ms 4.77 us +2.90%
SpecialClassAttribute: 949.00 ms 1.58 us -0.22%
SpecialInstanceAttribute: 1353.05 ms 2.26 us -0.73%
StringMappings: 1161.00 ms 9.21 us +7.30%
StringPredicates: 1069.65 ms 3.82 us -5.30%
StringSlicing: 846.30 ms 4.84 us +8.61%
TryExcept: 1590.40 ms 1.06 us -0.49%
TryRaiseExcept: 1104.65 ms 73.64 us +24.46%
TupleSlicing: 681.10 ms 6.49 us -3.13%
UnicodeMappings: 1021.70 ms 56.76 us +0.79%
UnicodePredicates: 1308.45 ms 5.82 us -4.79%
UnicodeProperties: 1148.45 ms 5.74 us +13.67%
UnicodeSlicing: 984.15 ms 5.62 us -0.51%
Average round time: 49104.00 ms +5.70%
*) measured against: src-obmalloc-O3 (rounds=10, warp=20)
Words fail me slightly, but maybe some tuning of the memory allocation
of longs & complex numbers would be in order?
Time for lectures - I don't think algebraic geometry is going to make
my head hurt as much as trying to explain benchmarks...
ARTHUR: But which is probably incapable of drinking the coffee.
-- The Hitch-Hikers Guide to the Galaxy, Episode 6
On Tue, 30 Jan 2001, Guido van Rossum wrote:
> Can you say "PEP time"? :-)
Okay, i have written a draft PEP that tries to combine the
"elt in dict", custom iterator, and "for k:v" issues into a
coherent proposal. Have a look:
Could i get a number for this please?
"The only `intuitive' interface is the nipple. After that, it's all learned."
-- Bruce Ediger, on user interfaces
On Mon, Jan 29, 2001 at 05:27:30PM -0800, Jeremy Hylton wrote:
> add note about two kinds of illegal imports that are now checked
> + - The compiler will report a SyntaxError if "from ... import *" occurs
> + in a function or class scope or if a name bound by the import
> + statement is declared global in the same scope. The language
> + reference has also documented that these cases are illegal, but
> + they were not enforced.
Woah. Is this really a good idea ? I have seen 'from ... import *' in a
function scope put to good (relatively -- we're talking 'import *' here)
use. I also thought of 'import' as yet another assignment statement, so to
me it's both logical and consistent if 'import' would listen to 'global'.
Otherwise we have to re-invent 'import spam; eggs = spam' if we want eggs to
be a global.
Is there really a reason to enforce this, or are we enforcing the wording of
the language reference for the sake of enforcing the wording of the language
reference ? When writing 'import as' for 2.0, I fixed some of the
inconsistencies in import, making it adhere to 'global' statements in as
many cases as possible (all except 'from ... import *') but I was apparently
not aware of the wording of the language reference. I'd suggest updating the
wording in the language reference, not the implementation, unless there is a
good reason to disallow this.
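For what it's worth, the check Jeremy added is easy to demonstrate, and modern Python still enforces it the same way:

```python
# "from ... import *" inside a function is a compile-time SyntaxError;
# at module scope it is still allowed.
src = "def f():\n    from stat import *\n"
try:
    compile(src, "<demo>", "exec")
    raised = False
except SyntaxError:
    raised = True
assert raised                                      # function scope: rejected
compile("from stat import *\n", "<demo>", "exec")  # module scope: fine
```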
I also have another issue with your recent patches, Jeremy, also in the
backwards-compatibility departement :) You gave new.code two new,
non-optional arguments, in the middle of the long argument list. I sent a
note about it to python-checkins instead of python-dev by accident, but Fred
seemed to agree with me there.
Thomas Wouters <thomas(a)xs4all.net>
Hi! I'm a .signature virus! copy me into your .signature file to help me spread!