> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live in the bleeding edge and take some pain that will
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I expected, explicit syntax did not catch on and would require
a lot of discussion.]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> > <frag>
> > y = 3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> > print f()
> > </frag>
> > # prints 3.
> > Is that confusing for users? Maybe they will more naturally expect 2
> > as the outcome (given nested scopes).
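For readers on current interpreters: the behaviour in the quoted fragment carries over to modern Python 3, where the compiler likewise cannot see the binding made inside exec (a sketch for illustration, not part of the original proposal):

```python
y = 3

def f():
    # The compiler never sees an assignment to y inside f, so the exec'd
    # binding lands in a throwaway locals mapping; g() therefore resolves
    # y at module level.
    exec("y = 2")
    def g():
        return y
    return g()

print(f())  # -> 3
```

The nested function closes over nothing, because f has no compiler-visible local named y.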
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
Yes, this can be easy to implement, but more confusing situations can
arise: what should this print? The situation does not lead to a
canonical solution the way class def scopes do.
from foo import *
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
> appears worse than the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I don't say anything about pulling nested scopes (I don't think my opinion
can change things in this respect),
but I should insist that without explicit syntax, IMO, raising the bar
has too high an implementation cost (both performance and complexity) or creates
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issue warnings today and implement
nested scopes issuing errors tomorrow. But this is simply a statement
about principles and the impression we give.
IMO 'import *' in an inner scope should end up being an error;
I'm not sure about 'exec'.
We will need a final BDFL statement.
regards, Samuele Pedroni.
I'd like to do a 2.3b1 release someday. Maybe at the end of next
week, that would be Friday April 25. If anyone has something that
needs to be done before this release goes out, please let me know!
Assigning a SF bug or patch to me and setting the priority to 7 is a
good way to get my attention.
--Guido van Rossum (home page: http://www.python.org/~guido/)
I've written some doctest extensions to:
- Generate a unittest (PyUnit) test suite from a module with doctest
tests. Each doc string containing one or more doctest tests becomes
a test case.
If a test fails, an error message is included in the unittest
output that has the module file name and the approximate line number
of the docstring containing the failed test formatted in a way
understood by emacs error parsing. This is important. ;)
- Debug doctest tests. Normally, doctest tests can't be debugged
with pdb because, while they are running, doctest has taken over
standard output. This tool extracts the tests in a doc string
into a separate script and runs pdb on it.
- Extract a doctest doc string into a script file.
I think that these would be good additions to doctest and propose
to add them.
The current source can be found here:
I ended up using a slightly different (and simpler) strategy for
finding docstrings than doctest uses. This might be an issue.
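For comparison, roughly this suite-generation idea is what the standard library's doctest.DocTestSuite now provides; a minimal sketch (the demo module here is synthetic, built in memory purely for illustration):

```python
import doctest
import types
import unittest

# Build a throwaway module whose docstring contains a doctest.
demo = types.ModuleType("demo")
demo.__doc__ = """
>>> 2 + 2
4
"""

# Each docstring with doctest tests becomes a unittest test case.
suite = doctest.DocTestSuite(demo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())
```

A real module object (rather than a synthetic one) is the usual argument, in which case the suite covers every docstring in it.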
Jim Fulton mailto:email@example.com Python Powered!
CTO (703) 361-1714 http://www.python.org
Zope Corporation http://www.zope.com http://www.zope.org
Currently, oss_audio_device objects have a setparameters() method with a
rather silly interface:
oss.setparameters(sample_rate, sample_size, num_channels, format [, emulate])
This is silly because 1) 'sample_size' is implicit in 'format', and 2)
the implementation doesn't actually *use* sample_size for anything -- it
just checks that you have passed in the correct sample size, i.e. if you
specify an 8-bit format, you must pass sample_size=8. (This is code
inherited from linuxaudiodev that I never got around to cleaning up.)
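The redundancy is easy to see in code; a toy sketch (the AFMT_* names mirror OSS conventions, but the mapping and checker are illustrative, not the module's actual code):

```python
# Illustrative only: each OSS sample format already implies its sample
# width, so a separate sample_size argument can only agree or contradict.
FORMAT_WIDTH = {
    "AFMT_U8": 8,
    "AFMT_S8": 8,
    "AFMT_S16_LE": 16,
    "AFMT_S16_BE": 16,
}

def check_sample_size(fmt, sample_size):
    # All the old code ever did with sample_size: verify it, then ignore it.
    if FORMAT_WIDTH[fmt] != sample_size:
        raise ValueError("sample_size %d does not match format %s"
                         % (sample_size, fmt))
```

Dropping the argument loses nothing, since the width is recoverable from the format alone.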
In addition to being silly, this is not the documented interface. The
docs don't mention the 'sample_size' argument at all. Presumably the
doc writer realized the silliness and was going to pester me to remove
'sample_size', but never got around to it. (Lot of that going around.)
So, even though we're in a beta cycle, am I allowed to change the code
so it's 1) sensible and 2) consistent with the documentation?
Greg Ward <gward(a)python.net> http://www.gerg.ca/
Sure, I'm paranoid... but am I paranoid ENOUGH?
Hello. We have analyzed this software to determine its vulnerability
to a new class of DoS attacks related to a recent paper, ''Denial
of Service via Algorithmic Complexity Attacks.''
This paper discusses a new class of denial of service attacks that
work by exploiting the difference between average case performance and
worst-case performance. In an adversarial environment, the data
structures used by an application may be forced to experience their
worst case performance. For instance, hash table operations are usually
thought of as constant time, but with large numbers of collisions they
degrade to a linked-list traversal and may suffer a 100-10,000x
performance degradation. Because of the widespread use of hash tables,
the potential for attack is extremely widespread. Fortunately, in many
cases, other limits on the system bound the impact of these attacks.
To be attackable, an application must have a deterministic or
predictable hash function and accept untrusted input. In general, for
the attack to be significant, the application must be willing and
able to accept hundreds to tens of thousands of 'attack
inputs'. Because of that requirement, it is difficult to judge the
impact of these attacks without knowing the source code extremely well,
and knowing all the ways in which a program is used.
As part of this project, I have examined python 2.3b1, and the hash
function 'string_hash' is deterministic. Thus any script that may hash
untrusted input may be vulnerable to our attack. Furthermore, the
structure of the hash functions allows our fast collision generation
algorithm to work. This means that any script written in python that
hashes a large number of keys from an untrusted source is potentially
subject to a severe performance degradation.
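The degradation is easy to demonstrate without touching string_hash at all; a sketch using a deliberately constant hash function (illustrative only, not the collision-generation attack from the paper):

```python
class Colliding:
    """Key type whose instances all land in one hash bucket."""
    eq_calls = 0

    def __init__(self, v):
        self.v = v

    def __hash__(self):
        return 0  # deterministic collision: the worst case being exploited

    def __eq__(self, other):
        Colliding.eq_calls += 1  # count probe comparisons
        return self.v == other.v

def fill(n):
    # Inserting n all-colliding keys costs ~n^2/2 comparisons rather
    # than ~n, which is exactly the average/worst-case gap described.
    d = {}
    for i in range(n):
        d[Colliding(i)] = i
    return d

table = fill(300)
print(len(table), Colliding.eq_calls)
```

With 300 keys the dict performs tens of thousands of equality comparisons instead of a few hundred.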
Depending on the application or script, this could be a critical DoS.
The solution for these attacks on hash tables is to make the hash
function unpredictable via a technique known as universal
hashing. Universal hashing is a keyed hash function where, based on
the key, one of a large set of hash functions is chosen. When
benchmarking, we observe that for short or medium length inputs, it is
comparable in performance to simple predictable hash functions such as
the ones in Python or Perl. Our paper has graphs and charts of our
benchmark results.
I highly advise using a universal hashing library, either our own or
someone else's. As history has shown, it is very easy to make silly
mistakes when attempting to implement your own 'secure' algorithm.
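As an illustration of the keyed-hash idea (a toy sketch only, not a proper universal family and not the library the authors provide):

```python
import random

# The multiplier is drawn once at startup, so an attacker cannot
# precompute colliding inputs offline. Toy parameters for illustration.
_MULT = random.getrandbits(64) | 1   # random odd 64-bit key
_MOD = 2**61 - 1                     # a Mersenne prime modulus

def keyed_hash(s):
    h = 0
    for byte in s.encode("utf-8"):
        h = (h * _MULT + byte) % _MOD
    return h
```

(CPython ultimately adopted per-process hash randomization for strings in response to this class of attack, on by default since Python 3.3.)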
The abstract, paper, and a library implementing universal hashing are
available at http://www.cs.rice.edu/~scrosby/hash/.
I've been lurking for a bit, and now seems like a good time to
introduce myself:
* I build messaging systems for banks, earlier I was CTO of a dot-com.
* I started programming on the TRS-80 and the RCA COSMAC VIP, later
on the Apple ][.
* I am a Java refugee (well, I might still code in Java for pay).
* I'm into formal methods. Translation: I like *talking* about
formal methods, but I never use them myself :-)
I read somewhere that the best way to build big Python callouses was
to write a PEP. Here goes:
Programming by Contract for Python... pre-conditions, post-conditions,
invariants, with all the Eiffel goodness like weakening pre-conditions
and strengthening invariants and post-conditions on inheritance, and
access to old values. All from docstrings, like doctest.
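To make the proposal concrete, a minimal precondition decorator might look like this (the require name and its API are purely illustrative, not from any existing PEP or library):

```python
import functools

def require(predicate, message="precondition failed"):
    """Attach a checked precondition to a function (illustrative sketch)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                raise AssertionError(message)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@require(lambda x: x >= 0, "x must be non-negative")
def isqrt_floor(x):
    return int(x ** 0.5)

print(isqrt_floor(9))  # -> 3
```

The Eiffel-style refinements (weakened preconditions, strengthened postconditions on inheritance, old-value access) would layer on top of this basic shape.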
I'm also into handling insane numbers of incoming connections on cheap
boxes: compare Jef Poskanzer's thttpd to Apache. 10000 simultaneous
HTTP connections on a $400 computer just gets me giggling. Stackless
Python intrigues me greatly for the same reason.
I guess that's it for now... Cheers!
Boost.Python is now trying hard to accommodate the "Python.h before
system headers rule". Unfortunately, we still need a wrapper around
Python.h, at least for some versions of Python, so that we can
work around some issues like:
// Python's LongObject.h helpfully #defines ULONGLONG_MAX for us
// even when it's not defined by the system which confuses Boost's
To cope with that correctly, we need to see <limits.h> (a system
header) before longobject.h. Currently, we're including <limits.h>,
then <patchlevel.h>, well, and then the wrapper gets a little
complicated adjusting for various compilers.
Anyway, the point is that I'd like to have the rule changed to "You
have to include Python.h or xxxx.h before any system header" where
xxxx.h is one of the other existing headers #included in Python.h that
is responsible for setting up whatever macros cause this
inclusion-order requirement in the first place (preferably not
LongObject.h!) That way I might be able to get those configuration
issues sorted out without violating the #inclusion order rule. What
I have now seems to work, but I'd rather do the right thing (TM).
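Under the proposed rule, a wrapper could shrink to something like the following include-order listing (the file name and comments are illustrative; the real wrapper also carries compiler-specific adjustments):

```cpp
// hypothetical wrap_python.hpp
#include <limits.h>      // system header first, so ULONGLONG_MAX (if any)
                         // comes from the platform, not from longobject.h
#include <patchlevel.h>  // a permitted "xxxx.h": version/config macros
                         // only, without dragging in all of Python.h
// ... compiler-specific workarounds go here ...
#include <Python.h>
```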
I'm happy to announce the release of Python 2.2.3 (final). This is a
bug fix release for the stable Python 2.2 code line. It contains more
than 40 bug fixes and memory leak patches since Python 2.2.2, and all
Python 2.2 users are encouraged to upgrade.
The new release is available here:
For full details, see the release notes at
There are a small number of minor incompatibilities with Python 2.2.2;
for details see:
Perhaps the most important is that the Bastion.py and rexec.py modules
have been disabled, since we do not deem them to be safe.
As usual, a Windows installer and a Unix/Linux source tarball are made
available. The documentation has been updated as well, and is available
both on-line and in many different formats. At the moment, no Mac
version or Linux RPMs are available, although I expect them to appear
shortly.
On behalf of Guido, I'd like to thank everyone who contributed to this
release, and who continue to ensure Python's success.
I received this problem report (Kurt is the IDLEFORK developer). Does
anybody know what could be the matter here? What changed recently???
--Guido van Rossum (home page: http://www.python.org/~guido/)
------- Forwarded Message
Date: Fri, 30 May 2003 15:50:15 -0400
From: kbk(a)shore.net (Kurt B. Kaiser)
To: Guido van Rossum <guido(a)python.org>
I find that
while 1: pass
doesn't respond to a KeyboardInterrupt on Python 2.3b1 on either
WinXP or W2K. Is this generally known? I couldn't find any mention of it.
while 1: a = 0
is fine on 2.3b1, and both work on Python 2.2.
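A POSIX sketch of a reproduction harness (the report is on Windows, where Ctrl-C delivery works differently, so this is an analogue rather than the exact failure):

```python
import signal
import subprocess
import sys
import time

# Spawn the busy loop from the report in a child interpreter, then
# deliver an interrupt; on a healthy build the child exits promptly
# instead of hanging.
child = subprocess.Popen([sys.executable, "-c", "while 1: pass"])
time.sleep(1.0)                   # let the loop start spinning
child.send_signal(signal.SIGINT)
ret = child.wait(timeout=10)      # raises TimeoutExpired if it hangs
print("child exit status:", ret)
```

A nonzero (or signal) exit status within the timeout shows the interrupt was honored.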
------- End of Forwarded Message