Since 2.7 is probably going to exist for a while, I am running Clang 2.7's
static analyzer (``clang --analyze``) over trunk. It's mostly just finding
stuff like unneeded variable initialization or variables that are never used
(compilation is picking up unused return values, almost all from
When I check in these changes I will do it file by file, but my question is
how to handle Misc/NEWS. I have gone through the underscores and the 'a's in
Modules and already have six modified files, so the final count might be a
little high. Do people want individual entries per file, or simply a single
entry that lists each file modified?
We should probably go through the C code and fix the whitespace before we
hit 2.7 final (there are a ton of lines with extraneous spaces).
It looks like the changes to the python-dev mailman archives broke some
of the links in PEPs.
All the best,
-------- Original Message --------
Subject: broken mailing list links in PEP(s?)
Date: Tue, 4 May 2010 20:22:57 -0700
From: Bayle Shanks <bayle.shanks(a)gmail.com>
On http://www.python.org/dev/peps/pep-0225/ , in the section "Credits
and archives", there are a bunch of links like
which are broken
We don't have any buildbot backing this system.
The last version of OSF/1 shipped in 1994; it was picked up by Digital (as
Tru64 Unix). The last version of Tru64 was released in late 2006. Digital is
now owned by HP, which has its own Unix (HP-UX).
Maybe we can safely drop OSF/1 while still supporting Tru64, but we don't
have any buildbots running either of these systems...
Deprecated systems are documented in PEP-11.
Jesus Cea Avion - jcea(a)jcea.es - http://www.jcea.es/
jabber / xmpp:firstname.lastname@example.org
Often I have contents that I want to write to a file at a path I already
know. I recently tried to find a function in the stdlib to do that, and to
my surprise this is what I found:
- Such a function exists
- It's `distutils.file_util.write_file`
IMO the last place people would look for such a function is inside the
`distutils` package. Besides, I reviewed the modules listed under the `File
and directory access` category in the `Library Reference` and found nothing.
- Is it a good idea to provide a similar function in, e.g., the shutil
module?
Probably there's already a better way to do this and my comment is just
irrelevant. Anyway, it's just a suggestion ;o)
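For reference, a minimal sketch of what such a helper looks like. If I
recall correctly, `distutils.file_util.write_file` takes a sequence of
strings and appends a newline to each, along these lines (file names below
are hypothetical):

```python
import os
import tempfile

def write_file(path, lines):
    """Write each string in `lines` to `path`, one per line.

    A sketch modeled on distutils.file_util.write_file, which takes a
    sequence of strings and appends a newline to each.
    """
    with open(path, "w") as f:
        for line in lines:
            f.write(line + "\n")

# Example usage with a throwaway temp directory:
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
write_file(path, ["first line", "second line"])
print(open(path).read())
```

The sequence-of-lines signature is part of why it feels awkward as a
general-purpose "dump this string to a file" utility.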
Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/
Support for AMF (RPC) in Trac -
Does the online dev version of the docs build in response to docs
checkins, or just once a day?
(And is that written down somewhere and I've just forgotten where to look?)
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
I'm trying to get a good friend of mine to start doing bug triage on Python.
As part of trying to mentor him on it, I've found that many of the common
things I do in triage, like setting a priority for priorityless bugs or
assigning them to the people who are obviously the next step, require
enhanced privileges on the tracker.
He has no reputation in the Python community, so I'd be up for getting him
started on things that require fewer privileges like verifying older patches
still apply against newer Pythons, or maybe summarizing priority/assignment
changes to the list and having someone (possibly me) make the changes, etc...
However, I will step up for him and say that I've known him a decade, and he's
very trustworthy. He has been the president (we call that position Maximum
Leader) of our Linux Users Group here for 5 years or so.
Sean Reifschneider, Member of Technical Staff <jafo(a)tummy.com>
tummy.com, ltd. - Linux Consulting since 1995: Ask me about High Availability
I was looking for a reference for the addition of multiple context
manager support to with statements in 3.1 and 2.7 and came up empty
(aside from the initial python-ideas thread that I linked to from
I was hoping to find something that clearly spelled out:
- the two major flaws in contextlib.nested*
- the parallel with import statements for the precise chosen syntax
- Guido's blessing to go ahead and do it
*Those flaws being:
- that __init__/__new__ methods of inner context managers aren't covered
by outer context managers
- that outer context managers can't suppress exceptions from inner ones
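A minimal sketch of the first flaw, using hypothetical file names: with
`contextlib.nested(open(a), open(b))`, both `open()` calls run before
`nested()` enters either manager, so a failure constructing the second one
leaks the first file. The multi-manager `with` syntax instead evaluates each
expression only once the previous manager is already active:

```python
import os
import tempfile

# Hypothetical files for illustration:
d = tempfile.mkdtemp()
src_path = os.path.join(d, "src.txt")
dst_path = os.path.join(d, "dst.txt")
with open(src_path, "w") as f:
    f.write("hello")

# With contextlib.nested(open(src_path), open(dst_path, "w")), both open()
# calls would execute before nested() entered either manager, so an error
# in the second would leak the first file handle. Here the second open()
# only runs once the first manager is active, and the first file is closed
# even if the second open() raises:
with open(src_path) as src, open(dst_path, "w") as dst:
    dst.write(src.read())

print(open(dst_path).read())  # → hello
```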
Note that I'm not complaining about the decision itself (that would be
silly, since I agree with the outcome), I'm just trying to find
something to point to about it that is a little more concrete than a
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
At 10:59 AM 3/7/2010 -0800, Jeffrey Yasskin wrote:
>So is it that you just don't like the idea of blocking, and want to
>stop anything that relies on it from getting into the standard library?
Um, no. As I said before, call it a "parallel task queue" or
"parallel task manager" or something to that general effect and I'm on board.
It may not be in the Zen of Python, but ISTM that names should
generally follow use cases. It is something of a corollary to "one
obvious way to do it", in that if you see something whose name
matches what you want to do, then it should be obvious that that's
the way in question. ;-)
The use cases for "parallel task queues", however, are a subset of
those for "futures" in the general case. Since the proposed module
addresses most of the former but very little of the latter, calling
it futures is inappropriate. It is:
1. Confusing to people who don't know what futures are (see, e.g., R.D.
Murray's post), and
2. Underpowered for people who expect/want a more fully-featured
futures system along the lines of E or Deferreds.
It seems that the only people for whom it's an intuitively correct
description are people who've only had experience with more limited
futures models (like Java's). However, these people should not have
a problem understanding the notion of parallel task queueing or task
management, so changing the name isn't really a loss for them, and
it's a gain for everybody else.
> Given the set_result and set_exception methods, it's pretty
> straightforward to fill in the value of a future from something
> that isn't purely computational.
Those are described as "internal" methods in the PEP; by contrast,
the Deferred equivalents are part of the public API.
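For illustration, a sketch of filling in a future from something that isn't
purely computational, via the `set_result`/`set_exception` methods under
discussion (shown here on `concurrent.futures.Future` as the module
eventually shipped; the PEP draft being debated described them as internal):

```python
from concurrent.futures import Future

# Fill in a future's value "from the outside", e.g. when a network reply
# arrives, rather than from a computation run by an executor:
f = Future()
f.set_result("network reply")
print(f.result())  # → network reply

# set_exception is the counterpart for failures:
g = Future()
g.set_exception(ValueError("connection lost"))
print(g.exception())  # → connection lost
```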
> Given a way to register "on-done" callbacks with the future, it
> would be straightforward to wait for a future without blocking, too.
Yes, and with a few more additions besides that one, you might be on
the way to an actual competitor for Deferreds. For example: retry
support, chaining, logging, API for transparent result processing,
coroutine support, co-ordination tools like locks, semaphores and queues, etc.
These are all things you would very likely want or need if you
actually wanted to write a program using futures as *your main
computational model*, vs. just needing to toss out some parallel
tasks in a primarily synchronous program.
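The "on-done callbacks" mentioned in the quoted text did end up in the
module as `add_done_callback`; a minimal sketch of reacting to a result
without blocking on it:

```python
from concurrent.futures import ThreadPoolExecutor

results = []

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(lambda: 6 * 7)
    # The callback fires when the future completes, instead of the
    # submitting code blocking on fut.result():
    fut.add_done_callback(lambda f: results.append(f.result()))

# Exiting the with-block shuts the pool down and waits for pending work,
# so the callback has run by this point.
print(results)  # → [42]
```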
Of course, Deferreds are indeed overkill if all you're ever going to
want is a few parallel tasks, unless you're already skilled in using
Twisted or some wrapper for it.
So, I totally support having a simple task queue in the stdlib, as
there are definitely times I would've used such a thing for a quick
script, if it were available.
However, I've *also* had use cases for using futures as a
computational model, and so that's what I originally thought this PEP
was about. After the use cases were clarified, though, it seems to
me that *calling* it futures is a bad idea, because it's really just
a nice task queuing system.
I'm +1 on adding a nice task queuing system, -1 on calling it by any
other name. ;-)