I know the PEP is accepted, but I would still like to see some
questions answered:
1. What is the effect of this PEP on Windows? Is this a Linux-only
feature? If not, who is going to provide the changes for Windows?
(More specifically: if this is indeed meant for Windows, and
if no Windows implementation arrives before 3.2b1, I'd ask that
the changes be rolled back, and integration be deferred until there
is Windows support.)
2. Why does the PEP recommend installing stuff into /usr/share/pyshared?
According to the Linux FHS, /usr/share is for architecture-independent
data, see
In particular, its objective is that you can NFS-share it across,
say, both SPARC Linux and x86 Linux. I believe the PEP would break
this, as SPARC and x86 executables would overwrite each other.
3. When the PEP recommends that stuff gets installed into pyshared,
why does the patch not implement this recommendation, but instead
continue installing files into lib-dynload?
I've just stumbled across your changes, Krisvale, and from your last
reply I can see that you invalidated them:
> I just realized that this is probably a redundant change.
> We have C apis to get all the Thread states in an interpreter state (I didn't even know there was such a thing as multiple interpreter states, but there!)
> This is the PyInterpreterState_ThreadHead() api et al.
> From C, all that is missing is a SetTrace api that takes a thread state.
> From python, the threading module provides access to all Thread objects, and each of those has a settrace/setprofile method.
> To turn on global tracing from cProfile, all that is needed is to iterate over all the Thread objects.
> Setting this to invalid, since there already are APIs to do this, at least from .py code.
Could you please provide more explanations, or even an example? Because
it seems that you're the only one on earth to finally find a way to
multithread the cProfiler...
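For the record, the approach Kristján describes can be sketched roughly
like this -- a minimal illustration with a made-up callback, not the
actual cProfile machinery. Note that it is threading.setprofile() (for
threads started afterwards) together with sys.setprofile() (for the
current thread) that does the installation:

```python
import sys
import threading

calls = []  # function names seen by the profile callback

def profiler(frame, event, arg):
    # Minimal profile callback: just record each Python-level call.
    if event == "call":
        calls.append(frame.f_code.co_name)

# threading.setprofile() installs the callback in threads started
# after this point; sys.setprofile() covers the current thread.
threading.setprofile(profiler)
sys.setprofile(profiler)

def work():
    pass

t = threading.Thread(target=work)
t.start()
t.join()

# Switch profiling off again.
sys.setprofile(None)
threading.setprofile(None)
```

Threads that are already running when the hook is installed are not
covered by this; that is exactly the gap the quoted discussion is about.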
In CPython, the builtin max() and min() have the property that if there
are items with equal keys, the first item is returned. From a quick look
at their source, I think this is true for Jython and IronPython too.
However, this isn't currently a documented guarantee. Could it be made
so? (As with the decision to declare sort() stable, it seems likely that
by now there's code out there relying on it anyway.)
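A quick illustration of the behaviour in question (made-up data):

```python
# Two items share the maximal key; max() returns the *first* of them.
data = [("a", 1), ("b", 2), ("c", 2)]
first_max = max(data, key=lambda item: item[1])
assert first_max == ("b", 2)          # not ("c", 2)

# Likewise for min() with equal minimal keys.
data2 = [("x", 0), ("y", 0), ("z", 1)]
assert min(data2, key=lambda item: item[1]) == ("x", 0)
```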
This is a meta-question which I hope is appropriate in this list (**).
Recently I've switched to VIM as my main development platform (in terms
of code editing and navigation). Working on the Python code-base is both a
concrete requirement and a yardstick for me - I want to be as effective as
possible at it. Therefore I would like to ask those of you working on
Python's code with VIM about your setups - the special tweaks to VIM &
plugins you use to make working with the code as simple and effective
as possible.
Myself, since I'm still a VIM newbie, my setup is quite spartan. I
created a tags file with:
ctags -R Grammar Include Modules/ Objects/ Parser/ Python/
And now happily browse around the source code with Ctrl-], Ctrl-I/O, Ctrl-T
and so on. I've also installed the Taglist plugin to show all
functions/macros in open buffers - it's sometimes helpful. Other plugins
I've found useful are NERD-commenter (for commenting out chunks of code) and
highlight_current_line (for a visual cue of where I am in a file).
Besides, I've taken the suggested settings from the .vimrc file in
Misc/ to help enforce PEP 8.
I heard that there are more sophisticated tags plugins that also allow
one to check where a function is called from, and other IntelliSense-y
stuff, though I'm not sure whether anyone uses them for Python's code.
Thanks in advance,
(**) Note that it deals with the source code *of Python* (the stuff you
download from Python's official SVN), not Python source code.
While the EuroPython sprints are still going on, I am back home, and
after a somewhat restful night of sleep, I have some thoughts I'd like
to share before I get distracted. Note, I am jumping wildly between
topics here.
- Commit privileges: Maybe we've been too careful with only giving
commit privileges to experienced and trusted new developers. I
spoke to Ezio Melotti and from his experience with getting commit
privileges, it seems to be a case of "the lion is much more afraid of
you than you are afraid of the lion". I.e. having got privileges he
was very concerned about doing something wrong, worried about the
complexity of SVN, and so on. Since we've got lots of people watching
the commit stream, I think that there really shouldn't need to be a
worry at all about a new committer doing something malicious, and
there shouldn't be much worry about honest beginners' mistakes either
-- the main worry remains that new committers don't use their
privileges enough. So, my recommendation (which surely is a
turn-around of my *own* attitude in the past) is to give out more
commit privileges sooner.
- Concurrency and parallelism: Russel Winder and Sarah Mount pushed
the idea of CSP in
several talks at the conference. They (at least Russel) emphasized
the difference between concurrency (interleaved event streams) and
parallelism (using many processors to speed things up). Their
prediction is that as machines with many processing cores become more
prevalent, the relevant architecture will change from cores sharing a
single coherent memory (the model on which threads are based) to one
where each core has a limited amount of private memory, and
communication is done via message passing between the cores. This
gives them (and me :-) hope that the GIL won't be a problem as long as
we adopt a parallel processing model. Two competing models are the
Actor model, which is based on asynchronous communication, and CSP,
which is synchronous (when a writer writes to a channel, it blocks
until a reader reads that value -- a rendezvous). At least Sarah
suggested that both models are important. She also mentioned that a
merger is under consideration between the two major CSP-for-Python
packages, Py-CSP and Python-CSP. I also believe that the merger will
be based on the stdlib multiprocessing package, but I'm not sure. I do
expect that we may get some suggestions from that corner to make some
minor changes to details of multiprocessing (and perhaps threading),
and I think we should be open to those (I expect these will be good
suggestions for small tweaks, not major overhauls).
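To make the rendezvous idea concrete, here is a rough sketch of a
CSP-style synchronous channel built on the stdlib. The Channel class is
made up purely for illustration (real Py-CSP/Python-CSP channels are far
more elaborate), and this sketch only supports one writer and one reader
at a time:

```python
import threading
import queue

class Channel:
    """CSP-style synchronous channel (illustrative sketch only):
    write() blocks until a reader has taken the value -- a rendezvous."""
    def __init__(self):
        self._slot = queue.Queue(maxsize=1)
        self._taken = threading.Event()

    def write(self, value):
        self._taken.clear()
        self._slot.put(value)
        self._taken.wait()  # rendezvous: block until read() consumes it

    def read(self):
        value = self._slot.get()
        self._taken.set()   # release the blocked writer
        return value

received = []
ch = Channel()
reader = threading.Thread(target=lambda: received.append(ch.read()))
reader.start()
ch.write("hello")   # blocks until the reader thread takes the value
reader.join()
```

The Actor model, by contrast, would let write() return immediately and
queue the message for later delivery.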
- After seeing Raymond's talk about monocle (search for it on PyPI) I
am getting excited again about PEP 380 (yield from, return values from
generators). Having read the PEP on the plane back home I didn't see
anything wrong with it, so it could just be accepted in its current
form. Implementation will still have to wait for Python 3.3 because of
the moratorium. (Although I wouldn't mind making an exception to get
it into 3.2.)
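For reference, the core of what PEP 380 proposes looks like this (a
sketch of the eventual syntax; it does not run on any released Python at
the time of writing):

```python
def inner():
    yield 1
    yield 2
    return "done"          # PEP 380: a generator may return a value

def outer():
    # "yield from" delegates all yields to inner() and captures the
    # value inner() returns when it finishes.
    result = yield from inner()
    yield result

items = list(outer())
assert items == [1, 2, "done"]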
- This made me think of how the PEP process should evolve so as to not
require my personal approval for every PEP. I think the model for
future PEPs should be the one we used for PEP 3148 (futures, which was
just approved by Jesse): the discussion is led and moderated by one
designated "PEP handler" (a different one for each PEP) and the PEP
handler, after reviewing the discussion, decides when the PEP is
approved. A PEP handler should be selected for each PEP as soon as
possible; without a PEP handler, discussing a PEP is not all that
useful. The PEP handler should be someone respected by the community
with an interest in the subject of the PEP but at an arms' length (at
least) from the PEP author. The PEP handler will have to moderate
feedback, separating useful comments from (too much) bikeshedding,
repetitious lines of questioning, and other forms of obstruction. The
PEP handler should also set and try to maintain a schedule for the
discussion. Note that a schedule should not be used to break a tie --
it should be used to stop bikeshedding and repeat discussions, while
giving all interested parties a chance to comment. (I should say that
this is probably similar to the role of an IETF working group director
with respect to RFCs.)
- Specifically, if Raymond is interested, I wouldn't mind seeing him
as the PEP handler for PEP 380. For some of Martin von Löwis's PEPs
(382, 384) I think a PEP handler is sorely lacking -- from the
language summit it appeared as if nobody besides Martin understands
- A lot of things seem to be happening to make PyPI better. Is this
being summarized somewhere? Based on some questions I received during
my keynote Q&A (http://bit.ly/bdflqa) I think not enough people are
aware of what we are already doing in this area. Frankly, I'm not sure
I do, either: I think I've heard of a GSOC student and of plans to
take over pypi.appspot.com (with the original developer's permission)
to become a full and up-to-date mirror. Mirroring apparently also
requires some client changes. Oh, and there's a proposed solution for
the "register user" problem where apparently the clients had been
broken by a unilateral change to the server to require a certain "yes
I agree" checkbox.
For a hopefully eventually exhaustive overview of what was
accomplished at EuroPython, go to http://wiki.europython.eu/After --
and if you know some blog about EuroPython not yet listed, please add
--Guido van Rossum (python.org/~guido)
On Tue, Sep 7, 2010 at 4:48 AM, antoine.pitrou
> Modified: python/branches/py3k/Lib/test/test_memoryio.py
> --- python/branches/py3k/Lib/test/test_memoryio.py (original)
> +++ python/branches/py3k/Lib/test/test_memoryio.py Mon Sep 6 20:48:21 2010
> @@ -384,7 +384,31 @@
> del __main__.PickleTestMemIO
> -class PyBytesIOTest(MemoryTestMixin, MemorySeekTestMixin, unittest.TestCase):
> +class BytesIOMixin:
> + def test_getbuffer(self):
> + memio = self.ioclass(b"1234567890")
> + buf = memio.getbuffer()
> + self.assertEqual(bytes(buf), b"1234567890")
> + memio.seek(5)
> + buf = memio.getbuffer()
> + self.assertEqual(bytes(buf), b"1234567890")
> + # Trying to change the size of the BytesIO while a buffer is exported
> + # raises a BufferError.
> + self.assertRaises(BufferError, memio.write, b'x' * 100)
> + self.assertRaises(BufferError, memio.truncate)
> + # Mutating the buffer updates the BytesIO
> + buf[3:6] = b"abc"
> + self.assertEqual(bytes(buf), b"123abc7890")
> + self.assertEqual(memio.getvalue(), b"123abc7890")
> + # After the buffer gets released, we can resize the BytesIO again
> + del buf
> + support.gc_collect()
> + memio.truncate()
I've raised an RFE (http://bugs.python.org/issue9789) to point out
that the need for that GC collect call in there to make the test
portable to other implementations is rather ugly and supporting an
explicit "buf.release()" call may be a nicer option. (And added Guido
to the nosy list, since he wasn't keen on supporting the context
management protocol idea, but I don't believe he said anything one way
or the other about an ordinary method).
> +class PyBytesIOTest(MemoryTestMixin, MemorySeekTestMixin,
> + BytesIOMixin, unittest.TestCase):
I was going to ask why CBytesIOTest wasn't affected, but checking the
full source of the test file made everything clear :)
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
On Sep 05, 2010, at 08:28 PM, georg.brandl wrote:
>Date: Sun Sep 5 20:28:46 2010
>New Revision: 84536
>Fix after changing NEWS layout.
>--- sandbox/trunk/release/release.py (original)
>+++ sandbox/trunk/release/release.py Sun Sep 5 20:28:46 2010
>@@ -396,13 +396,13 @@
> with open('Misc/NEWS', encoding="utf-8") as fp:
> lines = fp.readlines()
> for i, line in enumerate(lines):
>- if line.startswith("(editors"):
>+ if line.startswith("Python News"):
> start = i
> if line.startswith("What's"):
> end = i
> with open('Misc/NEWS', 'w', encoding="utf-8") as fp:
> print("Please fill in the the name of the next version.")
Will this still work with the Python 2.7 NEWS file?