I am working on some code in the sandbox to automatically generate PEP
0. This is also leading to code that checks that all the PEPs follow
some basic guidelines.
One of those guidelines is that an author has a single email address.
The Owners index at the bottom of PEP 0 is going to be created from
the names and email addresses found in the PEPs themselves. But that
doesn't work too well when an author has multiple addresses listed.
If you are listed below, please choose a single address to use. You
can either change the PEPs yourself or just reply with the email you
prefer. I can tell you the multiple spellings if you want. If I
don't hear from people I will just use my best judgement.
And even better, if you spell your name multiple ways in the PEPs
(e.g., Martin v. Loewis, Martin v. Löwis, Martin von Löwis) also let
it be known which spelling you prefer (unifying name spelling comes
after unifying the email addresses).
Martin v. Löwis:
Clark C. Evans:
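The check described above boils down to grouping (author, email) pairs and flagging authors with more than one address. A minimal sketch of that idea (the function name, data shape, and example addresses are all illustrative, not the actual sandbox code):

```python
from collections import defaultdict

def find_conflicts(pep_authors):
    """pep_authors: iterable of (name, email) pairs gathered from PEP
    headers. Returns {name: sorted list of addresses} for every author
    listed with more than one address."""
    addresses = defaultdict(set)
    for name, email in pep_authors:
        addresses[name].add(email)
    return {name: sorted(mails)
            for name, mails in addresses.items()
            if len(mails) > 1}

# Illustrative data only; the addresses are placeholders.
sample = [
    ("Martin v. Löwis", "martin@example.org"),
    ("Martin v. Löwis", "martin@example.com"),
    ("Clark C. Evans", "cce@example.org"),
]
assert list(find_conflicts(sample)) == ["Martin v. Löwis"]
```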
Georg Brandl wrote:
> Walter Dörwald schrieb:
>> Georg Brandl wrote:
>>> Nick Coghlan schrieb:
>>>> Georg Brandl wrote:
>>>>> Guido van Rossum schrieb:
>>>>>> I've written up a comprehensive status report on Python 3000. Please read:
>>>>> Thank you! Now I have something to show to interested people besides "read
>>>>> the PEPs".
>>>>> A minuscule nit: the rot13 codec has no library equivalent, so it won't be
>>>>> supported anymore :)
>>>> Given that there are valid use cases for bytes-to-bytes translations,
>>>> and a common API for them would be nice, does it make sense to have an
>>>> additional category of codec that is invoked via specific recoding
>>>> methods on bytes objects? For example:
>>>> encoded = data.encode_bytes('bz2')
>>>> decoded = encoded.decode_bytes('bz2')
>>>> assert data == decoded
>>> This is exactly what I proposed a while before under the name
>>> IMO it would make a common use pattern much more convenient and
>>> should be given thought.
>>> If a PEP is called for, I'd be happy to at least co-author it.
>> Codecs are a major exception to Guido's law: Never have a parameter
>> whose value switches between completely unrelated algorithms.
> I don't think that applies here. This is more like __import__():
> depending on the first parameter, completely different things can happen.
> Yes, the same import algorithm is used, but in the case of
> bytes.encode_bytes, the same algorithm is used to find and execute the
What would a registry of transformation algorithms buy us compared to a
module with transformation functions?
The function version is shorter:
If each transformation has its own function, these functions can have
their own arguments, e.g.
transform.bz2encode(data: bytes, level: int=6) -> bytes
Of course str.transform() could pass along all arguments to the
registered function, but that's worse from a documentation viewpoint,
because the real signature is hidden deep in the registry.
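The trade-off being debated can be sketched side by side. Assume both spellings are hypothetical here (neither `encode_bytes` nor the registry dict is a real API); the dedicated functions carry their own documented signatures, while the registry version must smuggle per-algorithm arguments through `**kwargs`:

```python
import bz2
import zlib

# Dedicated-function style: each transform has its own visible signature
# (the level=6 default mirrors the bz2encode signature shown above).
def bz2encode(data: bytes, level: int = 6) -> bytes:
    return bz2.compress(data, level)

def bz2decode(data: bytes) -> bytes:
    return bz2.decompress(data)

# Registry style: one generic entry point, algorithm chosen by name;
# extra arguments lose their documented signature.
_REGISTRY = {
    "bz2": (bz2.compress, bz2.decompress),
    "zlib": (zlib.compress, zlib.decompress),
}

def encode_bytes(data: bytes, name: str, **kwargs) -> bytes:
    encoder, _ = _REGISTRY[name]
    return encoder(data, **kwargs)

def decode_bytes(data: bytes, name: str) -> bytes:
    _, decoder = _REGISTRY[name]
    return decoder(data)

payload = b"some payload"
assert bz2decode(bz2encode(payload)) == payload
assert decode_bytes(encode_bytes(payload, "zlib"), "zlib") == payload
```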
I've just submitted a patch on sourceforge to make inspect compatible
with IronPython (and Jython I think). This patch originally comes from
the IPCE ( http://fepy.sf.net ) project by Seo Sanghyeon. It is a
trivial change really.
The patch is number 1739696
It moves getting a reference to 'code.co_code' into the body of the loop
responsible for inspecting anonymous (tuple) arguments.
In IronPython, accessing 'co_code' raises a NotImplementedError -
meaning that inspect.getargspec is broken.
This patch means that *except* for functions with anonymous tuple
arguments, it will work again on IronPython - whilst maintaining full
compatibility with the previous behaviour.
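The principle of the patch can be illustrated with a small sketch (this is not the actual inspect code; the function and dict below are made up for illustration). The point is simply to defer the `co_code` access to the one branch that needs it:

```python
def describe(code_obj):
    """Hypothetical sketch: collect metadata about a code object,
    touching co_code only when anonymous (tuple) arguments are present,
    so platforms where accessing co_code raises NotImplementedError
    (e.g. IronPython) still work for ordinary functions."""
    info = {"name": code_obj.co_name, "argcount": code_obj.co_argcount}
    # CPython 2 named anonymous tuple arguments '.0', '.1', ...
    has_anonymous_args = any(
        n.startswith(".")
        for n in code_obj.co_varnames[:code_obj.co_argcount]
    )
    if has_anonymous_args:
        # Only here is the bytecode actually needed; this is the access
        # that may raise on IronPython.
        info["bytecode"] = code_obj.co_code
    return info

def add(a, b):
    return a + b

assert describe(add.__code__)["argcount"] == 2
```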
Jython has a similar patch to overcome the same issue by the way. See
As it is a bugfix - backporting to 2.5 would be great. Should I generate
a separate patch?
All the best,
We got another feature request for multi-line comments.
While it is nice to comment out multiple lines at once, every editor
that deserves that name can add a '#' to multiple lines.
And there's always "if 0" and triple-quoted strings...
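The three workarounds just mentioned, shown side by side; the `executed` list is only there to demonstrate that none of the "commented" lines run:

```python
executed = []

# 1. A '#' on every line (what any decent editor automates):
# executed.append("hash comment")

if 0:
    # 2. An "if 0" block is compiled but never executed.
    executed.append("if 0")

"""
3. A triple-quoted string: parsed as an unused string constant,
so nothing in it runs either.
executed.append("string trick")
"""

assert executed == []
```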
Georg Brandl wrote:
> [...]
> This is exactly what I proposed a while before under the name
> IMO it would make a common use pattern much more convenient and
> should be given thought.
> If a PEP is called for, I'd be happy to at least co-author it.
Codecs are a major exception to Guido's law: Never have a parameter
whose value switches between completely unrelated algorithms.
Why don't we put all string transformation functions into a common
module (the string module might be a good place):
>>> import string
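For reference, this is roughly how Python 3 eventually exposed the bytes-to-bytes transforms under discussion: as module-level functions in `codecs` rather than as methods on bytes objects. A minimal round-trip with the real `zlib_codec`:

```python
import codecs

# codecs.encode()/codecs.decode() accept the bytes-to-bytes codecs
# (e.g. 'zlib_codec', 'bz2_codec') that have no str equivalent.
data = b"some payload"
compressed = codecs.encode(data, "zlib_codec")
assert codecs.decode(compressed, "zlib_codec") == data
```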
Hello. I investigated a ref leak report related to threads.
Please run "python regrtest.py -R :: test_leak.py" (attached file).
Sometimes a ref leak is reported.
# I saw this as regression failure on python-checkins.
# total ref count 92578 -> 92669
Probably this happens because threading.Thread is implemented in Python
(especially threading.Thread.join), so the following code in regrtest.py

    if i >= nwarmup:
        deltas.append(sys.gettotalrefcount() - rc - 2)

can run before the thread really quits. (before Modules/threadmodule.c
So I experimentally inserted code to wait for thread termination.
(attached file experimental.patch) And I confirmed the error was gone.
# Sorry for the hackish patch, which only runs on Windows. It should run
# on other platforms if you replace Sleep() in Python/sysmodule.c
# sys_debug_ref_leak_leave() with an appropriate function.
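The underlying idea of the fix can be sketched without the patch itself (the function below is illustrative, not the attached code): join the worker thread before sampling the total reference count, so cleanup still in progress is not misreported as a leak.

```python
import sys
import threading

def measure_after_join(target):
    """Sketch: run *target* in a thread, wait for the thread to really
    terminate, then sample the total refcount. sys.gettotalrefcount()
    only exists in debug builds, hence the guard."""
    t = threading.Thread(target=target)
    t.start()
    t.join()  # wait for the thread to really quit before measuring
    if hasattr(sys, "gettotalrefcount"):
        return sys.gettotalrefcount()
    return None

result = measure_after_join(lambda: None)
assert result is None or isinstance(result, int)
```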
I'd like to upgrade www.python.org this coming Thursday (June 21),
between 6:00am and 12:00 noon UTC. During that time, neither www
nor subversion access will be available (although I hope that
I will need much less than 6 hours).
mail.python.org, and all other services running on other machines,
will continue to work.
I will send another message when I start.
ACTIVITY SUMMARY (06/10/07 - 06/17/07)
Tracker at http://bugs.python.org/
To view or respond to any of the issues listed below, click on the issue
number. Do NOT respond to this message.
1645 open ( +0) / 8584 closed ( +0) / 10229 total ( +0)
Average duration of open issues: 829 days.
Median duration of open issues: 777 days.
Open Issues Breakdown
open 1645 ( +0)
pending 0 ( +0)