How about we just continue to improve both branches, doing forward or backports as appropriate. No need to develop a policy of crippling one branch on the theory that it will make the other seem more attractive.
Besides, if 2.7 and 3.2 get released within a few months of each other, any inversion of incentives will be temporary and fleeting. Most likely people's decisions on switching to 3.x will be dominated by other factors such as the availability of third-party modules or other dependencies.
IIRC, Benjamin's current merge procedures flow from the trunk to the py3k branch. It is probably best to continue with that practice lest we muck up his merge/block entries.
I just wanted to share my experience with the mercurial checkout. I cloned http://code.python.org/hg/branches/py3k to continue work on http://bugs.python.org/issue1578269 but I found that when I click on PC/VS8.0/pcbuild.sln, nothing happens.
This appears to be due to a bug/limitation in vslauncher in that it doesn't recognize LF as a line separator. vslauncher is the default association for sln files and its purpose is to parse out the .sln file and launch it with the appropriate Visual Studio version based on the header. What makes matters worse is if vslauncher fails to recognize the format, it does nothing, so it just appears as if the file fails to launch anything.
It seems that within the hg repository, everything has been converted to LF for line endings. I suspect this is because HG provides no integrated support for line-ending conversions and because the hg to svn bridge is probably running on a Unix OS.
Converting the pcbuild.sln file to CRLF line endings resolved the problem, and the file launched normally. Even without the conversion, it was possible to open the .sln file by starting Visual Studio explicitly.
I wanted to share this with the community in case anyone else runs into this issue. Also, if there's a recommended procedure for addressing this issue (and others that might arise due to non-native line endings), I'd be interested to hear it.
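For anyone hitting the same thing, here is a minimal sketch of the conversion; the function name is mine, and you would point it at whatever files your checkout needs:

```python
def lf_to_crlf(path):
    """Rewrite a text file (e.g. PC/VS8.0/pcbuild.sln) with CRLF endings.

    Normalizing to LF first keeps an already-CRLF file from being
    doubled into CR CR LF.
    """
    with open(path, "rb") as f:
        data = f.read()
    data = data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")
    with open(path, "wb") as f:
        f.write(data)
```

Running it once on the .sln (and .vcproj) files after cloning should be enough; hg itself will not reconvert them unless they change upstream.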
In the last few weeks I have been working on defining use cases that will
guide the improvement of the Roundup tracker. As this is very important,
I would like your valuable input in the form of comments, criticism and advice.
Daniel is a great hacker, but the only time he has to hack is
when his child is asleep. And his child wakes up every now and then,
pretty often actually. So when it starts crying, Daniel commits a fix for
that silly bug that has been haunting us for ages, but forgets to close
the bug, so it stays open.
Looking over the bug tracker, Jane finds the same bug and starts working
on it. Wasted time. Daniel should be able to format his commit
message in such a way that it automatically closes the bug.
Maria is a student who got involved in FOSS recently. With all the
studying she has to do, she would prefer not to spend her time changing
bug status. She should be able to format her commit message in such a way
that it changes the bug status to something other than the current one.
Technical talk: USE CASE A: Integrate issue property editing with
commit messages:
* USE CASE A1: Allow users to close issues via tokens embedded
into commit messages.
* USE CASE A2: Allow users to change issue status via tokens
embedded into commit messages.
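A sketch of what the token parsing behind A1/A2 might look like; the bracket syntax here is invented for illustration, not Roundup's actual convention:

```python
import re

# Hypothetical token syntax: "[closes #1234]" or "[status=testing #1234]".
TOKEN_RE = re.compile(
    r"\[(?:(?P<close>closes)|status=(?P<status>[\w-]+))\s+#(?P<issue>\d+)\]",
    re.IGNORECASE,
)

def extract_issue_actions(commit_message):
    """Return (issue_id, action, value) triples found in a commit message."""
    actions = []
    for m in TOKEN_RE.finditer(commit_message):
        if m.group("close"):
            actions.append((int(m.group("issue")), "close", None))
        else:
            actions.append((int(m.group("issue")), "status",
                            m.group("status").lower()))
    return actions
```

A commit hook on the repository would run this over each changeset's message and push the resulting property changes to the tracker.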
Ronny is about to fix a thorny bug, so he has a public branch for that.
He is making great progress, but the issue that tracks the bug only
contains a very out-of-date patch, prompting other developers to try
to fix the same bug. Ronny should be able to tell Roundup about where
his code lives, so users can get up-to-date patches/diffs
automatically. This also allows other users to know all the code that
changed to fix a given bug.
Technical talk: Integrate branch and rich patch handling into Roundup
* USE CASE B: Track all changesets that relate to a given issue:
o USE CASE B1: Using tokens embedded into commit messages.
o USE CASE B2: Using named branches, bookmarks. (Pre or post commit)
o USE CASE B3: Using patchsets, bundles or whatchacallit for
fat Mercurial patches. (Pre or post commit)
Brett wants to fix a couple of related issues and has a local
Mercurial branch for that. He would like his commit messages to
include useful information for when his patch/branch lands in the
Python repository. Besides the Mercurial->Roundup integration, a
Roundup->Mercurial one that would allow fetching issue details and
existing patches/branches with metadata would make Brett's work more
efficient.
Technical talk: USE CASE C: Add a CLI interface for Roundup so VCSs
can query the tracker.
* USE CASE C1: Automatically fetch issue data.
* USE CASE C2: Pre-format output for greater usefulness in commit messages:
o USE CASE C2.1: Setting issue properties.
o USE CASE C2.2: Grouping changesets.
* USE CASE C3: Fetch branch information for local cloning.
    * USE CASE C4: Add a Mercurial extension to exercise the CLI client.
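To make use case C concrete, here is one way the CLI surface could be laid out; the tool name, subcommands, and options are all assumptions, not an existing interface:

```python
import argparse

def build_parser():
    """Hypothetical command layout for use cases C1-C3."""
    p = argparse.ArgumentParser(prog="roundup-cli")
    sub = p.add_subparsers(dest="command", required=True)

    # C1: fetch issue data
    show = sub.add_parser("show", help="fetch issue data")
    show.add_argument("issue", type=int)

    # C2: pre-format output for commit messages
    fmt = sub.add_parser("format", help="pre-format for commit messages")
    fmt.add_argument("issue", type=int)
    fmt.add_argument("--set", action="append", default=[],
                     help="C2.1: issue property to set, e.g. status=resolved")

    # C3: fetch branch information for local cloning
    clone = sub.add_parser("clone", help="fetch branch info for local cloning")
    clone.add_argument("issue", type=int)
    return p
```

A C4 Mercurial extension would then just import (or shell out to) this client rather than reimplementing the tracker protocol.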
USE CASE D:
Antoine is merging lots of branches and applying lots of patches to
his local branch of the Python repository. He goes to each issue with
a patch/branch and tells people whether their patches apply cleanly,
just to avoid merge issues in the main branch. While he could use
Mercurial Patch Queues (mq), Roundup would need to learn to both
listen to and to submit patches to mq in order to completely replace
Antoine's work with automated tools. Having a quick 'check if this
patch applies cleanly' button would make triaging issues much easier.
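The "does it still apply" button boils down to a dry run. A minimal pure-Python sketch of the check follows; real tools (patch --dry-run, hg import --no-commit) also handle fuzz, renames, binary files, and so on:

```python
import re

def patch_applies_cleanly(source_lines, diff_text):
    """Return True if every hunk's context and '-' lines match the file."""
    hunk_re = re.compile(r"^@@ -(\d+)(?:,(\d+))? \+\d+(?:,\d+)? @@")
    lines = iter(diff_text.splitlines())
    for line in lines:
        m = hunk_re.match(line)
        if not m:
            continue
        pos = int(m.group(1)) - 1          # diff line numbers are 1-based
        count = int(m.group(2) or 1)       # old-file line count for the hunk
        expected = []
        while count > 0:
            body = next(lines)
            # '+' lines are insertions; '\' is the "No newline" marker --
            # neither consumes a line of the old file.
            if body.startswith("+") or body.startswith("\\"):
                continue
            expected.append(body[1:])
            count -= 1
        if source_lines[pos:pos + len(expected)] != expected:
            return False
    return True
```

Wired to a tracker button, this would let triagers flag stale patches without checking anything out.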
USE CASE E:
David checks the python-commits list to keep bad code from
landing and nesting in the Python code base. He only sees the patches:
while informative, they require a bit of mental gymnastics to see how
the code would merge with its surroundings. Whenever he thinks some code
has tabs and spaces or lines that are too long, he runs pylint and
reindent.py locally. He can only raise concerns about the code after
it lands on the main repository. It should be easier to see the code
changes in context. Having a way to check the code for mistakes in the
tracker would make David's life a lot easier.
USE CASE F:
Van Lindberg is concerned about code submissions from non-core
developers: how can the PSF re-license this code in the future without
talking to each contributor, whether the PSF is safe from litigation
based on copyrights of these contributions and related questions are
always bugging him. While the PSF has Contributor Agreements exactly
for these cases, it would be great to have the issue tracker and the
VCS talk to each other, so they could ask contributors to sign (or
declare to have already signed) the CAs when necessary.
USE CASE G: Use Transplant/Patch Branch to generate patches from
branches linked from Roundup.
USE CASE J:
Integrate the code/commits navigation interface with Roundup, so
changesets, branches, etc., can be easily linked/browsed (starting)
from the Roundup UI and issues can be created/linked to commits
(starting) from the navigation tool UI.
USE CASE K: For a given issue, add per patched file links for RSS logs
USE CASE M: Besides links to files, allow adding links to files at
given versions/tags/branches, links to tarballs and easy-to-clone links
to branches and repositories.
USE CASE V:
Handle small branches (and maybe suggest using them for small
patches?) generated using the convert extension with --filemap.
Improvements to the use cases will follow on the wiki page:
Thanks for your attention and time.
I can't currently file a bug report on this, but I was told by Lisandro
Dalcín that there is a serious problem with the doctest module in Py3.1rc1.
In Cython, we use doctests to test the compiler in that we compile a
Python/Cython module with doctests into a C module and then run doctest on
the imported extension module.
From the error report it seems to me that doctest is now trying to read the
module itself through linecache for some reason, which horribly fails for a
compiled extension module.
Could someone please look into this? I'll open up a bug report tomorrow
unless someone beats me to it.
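The underlying problem is easy to demonstrate even without Cython: source-based tooling has nothing to read for a compiled module. A minimal illustration using inspect, which fails for roughly the same reason the linecache lookup does:

```python
import inspect
import math  # built into the interpreter: no .py source exists for it

# Source lookup has nothing to find for a C-level function, so it
# raises rather than returning lines -- roughly the failure mode a
# doctest run against a compiled extension module hits.
try:
    inspect.getsource(math.sqrt)
    has_source = True
except TypeError:
    has_source = False

print(has_source)  # False
```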
On Mon, Jun 1, 2009 at 11:32 AM, Guido van Rossum <guido at python.org > wrote:
> I haven't read the bug, but given the extensive real-life use that
> ipaddr.py has seen at Google before inclusion into the stdlib, I think
> "deep conceptual flaws" must be an overstatement.
When the users of the library are also the authors of the library, it
is not surprising that the library enjoys "extensive real-life use."
The real test of a library is not how well it succeeds at one
installation, but how well it meets the needs of the larger user base.
Having read the code and the author's comments, it seems to me that
networking concepts not employed at Google have been neglected. While
some of these features are easily added to ipaddr, their omission
exposes a narrow view of the problem domain that has resulted in more
fundamental problems in the library, such as the conflation of
addresses and networks.
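For what it's worth, the distinction being argued for can be illustrated with the stdlib ipaddress module that eventually descended from ipaddr; this is shown purely to make the concepts concrete, not as a claim about ipaddr's 2009 API:

```python
import ipaddress

# Three distinct concepts, three distinct types:
addr = ipaddress.ip_address("192.0.2.1")        # a single host address
net = ipaddress.ip_network("192.0.2.0/24")      # a block of addresses
iface = ipaddress.ip_interface("192.0.2.1/24")  # an address on a network

print(addr in net)           # True: membership test across the types
print(net.num_addresses)     # 256
print(iface.network == net)  # True: the interface knows its network
```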
> I think we should just stick to "sorry, too late, try again for 3.2".
> We've done that with plenty of more important flaws that were
> discovered on the verge of a release, and I don't recall ever
> regretting it. We can always add more features to the module in 3.2.
My concerns are not so much about adding features as they are about
changing the API. Addressing the concerns that I and others have
raised requires making backwards-incompatible changes. Doing that now,
before ipaddr enjoys widespread deployment as part of the stdlib,
seems prudent. Removing ipaddr from 3.1 appears to me less painful
than fixing apps when ipaddr's API changes in 3.2.
If this were a core feature that many developers were anxiously
awaiting, I could understand the desire to release and iterate. But is
there really a pressing need for an IP library in the stdlib? Until a
satisfactory library is available for inclusion in the stdlib, the few
developers that do require such functionality can easily enough
download a library that meets their needs.
Raymond solicited a comment from me about the design of ipaddr. By way
of full disclosure, I have a small competing project called pynet.
That said, I test drove ipaddr for about 30 minutes and so far like the
big-picture API design quite a bit. I'll specifically address Clay's
concern about hosts vs networks, because this issue is important to me;
I've been in the network engineering field for over 15 years, worked on
Cisco's product development team, and held a CCIE (consider it the
equivalent of a CPA for network engineers) for 10 years...
Clay seems to object to ipaddr's IP object because it is not the same as
the object model used in the BSD ip stack. Indeed, I'm one of the
raving fans of what BSD has done for the quality of ip networking, but
let's also consider their requirements. BSD must approach ip networking
from a host perspective: it is the consumer of individual IP packets and
their payloads. ipaddr's whole point of existence is really driven
towards the manipulation of potentially massive lists of ip addresses.
This is no small difference in requirements, and I believe ipaddr's
different approach makes their code much simpler for the tasks it needs
to do. Incorporating host addresses as a special case of a /32 IPv4
network or /128 IPv6 network makes a lot of sense to me, in fact, I also
chose this same object model. Perl's NetAddr::IP does this too; it is
considered the gold standard among Perl's address manipulation modules.
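The "host as a special case of a network" model sketched with the later stdlib ipaddress module (ipaddr's own spelling differed in detail):

```python
import ipaddress

# A /32 is a network containing exactly one address -- a host modeled
# as a degenerate network, per the object model described above.
host = ipaddress.ip_network("192.0.2.7/32")
print(host.num_addresses)    # 1
print(host.network_address)  # 192.0.2.7
print(host.subnet_of(ipaddress.ip_network("192.0.2.0/24")))  # True
```

The payoff is uniformity: hosts and blocks share one type, so containment and subnetting operations need no special-casing.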
Whether python includes ipaddr now, later, or uses another module
entirely does not bother me. Whatever is included should have a very
stable API, and major bugs should be worked out. Documentation should
be good enough for the average consumer, and if anything this is where
ipaddr seems to be lacking a bit.
I hope that python does include something to manipulate IPv4 and IPv6
address blocks in the future, since this is a big hole in python's
batteries-included philosophy. However, I'd need more time at the wheel
of ipaddr before I could comment whether this truly is ready for
inclusion in stdlib.
All the best,
In http://bugs.python.org/issue3959 , Clay McClure is raising some objections to the new ipaddr.py module. JP Calderone shares his
concerns. I think they were the only commenters not directly affiliated with one of the competing projects. The issues they've
raised seem serious, but I don't know enough about the subject to make a meaningful comment.
Am hoping python-dev participants can provide some informed judgments. At issue is whether the module has some deep conceptual
flaws that would warrant pulling it out of the 3.1 release. Also at issue is whether the addition was too rushed (see David Moss's
comment on the tracker and later comments by Antoine Pitrou).
Does anyone here know whether Clay's concern about subnets vs netmasks is accurate and whether it affects the usability of the module?
> The chances of a problem being introduced due to its removal
> are vanishingly small.
But that provides little consolation to the user who sees it in the
standard library, is not aware of this discussion, and builds it into
his app. Changes to the lib later may cause subtle but significant
effects, perhaps undetected for a while.
> > > I don't hear a public outcry - only a single complainer.
> > Clay repeatedly pointed out that other people have objected
> > to ipaddr and been ignored. It's really, really disappointing
> > to see you continue to ignore not only them, but the repeated
> > attempts Clay has made to point them out.
> I don't have time to argue this issue, but I agree with
> essentially everything Clay has said in this thread....
I too agree. If it is not ready, it is not ready. Please don't create
problems for others. Remove the lib until it is ready.