<p dir="ltr"><br>
On Dec 5, 2014 4:18 PM, "Eric Snow" <<a href="mailto:ericsnowcurrently@gmail.com">ericsnowcurrently@gmail.com</a>> wrote:<br>
><br>
> Very nice, Brett.<br>
><br>
> On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon <<a href="mailto:bcannon@gmail.com">bcannon@gmail.com</a>> wrote:<br>
> > And we can't forget the people who help keep all of this running as well.<br>
> > There are those that manage the SSH keys, the issue tracker, the review<br>
> > tool, <a href="http://hg.python.org">hg.python.org</a>, and the email system that lets us know when stuff<br>
> > happens on any of these other systems. The impact on them needs to also be<br>
> > considered.<br>
><br>
> It sounds like Guido would rather have as much of this as possible<br>
> done by a provider than rely on volunteers. That makes sense, though<br>
> there are concerns about control of certain assets. However, that<br>
> applies only to some of them, like <a href="http://hg.python.org">hg.python.org</a>.<br>
><br>
> ><br>
> > ## Contributors<br>
> > I see two scenarios for contributors to optimize for. There's the simple<br>
> > spelling mistake patches and then there's the code change patches. The<br>
> > former is the kind of thing that you can do in a browser without much effort<br>
> > and should be a no-brainer commit/reject decision for a core developer. This<br>
> > is the kind of change the GitHub/Bitbucket camps have been promoting their<br>
> > solutions for while leaving the cpython repo alone. Unfortunately the bulk of our<br>
> > documentation is in the Doc/ directory of cpython. While it's nice to think<br>
> > about moving the devguide, peps, and even breaking out the tutorial to repos<br>
> > hosted on Bitbucket/GitHub, everything else is in Doc/ (language reference,<br>
> > howtos, stdlib, C API, etc.). So unless we want to completely break all of<br>
> > Doc/ out of the cpython repo and have core developers willing to edit two<br>
> > separate repos when making changes that impact code **and** docs, moving<br>
> > only a subset of docs feels like a band-aid solution that ignores the<br>
> > elephant in the room: the cpython repo, which the bulk of patches<br>
> > target.<br>
><br>
> With your ideal scenario this would be a moot point, right? There<br>
> would be no need to split out doc-related repos.<br>
><br>
> ><br>
> > For the code change patches, contributors need an easy way to get a hold of<br>
> > the code and get their changes to the core developers. After that it's<br>
> > things like letting contributors know that their patch doesn't apply<br>
> > cleanly, doesn't pass tests, etc.<br>
><br>
> This is probably more work than it seems at first.<br>
><br>
> > As of right now getting the patch into the<br>
> > issue tracker is a bit manual but nothing crazy. The real issue in this<br>
> > scenario is core developer response time.<br>
> ><br>
> > ## Core developers<br>
> > There is a finite amount of time that core developers get to contribute to<br>
> > Python and it fluctuates greatly. This means that if a process can be found<br>
> > which allows core developers to spend less time doing mechanical work and<br>
> > more time doing things that can't be automated -- namely code reviews --<br>
> > then the throughput of patches being accepted/rejected will increase. This<br>
> > also impacts any increased patch submission rate that comes from improving<br>
> > the situation for contributors: if the throughput doesn't change, then<br>
> > there will simply be more patches sitting in the issue tracker, and that<br>
> > doesn't benefit anyone.<br>
><br>
> This is the key concern I have with only addressing the contributor<br>
> side of things. I'm all for increasing contributions, but not if they<br>
> are just going to rot on the tracker and we end up with disillusioned<br>
> contributors.<br>
><br>
> ><br>
> > # My ideal scenario<br>
> > If I had an infinite amount of resources (money, volunteers, time, etc.),<br>
> > this would be my ideal scenario:<br>
> ><br>
> > 1. Contributor gets code from wherever; easiest to just say "fork on GitHub<br>
> > or Bitbucket" as they would be official mirrors of <a href="http://hg.python.org">hg.python.org</a> and are<br>
> > updated after every commit, but could clone <a href="http://hg.python.org/cpython">hg.python.org/cpython</a> if they<br>
> > wanted<br>
> > 2. Contributor makes edits; if they cloned on Bitbucket or GitHub then they<br>
> > have browser edit access already<br>
> > 3. Contributor creates an account at <a href="http://bugs.python.org">bugs.python.org</a> and signs the CLA<br>
><br>
> There's no real way around this, is there? I suppose account creation<br>
> *could* be automated relative to a github or bitbucket user, though it<br>
> probably isn't worth the effort. However, the CLA part is pretty<br>
> unavoidable.<br>
><br>
> > 3. The contributor creates an issue at <a href="http://bugs.python.org">bugs.python.org</a> (probably the one<br>
> > piece of infrastructure we all agree is better than the other options,<br>
> > although its workflow could use an update)<br>
><br>
> I wonder if issue creation from a PR (where no issue # is in the<br>
> message) could be automated too without a lot of extra work.<br>
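<br>
Issue creation from a PR does look automatable with a small webhook service.<br>
Here is a rough sketch of the GitHub side of that; the tracker side is left as<br>
a placeholder, since I don't know offhand what API bugs.python.org exposes for<br>
creating issues (XML-RPC, REST, or the email gateway could all work in principle):<br>
<br>
import re

from flask import Flask, request

app = Flask(__name__)

# Matches "#12345", "issue 12345", or "bpo-12345" in a PR title/body.
ISSUE_RE = re.compile(r'(?:issue\s*|bpo-|#)(\d{4,6})', re.IGNORECASE)


def create_tracker_issue(title, body, pr_url):
    """Hypothetical helper: file a new issue on bugs.python.org."""
    # Whatever mechanism the tracker offers (XML-RPC, REST, or its
    # email interface) would be called here.
    raise NotImplementedError


@app.route('/github-webhook', methods=['POST'])
def on_pull_request():
    if request.headers.get('X-GitHub-Event') != 'pull_request':
        return 'ignored', 200
    payload = request.get_json(force=True)
    if payload.get('action') != 'opened':
        return 'ignored', 200
    pr = payload['pull_request']
    text = (pr.get('title') or '') + '\n' + (pr.get('body') or '')
    if not ISSUE_RE.search(text):
        # No issue number mentioned anywhere, so open one automatically
        # and point it back at the PR.
        create_tracker_issue(pr['title'], pr.get('body') or '', pr['html_url'])
    return 'ok', 200

The same hook could just as easily append the PR link to an existing issue<br>
when a number *is* present.<br>
<br>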
><br>
> > 4. If the contributor used Bitbucket or GitHub, they send a pull request<br>
> > with the issue # in the PR message<br>
> > 5. <a href="http://bugs.python.org">bugs.python.org</a> notices the PR, grabs a patch for it, and puts it on<br>
> > <a href="http://bugs.python.org">bugs.python.org</a> for code review<br>
> > 6. CI runs on the patch based on what Python versions are specified in the<br>
> > issue tracker, letting everyone know if it applied cleanly, passed tests on<br>
> > the OSs that would be affected, and also got a test coverage report<br>
> > 7. Core developer does a code review<br>
> > 8. Contributor updates their code based on the code review and the updated<br>
> > patch gets pulled by <a href="http://bugs.python.org">bugs.python.org</a> automatically and CI runs again<br>
> > 9. Once the patch is acceptable and assuming the patch applies cleanly to<br>
> > all versions to commit to, the core developer clicks a "Commit" button,<br>
> > fills in a commit message and NEWS entry, and everything gets committed (if<br>
> > the patch can't apply cleanly then the core developer does it the<br>
> > old-fashioned way, or maybe auto-generate a new PR which can be manually<br>
> > touched up so it does apply cleanly?)<br>
><br>
> 6-9 sounds a lot like PEP 462. :) This seems like the part that would<br>
> win us the most.<br>
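<br>
On steps 5 and 6: grabbing a reviewable patch out of a PR is the easy part,<br>
since the GitHub API already hands you a diff_url/patch_url for every pull<br>
request (Bitbucket has an equivalent). A minimal sketch; the attach step is<br>
hypothetical because it depends on whatever hook the tracker ends up providing:<br>
<br>
import requests

GITHUB_PR = 'https://api.github.com/repos/{owner}/{repo}/pulls/{number}'


def fetch_pr_diff(owner, repo, number):
    """Return the unified diff for a pull request as text."""
    pr = requests.get(GITHUB_PR.format(owner=owner, repo=repo,
                                       number=number)).json()
    # The same content is also served at <html_url>.diff / .patch.
    return requests.get(pr['diff_url']).text


def attach_to_tracker(issue_id, diff_text):
    """Hypothetical: upload the diff to the issue for review and CI."""
    raise NotImplementedError


if __name__ == '__main__':
    # Made-up numbers, purely illustrative.
    diff = fetch_pr_diff('python', 'cpython', 42)
    attach_to_tracker(424242, diff)

Once the diff is attached, the CI run in step 6 could be triggered off the<br>
issue rather than off the PR directly, which keeps the tracker as the single<br>
source of truth for what is under review.<br>
<br>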
><br>
> ><br>
> > Basically the ideal scenario lets contributors use whatever tools and<br>
> > platforms that they want and provides as much automated support as possible<br>
> > to make sure their code is tip-top before and during code review while core<br>
> > developers can review and commit patches so easily that they can do their<br>
> > job from a beach with a tablet and some WiFi.<br>
><br>
> Sign me up!<br>
><br>
> ><br>
> > ## Where the current proposed solutions seem to fall short<br>
> > ### GitHub/Bitbucket<br>
> > Basically GitHub/Bitbucket is a win for contributors but doesn't buy core<br>
> > developers that much. GitHub/Bitbucket gives contributors the easy cloning,<br>
> > drive-by patches, CI, and PRs. Core developers get a code review tool -- I'm<br>
> > counting Rietveld as deprecated after Guido's comments about the code's<br>
> > maintenance issues -- and push-button commits **only for single branch<br>
> > changes**. But for any patch that crosses branches we don't really gain<br>
> > anything. At best core developers tell a contributor "please send your PR<br>
> > against 3.4", push-button merge it, update a local clone, merge from 3.4 to<br>
> > default, do the usual stuff, commit, and then push; that still keeps me off<br>
> > the beach, though, so that doesn't get us the whole way.<br>
><br>
> This will probably be one of the trickiest parts.<br>
><br>
> > You could force<br>
> > people to submit two PRs, but I don't see that flying. Maybe some tool could<br>
> > be written that automatically handles the merge/commit across branches once<br>
> > the initial PR is in? Or automatically create a PR that core developers can<br>
> > touch up as necessary and then accept that as well? Regardless, some<br>
> > solution is necessary to handle branch-crossing PRs.<br>
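<br>
For what it's worth, the mechanical part of the forward merge looks scriptable:<br>
once the push-button commit lands on the maintenance branch, a bot could<br>
attempt the merge into default and only fall back to a human (or to an<br>
auto-generated PR to touch up) when it hits conflicts. A very rough sketch,<br>
with the branch names purely illustrative:<br>
<br>
import subprocess


def hg(*args):
    """Run an hg command in the current repo; return its exit code."""
    return subprocess.call(('hg',) + args)


def forward_merge(source='3.4', target='default', message=None):
    """Try to merge `source` into `target`; back out on conflicts."""
    if hg('update', target) != 0:
        return False
    if hg('merge', source) != 0:
        # Unresolved conflicts: discard the attempt and hand the merge
        # back to a core developer (or auto-generate a PR to fix up).
        hg('update', '--clean', target)
        return False
    hg('commit', '-m',
       message or 'Merge {} into {} (automated).'.format(source, target))
    return True

Misc/NEWS is where I'd expect most of the conflicts, so it would probably need<br>
special-casing, but that's a detail.<br>
<br>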
> ><br>
> > As for GitHub vs. Bitbucket, I personally don't care. I like GitHub's<br>
> > interface more, but that's personal taste. I like hg more than git, but<br>
> > that's also personal taste (and I consider a transition from hg to git a<br>
> > hassle but not a deal-breaker but also not a win). It is unfortunate,<br>
> > though, that under this scenario we would have to choose only one platform.<br>
> ><br>
> > It's also unfortunate both are closed-source, but that's not a deal-breaker,<br>
> > just a knock against if the decision is close.<br>
> ><br>
> > ### Our own infrastructure<br>
> > The shortcoming here is the need for developers, developers, developers!<br>
> > Everything outlined in the ideal scenario is totally doable on our own<br>
> > infrastructure with enough code and time (donated/paid-for infrastructure<br>
> > shouldn't be an issue). But historically that code and time has not<br>
> > materialized. Our code review tool is a fork that probably should be<br>
> > replaced as only Martin von Löwis can maintain it. Basically Ezio Melotti<br>
> > maintains the issue tracker's code.<br>
><br>
> Doing something about those two tools is something to consider. Would<br>
> it be out of scope for this discussion or any resulting PEPS? I have<br>
> opinions here, but I'd rather not sidetrack the discussion.<br>
><br>
> > We don't exactly have a ton of people<br>
> > constantly going "I'm so bored because everything for Python's development<br>
> > infrastructure gets sorted so quickly!" A perfect example is that R. David<br>
> > Murray came up with a nice update for our workflow after PyCon but then ran<br>
> > out of time after mostly defining it and nothing ever became of it (maybe we<br>
> > can rectify that at PyCon?). Eric Snow has pointed out how he has written<br>
> > similar code for pulling PRs from (I think) GitHub into another code review<br>
> > tool, but that doesn't magically make it work in our infrastructure or get<br>
> > someone to write it and help maintain it (no offense, Eric).<br>
><br>
> None taken. I was thinking the same thing when I wrote that. :)<br>
><br>
> ><br>
> > IOW our infrastructure can do anything, but it can't run on hopes and<br>
> > dreams. Commitments from many people to making this happen by a certain<br>
> > deadline will be needed so as to not allow it to drag on forever. People<br>
> > would also have to commit to continued maintenance to make this viable<br>
> > long-term.<br>
> ><br>
> > # Next steps<br>
> > I'm thinking first draft PEPs by February 1 to know who's all-in (8 weeks<br>
> > away), all details worked out in final PEPs and whatever is required to<br>
> > prove to me it will work by the PyCon language summit (4 months away). I<br>
> > make a decision by May 1, and<br>
> > then implementation aims to be done by the time 3.5.0 is cut so we can<br>
> > switch over shortly thereafter (9 months away). Sound like a reasonable<br>
> > timeline?<br>
><br>
> Sounds reasonable to me, but I don't have plans to champion a PEP. :)<br>
> I could probably help with the tooling between GitHub/Bitbucket<br>
> though.<br>
><br>
> -eric<br>
</p>
<p dir="ltr">I have extensive experience with the GitHub API and some with BitBucket. I'm willing to help out with the tooling as well.</p>