Hi!
The timeline for this year's election will be the same as last year.
* The nomination period will begin Nov 1, 2020 (do not post nominations
until then)
* Nomination period will end Nov 15, 2020
* Voting will begin Dec 1, 2020
* Voting will end Dec 15, 2020
Nominations will be collected via https://discuss.python.org/ (more details
to follow on Nov 1).
New for this year: Ernest W. Durbin III will be running the vote with
the assistance of Joe Carey, a PSF employee. They will be co-admins going
forward. I have cc'ed them on this thread as well in case there are any
questions.
Thanks,
Ewa
Hi,
Python has no mandatory Linux CI job on pull requests anymore. Right
now Windows (x64) remains the only mandatory job. Please be careful to
manually check other CI before merging a PR.
--
We had to deal with at least 3 different issues with Travis CI over
the last 6 months. The latest one (the 3rd issue) is on the Travis CI side
and has been known for months. Sometimes, a Travis CI build completes but
the GitHub pull request is never updated. Since Travis CI was
mandatory, it was never possible to merge some pull requests. I also
noticed a 4th bug: sometimes a PR gets *two* Travis CI jobs for the
same Travis CI build, only one is updated, and so again the
PR cannot be merged.
For all these reasons, Travis CI was made optional.
It would be nice to have a mandatory Linux job: "Tests / Ubuntu
(pull_request)" is a good candidate. But I didn't check whether it's
reliable or not.
See https://github.com/python/core-workflow/issues/377 for the discussion.
Note: if someone manages to fix all the Travis CI issues, we can
reconsider making it mandatory again. But it seems like most people
who tried (myself included) are tired of or bored by Travis CI.
Victor
--
Night gathers, and now my watch begins. It shall not end until my death.
Over in:
* https://bugs.python.org/issue30681
* https://github.com/python/cpython/pull/22090
Georges Toth has a PR that fixes some problems with email.utils.parsedate_to_datetime(). I like the PR, and am ready to approve it for 3.10. Georges would like it backported, which I would normally be okay with *except* that it adds a new “defect” class.
Defects are a way for the email parser to indicate problems with the incoming data without throwing an exception. This is an important constraint because we never want clients of the parser to have to deal with exceptions. So if e.g. a message had some formatting or syntactic problem, but was otherwise parseable, you’d still get an email object back from the parser, but attached to it would be a list of defects that were encountered. Clients then could choose to ignore or handle these defects depending on the use case. Defects are implemented as classes that get instantiated with some useful information and appended to an email message’s “defects” list.
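To make the mechanism concrete, here is a small sketch using an existing defect class (NoBoundaryInMultipartDefect, not the new one from the PR): a malformed message still parses, and the problem shows up on the message's defects list rather than as an exception.

```python
import email
from email import errors, policy

# A multipart message missing its required "boundary" parameter: a
# formatting problem, but the message is still otherwise parseable.
raw = "Content-Type: multipart/mixed\n\nbody\n"

msg = email.message_from_string(raw, policy=policy.default)

# No exception was raised; instead the parser recorded a defect that
# clients can inspect, handle, or ignore.
for defect in msg.defects:
    print(type(defect).__name__)
```

With the default policy, handle_defect() appends each defect to the message's defects list instead of raising; a policy with raise_on_defect=True would turn the same condition into an exception.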
PR #22090 adds an InvalidDateDefect for cases where parsing the Date: header encounters problems, such as an invalid hour value. I think this is the right thing to do to fix the reported bug, but I am on the fence as to whether this new defect class should prevent back porting. OT1H, it can’t break existing code, but OTOH this defect will only be found when run in Python bug fix releases with the new defect detection.
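For context, a minimal sketch of the current behavior the PR addresses: a well-formed date parses fine, while an unparseable one escapes as an exception (the exact exception type varies by Python version), which is exactly what a defect is meant to avoid.

```python
from email.utils import parsedate_to_datetime

# A well-formed RFC 5322 date parses cleanly:
dt = parsedate_to_datetime("Tue, 13 Oct 2020 11:17:33 +0100")
print(dt.isoformat())

# An unparseable date currently escapes as an exception (TypeError on
# older releases, ValueError on newer ones) rather than being reported
# as a defect on the message:
try:
    parsedate_to_datetime("not a date")
    failed = False
except (TypeError, ValueError):
    failed = True
```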
What do you think? And especially, what does Łukasz think, since he's the RM for the backport candidates, 3.8 and 3.9?
Cheers,
-Barry
https://www.python.org/dev/peps/pep-0641/ (once the cron job runs)
https://discuss.python.org/t/pep-641-using-an-underscore-in-the-version-por…
for discussions.
This was discussed at the core dev sprints and has RM sign-off. The plan is
to discuss this at the next steering council meeting where I will advise to
make Pablo the PEP delegate.
The main reasons for the PEP are visibility, giving people something to
point to, and seeing whether this plan raises any major red flags.
With the next SC election fast approaching, I did the final tweaks I wanted
to make to the voters repo to address visibility issues we had in the last
election.
First, there is now a monthly cron job that will run at
https://github.com/python/voters/actions?query=workflow%3A%22Projected+Vote…
which will project a Dec 01 vote and then calculate who would fall off the
voter roll based solely on activity, who would be added, and then the full
list of voters. What that means is the two years of activity are calculated
back from the next Dec 01, so you can check whether you have committed
or authored code recently enough to be automatically placed on the voter
roll.
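The projection logic can be approximated with a small sketch; activity_cutoff is a hypothetical helper name of mine, not something from the voters repo, but the two-years-back-from-Dec-01 arithmetic matches the description above.

```python
from datetime import date

def activity_cutoff(projected_vote: date) -> date:
    """Hypothetical helper: a voter must have committed or authored code
    on or after this date to stay on the roll based on activity alone.
    (Safe for Dec 01 inputs; a Feb 29 input would need special casing.)"""
    return projected_vote.replace(year=projected_vote.year - 2)

print(activity_cutoff(date(2020, 12, 1)))  # 2018-12-01
```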
Second, I created
https://github.com/python/voters/actions?query=workflow%3A%22Generate+Voter…
for manually creating the voter roll. This means people can manually
trigger the same code used to create the initial voter roll and see who
would (not) be automatically placed on it. I expect this to mostly be used
by the folks running the election. And I do advise specifying the full
date as the input instead of using the MM-DD shortcut: if you pick today's
date, it will most likely wrap around to projecting a vote next year.
Finally, I updated the data to include when someone left the core team (and
if someone was ejected, which is a term from PEP 13). For those that never
entered a GitHub username, I implicitly recorded them as having left the
team the day the first PR was merged on GitHub (2017-02-10), since they
stopped being able to participate actively from that day forward, with an
appropriate note as to why. This is now shown in the developer log at
https://devguide.python.org/developers/.
Hopefully this is enough to easily check if one should try to get a quick
PR committed and/or authored before an election. We can all also try to
remember to include it in the vote announcement email going forward if
anyone forgets.
This week, we granted bug triage permissions to two new members: Irit
Katriel[1] and Andre Delfino[2].
Irit has been active commenting on issues on the bug tracker and has helped
move the issues along. She is also actively participating in our sprint
this week.
Andre already has the Developer role on bpo. Andre has been contributing to
CPython for more than two years, has made lots of pull requests, many of
them merged, and is very familiar with our workflow.
Thank you Irit and Andre for all the work you do!
The requests for the triage role:
[1] https://github.com/python/core-workflow/issues/378
[2] https://github.com/python/core-workflow/issues/379
Hi!
I have updated the branch benchmarks in the pyperformance server and now
they include 3.9. There are
some benchmarks that are faster but on the other hand some benchmarks are
substantially slower, pointing
at a possible performance regression in 3.9 in some aspects. In particular
some tests, like "unpack_sequence", are
almost 20% slower. As there are other tests where 3.9 is faster, it is not
fair to conclude that 3.9 is slower overall, but
this is something we should look into in my opinion.
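As a rough stand-in for what the unpack_sequence benchmark exercises (this is an assumed shape based on the benchmark's name, not the actual pyperformance code), a quick local micro-benchmark of sequence unpacking looks like:

```python
import timeit

# Time tuple and list unpacking in a tight loop, the core operation the
# unpack_sequence benchmark stresses.
tuple_time = timeit.timeit("a, b, c, d = seq",
                           setup="seq = (1, 2, 3, 4)",
                           number=1_000_000)
list_time = timeit.timeit("a, b, c, d = seq",
                          setup="seq = [1, 2, 3, 4]",
                          number=1_000_000)
print(f"tuple: {tuple_time:.3f}s  list: {list_time:.3f}s")
```

Running this under different interpreter builds can hint at a regression, but only the pyperformance suite on the prepared server gives trustworthy numbers.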
You can check these benchmarks I am talking about by:
* Go here: https://speed.python.org/comparison/
* In the left bar, select "lto-pgo latest in branch '3.9'" and "lto-pgo
latest in branch '3.8'"
* To better read the plot, I would recommend selecting a "Normalization" to
the 3.8 branch (this is in the top part of the page)
and checking the "horizontal" checkbox.
These benchmarks are very stable: I have executed them several times over
the weekend yielding the same results and,
more importantly, they are being executed on a server specially prepared
for running reproducible benchmarks: CPU affinity, CPU isolation, CPU
pinning for NUMA nodes, fixed CPU frequency, CPU governor set to
performance mode, IRQ affinity disabled for the benchmarking CPU
nodes, etc., so you can trust these numbers.
I kindly suggest that everyone interested in improving the performance of
3.9 (and master) review these benchmarks and try to identify and fix the
problems, or find which changes introduced the regressions in the first
place. All benchmarks
are the ones being executed by the pyperformance suite (
https://github.com/python/pyperformance) so you can execute them
locally if you need to.
---
On a related note, I am also working on the speed.python.org server to
provide more automation and
ideally some integrations with GitHub to detect performance regressions.
For now, I have done the following:
* Recompute benchmarks for all branches using the same version of
pyperformance (except master) so they can
be compared with each other. This can only be seen in the "Comparison"
tab: https://speed.python.org/comparison/
* I am setting daily builds of the master branch so we can detect
performance regressions with daily granularity. These
daily builds will be located in the "Changes" and "Timeline" tabs (
https://speed.python.org/timeline/).
* Once the daily builds are working as expected, I plan to work on
automatically commenting on PRs or on bpo if we detect that a commit has
introduced some notable performance regression.
Regards from sunny London,
Pablo Galindo Salgado.
On Tue, Oct 13, 2020 at 11:17:33AM +0100, Steve Holden wrote:
> Full marks to the SC for transparency. That's a healthy sign that the
> community acknowledges its disciplinary processes must also be open to
> scrutiny, and rather better than dealing with matters in a Star Council.
The SC didn't say anything until Antoine posted an open letter from
Stefan to the list.
There is tension between the requirements of openness and privacy, and I
don't have a good answer for where the balance should be. But it seems
to me that giving "full marks for transparency" for a decision made
behind closed doors, which we only found out about because one of the
parties was able to announce their ban via a third party, is a remarkably
soft grade.
Steve
We've got the automerge label on GH, and together with the bot it's
awesome. There's one more thing I'd like to see that could help with bug
hygiene: a label to close the associated bug as "fixed" after the merge
happens.
This doesn't have to be tied to automerge; in practice you'd find them used
in unison somewhat often. More readily on features done on the main branch
rather than bug fixes needing backports to multiple releases.
We've had such a system at work for so long I don't even remember when it
was added, but it has been a great time saver. No more bugs lying around
fixed but not marked as such. Less need for triagers to manually ask
someone who has the permissions to change the bug state. Fewer
unintentionally still-open bugs in the way distracting people. Good all
around.
It isn't the primary way to close issues, but it helps in situations where
it makes sense. I'd assume the same set of people allowed to add automerge
should be allowed to add this label.
-gps