I just spent a few minutes staring at a bug caused by a missing comma
-- I got a mysterious argument count error because instead of foo('a',
'b') I had written foo('a' 'b').
This is a fairly common mistake, and IIRC at Google we even had a lint
rule against this (there was also a Python dialect used for some
specific purpose where this was explicitly forbidden).
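For illustration, the pitfall in miniature:

```python
# The missing comma silently merges the two literals into one argument,
# and the resulting error points at the call arity, not at the typo.
def foo(x, y):
    return (x, y)

print(foo('a', 'b'))   # ('a', 'b') -- the intended two-argument call
print('a' 'b')         # ab -- implicit concatenation: ONE string
try:
    foo('a' 'b')       # actually foo('ab'): a single argument
except TypeError as exc:
    print(exc)         # foo() missing 1 required positional argument: 'y'
```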
Now, with modern compiler technology, we can (and in fact do) evaluate
compile-time string literal concatenation with the '+' operator, so
there's really no reason to support 'a' 'b' any more. (The reason was
always rather flimsy; I copied it from C, but the reason why it's
needed there doesn't really apply to Python, as it is mostly useful
there in combination with macros.)
Would it be reasonable to start deprecating this and eventually remove
it from the language?
--Guido van Rossum (python.org/~guido)
The French translation of the Python documentation has translated the
pages accounting for 20% of the pageviews of docs.python.org. I think
it's the right moment to push it to docs.python.org. So there are some
questions, and I'd like your opinions!
TL;DR (with my personal choices):
- URL may be "http://docs.python.org/fr/"
- For localized variations of languages we should use dash and
lowercase like "docs.python.org/pt-br/"
- po files may be hosted on Python's GitHub
- existing script to build doc may be patched to build translations
- each translations may crosslink to others
- untranslated strings may be visually marked as so
I also opened: http://bugs.python.org/issue26546.
# Chronology, dependencies
The only blocking decision here is the URL (plus reviewing my patch
...); with those two, translated docs can be pushed to production, and
the other steps can be discussed and applied one by one.
# The URL
## CCTLD vs path vs subdomain
I think we should use a variation of "docs.python.org/fr/" for
simplicity and clarity.
I think we should avoid using ccTLDs as they're sometimes hard or near
impossible to obtain (it may cost a lot of time), and some are expensive,
so it's time and money we clearly don't need to lose.
The last possibility I see is to use a subdomain, like fr.docs.python.org
or docs.fr.python.org, but I don't think it's the role / responsibility
of the subdomain to carry the language.
So I'm for docs.python.org/LANGUAGE_TAG/ (without moving current
documentation inside a /en/).
## Language tag in path
### Dropping the default locale of a language
I personally think we should not show the region in case it's redundant:
so to use "fr" instead of "fr-FR", "de" instead of "de-DE", but keeping
the possibility to use a locale code when it's not redundant like for
"pt-br" or "de-AT" (German ('de') as used in Austria ('AT')).
I think so because I don't think we'll have many locale variations
(like de-AT, fr-CH, fr-CA, ...), so the region would most of the time
be redundant (visually heavy, longer to type, longer to read), but
we'll still need some locales (pt-BR typically).
### gettext VS IETF language tag format
gettext uses an underscore between language and region, while IETF
language tags use a dash.
As Sphinx uses gettext, and gettext uses underscores, we could choose
the underscore too. But URLs are not here to leak the underlying
implementation, and IETF language tags look like the standard way to
represent them. Also, I visually prefer the dash over the underscore,
so I'm for the dash here.
### Lower case vs upper case local tag
RFC 5646 (section 2.1) tells us language tags are not case sensitive,
yet ISO 3166-1 recommends that country codes (part of the language tag)
be capitalized. I personally prefer all-lowercase, as paths in URLs
typically are lowercase. I searched for `inurl:"pt-br"` to see if I'm
not too far from actual usage, and usage seems to agree with me,
although there are some "pt-BR" in URLs.
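To make the proposed convention concrete, here is a hypothetical helper
(my own sketch, not an existing script) that maps gettext-style locale
codes to the suggested URL tags:

```python
def url_language_tag(gettext_locale):
    # "fr_FR" -> "fr" (region redundant), "pt_BR" -> "pt-br",
    # "de_AT" -> "de-at", plain "fr" -> "fr".
    parts = gettext_locale.replace('-', '_').split('_')
    language = parts[0].lower()
    if len(parts) == 1:
        return language
    region = parts[1].lower()
    return language if region == language else '{}-{}'.format(language, region)

print(url_language_tag('fr_FR'))  # fr
print(url_language_tag('pt_BR'))  # pt-br
```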
# Where to host the translated files
Currently we're hosting the *po* files in the GitHub repository of the
AFPy (the francophone Python association), but it may make sense to use
(in the generation scripts) a more controlled / restricted clone under
the Python GitHub organization, at least to have a better view of who
can push to it. We may also want to aggregate all translations under
the same git repository, but I don't feel that's useful.
# How to
Currently, a Python script is used to generate `docs.python.org`; I
proposed a patch to make this script clone and build the French
translation too. It's a simple and effective way, and I don't think we
need more. Any ideas welcome.
On our side, we have a Makefile to build the translated docs which is
only a thin layer on top of the Sphinx Makefile. So my proposed patch
to the build scripts "just" delegates the build to our Makefile, which
itself delegates the hard work to the Sphinx Makefile.
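As a hedged sketch of what such a patch could do for one translation
(repository URL, paths, and function names are all made up; the real
scripts delegate to the Makefiles instead):

```python
import subprocess

def sphinx_command(language, source, outdir):
    # Build the sphinx-build invocation for one translated doc tree.
    return ['sphinx-build', '-b', 'html',
            '-D', 'language={}'.format(language),
            '-D', 'locale_dirs=locales',
            source, outdir]

def build_translation(language, repo, source, outdir):
    # Clone the po-file repository, then build the translated HTML.
    subprocess.run(['git', 'clone', '--depth', '1', repo, source], check=True)
    subprocess.run(sphinx_command(language, source, outdir), check=True)
```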
# Next ?
## Document how to translate Python
I think I can (should) write documentation on "how to start a Python
doc translation project" and "how to migrate existing Python doc
translation projects to docs.python.org" if French does go to
docs.python.org, because it may hopefully motivate people to do the
same, and I think our structure is a nice way to do it (a Makefile to
generate the doc, all versions translated, people mainly working on the
latest version, scripts to propagate translations to older versions,
etc.).
## Crosslinking between existing translations
Once the translations are on `docs.python.org`, crosslinks may be
established so people on one version can be aware of the others and
easily switch to them. I'm not a UI/UX man, but I think we could have a
select box right before the existing version select box, in the
top-left corner. Right before, because it would reflect the path:
/fr/3.5/ -> [select box fr][select box 3.5].
## Marking as "untranslated, you can help" the untranslated paragraphs
The translations will always need work to follow upstream
modifications: marking untranslated paragraphs as such may transform
the "Oh, they suck, this paragraph is not even translated :-(" into
"Hey, nice, I can help translate that!". There's an open sphinx-doc
ticket to do so, but I have not worked on it yet. As previously said,
I'm really bad at designing user interfaces, so I don't even visualize
how I'd like it to look.
This is a writeup of a proposal I floated here:
last Sunday. If the response is positive I wish to write a PEP.
Briefly, it is a natural expectation of users that the command:
python -m module_name ...
used to invoke modules in "main program" mode on the command line
imports the module as "module_name". It does not; it imports it as
"__main__". A subsequent "import module_name" within the program makes
a new instance of the module, which causes cognitive dissonance and has
the side effect that the program now has two instances of the module.
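A runnable demonstration of the double-instance problem (the module
name "mymod" is made up for illustration): we write a tiny module, run
it with `python -m mymod`, and watch it import a second copy of itself.

```python
import os
import subprocess
import sys
import tempfile

MODULE = """\
counter = 0

def bump():
    global counter
    counter += 1

if __name__ == '__main__':
    bump()                  # mutate THIS instance (sys.modules['__main__'])
    import mymod            # status quo: creates a fresh, second instance
    print(counter)          # the __main__ instance's counter
    print(mymod.counter)    # the freshly imported instance's counter
"""

workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, 'mymod.py'), 'w') as f:
    f.write(MODULE)

result = subprocess.run([sys.executable, '-m', 'mymod'], cwd=workdir,
                        capture_output=True, text=True)
print(result.stdout)  # "1" then "0": two distinct module instances
```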
What I propose is that the above command line _should_ bind
sys.modules['module_name'] as well as binding '__main__' as it does currently.
I'm proposing that the python -m option have this effect (python pseudocode):
% python -m module.name ...
# pseudocode, with values hardwired for clarity
M = new_empty_module(name='__main__', qualname='module.name')
sys.modules['__main__'] = M
sys.modules['module.name'] = M
# load the module code from wherever (not necessarily a file - CPython
# already must do this phase)
Specifically, this would make the following two changes to current practice:
1) the module is imported _once_, and bound to both its canonical name and
also to __main__.
2) imported modules acquire a new attribute __qualname__ (analogous to the
recent __qualname__ on functions). This is always the canonical name of the
module as resolved by the importer. For most modules __name__ will be the same
as __qualname__, but for the "main" module __name__ will be '__main__'.
This change has the following advantages:
The current standard boilerplate:
if __name__ == '__main__':
... invoke "main program" here ...
continues to work unchanged.
Importantly, if the program then issues "import module_name", it is already
there and the existing instance is found and used.
The thread referenced above outlines my most recent encounter with this and the
trouble it caused me. Followup messages include some support for this proposed
change, and some criticism.
The critiquing article included some workarounds for this multiple-module
situation, but they were (1) somewhat dependent on modules coming from a file
pathname and (2) cumbersome, requiring every affected end user to adopt these
changes. I'd like to avoid that.
Cameron Simpson <cs(a)zip.com.au>
The reasonable man adapts himself to the world; the unreasonable one persists
in trying to adapt the world to himself. Therefore all progress depends
on the unreasonable man. - George Bernard Shaw
Has anyone else found this to be too syntactically noisy?
from module import Foo as _Foo, bar as _bar
That is horrifically noisy IMO. The problem is, how do we
remove the noise without sacrificing intuitiveness? My first
idea was to do this:
from module import_private Foo, bar
And while it's self-explanatory, it's also too long. So I also
considered:
from module _import Foo, bar
I'm leaning more towards the latter, but I'm not loving it
either. Any ideas?
We have a modified version of singledispatch at work which works for
methods as well as functions. We have open-sourced it as methoddispatch.
I thought it would make a nice addition to the Python stdlib.
What does everyone else think?
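For context, the stdlib later grew `functools.singledispatchmethod`
(Python 3.8) covering this use case; a minimal example in the shape the
functools docs use:

```python
from functools import singledispatchmethod  # added in Python 3.8

class Formatter:
    # The base implementation; dispatch happens on the type of ``value``,
    # not on ``self``, which plain singledispatch cannot do for methods.
    @singledispatchmethod
    def format(self, value):
        return 'obj:{!r}'.format(value)

    @format.register
    def _(self, value: int):
        return 'int:{}'.format(value)

f = Formatter()
print(f.format(3))    # int:3
print(f.format('x'))  # obj:'x'
```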
Hi all !
I've heard some people say it was rude to post on a mailing list without
introducing yourself, so here goes: my name is James Pic and I've been
developing and deploying a wide variety of Python projects for the last
8 years. I love to learn and share, and to write documentation, among
other things such as selling liquor.
The way I've been deploying Python projects so far is probably similar to
what a lot of people do and it almost always includes building runtime
dependencies on the production server. So, nobody is going to congratulate
me for that for sure but I think a lot of us have been doing so.
Now I'm fully aware of distribution-specific packaging solutions like
dh-virtualenv shared by Spotify, but here's my mental problem: I love to
learn and to hack. I'm always trying new distributions and I rarely run
the one that's in production at my company, and when I'm deploying
personal projects I like fun distributions like Arch or Alpine Linux, or
interesting PaaS solutions such as Cloud Foundry, OpenShift or Rancher.
And I'm always facing the same problem: I have to either build runtime
dependencies on the server, or package my thing in the platform-specific
way. I feel like I've spent a really huge amount of time doing this kind
of thing. But the Java people, they have jars, and they have smooth
deployments no matter where they deploy.
So that's the idea I'm trying to share: I'd like to be able to build a
file with my dependencies and my project in it. I'm not sure packaging
only Python bytecode would work here because of C modules. Also, I'm
always developing against a different Python version because I'm using
different distributions, because it's part of my passions in life; as
ridiculous as it may sound to most people, I'm expecting at least some
understanding from this list :)
So I wonder, do you think the best solution for me would be to build an
ELF binary with my Python and dependencies that I could just run on any
distribution, given it's on the right architecture? Note that I like to
use ARM too, so I know I'd need to be able to cross-compile as well.
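For what it's worth, a partial stdlib answer already exists for
pure-Python dependencies: `zipapp` bundles a directory into one
runnable .pyz file (it does not solve the C-extension or
interpreter-bundling problem raised above). A runnable sketch with a
made-up app:

```python
import os
import subprocess
import sys
import tempfile
import zipapp

# Lay out a tiny made-up application directory.
appdir = os.path.join(tempfile.mkdtemp(), 'app')
os.makedirs(appdir)
with open(os.path.join(appdir, 'myapp.py'), 'w') as f:
    f.write("def main():\n    print('hello from a .pyz')\n")

# Bundle the directory into one runnable archive with an entry point.
pyz = appdir + '.pyz'
zipapp.create_archive(appdir, target=pyz, main='myapp:main')
subprocess.run([sys.executable, pyz], check=True)  # prints: hello from a .pyz
```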
Thanks a lot for reading, and if you can take some time to share your
thoughts, or even better point me in a direction: if that idea is the
right solution and I'm going to be the only one interested, I don't care
if it's going to take years for me to achieve this.
Thanks a heap !
PS: I'm currently at the openstack summit in Barcelona if anybody there
would like to talk about it in person, in which case I'll buy you the
Currently str(slice(10)) returns "slice(None, 10, None)".
If the start and step are None, consider not emitting them. Similarly,
slice(None) is rendered "slice(None, None, None)".
When you're printing a lot of slices, it's a lot of extra noise.
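A hypothetical helper sketching the proposed rendering (my own sketch,
not an actual CPython patch):

```python
def short_slice_repr(s):
    # Render a slice, omitting redundant None fields:
    # drop a trailing None step, then a leading None start.
    args = [s.start, s.stop, s.step]
    while len(args) > 1 and args[-1] is None:
        args.pop()
    if len(args) == 2 and args[0] is None:
        args = args[1:]
    return 'slice({})'.format(', '.join(map(repr, args)))

print(short_slice_repr(slice(10)))        # slice(10)
print(short_slice_repr(slice(None)))      # slice(None)
print(short_slice_repr(slice(1, 10, 2)))  # slice(1, 10, 2)
```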
The idea is to let generator expressions and list/set comprehensions
have a clean syntax to access their last output. That would allow them
to be an alternative syntax to the scan higher-order function (today
implemented as itertools.accumulate), which leads to an alternative way
to write a fold/reduce. It would be nice to have something like:
>>> last(abs(prev - x) for x in [3, 4, 5] from prev = 2)
instead of a reduce:
>>> from functools import reduce
>>> reduce(lambda prev, x: abs(prev - x), [3, 4, 5], 2)
or an imperative approach:
>>> prev = 2
>>> for x in [3, 4, 5]:
... prev = abs(prev - x)
or getting the last from accumulate:
>>> from itertools import accumulate
>>> list(accumulate([2, 3, 4, 5], lambda prev, x: abs(prev - x)))[-1]
>>> [prev for prev in [2]
...       for x in [3, 4, 5]
...       for prev in [abs(prev - x)]][-1]
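For comparison, the same scan and fold written with today's stdlib
(itertools.accumulate's `initial` keyword exists since Python 3.8;
`last` is a small helper, hypothetical here as in the proposal):

```python
from itertools import accumulate

def last(iterable, default=None):
    # Exhaust the iterable, keeping only the final item.
    result = default
    for result in iterable:
        pass
    return result

scan = list(accumulate([3, 4, 5], lambda prev, x: abs(prev - x), initial=2))
print(scan)        # [2, 1, 3, 2] -- the scan, start value echoed first
print(last(scan))  # 2 -- same as reduce(lambda p, x: abs(p - x), [3, 4, 5], 2)
```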
Actually, I already wrote a solution for something similar to that:
PyScanPrev. I'm using bytecode manipulation to modify the generator
expression and set/list comprehension semantics to create a "scan", but
it has the limitation of accepting only code with valid syntax as
input, so I can't use "from" inside a generator expression / list
comprehension. The solution was to put the first output into the
iterable and define the "prev" name elsewhere:
>>> last(abs(prev - x) for x in [2, 3, 4, 5])
That line works with PyScanPrev (on Python 3.4 and 3.5) when defined in a
function with an @enable_scan("prev") decorator. That was enough to create
a "test suite" of doctest-based examples that show several scan use cases.
This discussion started in a Brazilian list when someone asked how she
could solve a simple uppercase/lowercase problem. The goal was to
alternate the upper/lower case of a string while neglecting the chars to
which it doesn't apply (i.e., to "keep the state" when the char isn't a
letter). After the discussion, I wrote the PyScanPrev package, and
recently I've added this historical "alternate" function as the
"conditional toggling" example.
So I ask: can Python include that "scan" access to the last output in its
list/set/dict comprehension and generator expression syntax? There are
several possible applications for the scan itself as well as for the
fold/reduce (signal processing, control theory, physics, economics, etc.),
some of which I included as PyScanPrev examples. Some friends (people who
like control engineering and/or signal processing) liked the "state-space
model" example, where I included a "leaking bucket-spring-damper"
simulation using the scan-enabled generator expressions.
About the syntax, there are several ideas on how that can be written. Given
a "prev" identifier, a "target" identifier, an input "iterable" and an
optional "start" value (and perhaps an optional "echo_start", which I
assume True by default), some of them are:
[func(prev, target) for target in iterable from prev = start]
[func(prev, target) for target in iterable] -> prev = start
[func(prev, target) for target in iterable] -> prev as start
[func(prev, target) for target in iterable] from prev = start
[func(prev, target) for target in iterable] from prev as start
[func(prev, target) for target in iterable] with prev as start
prev = start -> [func(prev, target) for target in iterable]
prev(start) -> [func(prev, target) for target in iterable]
[func(prev, target) for prev -> target in start -> iterable]
[prev = start -> func(prev, target) for target in iterable]
# With ``start`` being the first value of the iterable, i.e.,
# iterable = prepend(start, data)
[func(prev, target) for target in iterable from prev]
[func(prev, target) for target in iterable] -> prev
[func(prev, target) for target in iterable] from prev
prev -> [func(prev, target) for target in iterable]
Before writing PyScanPrev, I used stackfull (in a thread in Brazilian
Portuguese) to implement that idea; an accumulator example using that
library is:
>>> from stackfull import push, pop, stack
>>> [push(pop() + el if stack() else el) for el in range(5)]
[0, 1, 3, 6, 10]
There is more I can say (e.g. the pyscanprev.scan function has a "start"
value and an "echo_start" keyword argument, features I missed in
itertools.accumulate), but the links below already have a lot of
information.
Danilo J. S. Bellini
"*It is not our business to set up prohibitions, but to arrive at
conventions.*" (R. Carnap)
Based on some emails I read in the "unpacking generalisations for list
comprehension" thread, I feel like I need to address this entire list
about its general behaviour.
If you don't follow me on Twitter you may not be aware that I am taking the
entire month of October off from volunteering any personal time on Python
for my personal well-being (this reply is being done on work time for
instance). This stems from my wife pointing out that I had been rather
stressed in July and August outside of work in relation to my Python
volunteering (having your weekends ruined is never fun). That stress
stemmed primarily from two rather bad interactions I had to contend with on
the issue tracker in July and August ... and this mailing list.
When I have talked to people about this mailing list it's often referred to
by others as the "wild west" of Python development discussions (if you're
not familiar with US culture, that turn of phrase basically means "anything
goes"). To me that is not a compliment. When I created this list with Titus
the goal was to provide a safe place where people could bring up ideas for
Python where people could quickly provide basic feedback so people could
know whether there was any chance that python-dev would consider the
proposal. This was meant to be a win for proposers by not feeling like they
were wasting python-dev's time and a win for python-dev by keeping that
list focused on the development of Python and not fielding every idea that
people want to propose.
And while this list has definitely helped with the cognitive load on
python-dev, it has not always provided a safe place for people to express
ideas. I have seen people completely dismiss people's expertise and
opinion. There has been name calling and yelling at people (which is always
unnecessary). There have been threads that have completely derailed
themselves and gone entirely off-topic. IOW, I would not hold this
mailing list up as an example of the general discourse that I experience
elsewhere within the community.
Now I realize that we are all human beings coming from different cultural
backgrounds and lives. We all have bad days and may not take the time to
stop and think about what we are typing before sending it, leading to
emails that are worded in a way that can be hurtful to others. It's also
easy to forget that various cultures view things differently and so that
can lead to people "reading between the lines" a lot and picking up things
that were never intended. There are 1,031 people on this mailing list from
around the world and it's easy to forget that e.g. Canadian humour may not
translate well to Ukrainian culture (or something). What this means is it's
okay to *nicely* say that something bothered you, but also try to give
people the benefit of the doubt as you don't know what their day had been
like before they wrote that email (I personally don't like the "just mute
the thread" approach to dealing with bad actors when the muting is silent
as that doesn't help new people who join this mailing list and the first
email they see is someone being rude that everyone else didn't see because
they muted the thread days ago).
As for the off-topic threads, please remember there are 1,031 people on
this mailing list (this doesn't count people reading through gmane or
Google Groups). Being extremely generous and assuming every person on this
list only spends 10 seconds deciding if they care about your email, that's
still nearly 3 hours of cumulative time spent on your email. So please be
cognisant when you reply, and if you want to have an off-topic
conversation, please take it off-list.
And finally, as one of the list administrators I am in a position of power
when it comes to the rules of this list and the CoC. While I'm one of the
judges on when someone has violated the CoC, I purposefully try not to play
the role of police to avoid bias and abuse of power. What that means is
that I never personally lodge a CoC complaint against anyone. That means
that if you feel someone is being abusive here you cannot rely on list
admins noticing and doing something about it. If you feel someone has
continuously been abusive on this list and violating the CoC then you must
email the list admins about it if you wish to see action taken (all
communications are kept private among the admins). Now I'm not asking
people to email us on every small infraction (as I said above, try to give
everyone a break knowing we all have bad days), but if you notice a pattern
then you need to speak up if you would like to see something change.
When I started my month off I thought that maybe if I only read this
mailing list once a week, the frequency would be low enough that I
could handle the stress of being both a participant and an admin who is
ultimately responsible for the behaviour here, but I'm afraid that isn't
going to cut it. What I don't think people realize is that I don't take my
responsibility as admin lightly; any time anyone acts rudely I take it
personally like I somehow failed by letting the atmosphere and discourse on
this list become what it is. Because of this I'm afraid I need to mute this
mailing list for the rest of my vacation from volunteering in the Python
community after I send this email. I personally hope people do take the
time to read this email and reflect upon how they conduct themselves on
this mailing list -- and maybe on other lists as well -- so that when I
attempt to come back in November I don't have to permanently stop being a
participant on this list and simply become an admin for this list to
prevent complete burn-out for me in the Python community (and I know this
last sentence sounds dramatic, but I'm being serious; the irony of
receiving the Frank Willison award the same year I'm having to contemplate
fundamentally shifting how I engage with the community to not burn out is
not lost on me).