Issue 26852 proposes an enhancement to reduce the size of the Python
installation by using a sourceless distribution. It seems that
discussion on this problem (the size of the Python installation on
mobile devices) should be moved here.
IMHO there is no need to debate the fact that a full install of
Python on a mobile device is a problem today.
As a reference, here is a rough estimate of the sizes of the different
configurations:
- full install: 111M
- without the test suite: 53M
- without the test suite and as a sourceless distribution: 23M
- same as above but also without ensurepip: 14M
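The savings from a sourceless distribution come from the fact that comments and formatting are not stored in bytecode. A small sketch (module contents are made up for illustration) shows the effect:

```python
import os
import py_compile
import tempfile

# Build a sample module whose source is mostly comments, compile it,
# and compare the size of the source against the bytecode-only form.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "mod.py")
    with open(src, "w") as f:
        f.write("# explanatory comment line\n" * 1000)
        f.write("def answer():\n    return 42\n")

    pyc = py_compile.compile(src, cfile=os.path.join(tmp, "mod.pyc"))

    src_size = os.path.getsize(src)
    pyc_size = os.path.getsize(pyc)
    # Comments never reach the bytecode, so the .pyc is much smaller.
    assert pyc_size < src_size
    print(f"source: {src_size} bytes, bytecode: {pyc_size} bytes")
```

The real ratio depends on how comment- and docstring-heavy the code is, which is why the stdlib-wide numbers above are only estimates.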
It would be useful to find the references to the previous discussions
related to sourceless distributions, especially the reasons why their
use is discouraged.
At Dropbox I see a lot of people who want to start using type
annotations (PEP 484, mypy) struggle with the collection ABCs.
It's pretty common to want to support both sets and lists of values,
as these have a lot of useful behavior: they are (re)iterable, have a
size, and implement `__contains__` (i.e. x in c, x not in c). It's
pretty common for people to think that Iterable is the answer, but
it's not. I'm beginning to think that it would be useful to add
another ABC to collections.abc (and to typing) that represents the
intersection of these three ABCs, and to "upgrade" Set and Sequence to
inherit from it. (And Mapping.)
(Another useful concept is "reiterable", i.e. an Iterable that can be
iterated over multiple times -- there's no ABC to indicate this
concept. Sequence, Set and Mapping all support this concept, Iterator
does not include it, but Iterable is wishy-washy.)
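A minimal sketch of what such an ABC could look like (the name `Collection` and the `__subclasshook__` details here are assumptions for illustration, not an existing API):

```python
from collections.abc import Container, Iterable, Sized

class Collection(Iterable, Container, Sized):
    """Sketch: the intersection of Sized, Iterable and Container."""
    __slots__ = ()

    @classmethod
    def __subclasshook__(cls, C):
        # Duck-typed check: anything defining all three methods counts.
        if cls is Collection:
            return all(
                any(m in B.__dict__ for B in C.__mro__)
                for m in ("__len__", "__iter__", "__contains__")
            )
        return NotImplemented

# list, set and dict all provide __len__, __iter__ and __contains__,
# so they qualify; a generator (a mere Iterator) does not.
assert issubclass(list, Collection)
assert issubclass(set, Collection)
assert issubclass(dict, Collection)
assert not issubclass(type(x for x in ()), Collection)
```

This also illustrates the "reiterable" point: everything passing this check can be iterated more than once, while a bare Iterator cannot.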
--Guido van Rossum (python.org/~guido)
> Date: Thu, 28 Jul 2016 22:23:35 +0000
> From: Emanuel Barry <vgr255(a)live.ca>
> To: "guido(a)python.org" <guido(a)python.org>
> Cc: "python-ideas(a)python.org" <python-ideas(a)python.org>
> Subject: Re: [Python-ideas] Make types.MappingProxyType hashable
>> What real-world use case do you have for this?
> Not me, actually. Someone in #python asked about an immutable dict, and
> I pointed them towards mappingproxy, only to realize it wasn't hashable.
> Maybe something like frozendict could be added, though? At this point I
> don't really have an opinion, so it's up to the list to decide if they want
> it or not :)
If the keys can be converted to/from valid identifiers, and the values are
hashable, then namedtuple can give you your frozendict.
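A sketch of that namedtuple trick (the helper name `frozendict` and the sample mapping are made up for illustration):

```python
from collections import namedtuple

def frozendict(mapping):
    """Freeze a mapping whose keys are identifiers and values hashable."""
    keys = sorted(mapping)  # fixed field order so equal mappings hash alike
    return namedtuple("frozendict", keys)(**mapping)

settings = frozendict({"host": "example.com", "port": 8080})
assert settings.port == 8080
# hashable, and equal contents hash equally regardless of insertion order
assert hash(settings) == hash(frozendict({"port": 8080, "host": "example.com"}))
# so it works as a dict key
cache = {settings: "connected"}
```

The limitations are exactly the ones stated above: keys must be valid identifiers (lookup is by attribute, not subscript), and unhashable values still fail at hash time.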
While answering a question earlier today, I realized that
types.MappingProxyType is immutable and unhashable. It seems to me that if
all values are hashable (all keys are already guaranteed to be hashable),
then the mappingproxy itself should be hashable. Should I go ahead and make
a patch for this, or is this a bad idea?
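The behavior in question is easy to demonstrate (the sample mapping is illustrative):

```python
from types import MappingProxyType

config = MappingProxyType({"debug": False})

# The proxy is read-only: item assignment raises TypeError...
try:
    config["debug"] = True
except TypeError as e:
    print("immutable:", e)

# ...yet it is still unhashable, which is the inconsistency noted above.
try:
    hash(config)
except TypeError as e:
    print("unhashable:", e)
```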
I am new to this mailing list, hopefully this idea hasn't been raised
already - I searched the PEPs and python-ideas and couldn't find anything
similar, so decided to post this idea. Please forgive me if it is a
duplicate.
Idea: add jsonrpc server and client to stdlib "Internet Protocols and
Support"
I would like to suggest adding a jsonrpc server and client to the stdlib
"Internet Protocols and Support" section with identical functionality to the
existing xmlrpc.server and client but using json instead of xml, including
a subclass SimpleJSONRPCServer modeled on the existing SimpleXMLRPCServer.
JSON is a very common data encoding format, widely established and very
close to Python's dictionaries, yet there exists today no jsonrpc server
or client in the stdlib.
Many jsonrpc implementations using external libraries are available, so
there is clearly demand, but no standard Python library.
All of these external implementations seem to require other external
libraries for transport, e.g. Werkzeug - there is no way to do this just
using the stdlib.
Identical to existing xmlrpc, just using json instead of xml encoding:
I am too new to python-dev so others should estimate this, but it seems to
me that all the hard work has already been done with xmlrpc - only the
encoding would have to be adapted to encode/decode json instead of xml,
which seems trivial, given that a json module already exists in stdlib.
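To gauge how much work is involved, here is a rough stdlib-only sketch of what a SimpleJSONRPCServer could look like, built on http.server and json (the class and method names are illustrative, not an actual stdlib API, and error handling per the JSON-RPC 2.0 spec is omitted):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class JSONRPCHandler(BaseHTTPRequestHandler):
    """Dispatch a posted JSON-RPC 2.0 request to a registered function."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        func = self.server.functions[request["method"]]
        response = {
            "jsonrpc": "2.0",
            "result": func(*request.get("params", [])),
            "id": request.get("id"),
        }
        body = json.dumps(response).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass

class SimpleJSONRPCServer(HTTPServer):
    """Sketch of a stdlib-only analogue of SimpleXMLRPCServer."""

    def __init__(self, addr):
        super().__init__(addr, JSONRPCHandler)
        self.functions = {}

    def register_function(self, func, name=None):
        self.functions[name or func.__name__] = func
```

A client would then POST `{"jsonrpc": "2.0", "method": "add", "params": [2, 3], "id": 1}` and get back `{"jsonrpc": "2.0", "result": 5, "id": 1}`. Even this toy version shows the transport and dispatch machinery already exists in the stdlib; a real proposal would mostly be API design.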
I appreciate your feedback.
Achim Andreas von Roznowski
I have been reading Luciano Ramalho's Fluent Python, and in the book he says he cannot think of a good use for staticmethod. I have been thinking about that, and so far I have only thought of one good use case: when you have a version of a common function, e.g. sum, and you want to qualify it with a better name, but something like sumLineItems would sound wrong (to me at least). Would this be a good choice for something like:
class LineItem:
    . . . . . . . .
    def __add__(self, other):
        ### Function body ###
    . . . . . . . .
    @staticmethod
    def sum(items: Sequence[LineItem]) -> LineItem:
        ### Add up LineItems ###
Does this seem like a reasonable use for staticmethod? I think it's a good way to qualify a name in a more concise way.
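Filled out, the sketch runs like this (the LineItem class and its amount field are invented purely to make the example concrete):

```python
from typing import Sequence

class LineItem:
    """Hypothetical invoice line item, used to illustrate staticmethod."""

    def __init__(self, amount: int) -> None:
        self.amount = amount

    def __add__(self, other: "LineItem") -> "LineItem":
        return LineItem(self.amount + other.amount)

    @staticmethod
    def sum(items: Sequence["LineItem"]) -> "LineItem":
        # A 'sum' qualified by the class name: LineItem.sum(...) reads
        # better than a module-level sumLineItems(...).
        total = LineItem(0)
        for item in items:
            total = total + item
        return total

total = LineItem.sum([LineItem(3), LineItem(4)])
assert total.amount == 7
```

The staticmethod buys exactly the naming benefit described above: the call site reads LineItem.sum, with no self or cls needed.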
Here is my speculative language idea for Python:
Allow the following alternative spelling of the keyword `lambda':
(That is "Unicode Character 'GREEK SMALL LETTER LAMDA' (U+03BB).")
I have been using the Vim "conceal" functionality with a rule which visually
replaces lambda with λ when editing Python files. I find this a great
readability improvement, since λ is visually less distracting while still
quite recognizable.
(The fact that λ is syntax-colored as a keyword also helps with this.)
However, at the moment the nice syntax is lost when looking at the file
through another editor or viewer.
Therefore I would really like this to be an official part of the Python
language.
I know people have been clamoring for shorter lambda-syntax in the past; I
think this is a nice minimal extension.
lst.sort(key=lambda x: x.lookup_first_name())
lst.sort(key=λ x: x.lookup_first_name())
# Church numerals
zero = λ f: λ x: x
one = λ f: λ x: f(x)
two = λ f: λ x: f(f(x))
(Yes, Python is my favorite Scheme dialect. Why did you ask?)
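For the record, the Church numerals above work today with the long spelling; a small helper (`to_int`, added here for illustration) shows they really compute:

```python
# Church numerals: a number n is "apply f n times".
zero = lambda f: lambda x: x
one = lambda f: lambda x: f(x)
two = lambda f: lambda x: f(f(x))

# Successor: apply f one more time than n does.
succ = lambda n: lambda f: lambda x: f(n(f)(x))
# Convert back to an int by counting applications of +1 starting at 0.
to_int = lambda n: n(lambda k: k + 1)(0)

assert to_int(zero) == 0
assert to_int(one) == 1
assert to_int(two) == 2
assert to_int(succ(two)) == 3
```

The proposal is purely about spelling: every λ above would be the identical program with the keyword concealed.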
Note that a number of other languages already allow this. (Racket, Haskell).
You can judge the aesthetics of this on your own code with the Vim
"conceal" rule mentioned above. Advantages:
* The lambda keyword is quite long and distracts from the "meat" of the
expression. Replacing it by a single-character keyword improves readability.
* The resulting code resembles more closely mathematical notation (in
particular, lambda-calculus notation), so it brings Python closer to being
"executable pseudo-code".
* The alternative spelling λ/lambda is quite intuitive (at least to anybody
who knows Greek letters).
Disadvantages already noticed here, for your convenience:
* Introducing λ is introducing TIMTOWTDI.
* Hard to type with certain editors. But note that the old syntax is still
available. Easy to fix by upgrading to Vim ;-)
* Will turn a pre-existing legal identifier λ into a keyword.
Needless to say, my personal opinion is that the advantages outweigh the
disadvantages.
On 21 July 2016 at 15:08, Rustom Mody <rustompmody(a)gmail.com> wrote:
> My “wrongheaded” was (intended) quite narrow and technical:
> - The embargo on non-ASCII everywhere in the language except identifiers
> and comments obviously dont count as “in” the language
> - The opening of identifiers to large swathes of Unicode widens as you say
> hugely the surface area of attack
> This was solely the contradiction I was pointing out.
OK, thanks for the clarification, and my apologies for jumping on you.
I can be a bit hypersensitive on this topic, as my day job sometimes
includes encouraging commercial redistributors and end users to stop
taking community volunteers for granted and instead help find ways to
ensure their work is sustainable :)
As it is, I think there are some possible checks that could be added
to the code generator pipeline to help clarify matters:
- for the "invalid character" error message, we should be able to
always report both the printed symbol *and* the ASCII hex escape,
rather than assuming the caret will point to the correct place
- the caret positioning logic for syntax errors needs to be checked to
see if it's currently counting encoded UTF-8 bytes instead of code
points (as that will consistently do the wrong thing on a correctly
configured UTF-8 terminal)
- (more speculatively) when building the symbol table, we may be able
to look for identifiers referenced in a namespace that are not NFKC
equivalent, but nevertheless qualify as Unicode confusables, and emit
a SyntaxWarning (this is speculative, as I'm not sure what degree of
performance hit would be associated with it)
As far as Danilo's observation regarding the CPython code generator
always emitting SyntaxError and SyntaxWarning (regardless of which
part of the code generation actually failed) goes, I wouldn't be
opposed to our getting more precise about that by defining additional
subclasses, but one of the requirements would be for documentation in
https://docs.python.org/devguide/compiler.html or helper functions in
the source to clearly define "when working on <this> part of the code
generation pipeline, raise <that> kind of error if something goes wrong".
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
This idea of "visually confusable" seems like a very silly thing to worry
about, as others have noted.
It's not just that completely different letters from different alphabets
may "look similar", it's also that the similarity is completely dependent
on the specific font used for display. My favorite font might have clearly
distinguished glyphs for the Cyrillic, Roman, and Greek "A", even if your
font uses identical glyphs.
So in this crazy scenario, Python would have to gain awareness of the fonts
installed in every text editor and display device of every user.
On Jul 21, 2016 12:55 AM, "Chris Angelico" <rosuav(a)gmail.com> wrote:
On Thu, Jul 21, 2016 at 5:47 PM, Rustom Mody <rustompmody(a)gmail.com> wrote:
>> On Thu, Jul 21, 2016 at 4:26 PM, Rustom Mody <rusto...(a)gmail.com> wrote:
>> > IOW
>> > 1. Disallow co-existence of confusables (in identifiers)
>> > 2. Identify confusables to a normal form — like case-insensitive
>> > comparison
>> > and like NFKC
>> > 3. Leave the confusables to confuse
>> > My choice
>> > 1 better than 2 better than 3
>> So should we disable the lowercase 'l', the uppercase 'I', and the
>> digit '1', because they can be confused? What about the confusability
>> of "m" and "rn"? O and 0 are similar in some fonts. And case
>> insensitivity brings its own problems - is "ss" equivalent to "ß", and
>> is "ẞ" equivalent to either? Turkish distinguishes between "i", which
>> upper-cases to "İ", and "ı", which upper-cases to "I".
>> We already have interminable debates about letter similarities across
>> scripts. I'm sure everyone agrees that Cyrillic "и" is not the same
>> letter as Latin "i", but we have "AАΑ" in three different scripts.
>> Should they be considered equivalent? I think not, because in any
>> non-trivial context, you'll know whether the program's been written in
>> Greek, a Slavic language, or something using the Latin script. But
>> maybe you disagree. Okay; are "BВΒ" all to be considered equivalent
>> too? What about "СC"? "XХΧᚷ"? They're visually similar, but they're
>> not equivalent in any other way. And if you're going to say things
>> should be considered equivalent solely on the basis of visuals, you
>> get into a minefield - should U+200B ZERO WIDTH SPACE be completely
>> ignored, allowing "AB" to be equivalent to "A\u200bB" as an
>> identifier?
> I said 1 better than 2 better than 3
> Maybe you also want to add:
> Special cases aren't special enough to break the rules.
> Although practicality beats purity.
> followed by
> Errors should never pass silently.
> IOW setting out 1 better than 2 better than 3 does not necessarily imply
> completely achievable
No; I'm not saying that. I'm completely disagreeing with #1's value. I
don't think the language interpreter should concern itself with
visually-confusing identifiers. Unicode normalization is about
*equivalent characters*, not confusability, and I think that's as far
as Python should go.
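That distinction is easy to see with the stdlib unicodedata module: NFKC folds *compatibility* equivalents (and CPython applies it to identifiers, per PEP 3131), but leaves mere cross-script lookalikes alone. A small demonstration:

```python
import unicodedata

# NFKC folds compatibility equivalents: MICRO SIGN (U+00B5) becomes
# GREEK SMALL LETTER MU (U+03BC)...
assert unicodedata.normalize("NFKC", "\u00b5") == "\u03bc"

# ...and Python applies this normalization to identifiers:
ns = {}
exec("\u00b5 = 1", ns)    # the source spells the name with MICRO SIGN
assert ns["\u03bc"] == 1  # but it is stored under GREEK SMALL LETTER MU

# NFKC does NOT fold visual confusables: Cyrillic а (U+0430) stays a
# different character, and hence a different identifier, from Latin a.
assert unicodedata.normalize("NFKC", "\u0430") != "a"
```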
While working on some object-oriented projects in Java, I noticed that Python does not have an @override decorator for methods that are inherited from a parent class but overridden for the unique purposes of a subclass. With the creation of the @abstractmethod decorator, an override decorator could follow, clearly distinguishing the logic and design between parent and child classes.
Why I would think an override decorator might be useful:
For other people reading the code, it would be great to distinguish between methods that are unique to a class and methods that are inherited from a parent class. Not all methods inherited might be overridden, so keeping track of which inherited methods are overridden and which are not would be nice.
With the advent of static typing from mypy:
Having the decorator could corroborate the fact that the given method overrides the parent method correctly (correct name + parameter list).
When the parent class changes, such as the name or parameter list of an abstract method, the child classes should be updated as well. mypy could easily target the methods that need to be altered with the correct method signature.
If a method is marked with the override decorator but does not actually override a parent method (say, because of a typo or a renamed parent method), an error could be reported. This would be extremely useful to prevent accidental mistakes.
There is some interest for this as expressed on Stack Overflow (http://stackoverflow.com/questions/1167617/in-python-how-do-i-indicate-im-o…) and some people have also made packages for this, but having it in the standard distribution would be nice. Thoughts?
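A runtime version of the idea can be sketched in a few lines (the `override` marker, the `Checked` base class, and the `__init_subclass__` check are all hypothetical, chosen just to show the mechanics; a mypy-based check would work statically instead):

```python
def override(method):
    # Mark the method; the real check happens at class-creation time.
    method.__is_override__ = True
    return method

class Checked:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        for name, attr in cls.__dict__.items():
            if getattr(attr, "__is_override__", False):
                # Verify some ancestor actually defines this method.
                if not any(hasattr(base, name) for base in cls.__mro__[1:]):
                    raise TypeError(
                        f"{name} is marked @override but overrides nothing")

class Base(Checked):
    def greet(self):
        return "hello"

class Child(Base):
    @override
    def greet(self):          # fine: Base defines greet
        return "hi"

try:
    class Broken(Base):
        @override
        def greets(self):     # typo: Base has no 'greets'
            return "oops"
except TypeError as e:
    print(e)
```

This catches the rename-in-the-parent failure mode described above at import time rather than silently leaving the child's method dangling.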