There's a PR to the peps proposal here:
https://github.com/python/peps/pull/242
The full text of the current proposal is below. The motivation for this is
that for complex decorators, even if the type checker can figure out what's
going on (by taking the signature of the decorator into account), it's
sometimes helpful to the human reader of the code to be reminded of the
type after applying the decorators (or a stack thereof). Much discussion
can be found in the PR. Note that we ended up having `Callable` in the type
because there's no rule that says a decorator returns a function type (e.g.
`property` doesn't).
This is a small thing but I'd like to run it by a larger audience than the
core mypy devs who have commented so far. There was a brief discussion on
python-ideas (my original
<https://mail.python.org/pipermail/python-ideas/2017-April/045548.html>,
favorable reply
<https://mail.python.org/pipermail/python-ideas/2017-May/045550.html> by
Nick, my response
<https://mail.python.org/pipermail/python-ideas/2017-May/045557.html>).
Credit for the proposal goes to Naomi Seyfer, with discussion by Ivan
Levkivskyi and Jukka Lehtosalo.
If there's no further debate here I'll merge it into the PEP and an
implementation will hopefully appear in the next version of the typing
module (also hopefully to be included in CPython 3.6.2 and 3.5.4).
Here's the proposed text (wordsmithing suggestions in the PR please):
+Decorators
+----------
+
+Decorators can modify the types of the functions or classes they
+decorate. Use the ``decorated_type`` decorator to declare the type of
+the resulting item after all other decorators have been applied::
+
+    from typing import Callable, ContextManager, Iterator, decorated_type
+    from contextlib import contextmanager
+
+    class DatabaseSession: ...
+
+    @decorated_type(Callable[[str], ContextManager[DatabaseSession]])
+    @contextmanager
+    def session(url: str) -> Iterator[DatabaseSession]:
+        s = DatabaseSession(url)
+        try:
+            yield s
+        finally:
+            s.close()
+
+The argument of ``decorated_type`` is a type annotation on the name
+being declared (``session``, in the example above). If you have
+multiple decorators, ``decorated_type`` must be topmost. The
+``decorated_type`` decorator is invalid on a function declaration that
+is also decorated with ``overload``, but you can annotate the
+implementation of the overload series with ``decorated_type``.
+
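At runtime the decorator would presumably be (nearly) a no-op; here is a hypothetical sketch of what a pure-Python implementation could look like. The attribute name and recording behaviour are my invention for illustration, not part of the proposal:

```python
# Hypothetical runtime sketch of the proposed typing.decorated_type:
# it records the declared type and returns the decorated object unchanged.
def decorated_type(typ):
    def decorator(obj):
        try:
            obj.__decorated_type__ = typ   # hypothetical attribute name
        except (AttributeError, TypeError):
            pass                           # some objects reject attributes
        return obj
    return decorator

# A checker would receive a real type; a string stands in here.
@decorated_type("Callable[[int], int]")
def double(x):
    return x * 2

print(double(21))  # 42 -- the decorated function is unchanged
```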
--
--Guido van Rossum (python.org/~guido)
Hi folks,
Over on https://github.com/pypa/python-packaging-user-guide/pull/305#issuecomment-3…
we're looking to update the theming of packaging.python.org to match
that of the language documentation at docs.python.org.
Doing that would also entail updating the documentation of the
individual tools and services (pip, pypi, setuptools, wheel, etc) to
maintain consistency with the main packaging user guide, so Jon has
tentatively broken the theme out as a (not yet published anywhere)
"pypa-theme" package to make it easier to re-use across multiple
projects.
The question that occurred to me is whether or not it might make more
sense to instead call that package "psf-docs-theme", to reflect that
it's intended specifically for projects that are legally backed by the
PSF, and that general Python projects looking for a nice,
high-contrast theme should consider using an org-independent one like
Alabaster instead.
Thoughts? Should we stick with pypa-theme as the name? Switch to
psf-docs-theme? Publish both, with pypa-theme adding PyPA specific
elements to a more general psf-docs-theme?
Cheers,
Nick.
P.S. In case folks aren't aware of the full legal arrangements here:
in addition to the informal "Python Packaging Authority" designation,
there's also a formally constituted PSF Packaging Working Group that
provides the legal connection back to the PSF. That means the
relationship between PyPA and the PSF ends up being pretty similar to
the one between python-dev and the PSF, where there's no direct PSF
involvement in day to day development activities, but the PSF provides
the legal and financial backing needed to sustainably maintain popular
community-supported software and services.
Part of my rationale for suggesting the inclusion of "psf" in the
package name is to make it clear that the intent would be to create a
clear and distinctive "trade dress" for the documentation of directly
PSF backed projects:
https://en.wikipedia.org/wiki/Trade_dress#Protection_for_electronic_interfa…
Future requests to use the theme (beyond CPython and the PyPA) could
then be run through the PSF Trademarks committee, as with requests to
use the registered marks.
Whereas if we go with pypa-theme, then that would just be a
non-precedent-setting agreement between PyPA and CPython to share a
documentation theme, without trying to define any form of
documentation trade dress for the PSF in general.
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
Currently, when you add a new token, you need to change a number of files:
* Include/token.h
* _PyParser_TokenNames in Parser/tokenizer.c
* PyToken_OneChar(), PyToken_TwoChars() or PyToken_ThreeChars() in
Parser/tokenizer.c
* Lib/token.py (generated from Include/token.h)
* EXACT_TOKEN_TYPES in Lib/tokenize.py
* Operator, Bracket or Special in Lib/tokenize.py
* Doc/library/token.rst
It is possible to generate all this information from a single source.
The patch proposed in [1] uses Lib/token.py as that source. But maybe
Lib/token.py itself should also be generated from some file in a more
general format?
Some information can be derived from Grammar/Grammar, but not all.
A mapping between token strings ('(' or '>=') and names
(LPAR, GREATEREQUAL) is also needed. Can this be added in Grammar/Grammar or a new file?
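As an illustration of the single-source idea, a small generator could read one table and emit both the Include/token.h-style #defines and the string-to-name mapping that tokenize needs. This is a sketch with made-up names and format, not the actual patch from [1]:

```python
# Hypothetical single-source token table: one token name per line,
# optionally followed by its literal string. Format is illustrative.
TOKEN_TABLE = """\
ENDMARKER
NAME
NUMBER
LPAR (
RPAR )
GREATEREQUAL >=
"""

def parse_table(text):
    """Return the ordered token names and a string -> name mapping."""
    names, strings = [], {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        names.append(parts[0])
        if len(parts) > 1:
            strings[parts[1]] = parts[0]
    return names, strings

def emit_c_defines(names):
    """Emit Include/token.h-style #define lines."""
    return "\n".join("#define %-15s %d" % (name, i)
                     for i, name in enumerate(names))

names, strings = parse_table(TOKEN_TABLE)
print(emit_c_defines(names))
print(strings[">="])  # GREATEREQUAL
```

The same parsed table could then drive the generation of Lib/token.py, EXACT_TOKEN_TYPES in Lib/tokenize.py, and Doc/library/token.rst.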
There is a related problem: the tokenize module uses three additional
tokens not used by the C tokenizer, and it modifies the content of the
token module after importing it, which is not good. [2] One solution is
making a copy of tok_name in tokenize before modifying it, but this
doesn't work, because third-party code looks for tokenize constants in
token.tok_name. Another solution is adding the tokenize-specific
constants to the token module. Is it good to expose tokens not used by
the C tokenizer in the token module?
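The second solution could be sketched like this; all numeric values are made up, and the real numbering would come from whatever generator is chosen:

```python
# Illustrative sketch: token.py itself defines the tokenize-only tokens,
# so tokenize no longer needs to patch token.tok_name after importing it.
ENDMARKER = 0
NAME = 1
NUMBER = 2
# ... the remaining C-tokenizer tokens elided ...
N_TOKENS = 55            # one past the last token the C tokenizer emits

# tokenize-specific tokens, numbered after the C tokenizer's range:
COMMENT = N_TOKENS
NL = N_TOKENS + 1
ENCODING = N_TOKENS + 2

tok_name = {
    ENDMARKER: "ENDMARKER", NAME: "NAME", NUMBER: "NUMBER",
    # ... elided ...
    COMMENT: "COMMENT", NL: "NL", ENCODING: "ENCODING",
}
```

Third-party code would then keep finding everything it expects in token.tok_name, at the cost of the token module advertising tokens the C tokenizer never emits.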
Non-terminal symbols are generated automatically, Lib/symbol.py from
Include/graminit.h, and Include/graminit.h and Python/graminit.c from
Grammar/Grammar by Parser/pgen. Is it worth generating Lib/symbol.py with
pgen too? Can pgen be implemented in Python?
See also similar issue for opcodes. [3]
[1] https://bugs.python.org/issue30455
[2] https://bugs.python.org/issue25324
[3] https://bugs.python.org/issue17861
Hi,
Would you be OK with backporting ssl.MemoryBIO and ssl.SSLObject to
Python 2.7? I can do the backport.
https://docs.python.org/dev/library/ssl.html#ssl.MemoryBIO
Cory Benfield told me that it's a blocking issue for implementing his
PEP 543 (A Unified TLS API for Python) on Python 2.7:
https://www.python.org/dev/peps/pep-0543/
And I expect that if a new cool TLS API happens, people will want to
use it on Python 2.7-3.6, not only on Python 3.7. Security evolves
more quickly than the current Python release process, and people want
to keep their applications secure.
From what I understood, he wants to first implement an abstract
MemoryBIO API (http://sans-io.readthedocs.io/ like API? I'm not sure
about that), and then implement a socket/FD based on top of that.
Maybe later, some implementations might have a fast-path using
socket/FD directly.
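For reference, the MemoryBIO/SSLObject API already works without any socket on Python 3 (3.6+ for SSLContext.wrap_bio): the application shuttles the encrypted bytes itself. A minimal client-side sketch, with server_hostname as an example value:

```python
import ssl

# Client-side TLS over in-memory buffers: no socket involved; the
# application moves the ciphertext itself (e.g. via IOCP on Windows).
ctx = ssl.create_default_context()
incoming = ssl.MemoryBIO()    # ciphertext received from the peer
outgoing = ssl.MemoryBIO()    # ciphertext to be sent to the peer
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="example.com")

try:
    tls.do_handshake()        # no peer data yet, so it cannot complete...
except ssl.SSLWantReadError:
    pass                      # ...and it signals that it wants more input

hello = outgoing.read()       # the ClientHello, ready for any transport
print(hello[:1] == b"\x16")   # TLS record type 22, "handshake"
```

Backporting this pair to 2.7 would let Twisted and friends drive exactly this loop over IOCP without C extensions.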
He described his PEP to me and I strongly support it (sorry, I missed it
when he posted it on python-dev), but we decided (Guido van Rossum,
Christian Heimes, Cory Benfield and me, see the tweet below) to not
put this in the stdlib right now, but spend more time on testing it on
Twisted, asyncio, requests, etc. So publishing an implementation on
PyPI was proposed instead. It seems like we agreed on a smooth plan
(or am I wrong, Cory?).
https://twitter.com/VictorStinner/status/865467388141027329
I'm quite sure that Twisted will love MemoryBIO on Python 2.7 as well,
to implement TLS, especially on Windows using IOCP. Currently,
external libraries (C extensions) are required.
I'm not sure if PEP 466 should be amended for that? Is a new PEP
really needed? MemoryBIO/SSLObject are tiny. Nick (Coghlan): what do
you think?
https://www.python.org/dev/peps/pep-0466/
Victor
This is a side issue, so I don't want to go on too long about it. But *NO* we
can't always give permission. The problem isn't how permissive PSF might
like to be in the abstract, but trademark law itself. Trademark is "enforce
it or lose it" ... Even passively allowing dilutive derivatives would cause
us to lose control of the mark, i.e. we would not have legal authority to
prohibit the misleading uses we actually care about preventing.
On May 29, 2017 2:32 AM, "Greg Ewing" <greg.ewing(a)canterbury.ac.nz> wrote:
M.-A. Lemburg wrote:
> In my role as PSF TM committee member, it's often painful to have to
> tell community members that they cannot use e.g. really nice looking
> variants of the Python logo for their projects. Let's not add more
> pain.
>
But it's always within the PSF's power to give that community
member permission to use that variant if they ask, is it not?
So you don't actually have to tell anyone that they can't
use anything.
--
Greg
_______________________________________________
Python-Dev mailing list
Python-Dev(a)python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx
One of the main reasons we are stuck with an old libffi fork in CPython
is that the newer versions do not support protection against calling
functions with too few/many arguments:
https://docs.python.org/3/library/ctypes.html?highlight=ctypes#calling-func…
There are a number of caveats here, including "this only works on
Windows", but since it is documented we cannot just remove the behaviour
without a deprecation period.
I'd like to propose a highly-accelerated deprecation period for this
specific feature, starting in CPython 3.6.2 and being "completed" in
3.7.0, when we will hopefully move onto a newer libffi.
In general, the "feature" is a misfeature anyway, since calling a native
function with incorrect arguments is unsupported and a very easy way to
cause information leakage or code execution vulnerabilities. There may
be an argument for removing the functionality immediately, but honestly
I think changing libffi in a point release is higher risk.
Once the special protection is removed, most of these cases will become
OSError due to the general protection against segmentation faults. Some
will undoubtedly fall through the cracks and crash the entire
interpreter, but these are unavoidable (and really ought to crash to
avoid potential exploits).
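Note that callers can already get argument-count checking from ctypes itself, on all platforms, by declaring an explicit prototype; a small sketch of that, independent of the Windows-only libffi behaviour being deprecated:

```python
import ctypes

# An explicit prototype: int f(int, int). With a prototype declared,
# ctypes itself rejects calls with the wrong number of arguments,
# regardless of any libffi-level protection.
PROTO = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int)

@PROTO
def add(a, b):
    return a + b

print(add(2, 3))                 # 5

try:
    add(2)                       # too few arguments for the prototype
except TypeError as exc:
    print("rejected:", type(exc).__name__)
```

So the migration story for affected code is simply: set argtypes (or use a CFUNCTYPE/WINFUNCTYPE prototype) instead of relying on libffi to notice the mismatch.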
Does anyone have any reasons to oppose this? It already has votes from
another Windows expert and the 3.6/3.7 Release Manager, but we wanted to
see if anyone has a concern we haven't thought of.
Cheers,
Steve
Hi all,
I work at Canonical as part of the engineering team developing Ubuntu
and Snapcraft [1], and I'm a long-time Python fan :-)
We've created snaps, a platform that enables projects to directly
control delivery of software updates to users. This video of a
lightning talk by dlang developers at DConf2017 [2] shows how they've
made good use of snaps to distribute their compiler. They found the
release channels particularly useful so their users can track a
specific release.
Is there someone here who'd be interested in doing the same for Python?
[1] https://snapcraft.io/
[2] https://www.youtube.com/watch?v=M-bDzr4gYUU
[3] https://snapcraft.io/docs/core/install
[4] https://build.snapcraft.io/
--
Regards, Martin.