On behalf of the Python development community and the Python 3.7 release
team, I'm happy to announce the availability of Python 3.7.0b3. b3 is
the third of four planned beta releases of Python 3.7, the next major
release of Python. The beta phase marks the end of feature development
for 3.7. You can find Python 3.7.0b3 here:
https://www.python.org/downloads/release/python-370b3/
Among the major new features in Python 3.7 are:
* PEP 538, Coercing the legacy C locale to a UTF-8 based locale
* PEP 539, A New C-API for Thread-Local Storage in CPython
* PEP 540, UTF-8 mode
* PEP 552, Deterministic pyc
* PEP 553, Built-in breakpoint()
* PEP 557, Data Classes
* PEP 560, Core support for typing module and generic types
* PEP 562, Module __getattr__ and __dir__
* PEP 563, Postponed Evaluation of Annotations
* PEP 564, Time functions with nanosecond resolution
* PEP 565, Show DeprecationWarning in __main__
* PEP 567, Context Variables
Please see "What’s New In Python 3.7" for more information.
Additional documentation for these features and for other changes
will be provided during the beta phase.
https://docs.python.org/3.7/whatsnew/3.7.html
Beta releases are intended to give the wider community the opportunity
to test new features and bug fixes and to prepare their projects to
support the new feature release. We strongly encourage you to test your projects
with 3.7 during the beta phase and report issues found to
https://bugs.python.org as soon as possible.
While the release is feature complete entering the beta phase, it is
possible that features may be modified or, in rare cases, deleted up
until the start of the release candidate phase (2018-05-21). Our goal
is to have no ABI changes after beta 3 and no code changes after rc1.
To achieve that, it will be extremely important to get as much exposure
for 3.7 as possible during the beta phase.
Attention macOS users: there is a new installer variant for
macOS 10.9+ that includes a built-in version of Tcl/Tk 8.6. This
variant is expected to become the default when 3.7.0 is released.
Check it out! We welcome your feedback. As of 3.7.0b3, the legacy
10.6+ installer also includes a built-in Tcl/Tk 8.6.
Please keep in mind that this is a preview release and its use is
not recommended for production environments.
The next planned release of Python 3.7 will be 3.7.0b4, currently
scheduled for 2018-04-30. More information about the release schedule
can be found here:
https://www.python.org/dev/peps/pep-0537/
--
Ned Deily
nad@python.org
Currently the repository contains bundled pip and setuptools (2 MB
total) which are updated with every release of pip and setuptools. This
increases the size of the repository by around 2 MB several times per
year. There have been 37 updates of Lib/ensurepip/_bundled in total, so
the repository contains up to 70 MB of unused blobs. The size of the
repository is 350 MB. Currently these blobs take up to 20% of the size of
the repository, and this percentage will likely grow in the future, because
they were added only 4 years ago.
Wouldn't it be better to put them into a separate repository, like Tcl/Tk
and the other external binaries for Windows, and download only the most
recent version?
Hi,
My name is Julia Hiyeon Kim.
My suggestion is to change the syntax for creating an empty set and an empty dictionary as follows:
an_empty_set = {}
an_empty_dictionary = {:}
It would seem to make more sense.
Warm regards,
Julia Kim
Python 3.6.5 is now available. 3.6.5 is the fifth maintenance release of
Python 3.6, which was initially released in 2016-12 to great interest.
Detailed information about the changes made in 3.6.5 can be found in its
change log. You can find Python 3.6.5 and more information here:
https://www.python.org/downloads/release/python-365/
See the "What’s New In Python 3.6" document for more information about
features included in the 3.6 series. Detailed information about the
changes made in 3.6.5 can be found in the change log here:
https://docs.python.org/3.6/whatsnew/changelog.html#python-3-6-5-final
Attention macOS users: as of 3.6.5, there is a new additional installer
variant for macOS 10.9+ that includes a built-in version of Tcl/Tk 8.6.
This variant is expected to become the default variant in future
releases. Check it out!
The next maintenance release is expected to follow in about 3 months,
around the end of 2018-06.
Thanks to all of the many volunteers who help make Python Development and
these releases possible! Please consider supporting our efforts by
volunteering yourself or through organization contributions to the Python
Software Foundation:
https://www.python.org/psf/
--
Ned Deily
nad@python.org
There is a feature request and patch to propagate the float.is_integer() API through rest of the numeric types ( https://bugs.python.org/issue26680 ).
While I don't think it is a good idea, the OP has been persistent and wants his patch to go forward.
It may be worthwhile to discuss on this list to help resolve this particular request and to address the more general, recurring design questions. Once a feature with a marginally valid use case is added to an API, it is common for us to get downstream requests to propagate that API to other places where it makes less sense but does restore a sense of symmetry or consistency. In cases where an abstract base class is involved, acceptance of the request is usually automatic (i.e. range() and tuple() objects growing index() and count() methods). However, when our hand hasn't been forced, there is still an opportunity to decline. That said, proponents of symmetry requests tend to feel strongly about it and tend to never fully accept such a request being declined (it leaves them with a sense that Python is disordered and unbalanced).
Raymond
---- My thoughts on the feature request -----
What is the proposal?
* Add an is_integer() method to int(), Decimal(), Fraction(), and Real(). Modify Rational() to provide a default implementation.
Starting point: Do we need this?
* We already have a simple, traditional, portable, and readable way to make the test: int(x) == x (see the sketch below).
* In the context of ints, the test x.is_integer() always returns True. This isn't very useful.
* Aside from the OP, this behavior has never been requested in Python's 27 year history.
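For illustration, a minimal sketch of that existing spelling applied across the numeric tower:

from fractions import Fraction
from decimal import Decimal

# For finite values, int(x) == x is True exactly when x is integral,
# regardless of which numeric type x is.
for x in (4, 4.0, Fraction(8, 2), Decimal("4.00"), 4.5):
    print(x, int(x) == x)
# 4 True / 4.0 True / 4 True / 4.00 True / 4.5 False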
Does it cost us anything?
* Yes, adding a method to the numeric tower makes it a requirement for every class that ever has or ever will register or inherit from the tower ABCs.
* Adding methods to a core object such as int() increases the cognitive load for everyday users who look at dir(), call help(), or read the main docs.
* It conflicts with a design goal for the decimal module to not invent new functionality beyond the spec unless essential for integration with the rest of the language. The reasons included portability with other implementations and not trying to guess what the committee would have decided in the face of tricky questions such as whether Decimal('1.000001').is_integer()
should return True when the context precision is only three decimal places (i.e. whether context precision and rounding traps should be applied before the test and whether context flags should change after the test).
Shouldn't everything in a concrete class also be in an ABC and all its subclasses?
* In general, the answer is no. The ABCs are intended to span only basic functionality. For example, GvR intentionally omitted update() from the Set() ABC because the need was fulfilled by __ior__().
But int() already has real, imag, numerator, and denominator, why is this different?
* Those attributes are central to the functioning of the numeric tower.
* In contrast, the is_integer() method is a peripheral and incidental concept.
What does "API Parsimony" mean?
* Avoidance of feature creep.
* Preference for only one obvious way to do things.
* Practicality (not craving things you don't really need) beats purity (symmetry and foolish consistency).
* YAGNI suggests holding off in the absence of clear need.
* Recognition that smaller APIs are generally better for users.
Are there problems with symmetry/consistency arguments?
* The need for guard rails on an overpass doesn't imply the same need on an underpass, even though both are in the category of grade-changing byways.
* "In for a penny, in for a pound" isn't a principle of good design; rather, it is a slippery slope whereby the acceptance of a questionable feature in one place seems to compel later decisions to propagate the feature to other places where the cost / benefit trade-offs are less favorable.
Should float.is_integer() have ever been added in the first place?
* Likely, it should have been a math module function like isclose() and isinf() so that it would not have been type specific.
* However, that ship has sailed; instead, the question is whether we now have to double down and dispatch other ships as well.
* There is some question as to whether it is even a good idea to be testing the results of floating point calculations for exact values. It may be useful for testing inputs, but is likely a trap for people using it in other contexts.
Have we ever had problems with just accepting requests solely based on symmetry?
* Yes. The str.startswith() and str.endswith() methods were given optional start/end arguments to be consistent with str.index(), not because there were known use cases where code was made better with the new feature. This ended up conflicting with a later feature request that did have valid use cases (supporting multiple test prefixes/suffixes). As a result, we ended up with an awkward and error-prone API that requires double parentheses for the valid use case: url.endswith(('.html', '.css')).
https://bugs.python.org/issue33141 points out an interesting issue with
dataclasses and descriptors.
Given this code:
from dataclasses import *

class D:
    """A descriptor class that knows its name."""
    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is not None:
            return 1
        return self

@dataclass
class C:
    d: int = field(default=D(), init=False)
C.d.name is not set, because d.__set_name__ is never called. However, in
this case:
class X:
    d: int = D()
X.d.name is set to 'd' when d.__set_name__ is called during type.__new__.
The problem of course, is that in the dataclass case, when class C is
initialized, and before the decorator is called, C.d is set to a Field()
object, not to D(). It's only when the dataclass decorator is run that I
change C.d from a Field to the value of D(). That means that the call to
d.__set_name__(C, 'd') is skipped. See
https://www.python.org/dev/peps/pep-0487/#implementation-details for
details on how type.__new__ works.
The only workaround I can think of is to emulate the part of PEP 487
where __set_name__ is called. I can do this from within the @dataclass
decorator when I'm initializing C.d. I'm not sure how great this
solution is, since it's moving the call from class creation time to
class decorator time. I think in 99+% of cases this would be fine, but
you could likely write code that depends on side effects of being called
during type.__new__.
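A rough sketch of what that emulation could look like inside the decorator (the helper name here is illustrative, not the actual dataclasses internals):

def _set_class_attribute(cls, name, default):
    # Replace the Field placeholder with the real default value...
    setattr(cls, name, default)
    # ...and emulate the PEP 487 hook that type.__new__ would have
    # invoked if the default had been present at class creation time.
    set_name = getattr(type(default), '__set_name__', None)
    if set_name is not None:
        set_name(default, cls, name)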
Unless anyone has strong objections, I'm going to make the call to
__set_name__ in the @dataclass decorator. Since this is such a niche use
case, I don't feel strongly that it needs to be in today's beta release,
but if possible I'll get it in. I already have the patch written. And if
it does get in but the consensus is that it's a bad idea, we can back it
out.
Eric
Hi Python-dev,
I'm one of the core attrs contributors, and I'm contemplating applying an
optimization to our generated __init__s. Before someone warns me python-dev
is for the development of the language itself, there are two reasons I'm
posting this here:
1) it's a very low level question that I'd really like the input of the
core devs on, and
2) maybe this will find its way into dataclasses if it works out.
I've found that, if a class has more than one attribute, instead of
creating an init like this:
self.a = a
self.b = b
self.c = c
it's faster to do this:
self.__dict__ = {'a': a, 'b': b, 'c': c}
i.e. to replace the instance dictionary altogether. On PyPy, their core
devs inform me this is a bad idea because the instance dictionary is
special there, so we won't be doing this on PyPy.
But is it safe to do on CPython?
To make the question simpler, disregard the possibility of custom setters
on the attributes.
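To make the comparison concrete, here is a small sketch of the two generated-__init__ styles in question (class names invented for illustration):

class PointAttrs:
    def __init__(self, a, b, c):
        # conventional attribute-by-attribute assignment
        self.a = a
        self.b = b
        self.c = c

class PointDict:
    def __init__(self, a, b, c):
        # replace the instance dictionary wholesale in one step
        self.__dict__ = {'a': a, 'b': b, 'c': c}

The second form is the one I found to be faster for classes with more than one attribute.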
Thanks in advance!
Hi Python Devs
I recently started testing Jedi with Python 3.7. Some tests broke. I
realized that one of the things that changed in 3.7 was the use of
argument clinic in methods like str.replace.
The issue is that the text signature doesn't contain a return annotation.
>>> str.replace.__text_signature__
'($self, old, new, count=-1, /)'
In Python < 3.7 there was a `S.replace(old, new[, count]) -> str` at
the top of the __doc__.
If the __text_signature__ was `'($self, old, new, count=-1, /) -> str'`,
a lot of tools would be able to have the information again.
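For context, this sketch shows roughly what a tool sees today: the parameters come through, but there is no return annotation to pick up.

import inspect

sig = inspect.signature(str.replace)
print(sig)  # something like: (self, old, new, count=-1, /)
# The return annotation is simply absent:
print(sig.return_annotation is inspect.Signature.empty)  # True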
Is this intentional or was this just forgotten? I'd like to note that
this information is insanely helpful (at least for Jedi) to pick up
type information. I really hope this information can make it back into
3.7, since it was there in earlier versions.
If you don't have time, I might have some. Just give me some instructions.
~ Dave
PS: Don't get me wrong, I love argument clinic/inspect.signature and
am generally in favor of using it everywhere. It helps tools like
jedi, pycharm and others get accurate information about a builtin
function.
On Mar 24, 2018, at 16:13, Steve Dower <steve.dower(a)python.org> wrote:
> Or we could just pull the right version directly from PyPI? (Note that updating the version should be an explicit step, as it is today, but the file should be identical to what’s on PyPI, right? And a urlretrieve is easier than pulling from a git repo.)
I think the primary original rationale for having the pip wheel and its dependencies checked into the cpython repo was so that users would be able to install pip even if they did not have an Internet connection. But perhaps that requirement can be relaxed a bit if we say that the necessary wheels are vendored into all of our downloadable release items, that is, included in the packaging of source release files (the various tarballs) and the Windows and macOS binary installers. The main change would likely be making ensurepip a bit smarter to download if the bundled wheels are not present in the source directory. Assuming that people building from a cpython repo need to have a network connection if they want to run ensurepip, at least for the first time, is probably not an onerous requirement.
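If that route were taken, the fallback could be quite small; a sketch (the function name, wheel file name, and URL are placeholders, not the real ensurepip code):

import os
from urllib.request import urlretrieve

_BUNDLED_DIR = os.path.join(os.path.dirname(__file__), "_bundled")

def _ensure_bundled_wheel(filename, url):
    """Download a pinned wheel only if it is not already vendored."""
    path = os.path.join(_BUNDLED_DIR, filename)
    if not os.path.exists(path):
        # Building from a bare repo checkout: fetch the wheel once.
        urlretrieve(url, path)
    return path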
--
Ned Deily
nad@python.org
I'd like to start a discussion around practices for vendoring package
dependencies. I'm not sure python-dev is the appropriate venue for this
discussion. If not, please point me to one and I'll gladly take it there.
I'll start with a problem statement.
Not all consumers of Python packages wish to consume Python packages in the
common `pip install <package>` + `import <package>` manner. Some Python
applications may wish to vendor Python package dependencies such that known
compatible versions are always available.
For example, a Python application targeting a general audience may not wish
to expose the existence of Python nor want its users to be concerned about
Python packaging. This is good for the application because it reduces
complexity and the surface area of things that can go wrong.
But at the same time, Python applications need to be aware that the Python
environment may contain more than just the Python standard library and
whatever Python packages are provided by that application. If using the
system Python executable, other system packages may have installed Python
packages in the system site-packages and those packages would be visible to
your application. A user could `pip install` a package and that would be in
the Python environment used by your application. In short, unless your
application distributes its own copy of Python, all bets are off with
regards to what packages are installed. (And even then advanced users could
muck with the bundled Python, but let's ignore that edge case.)
In short, `import X` is often the wild west. For applications that want to
"just work" without requiring end users to manage Python packages, `import
X` is dangerous because `X` could come from anywhere and be anything -
possibly even a separate code base providing the same package name!
Since Python applications may not want to burden users with Python
packaging, they may vendor Python package dependencies such that a known
compatible version is always available. In most cases, a Python application
can insert itself into `sys.path` to ensure its copies of packages are
picked up first. This works a lot of the time. But the strategy can fall
apart.
Some Python applications support loading plugins or extensions. When
user-provided code can be executed, that code could have dependencies on
additional Python packages. Or that custom code could perform `sys.path`
modifications to provide its own package dependencies. What this means is
that `import X` from the perspective of the main application becomes
dangerous again. You want to pick up the packages that you provided. But
you just aren't sure that those packages will actually be picked up. And to
complicate matters even more, an extension may wish to use a *different*
version of a package from what you distribute, e.g. they may want to adopt
the latest version that you haven't ported to yet, or they may want to use
an older version because they haven't ported yet. So now you have the
requirement that multiple versions of packages be available. In Python's
shared module namespace, that means having separate package names.
A partial solution to this quagmire is using relative - not absolute -
imports. e.g. say you have a package named "knights." It has a dependency
on a 3rd party package named "shrubbery." Let's assume you distribute your
application with a copy of "shrubbery" which is installed at some packages
root, alongside "knights":
/
/knights/__init__.py
/knights/ni.py
/shrubbery/__init__.py
If from `knights.ni` you `import shrubbery`, you /could/ get the copy of
"shrubbery" distributed by your application. Or you could pick up some
other random copy that is also installed somewhere in `sys.path`.
Whereas if you vendor "shrubbery" into your package. e.g.
/
/knights/__init__.py
/knights/ni.py
/knights/vendored/__init__.py
/knights/vendored/shrubbery/__init__.py
Then, if from `knights.ni` you do `from .vendored import shrubbery`, you are
*guaranteed* to get your local copy of the "shrubbery" package.
This reliable behavior is highly desired by Python applications.
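As a concrete sketch of that second layout (module contents invented for illustration):

# knights/ni.py
# The relative import is resolved against this package, so it can only
# ever find the copy shipped inside knights/vendored/.
from .vendored import shrubbery

def demand():
    # always 'knights.vendored.shrubbery', never a sys.path copy
    return shrubbery.__name__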
But there are problems.
What we've done is effectively rename the "shrubbery" package to
"knights.vendored.shrubbery." If a module inside that package attempts an
`import shrubbery.x`, this could fail because "shrubbery" is no longer the
package name. Or worse, it could pick up a separate copy of "shrubbery"
somewhere else in `sys.path` and you could have a Frankenstein package
pulling its code from multiple installs. So for this to work, all
package-local imports must be using relative imports. e.g. `from . import
x`.
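In other words, inside the vendored copy itself the imports have to look like this (a sketch, assuming a submodule shrubbery/x.py):

# knights/vendored/shrubbery/__init__.py

# Fragile once vendored: resolved against sys.path, so it may find some
# other "shrubbery" install, or fail because that top-level name is gone.
# import shrubbery.x as x

# Robust: always resolves to the module sitting next to this file.
from . import x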
The takeaway is that packages using relative imports for their own modules
are much more flexible and therefore friendly to downstream consumers that
may wish to vendor them under different names. Packages using relative
imports can be dropped in and used, often without source modifications.
This is a big deal, as downstream consumers don't want to be
modifying/forking packages they don't maintain. Because of the advantages
of relative imports, *I've personally reached the conclusion that
relative imports within packages should be considered a best practice.* I
would encourage the Python community to discuss adopting that practice more
formally (perhaps as a PEP or something).
But package-local relative imports aren't a cure-all. There is a major
problem with nested dependencies. e.g. if "shrubbery" depends on the
"herring" package. There's no reasonable way of telling "shrubbery" that
"herring" is actually provided by "knights.vendored." You might be tempted
to convert non package-local imports to relative. e.g. `from .. import
herring`. But the importer doesn't allow relative imports outside the
current top-level package and this would break classic installs where
"shrubbery" and "herring" are proper top-level packages and not
sub-packages in e.g. a "vendored" sub-package. For cases where this occurs,
the easiest recourse today is to rewrite imported source code to use
relative imports. That's annoying, but it works.
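For example, here is the failure mode in a classic install, sketched (shrubbery installed normally as a top-level package):

# shrubbery/__init__.py
from .. import herring
# importing shrubbery now fails with
# "attempted relative import beyond top-level package"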
In summary, some Python applications may want to vendor and distribute
Python package dependencies. Reliance on absolute imports is dangerous
because the global Python environment is effectively undefined from the
perspective of the application. The safest thing to do is use relative
imports from within the application. But because many packages don't use
relative imports themselves, vendoring a package can require rewriting
source code so imports are relative. And even if relative imports are used
within that package, relative imports can't be used for other top-level
packages. So source code rewriting is required to handle these. If you
vendor your Python package dependencies, your world often consists of a lot
of pain. It's better to absorb that pain than inflict it on the end-users
of your application (who shouldn't need to care about Python packaging).
But this is a pain that Python application developers must deal with. And I
feel that pain undermines the health of the Python ecosystem because it
makes Python a less attractive platform for standalone applications.
I would very much welcome a discussion and any ideas on improving the
Python package dependency problem for standalone Python applications. I
think encouraging the use of relative imports within packages is a solid
first step. But it obviously isn't a complete solution.
Gregory