Could you please add a link to the email where the PEP was accepted?
On 2015-05-16 10:12 PM, chris.angelico wrote:
> changeset: 5854:f876276ce076
> user: Chris Angelico <rosuav(a)gmail.com>
> date: Sun May 17 12:12:19 2015 +1000
> Apply Chris's changes, including an acceptance mark
> pep-0485.txt | 6 +++---
> 1 files changed, 3 insertions(+), 3 deletions(-)
> diff --git a/pep-0485.txt b/pep-0485.txt
> --- a/pep-0485.txt
> +++ b/pep-0485.txt
> @@ -3,7 +3,7 @@
> Version: $Revision$
> Last-Modified: $Date$
> Author: Christopher Barker <Chris.Barker(a)noaa.gov>
> -Status: Draft
> +Status: Accepted
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 20-Jan-2015
> @@ -391,9 +391,9 @@
> The most common use case is expected to be small tolerances -- on order of the
> default 1e-9. However there may be use cases where a user wants to know if two
> fairly disparate values are within a particular range of each other: "is a
> -within 200% (rel_tol = 2.0) of b? In this case, the string test would never
> +within 200% (rel_tol = 2.0) of b? In this case, the strong test would never
> indicate that two values are within that range of each other if one of them is
> -zero. The strong case, however would use the larger (non-zero) value for the
> +zero. The weak case, however would use the larger (non-zero) value for the
> test, and thus return true if one value is zero. For example: is 0 within 200%
> of 10? 200% of ten is 20, so the range within 200% of ten is -10 to +30. Zero
> falls within that range, so it will return True.
> Python-checkins mailing list
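The weak-test behavior described in the quoted diff corresponds to math.isclose() as it landed in Python 3.5:

```python
from math import isclose

# Weak test: |a - b| <= rel_tol * max(|a|, |b|).  Is 0 within 200% of 10?
# 200% of 10 is 20, and |0 - 10| = 10 <= 20, so the answer is True.
assert isclose(0, 10, rel_tol=2.0)

# With the default rel_tol (1e-9), 0 and 10 are nowhere near close.
assert not isclose(0, 10)
```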
I found a "semi official github mirror" of cpython.
I want to use it as the upstream of our project (translating the docs into Japanese).
But it doesn't have tags.
Is the repository stable enough for a project like ours to fork?
Could you mirror tags too?
INADA Naoki <songofacandy(a)gmail.com>
With os.scandir() now in the Python 3.5 stdlib, I just thought I'd let
folks know that I've released the scandir module version 1.0. So this
is now basically a copy-and-paste of the C code that went into CPython
3.5's posixmodule.c, with the necessary changes to make it work on
Python 2.x (2.6+).
You can use the following import to pick os.scandir/os.walk on
Python 3.5+, or the scandir module's version otherwise:

    try:
        from os import scandir, walk
    except ImportError:
        from scandir import scandir, walk
I've tested it and it all looks good and performs well, but please let
me know if you have any issues!
* PyPI: https://pypi.python.org/pypi/scandir
* Github project: https://github.com/benhoyt/scandir
Would love to hear any success/speedup stories, too!
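As a quick illustration of the DirEntry API the module exposes (list_files is a hypothetical helper for this sketch, not part of the scandir package):

```python
# Pick the stdlib scandir on Python 3.5+, the backport module otherwise.
try:
    from os import scandir
except ImportError:
    from scandir import scandir

def list_files(path):
    # Hypothetical helper: DirEntry.is_file() can usually answer from the
    # directory entry itself, avoiding an extra stat() call per file.
    return sorted(entry.name for entry in scandir(path) if entry.is_file())
```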
On 6 May 2015 at 07:46, Greg Ewing <greg.ewing(a)canterbury.ac.nz> wrote:
> Another problem with the "core" idea is that
> you can't start with an event loop that "just does
> scheduling" and then add on other features such
> as I/O *from the outside*. There has to be some
> point at which everything comes together, which
> means choosing something like select() or
> poll() or I/O completion queues, and building that
> into the heart of your event loop. At that point
> it's no longer something with a simple core.
Looking at asyncio.queues, the only features it needs are:
2. asyncio.futures.Future - creating a standalone Future
locks.Event in turn only needs the other 3 items. And you can ignore
get_event_loop() as it's only used to get the default loop, you can
pass in your own.
And asyncio.futures only uses get_event_loop (and _format_callback)
Futures require the loop to support:
So, to some extent (how far is something I'd need to code up a loop to
confirm) you can build the Futures and synchronisation mechanisms with
an event loop that supports only this "minimal interface".
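For illustration, a minimal sketch of such a loop (hypothetical names; it supports only callback scheduling, with no I/O built in):

```python
from collections import deque

class MinimalLoop:
    """Hypothetical sketch of a 'minimal interface' event loop:
    nothing but call_soon()-style callback scheduling."""

    def __init__(self):
        self._ready = deque()

    def call_soon(self, callback, *args):
        # Queue a callback to run on the next pass through the loop.
        self._ready.append((callback, args))

    def run_until_idle(self):
        # Run queued callbacks until nothing is pending.
        while self._ready:
            callback, args = self._ready.popleft()
            callback(*args)

loop = MinimalLoop()
results = []
loop.call_soon(results.append, 1)
loop.call_soon(results.append, 2)
loop.run_until_idle()
assert results == [1, 2]
```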
Essentially, that's my goal - to allow people who want to write (say)
a Windows GUI event loop, or a Windows event loop based of
WaitForXXXObject, or a Tkinter loop, or whatever, to *not* have to
write their own implementation of synchronisation or future objects.
That may mean lifting the asyncio code and putting it into a separate
library, to make the separation between "asyncio-dependent" and
"general async" clearer. Or if asyncio's provisional status doesn't
last long enough to do that, we may end up with an asyncio
implementation and a separate (possibly 3rd party) "general"
implementation.
Before the Python 3.5 feature freeze, I should step up and
formally reject PEP 455 for "Adding a key-transforming
dictionary to collections".
I had completed an involved review effort a long time ago
and I apologize for the delay in making the pronouncement.
What made it an interesting choice from the outset is that the
idea of a "transformation" is an enticing concept that seems
full of possibility. I spent a good deal of time exploring
what could be done with it but found that it mostly fell short
of its promise.
There were many issues. Here are some that were at the top:
* Most use cases don't need or want the reverse lookup feature
(what is wanted is a set of one-way canonicalization functions).
Those that do would want to have a choice of what is saved
(first stored, last stored, n most recent, a set of all inputs,
a list of all inputs, nothing, etc). In database terms, it
models a many-to-one table (the canonicalization or
transformation function) with the one being a primary key into
another possibly surjective table of two columns (the
key/value store). A surjection into another surjection isn't
inherently reversible in a useful way, nor does it seem to be a
common way to model data.
* People are creative at coming up with use cases for the TD
but then find that the resulting code is less clear, slower,
less intuitive, more memory intensive, and harder to debug than
just using a plain dict with a function call before the lookup:
d[func(key)]. It was challenging to find any existing code
that would be made better by the availability of the TD.
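The plain-dict pattern referred to above, sketched here with case-folding as the canonicalization function:

```python
# Plain dict plus an explicit call before each access: d[func(key)].
# Example transform: case-insensitive header names.
def canon(key):
    return key.casefold()

headers = {}
headers[canon("Content-Type")] = "text/x-rst"

# Lookups apply the same function, so any casing of the key works.
assert headers[canon("CONTENT-TYPE")] == "text/x-rst"
assert canon("ACCEPT") not in headers
```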
* The TD seems to be all about combining data scrubbing
(case-folding, unicode canonicalization, type-folding, object
identity, unit-conversion, or finding a canonical member of an
equivalence class) with a mapping (looking-up a value for a
given key). Those two operations are conceptually orthogonal.
The former doesn't get easier when hidden behind a mapping API
and the latter loses the flexibility of choosing your preferred
mapping (an ordereddict, a persistentdict, a chainmap, etc) and
the flexibility of establishing your own rules for whether and
how to do a reverse lookup.
P.S. Besides the core conceptual issues listed above, there
are a number of smaller issues with the TD that surfaced
during design review sessions. In no particular order, here
are a few of the observations:
* It seems to require above average skill to figure-out what
can be used as a transform function. It is more
expert-friendly than beginner-friendly. It takes a little
while to get used to it. It wasn't self-evident that
transformations happen both when a key is stored and again
when it is looked-up (contrast this with key-functions for
sorting which are called at most once per key).
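A toy sketch (not the PEP 455 implementation) makes the double invocation concrete:

```python
# Toy sketch, NOT the PEP 455 TransformDict: it shows that the transform
# necessarily runs on store *and* again on every lookup, unlike a sort
# key function, which is called at most once per item.
calls = []

def transform(key):
    calls.append(key)          # record every invocation
    return key.casefold()

class ToyTransformDict(dict):
    def __init__(self, transform):
        super().__init__()
        self._transform = transform

    def __setitem__(self, key, value):
        super().__setitem__(self._transform(key), value)   # transform on store

    def __getitem__(self, key):
        return super().__getitem__(self._transform(key))   # transform on lookup

td = ToyTransformDict(transform)
td["Foo"] = 1
assert td["FOO"] == 1
assert calls == ["Foo", "FOO"]  # transformed twice for one stored key
```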
* The name, TransformDict, suggests that it might transform the
value instead of the key or that it might transform the
dictionary into something else. The name TransformDict is so
general that it would be hard to discover when faced with a
specific problem. The name also limits perception of what
could be done with it (i.e. a function that logs accesses
but doesn't actually change the key).
* The tool doesn't describe itself well. Looking at the
help(), or the __repr__(), or the tooltips did not provide
much insight or clarity. The dir() shows many of the
_abc implementation details rather than the API itself.
* The original key is stored and if you change it, the change
isn't stored. The _original dict is private (perhaps to
reduce the risk of putting the TD in an inconsistent state)
but this limits access to the stored data.
* The TD is unsuitable for bijections because the API is
inherently biased with a rich group of operators and methods
for forward lookup but has only one method for reverse lookup.
* The reverse feature is hard to find (getitem vs __getitem__)
and its output pair is surprising and a bit awkward to use.
It provides only one accessor method rather than the full
dict API that would be given by a second dictionary. The
API hides the fact that there are two underlying dictionaries.
* It was surprising that when d[k] failed, it failed with a
transformation exception rather than a KeyError, violating
the expectations of the calling code (for example, if the
transformation function is int(), the call d["12"] transforms
to d[12] and either succeeds in returning a value or raises
a KeyError, but the call d["12.0"] fails with a ValueError).
The latter issue limits its substitutability
into existing code that expects real mappings and for
exposing to end-users as if it were a normal dictionary.
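The substitutability problem can be seen with a plain dict and an explicit int() transform (lookup is a hypothetical helper for this sketch):

```python
# Callers of a mapping typically guard lookups with KeyError, so a
# transform that raises ValueError escapes the guard entirely.
d = {12: "twelve"}

def lookup(key):
    try:
        return d[int(key)]     # transform, then look up
    except KeyError:
        return None            # missing key: handled as expected

assert lookup("12") == "twelve"
assert lookup("99") is None
# lookup("12.0") raises ValueError from int() itself, bypassing the
# KeyError guard that calling code relies on.
```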
* There were other issues with dict invariants as well and
these affected substitutability in a sometimes subtle way.
For example, the TD does not work with __missing__().
Also, "k in td" does not imply that "k in list(td.keys())".
* The API is at odds with wanting to access the transformations.
You pay a transformation cost both when storing and when
looking up, but you can't access the transformed value itself.
For example, if the transformation is a function that scrubs
hand entered mailing addresses and puts them into a standard
format with standard abbreviations, you have no way of getting
back to the cleaned-up address.
* One design reviewer summarized her thoughts like this:
"There is a learning curve to be climbed to figure out what
it does, how to use it, and what the applications [are].
But, [working out the same] examples with plain dicts
requires only basic knowledge." -- Patricia
I haven't run the test suite in a while. I am in the midst of running it on
my Mac running Yosemite 10.10.3. Twice now, I've gotten this popup:
I assume this is testing some server listening on localhost. Is this a new
thing, either with the Python test suite or with Mac OS X? (I'd normally be
hidden behind a NAT firewall, but at the moment I am on a miserable public
connection in a Peet's Coffee, so it takes on slightly more importance...)
I've also seen the Crash Reporter pop up many times, but as far as I could
tell, in all cases the test suite output told me it was expected. Perhaps
tests which listen for network connections should also mention that, at
least on Macs?