[re-directing back to python-dev]
On 04/28/2013 08:42 PM, Davis Silverman wrote:
> as a not super experienced Python developer, when I see Season('AUTUMN') it looks like I'm creating a Season object. I
> understand your reasoning, that it acts like a boolean singleton; however, I feel it would confuse many, and isn't worth
> it. From what I see, it's meant to be a lookup sort of thing, but it doesn't really feel like one.
> Or am I completely wrong about something?
As far as you are concerned, you are creating a Season object. That it happens to be a pre-existing Season object is an
implementation detail, and the whole thing would work just as well if it did indeed create a brand new object (you
couldn't use `is` then, but in general `is` shouldn't be used anyway and I see no reason why `is` should be encouraged
with Enums; use `==`).
- the first few integers (-5 through 256 in CPython) are pre-created by the interpreter; when you do `int('7')` you are
not getting a brand-new, never-before-used integer 7 object, you're getting a cached integer 7 object.
- all booleans (yup, both of them ;) are pre-created; when you ask for a True or a False, you are not getting a
brand-new object either.
Since `is` is discouraged, both of those cases could go the other way (always creating a brand-new object) and properly
written programs would continue to run just fine -- slower, but fine.
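In CPython this caching is directly observable -- a quick sketch (the identity results are implementation details; `==` is the portable comparison):

```python
x = int('7')
y = int('7')

# Both lookups return the same cached object in CPython, an
# implementation detail that `is` happens to expose:
print(x is y)    # True in CPython (small ints -5..256 are cached)

# `==` compares values and works on any implementation:
print(x == y)    # True

# Both booleans are pre-created singletons:
print(bool('x') is True)    # True
```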
Enums are the same: they could return brand new instances every time, and programs using `==` to compare will keep on
working. That they don't is an implementation detail.
The real guarantee with enums is that once you have one created you'll only ever see the original values; so
X = 1
Y = 2
Z = 3
will only have three possible elements, X, Y, and Z, and X will always have the value of 1, Y will always have the value
of 2, and Z will always have the value of 3. If you try to get `Last("W")` you'll trigger an exception, just like you
do with `int("house")`.
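With the enum module as it eventually shipped (Python 3.4), that guarantee looks like this -- `Last` here is a hypothetical enum standing in for the X/Y/Z example above:

```python
from enum import Enum

class Last(Enum):
    X = 1
    Y = 2
    Z = 3

# Members always keep their original values:
print(Last.X.value)        # 1

# Looking up by value returns the one existing member:
print(Last(2) is Last.Y)   # True

# An unknown value raises, just like int("house") does:
try:
    Last("W")
except ValueError:
    print("ValueError raised")
```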
On 3 May 2013 08:34, "guido.van.rossum" <python-checkins(a)python.org> wrote:
> changeset: 4870:26947623fc5d
> user: Guido van Rossum <guido(a)python.org>
> date: Thu May 02 14:11:08 2013 -0700
> Add time(), call_at(). Remove call_repeatedly(). Get rid of
> add_*_handler() return value.
> pep-3156.txt | 80 +++++++++++++++++++++------------------
> 1 files changed, 43 insertions(+), 37 deletions(-)
> diff --git a/pep-3156.txt b/pep-3156.txt
> --- a/pep-3156.txt
> +++ b/pep-3156.txt
> @@ -252,13 +252,12 @@
> implementation may choose not to implement the internet/socket
> methods, and still conform to the other methods.)
> -- Resource management: ``close()``.
> +- Miscellaneous: ``close()``, ``time()``.
> - Starting and stopping: ``run_forever()``, ``run_until_complete()``,
> ``stop()``, ``is_running()``.
> -- Basic callbacks: ``call_soon()``, ``call_later()``,
> - ``call_repeatedly()``.
> +- Basic callbacks: ``call_soon()``, ``call_later()``, ``call_at()``.
> - Thread interaction: ``call_soon_threadsafe()``,
> ``wrap_future()``, ``run_in_executor()``,
> @@ -303,8 +302,8 @@
> Required Event Loop Methods
> -Resource Management
> - ``close()``. Closes the event loop, releasing any resources it may
> hold, such as the file descriptor used by ``epoll()`` or
> @@ -313,6 +312,12 @@
> again. It may be called multiple times; subsequent calls are
> +- ``time()``. Returns the current time according to the event loop's
> + clock. This may be ``time.time()`` or ``time.monotonic()`` or some
> + other system-specific clock, but it must return a float expressing
> + the time in units of approximately one second since some epoch.
> + (No clock is perfect -- see PEP 418.)
Should the PEP allow event loops that use decimal.Decimal?
> Starting and Stopping
> @@ -362,17 +367,27 @@
> ``callback(*args)`` to be called approximately ``delay`` seconds in
> the future, once, unless cancelled. Returns a Handle representing
> the callback, whose ``cancel()`` method can be used to cancel the
> - callback. If ``delay`` is <= 0, this acts like ``call_soon()``
> - instead. Otherwise, callbacks scheduled for exactly the same time
> - will be called in an undefined order.
> + callback. Callbacks scheduled in the past or at exactly the same
> + time will be called in an undefined order.
> -- ``call_repeatedly(interval, callback, **args)``. Like
> - ``call_later()`` but calls the callback repeatedly, every
> - ``interval`` seconds, until the Handle returned is cancelled or
> - the callback raises an exception. The first call is in
> - approximately ``interval`` seconds. If for whatever reason the
> - callback happens later than scheduled, subsequent callbacks will be
> - delayed for (at least) the same amount. The ``interval`` must be > 0.
> +- ``call_at(when, callback, *args)``. This is like ``call_later()``,
> + but the time is expressed as an absolute time. There is a simple
> + equivalency: ``loop.call_later(delay, callback, *args)`` is the same
> + as ``loop.call_at(loop.time() + delay, callback, *args)``.
It may be worth explicitly noting the time scales where floating point's
dynamic range starts to significantly limit granularity.
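That range can be quantified: a C double carries 52 fraction bits, so the spacing between adjacent representable floats near a time t is roughly t * 2**-52. A quick check (using `math.ulp`, available since Python 3.9):

```python
import math

# Near a typical Unix timestamp, adjacent floats are ~0.24 microseconds
# apart -- ample resolution for an event loop clock:
t = 1.4e9                   # seconds since the epoch, circa 2014
print(math.ulp(t))          # ~2.4e-7

# Granularity only degrades at enormous magnitudes:
print(math.ulp(2.0 ** 53))  # 2.0 -- sub-second times are no longer exact
```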
> +Note: A previous version of this PEP defined a method named
> +``call_repeatedly()``, which promised to call a callback at regular
> +intervals. This has been withdrawn because the design of such a
> +function is overspecified. On the one hand, a simple timer loop can
> +easily be emulated using a callback that reschedules itself using
> +``call_later()``; it is also easy to write a coroutine containing a loop
> +and a ``sleep()`` call (a toplevel function in the module, see below).
> +On the other hand, due to the complexities of accurate timekeeping
> +there are many traps and pitfalls here for the unaware (see PEP 418),
> +and different use cases require different behavior in edge cases. It
> +is impossible to offer an API for this purpose that is bullet-proof in
> +all cases, so it is deemed better to let application designers decide
> +for themselves what kind of timer loop to implement.
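The first emulation the PEP mentions -- a callback that reschedules itself with ``call_later()`` -- can be sketched with today's asyncio, which grew out of this design (the helper name `every` is mine):

```python
import asyncio

def every(loop, interval, callback, *args):
    """Emulate the withdrawn call_repeatedly(): the callback
    reschedules itself via call_later()."""
    handle = None

    def tick():
        nonlocal handle
        callback(*args)
        handle = loop.call_later(interval, tick)

    handle = loop.call_later(interval, tick)
    return lambda: handle.cancel()   # cancels the currently pending call

ticks = []

async def main():
    loop = asyncio.get_running_loop()
    cancel = every(loop, 0.01, ticks.append, "tick")
    await asyncio.sleep(0.05)
    cancel()

asyncio.run(main())
print(len(ticks) >= 1)   # True: the callback fired repeatedly
```

Note how even this tiny sketch has to make the policy decisions the PEP alludes to: whether a late callback shifts subsequent ones, and what cancellation means mid-interval.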
> Thread interaction
> @@ -656,12 +671,9 @@
> - ``add_reader(fd, callback, *args)``. Arrange for
> ``callback(*args)`` to be called whenever file descriptor ``fd`` is
> - deemed ready for reading. Returns a Handle object which can be used
> - to cancel the callback. (However, it is strongly preferred to use
> - ``remove_reader()`` instead.) Calling ``add_reader()`` again for
> - the same file descriptor implies a call to ``remove_reader()`` for
> - the same file descriptor. (TBD: Since cancelling the Handle is not
> - recommended, perhaps we should return None instead?)
> + deemed ready for reading. Calling ``add_reader()`` again for the
> + same file descriptor implies a call to ``remove_reader()`` for the
> + same file descriptor.
> - ``add_writer(fd, callback, *args)``. Like ``add_reader()``,
> but registers the callback for writing instead of for reading.
> @@ -669,8 +681,7 @@
> - ``remove_reader(fd)``. Cancels the current read callback for file
> descriptor ``fd``, if one is set. If no callback is currently set
> for the file descriptor, this is a no-op and returns ``False``.
> - Otherwise, it removes the callback arrangement, cancels the
> - corresponding Handle, and returns ``True``.
> + Otherwise, it removes the callback arrangement and returns ``True``.
> - ``remove_writer(fd)``. This is to ``add_writer()`` as
> ``remove_reader()`` is to ``add_reader()``.
> @@ -704,11 +715,7 @@
> - ``add_signal_handler(sig, callback, *args). Whenever signal ``sig``
> - is received, arrange for ``callback(*args)`` to be called. Returns
> - a Handle which can be used to cancel the signal callback.
> - (Cancelling the handle causes ``remove_signal_handler()`` to be
> - called the next time the signal arrives. Explicitly calling
> - ``remove_signal_handler()`` is preferred.)
> + is received, arrange for ``callback(*args)`` to be called.
> Specifying another callback for the same signal replaces the
> previous handler (only one handler can be active per signal). The
> ``sig`` must be a valid signal number defined in the ``signal``
> @@ -777,11 +784,12 @@
> -The various methods for registering callbacks (e.g. ``call_soon()``
> -and ``add_reader()``) all return an object representing the
> -registration that can be used to cancel the callback. This object is
> -called a Handle (although its class name is not necessarily
> -``Handle``). Handles are opaque and have only one public method:
> +The various methods for registering one-off callbacks
> +(``call_soon()``, ``call_later()`` and ``call_at()``) all return an
> +object representing the registration that can be used to cancel the
> +callback. This object is called a Handle (although its class name is
> +not necessarily ``Handle``). Handles are opaque and have only one
> +public method:
> - ``cancel()``. Cancel the callback.
> @@ -1354,10 +1362,6 @@
> Open Issues
> -- A ``time()`` method that returns the time according to the function
> - used by the scheduler (e.g. ``time.monotonic()`` in Tulip's case)?
> - What's the use case?
> - A fuller public API for Handle? What's the use case?
> - Should we require all event loops to implement ``sock_recv()`` and
> @@ -1410,6 +1414,8 @@
> - PEP 3153, while rejected, has a good write-up explaining the need
> to separate transports and protocols.
> +- PEP 418 discusses the issues of timekeeping.
> - Tulip repo: http://code.google.com/p/tulip/
> - Nick Coghlan wrote a nice blog post with some background, thoughts
> Repository URL: http://hg.python.org/peps
> Python-checkins mailing list
I write as a Python lover of over 13 years who's always wanted
something like PEP 428 in Python.
I am concerned about the caching of stat() results as currently defined
in the PEP. This means that all behaviour built on top of stat(), such
as p.is_dir(), p.is_file(), p.st_size and the like, can indefinitely hold
on to stale data until restat() is called, and I consider this a problem.
Perhaps in recognition of this, p.exists() is implemented differently,
and it does restat() internally (although the PEP does not document this).
If this behaviour is maintained, then at the very least this makes the
API more complicated to document: some calls cache as a side effect,
others update the cache as a side effect, and others, such as lstat(),
don't cache at all.
This also introduces a divergence of behaviour between os.path.isfile()
and p.is_file(), that is confusing and will also need to be documented.
I'm concerned about scenarios like users of the library polling, for
example, for some file to appear, and being confused about why the
arguably more sloppy poll for p.exists() works while a poll for
p.is_file(), which expresses intent better, never terminates.
In theory the caching mechanism could be further refined to only hold
onto cached results for a limited amount of time, but I would argue this
is unnecessary complexity, and caching should just be removed, along
with restat().
Isn't the whole notion that stat() needs to be cached for performance
reasons somewhat of a historical relic of older OSes and filesystem
performance? AFAIK Linux already has stat() caching as a side effect of
the filesystem layer's metadata caching. How do Windows and Mac OS
fare here? Are there benchmarks proving that this is serious enough to
complicate the API?
If the ability to cache stat() calls is deemed important enough, how
about a different API where is_file(), is_dir() and the like are added
as methods on the result object that stat() returns? Then one can hold
onto a stat() result as a temporary object and ask it multiple questions
without doing another OS call, and is_file() etc. on the Path object can
be documented as being forwarders to the stat() result just as p.st_size
is currently - except that I believe they should forward to a fresh,
uncached stat() call every time.
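The standard library already contains the building blocks for this shape of API: an os.stat() result plus the stat module can answer the same questions without extra system calls. A sketch of the suggested result-object API (the wrapper class is hypothetical):

```python
import os
import stat

class StatResult:
    """Hypothetical result object: one OS call, many questions."""

    def __init__(self, st):
        self._st = st

    def is_dir(self):
        return stat.S_ISDIR(self._st.st_mode)

    def is_file(self):
        return stat.S_ISREG(self._st.st_mode)

    @property
    def st_size(self):
        return self._st.st_size

# One stat() call, then ask away -- no caching surprises, because the
# snapshot is explicit:
cwd = StatResult(os.stat('.'))
print(cwd.is_dir())    # True
print(cwd.is_file())   # False
```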
I write directly to this list instead of raising it with Antoine Pitrou
in private just because I don't want to make extra work for him to first
receive my feedback and then re-raise it on this list. If this is wrong
or disrespectful, I apologize.
We may not want to /completely/ disallow subclassing. Consider:
--> class StrEnum(str, Enum):
... '''string enums for Business Basic variable names'''
--> class Vendors(StrEnum):
EnumError: subclassing not allowed
My point is that IntEnum, StrEnum, ListEnum, FloatEnum are all "subclasses" of Enum. To then have a subclass of
that, such as Season(StrEnum), is subclassing a subclass.
Now, if we do want to completely disallow it, we can ditch IntEnum and force the user to always specify the mixin:
--> class Season(str, Enum):
--> class Names(str, Enum):
But that's not very user friendly... although it's not too bad, either.
One consequence of the way it is now (IntEnum, StrEnum, etc., are allowed) is that one can put methods and other
non-Enum items in a base class and then inherit from that for actual implemented Enum classes.
--> class StrEnum(str, Enum):
... def describe(self):
... print("Hi! I'm a %s widget!" % self.value)
--> class Season(StrEnum):
... spring = 'green'
... summer = 'brown'
... autumn = 'red'
... winter = 'white'
--> class Planet(StrEnum):
... mars = 'red'
... earth = 'blue'
Hi! I'm a brown widget!
Hi! I'm a blue widget!
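Assembled into a runnable sketch (with describe() returning the string rather than printing it), the fragments above behave like this under the enum module as it shipped -- note the str mixin must come before Enum:

```python
from enum import Enum

class StrEnum(str, Enum):
    """String enums with shared behaviour in a base class."""

    def describe(self):
        return "Hi! I'm a %s widget!" % self.value

class Season(StrEnum):
    spring = 'green'
    summer = 'brown'
    autumn = 'red'
    winter = 'white'

class Planet(StrEnum):
    mars = 'red'
    earth = 'blue'

print(Season.summer.describe())   # Hi! I'm a brown widget!
print(Planet.earth.describe())    # Hi! I'm a blue widget!
```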
Hey everybody how are you all :)
I am an intermediate-level Python coder looking to help out. I've
been reading over the dev guide about helping increase test coverage
And also the third-party code coverage referenced in the devguide page:
I'm seeing that according to the coverage tool, two of my favorite
libraries, urllib/urllib2, have no unit tests? Is that correct or am I
reading it wrong?
If that's correct it seems like a great place perhaps for me to cut my
teeth and I would be excited to learn and help out here.
And of course any thoughts or advice for an aspiring Python
contributor would be appreciated. Of course the dev guide gives me
plenty of good info.
A musician must make music, an artist must paint, a poet must write,
if he is to be ultimately at peace with himself.
- Abraham Maslow
[creating new thread]
On 04/29/2013 01:30 AM, Steven D'Aprano wrote:
> On Sun, Apr 28, 2013 at 11:50:16PM -0700, Ethan Furman wrote:
>> In other words, currently:
>> class Color(Enum):
>> red = 1
>> green = 2
>> blue = 3
>> class MoreColor(Color):
>> cyan = 4
>> magenta = 5
>> yellow = 6
>> black = 7
>> MoreColor.red is Color.red # True
>> But as soon as:
>> type(Color.red) is Color # True
>> type(MoreColor.red) is MoreColor # True
> I don't believe this is correct. As I understand it, the proposal is the
> weaker guarantee:
> isinstance(Color.red, Color) # True, possibly using __instancecheck__
Words from Guido:
On 04/23/2013 08:11 AM, Guido van Rossum wrote:
> I gotta say, I'm with Antoine here. It's pretty natural (also coming
> from other languages) to assume that the class used to define the
> enums is also the type of the enum values. Certainly this is how it
> works in Java and C++, and I would say it's the same in Pascal and
> probably most other languages.
On 04/25/2013 02:54 PM, Guido van Rossum wrote:
> I don't know what's going on, but it feels like we had this same
> discussion a week ago, and I still disagree. Disregarding the C[i]
> notation, I feel quite strongly that in the following example:
> class Color(Enum):
> red = 1
> white = 2
> blue = 3
> orange = 4
> the values Color.red etc. should be instances of Color. This is how
> things work in all other languages that I am aware of that let you
> define enums.
On 04/25/2013 03:19 PM, Guido van Rossum wrote:
> I suppose you were going to propose to use isinstance() overloading,
> but I honestly think that Color.red.__class__ should be the same
> object as Color.
On 04/25/2013 03:37 PM, Guido van Rossum wrote:
> TBH I had a hard time getting over the fact that even though the class
> said "a = 1", C.a is not the integer 1. But I did get over it.
> Hopefully you can get over *this* weirdness.
[and from the summary thread]
On 04/28/2013 01:02 PM, Guido van Rossum wrote:
> On Sun, Apr 28, 2013 at 12:32 PM, Ethan Furman wrote:
>> - should enum items be of the type of the Enum class? (i.e. type(SPRING)
>> is Seasons)
> IMO Yes.
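This is indeed how the enum module ended up working -- the member's class is the Enum subclass itself, with no __instancecheck__ trickery required:

```python
from enum import Enum

class Color(Enum):
    red = 1
    green = 2
    blue = 3

print(type(Color.red) is Color)       # True
print(isinstance(Color.red, Color))   # True
print(Color.red.__class__ is Color)   # True
```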
First of all, hi, I'm new to this list.
Following the enum discussions on this list I am kind of confused about
how enums and their respective instances, i.e. the values, should behave
in "normal" context.
I apologize beforehand for the mass of "questions" -- the following
covers many of the discussed points, but they all kind of depend on one
another -- and for any terminology I might have missed.
Considering we have:
class Color(Enum):
    red = 1
    white = 2
    other = "undefined"
class State(Enum):
    idle = 0
    busy = 1
    idling = idle
    ideling = 0
together with the premises:
1. type(State.busy) == State
2. type(State) == enum
3. isinstance(State.idle, State)
4. State.idle is State.idle
which should mostly be agreed on (if I didn't misinterpret).
How would an enum instance (e.g. State.busy) behave in normal Python
expressions? Should these instances just wrap their values and provide
some simple overhead useful for enums?
I'll just note down a few examples of how I think it could work and
express a few thoughts on them:
1. State.busy == 1
2. State.busy == Color.red
3. int(State.busy) is 1
4. isinstance(State.busy, int)
6. State.busy is not 1
7. State.busy is not Color.red
8. State.idle in State
9. 0 in State # True, False or raise?
10. State.idling is State.idle
11. State.idle == State.idling
12. State.idle is not State.idling
1. & 2.
Considering that enum components contain a value, this value should be
accessible and comparable. If it wasn't, then there would be no real need
to assign a value to it, because everything must be compared using the is
operator anyway. In this case an entirely new syntax could be
implemented, but since I think this is not what we want I'll skip this.
Furthermore, if enum instances should compare equal to their values but
unequal to each other, this creates weird circumstances like:
Color.red == 1 == State.busy -but- Color.red != State.busy == 1
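For what it's worth, the enum module that eventually shipped sidesteps this asymmetry: plain Enum members are equal only to themselves, while IntEnum members really are ints, so == stays transitive either way. A sketch:

```python
from enum import Enum, IntEnum

class Color(Enum):
    red = 1

class State(Enum):
    busy = 1

# Plain Enum: members never equal their values or other enums' members,
# so no broken transitivity can arise:
print(Color.red == 1)            # False
print(Color.red == State.busy)   # False

class IColor(IntEnum):
    red = 1

class IState(IntEnum):
    busy = 1

# IntEnum: members *are* ints, so all three comparisons agree:
print(IColor.red == 1)             # True
print(IColor.red == IState.busy)   # True
```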
Similar to 1., an enum instance's value /should/ be accessible as an
expression by itself when needed, besides simple operations like
comparisons. This might be really tricky because the object in question
is still of type State, and some existing uses might break, even though I
can't think of any.
What would certainly be a problem in this case though is that repr()
would not reveal the value but the actual enum instance. This could be
problematic when the enum instance's value is not a standard type or
when repr() and str() differ and repr() is what's needed.
Combines 1. and 2. in terms of class relations. It could be argued that the actual
value should be an instance of its type subclassed by the enum class
(State), which is in turn a subclass of enum.
Going further on this it could also allow various things like class
EnumWithInt(enum, int) which would only allow integers as its enum
identifiers. This goes slightly into the IntEnum direction, but with more flexibility.
Related to 3. in that this also allows attribute access to the enum
instance's value. Considering that the enum instance should already be
an instance of its value type as well this kinda speaks for itself.
6. & 7.
Obviously, these can't be true even though they compare equal (see 1. & 2.).
Check if an enum instance is a member of State. Since in this case the
same can also be achieved by using isinstance or by comparing the
instance's type, this is mostly interesting for subclassing enums,
but I won't cover that here because it would probably be too much.
Analogous to 8., this could either just work, simply return False
because this object is obviously not a member of State, or raise an
exception because of a type mismatch (not an enum instance). Also
interesting for subclassing enums.
See premise 4. on this.
11. & 12.
Even though these compare equal, and even if their value object is
actually the same, they themselves should /not/ be the same object.
Someone needs to find a use case for this though, because as of now I can
only think of this being Pythonic.
So, the most critical part of this would probably be 3. to 7. regarding
the value types. I'd love to read some thoughts and comments on that.
[ Note: I already asked this on
http://stackoverflow.com/questions/15917502 but didn't get any
answers. ]
The description of tempfile.NamedTemporaryFile() says:
| If delete is true (the default), the file is deleted as soon as it is
| closed.
In some circumstances, this means that the file is not deleted after the
program ends. For example, when running the following test under
py.test, the temporary file remains:
| from __future__ import division, print_function, absolute_import
| import tempfile
| import unittest2 as unittest
| class cache_tests(unittest.TestCase):
| def setUp(self):
| self.dbfile = tempfile.NamedTemporaryFile()
| def test_get(self):
| self.assertEqual('foo', 'foo')
In some way this makes sense, because this program never explicitly
closes the file object. The only other way for the object to get closed
would presumably be in the __del__ destructor, but here the language
references states that "It is not guaranteed that __del__() methods are
called for objects that still exist when the interpreter exits." So
everything is consistent with the documentation so far.
However, I'm confused about the implications of this. If it is not
guaranteed that file objects are closed on interpreter exit, can it
possibly happen that some data that was successfully written to a
(buffered) file object is lost even though the program exits gracefully,
because it was still in the file objects buffer and the file object
never got closed?
Somehow that seems very unlikely and un-pythonic to me, and the open()
documentation doesn't contain any such warnings either. So I
(tentatively) conclude that file objects are, after all, guaranteed to
be flushed and closed on interpreter exit.
But how does this magic happen, and why can't NamedTemporaryFile() use
the same magic to ensure that the file is deleted?
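The deletion does become deterministic as soon as the close is deterministic -- a sketch:

```python
import os
import tempfile

# Deletion is tied to close(), so closing explicitly (instead of hoping
# interpreter shutdown does it) makes it reliable:
tmp = tempfile.NamedTemporaryFile()   # delete=True is the default
name = tmp.name
print(os.path.exists(name))   # True while the file is open

tmp.close()                   # the file is deleted here, not at exit
print(os.path.exists(name))   # False

# The idiomatic spelling -- a with block -- closes (and thus deletes)
# on exit from the block, even on exceptions:
with tempfile.NamedTemporaryFile() as f:
    f.write(b"some data")
print(os.path.exists(f.name))   # False
```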
»Time flies like an arrow, fruit flies like a Banana.«
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C